Data Engineer

Posted 2024-12-03

πŸ’Ž Seniority level: Junior, at least 2 years

πŸ“ Location: Argentina

πŸ” Industry: Healthcare, manufacturing, government, education, financial services, and legal

🏒 Company: Netrix Global

πŸ—£οΈ Languages: English

⏳ Experience: At least 2 years

πŸͺ„ Skills: AWS, Python, Agile, ETL, Git, Scrum, DevOps, Terraform

Requirements:
  • Experience in a Data Engineer role, with tasks including ETL, big data, data ingestion, and data streaming.
  • At least 2 years of experience working in Cloud environments, particularly AWS.
  • Knowledge of Python, R, or other data processing tools.
  • Intermediate or higher English level.
  • Experience using AWS data services such as Amazon Redshift, S3, AWS Glue, Athena, SageMaker, etc.
  • Experience with DevOps tools such as Git and pipelines.
  • Experience working in an Agile Scrum team.
  • Experience with infrastructure-as-code tools like Terraform or CloudFormation.
Responsibilities:
  • Identifying, creating, and preparing data for modern BI solutions.
  • Designing ETL and ELT processes for data transformations (a minimal sketch follows this list).
  • Designing and managing data lake/lakehouse architectures.
  • Integrating and testing BI solutions.
  • Creating and documenting tests to meet requirements.
  • Managing Data Cloud services, including data access, quality, security, and governance.
  • Managing monitoring and logs of applications and services.
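
The ETL/ELT design work above is the heart of this role. Below is a minimal sketch of such a job in Python; the bucket and key names are hypothetical placeholders, not from the posting, and a production pipeline on AWS Glue or Airflow would add scheduling, retries, and schema checks.

```python
import io

import boto3
import pandas as pd

# Hypothetical bucket and keys -- placeholders, not from the posting.
BUCKET = "example-raw-zone"
SOURCE_KEY = "ingest/orders.csv"
TARGET_KEY = "curated/orders.parquet"

s3 = boto3.client("s3")

# Extract: pull the raw CSV out of the landing zone.
raw = s3.get_object(Bucket=BUCKET, Key=SOURCE_KEY)["Body"].read()
df = pd.read_csv(io.BytesIO(raw))

# Transform: normalize column names and drop rows missing a key.
df.columns = [c.strip().lower() for c in df.columns]
df = df.dropna(subset=["order_id"])

# Load: write Parquet back to S3, where Athena or Redshift Spectrum can query it.
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket=BUCKET, Key=TARGET_KEY, Body=buf.getvalue())
```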

Related Jobs

πŸ“ Argentina

🧭 Full-Time

πŸ” Nonprofit fundraising technology

🏒 Company: GoFundMeπŸ‘₯ 251-500πŸ’° $ Series A on 2015-07-02πŸ«‚ on 2022-10-26InternetCrowdfundingPeer to Peer

  • 5+ years as a data engineer crafting, developing, and maintaining business data warehouse alternatives consisting of structured and unstructured data.
  • Proficiency with building and orchestrating data pipelines using ETL/data preparation tools.
  • Expertise in orchestration tools like Airflow or Prefect.
  • Proficiency in connecting data through web APIs.
  • Proficiency in writing and optimizing SQL queries.
  • Solid knowledge of Python and other programming languages.
  • Experience with Snowflake is required.
  • Good understanding of database architecture and best practices.

  • Develop and maintain enterprise data warehouse (Snowflake).
  • Develop and orchestrate ELT data pipelines, sourcing data from databases and web APIs (a minimal Airflow sketch follows this list).
  • Integrate data from warehouse into third-party tools for actionable insights.
  • Develop and sustain REST API endpoints for data science products.
  • Provide ongoing maintenance and improvements to existing data solutions.
  • Monitor and optimize Snowflake usage for performance and cost-effectiveness.
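
Since the posting names Airflow in its skills list, here is a minimal Airflow 2.x sketch of the ELT orchestration described above. The endpoint URL, DAG id, and task bodies are hypothetical stand-ins; a real pipeline would stage the payload and COPY it into Snowflake rather than print a count.

```python
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_api(**context):
    # Hypothetical endpoint -- stands in for "databases and web APIs".
    resp = requests.get("https://api.example.com/donations", timeout=30)
    resp.raise_for_status()
    context["ti"].xcom_push(key="payload", value=resp.json())


def load_to_snowflake(**context):
    # Placeholder: a real task would stage the payload and COPY it into Snowflake.
    payload = context["ti"].xcom_pull(task_ids="extract", key="payload")
    print(f"would load {len(payload)} records into Snowflake")


with DAG(
    dag_id="elt_donations",           # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",               # `schedule` assumes Airflow >= 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_api)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load
```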

AWS, Python, SQL, AWS EKS, ETL, Java, Kubernetes, Snowflake, C++, Airflow, REST API, Collaboration, Terraform

Posted 2024-12-03

πŸ“ United States, Latin America, India

πŸ” Cloud Data Technologies

🏒 Company: phData

  • 4+ years of experience as a Software Engineer, Data Engineer, or Data Analyst.
  • Programming expertise in Java, Python, and/or Scala.
  • Experience with core cloud data platforms: Snowflake, AWS, Azure, Databricks, and GCP.
  • SQL proficiency and the ability to write, debug, and optimize SQL queries.
  • Experience creating and delivering detailed presentations.

  • Develop end-to-end technical solutions into production.
  • Ensure performance, security, scalability, and robust data integration.
  • Client-facing communication and presentation delivery.
  • Create detailed solution documentation.

AWS, Python, Software Development, SQL, Elasticsearch, GCP, Hadoop, Java, Kafka, Snowflake, Airflow, Azure, Cassandra, NoSQL, Spark, Communication Skills, Documentation

Posted 2024-12-03

πŸ“ LATAM

πŸ” Staff augmentation

🏒 Company: Nearsure

  • Bachelor's Degree in Computer Science, Engineering, or related field.
  • 5+ years of experience working in Data Engineering.
  • 3+ years of experience working with SQL.
  • 3+ years of experience working with Python.
  • 2+ years of experience working with cloud environments.
  • 2+ years of experience working with microservices.
  • 2+ years of experience working with databases.
  • 2+ years of experience working with CI/CD tools.
  • Advanced English Level is required.

  • Write code to test and validate business logic in SQL and code repositories (see the sketch after this list).
  • Test back-end API interfaces for the financial platform.
  • Build developer productivity tools.
  • Provide risk assessments on identified defects.
  • Define quality engineering scope and maintain automated scripts.
  • Participate in Scrum meetings for status updates.
  • Identify and clarify information needs.
  • Organize and resolve current backlog issues.
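
A minimal sketch of the SQL business-logic testing described above, using pytest with an in-memory SQLite database standing in for the real warehouse; the tables and the rule being checked are hypothetical examples, not taken from the posting.

```python
import sqlite3

import pytest

# Hypothetical business rule under test: an order's total must equal
# the sum of its line items.
BUSINESS_RULE = """
SELECT o.order_id
FROM orders o
JOIN line_items li ON li.order_id = o.order_id
GROUP BY o.order_id, o.total
HAVING ABS(o.total - SUM(li.amount)) > 0.01
"""


@pytest.fixture
def conn():
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE orders (order_id INTEGER, total REAL);
        CREATE TABLE line_items (order_id INTEGER, amount REAL);
        INSERT INTO orders VALUES (1, 30.0);
        INSERT INTO line_items VALUES (1, 10.0), (1, 20.0);
    """)
    yield db
    db.close()


def test_order_totals_match_line_items(conn):
    # The rule query returns violating orders; an empty result means the check passes.
    violations = conn.execute(BUSINESS_RULE).fetchall()
    assert violations == [], f"orders with mismatched totals: {violations}"
```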

Python, SQL, Scrum, Data Engineering, CI/CD, Microservices

Posted 2024-11-26

πŸ“ United States, Latin America, India

🧭 Full-Time

πŸ” Modern data stack and cloud data services

  • 4+ years of experience as a Software Engineer, Data Engineer, or Data Analyst.
  • Ability to develop end-to-end technical solutions in production environments.
  • Proficient in Java, Python, or Scala.
  • Experience with core cloud data platforms like Snowflake, AWS, Azure, Databricks, and GCP.
  • Strong SQL skills, capable of writing, debugging, and optimizing queries.
  • Client-facing communication skills for presentations and documentation.
  • Bachelor's degree in Computer Science or a related field.

  • Develop end-to-end technical solutions and ensure their performance, security, and scalability.
  • Help with robust data integration and contribute to the production deployment of solutions.
  • Create and deliver detailed presentations to clients.
  • Document solutions including POCs, roadmaps, diagrams, and logical system views.

AWS, Python, Software Development, SQL, Elasticsearch, GCP, Hadoop, Java, Kafka, Snowflake, Airflow, Azure, Cassandra, NoSQL, Spark, Communication Skills, Documentation

Posted 2024-11-24

πŸ“ Costa Rica, LATAM

🧭 Full-Time

πŸ” IT solutions and consulting

  • 5+ years of Data Engineering experience including 2+ years designing and building Databricks data pipelines is REQUIRED.
  • 2+ years of hands-on Python/Pyspark/SparkSQL and/or Scala experience is REQUIRED.
  • 2+ years of experience with Big Data pipelines or DAG tools is REQUIRED.
  • 2+ years of Spark experience, especially Databricks Spark and Delta Lake, is REQUIRED.
  • 2+ years of hands-on experience implementing Big Data solutions in a cloud ecosystem is REQUIRED.
  • 2+ years of SQL experience, specifically writing complex queries, is HIGHLY DESIRED.
  • Experience with source control (git) on the command line is REQUIRED.

  • Scope and execute together with team leadership.
  • Work with the team to understand platform capabilities and how to improve them.
  • Design, develop, enhance, and maintain complex data pipeline products (a minimal sketch follows this list).
  • Support analytics, data science, and engineering teams and address their challenges.
  • Commit to continuous learning and developing technical maturity across the company.
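
Given the Databricks, Spark, and Delta Lake requirements above, here is a minimal PySpark sketch of the kind of pipeline step this role describes; the paths and column names are hypothetical, and on Databricks a `spark` session already exists.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is provided as `spark`; building one here
# keeps the sketch self-contained. Paths and columns are placeholders.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

RAW_PATH = "/mnt/raw/orders"        # hypothetical landing zone
DELTA_PATH = "/mnt/curated/orders"  # hypothetical Delta Lake target

# Read raw JSON, stamp ingestion time, dedupe, and write a Delta table.
(
    spark.read.json(RAW_PATH)
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])
    .write.format("delta")   # assumes a Delta-enabled cluster, e.g. Databricks
    .mode("overwrite")
    .save(DELTA_PATH)
)
```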

Leadership, Python, SQL, Git, Kafka, Airflow, Azure, Data Engineering, Spark

Posted 2024-11-22
πŸ”₯ Data Engineer
Posted 2024-11-20

πŸ“ Argentina, Spain, England, United Kingdom, Lisbon, Portugal

🧭 Full-Time

πŸ” Web3

🏒 Company: ReownπŸ‘₯ 51-100UX DesignCloud ComputingAppsSoftware

  • 5+ years working in the analytics stack within a fast-paced environment.
  • 3+ years of production experience with SQL templating engines like dbt.
  • Experience with distributed query engines (BigQuery, Athena, Spark), data warehouses, and BI tools.
  • Strong understanding of software engineering principles, coding standards, design patterns, version control (e.g., Git), testing methodologies, and CI/CD processes.
  • Experience with AWS/GCP/Azure services for deployment and management.
  • Familiarity with GitHub, CI/CD pipelines, GitHub Actions, and Terraform.
  • Ability to write Python scripts for ETL processes and data manipulation.
  • Proficient with libraries like pandas for analysis and transformation.
  • Experience handling various data formats (e.g., CSV, JSON, Parquet).
  • Strong problem-solving skills and communication abilities to discuss technical concepts.

  • Write complex SQL queries that extract and combine data from on-chain and off-chain logs for analytics.
  • Create dashboards and tools for team data discoverability and KR tracking.
  • Perform deep-dive analyses into specific topics for internal stakeholders.
  • Help design, implement, and evolve Reown's on-chain data infrastructure.
  • Build, maintain, and monitor end-to-end data pipelines for new datasets and features.
  • Write health checks and alerts to ensure data correctness, consistency, and freshness (see the sketch after this list).
  • Meet with product managers and stakeholders to understand data needs and detect new product opportunities.
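
A minimal sketch of the freshness health check mentioned above, written in Python against BigQuery (one of the query engines the requirements name); the table name, column, and lag threshold are hypothetical.

```python
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

# Hypothetical table, column, and threshold -- not from the posting.
TABLE = "analytics.onchain_events"
MAX_LAG = timedelta(hours=2)

client = bigquery.Client()

# Freshness check: alert if the newest row is older than the allowed lag.
row = next(iter(client.query(
    f"SELECT MAX(event_timestamp) AS latest FROM `{TABLE}`"
).result()))

lag = datetime.now(timezone.utc) - row.latest  # assumes a TIMESTAMP column
if lag > MAX_LAG:
    raise RuntimeError(f"{TABLE} is stale: last event {lag} ago")
print(f"{TABLE} is fresh: last event {lag} ago")
```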

AWS, Python, SQL, Data Analysis, Design Patterns, ETL, GCP, Git, Tableau, Azure, ClickHouse, pandas, Spark, Communication Skills, CI/CD, Terraform, Written Communication

Posted 2024-11-20

πŸ“ Spain, Colombia, Argentina, Chile, Mexico

πŸ” HR Tech

🏒 Company: Jobgether

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • Minimum of 5 years of experience in data engineering, focusing on building and managing data pipelines and infrastructure.
  • 5 years of experience in Python programming.
  • Hands-on experience with big data frameworks and tools like Hadoop, Spark, or Kafka.
  • Proficiency with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, particularly MongoDB.
  • Experience with cloud platforms, specifically AWS.
  • Strong understanding of data modeling, schema design, and data warehousing concepts.
  • Excellent analytical and troubleshooting abilities.
  • Fluency in English and Spanish.
  • Preferred: Experience deploying machine learning models, familiarity with CI/CD pipelines, knowledge of data privacy regulations.

  • Design, build, and maintain scalable data pipelines and ETL processes to collect, process, and store large volumes of structured and unstructured data.
  • Develop and optimize data scraping and extraction solutions to gather job data from multiple sources efficiently.
  • Collaborate with data scientists to implement and optimize AI-driven matching algorithms.
  • Ensure data integrity, accuracy, and reliability by implementing robust validation and monitoring mechanisms (a minimal sketch follows this list).
  • Analyze and optimize system performance, addressing bottlenecks and scaling challenges.
  • Work with cross-functional teams to deploy machine learning models into production environments.
  • Stay abreast of emerging technologies and best practices in data engineering and AI.
  • Partner with Product, Engineering, and Operations teams to understand data requirements.
  • Develop and maintain comprehensive documentation for data pipelines and systems architecture.
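
The validation bullet above is the kind of gate that sits in front of a scraped-data pipeline. Here is a minimal pandas sketch under assumed field names (job_id, posted_at, and url are illustrative, not from the posting).

```python
import pandas as pd


def validate_jobs_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic integrity checks before a batch enters the pipeline."""
    errors = []
    if df["job_id"].duplicated().any():
        errors.append("duplicate job_id values")
    if df["posted_at"].isna().any():
        errors.append("missing posted_at timestamps")
    if not df["url"].str.startswith(("http://", "https://")).all():
        errors.append("malformed source URLs")
    if errors:
        raise ValueError("batch rejected: " + "; ".join(errors))
    return df


# A well-formed batch passes; a batch with, say, duplicate ids would raise.
batch = pd.DataFrame({
    "job_id": [1, 2],
    "posted_at": pd.to_datetime(["2024-11-20", "2024-11-20"]),
    "url": ["https://example.com/a", "https://example.com/b"],
})
validate_jobs_batch(batch)
```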

AWS, PostgreSQL, Python, ETL, Hadoop, Kafka, MongoDB, MySQL, Algorithms, Data Engineering, NoSQL, Spark, Communication Skills, Collaboration, DevOps, Written Communication, Documentation

Posted 2024-11-20

πŸ“ Latin America, United States, Canada

πŸ” Life insurance

  • The ideal candidate will be independent and a great communicator.
  • Attention to detail is critical.
  • Must possess problem-solving skills.

  • Develop and maintain enterprise data and analytics systems for a US client.
  • Optimize performance by building and supporting decision-making tools.
  • Collaborate closely with software engineering, AI/ML, Cybersecurity, and DevOps/SysOps teams.
  • Support end-to-end data pipelines using Python or Scala.
  • Participate in Agile framework-related tasks.

Python, Software Development, Agile, Data Engineering, DevOps, Attention to Detail

Posted 2024-11-15

πŸ“ Argentina, Colombia, Costa Rica, Mexico

🧭 Full-Time

πŸ” Data Analytics

  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly in SQL.
  • Knowledge and experience with Python and/or R is a plus.
  • Experience with tools like Alteryx is a plus.
  • Experience working with Google Cloud and AWS is a plus.

  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Setup and/or maintain existing dataflows in tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks (see the sketch after this list).
  • Maintain ongoing management of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation.
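
A minimal sketch of the SQL-based data quality checks described above; SQLite and the ad_spend table stand in for the real marketing data systems, and the three rules are illustrative assumptions.

```python
import sqlite3

# Each check counts rule violations; a nonzero count is a quality failure.
# Table and rules are hypothetical stand-ins for real marketing data.
CHECKS = {
    "no_null_campaign_ids": "SELECT COUNT(*) FROM ad_spend WHERE campaign_id IS NULL",
    "no_negative_spend": "SELECT COUNT(*) FROM ad_spend WHERE spend < 0",
    "no_future_dates": "SELECT COUNT(*) FROM ad_spend WHERE spend_date > DATE('now')",
}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ad_spend (campaign_id TEXT, spend REAL, spend_date TEXT);
    INSERT INTO ad_spend VALUES ('cmp-1', 120.5, '2024-11-01');
""")

for name, sql in CHECKS.items():
    (violations,) = conn.execute(sql).fetchone()
    status = "OK" if violations == 0 else f"FAILED ({violations} rows)"
    print(f"{name}: {status}")
```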

Python, SQL, GCP, Microsoft Power BI, Tableau

Posted 2024-11-09

πŸ“ Argentina, Colombia, Costa Rica, Mexico

πŸ” Data Analytics

  • Proficient with SQL and data visualization tools (e.g., Tableau, Power BI, Google Data Studio).
  • Programming skills, mainly in SQL.
  • Knowledge and experience with Python and/or R is a plus.
  • Experience with Alteryx is a plus.
  • Experience working with Google Cloud and AWS is a plus.

  • Analyze data and consult with subject matter experts to design and develop business rules for data processing.
  • Setup and/or maintain existing dataflows in data wrangling tools like Alteryx or Google Dataprep.
  • Create and/or maintain SQL scripts.
  • Monitor, troubleshoot, and remediate data quality across marketing data systems.
  • Design and execute data quality checks.
  • Maintain ongoing management and stewardship of data governance, processing, and reporting.
  • Govern taxonomy additions, application, and use.
  • Serve as a knowledge expert for operational processes and identify improvement areas.
  • Evaluate opportunities for simplification and/or automation for reporting and processes.

AWS, Python, SQL, Data Analysis, GCP, Microsoft Power BI, Tableau, Data Engineering

Posted 2024-11-09