
Data Engineer

Posted 10 days ago


πŸ’Ž Seniority level: Junior, 2+ years

πŸ“ Location: Uruguay, Argentina, Colombia

πŸ” Industry: Data Engineering

🏒 Company: Rootstrap πŸ‘₯ 51-100 Β· Android Β· Presentation Software Β· iOS Β· Web Apps Β· Consumer Software Β· Enterprise Applications Β· Smart Contracts Β· 3D Technology Β· Consumer Applications Β· Information Technology

πŸ—£οΈ Languages: English

⏳ Experience: 2+ years

πŸͺ„ Skills: AWS, Docker, PostgreSQL, Apache Airflow, DynamoDB, ETL, GCP, Hadoop, Kubernetes, MongoDB, MySQL, Snowflake, Apache Kafka, Azure, Cassandra

Requirements:
  • 2+ years of experience in data engineering or related field
  • Experience with SQL databases like MySQL or PostgreSQL
  • Familiarity with NoSQL databases like MongoDB, Cassandra, or DynamoDB
  • Knowledge of data pipeline tools like Apache Kafka and Apache Airflow
  • Experience with cloud platforms such as AWS, Azure, or GCP
  • Familiarity with search and analytics systems like Elasticsearch or Splunk
  • Experience with data warehousing platforms like Snowflake or Amazon Redshift
  • Experience with Hadoop and Azure Data Lake
  • Proficiency in Docker and Kubernetes
  • Experience with distributed data processing using Apache Spark or Flink
Responsibilities:
  • Design, build, and maintain data infrastructure
  • Manage large datasets, ensuring data quality and reliability
  • Develop and manage ETL processes
  • Create and optimize data pipelines
  • Collaborate with Data Scientists
  • Monitor and maintain database performance
  • Ensure data security and compliance
  • Troubleshoot data-related issues
  • Document data processes and infrastructure
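
For context on the Airflow and pipeline items above (illustration only, not part of the posting): a minimal sketch of how such a pipeline is typically orchestrated, assuming Airflow 2.4+; the DAG id, schedule, and task body are hypothetical.

```python
# Minimal Airflow DAG sketch. Assumes Airflow 2.4+ (for the `schedule`
# argument); dag_id, schedule, and the task body are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # A real task would pull rows from a source database (e.g. MySQL)
    # and upsert them into the warehouse; stubbed out here.
    pass

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```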

Related Jobs


πŸ“ LATAM

🧭 Full-Time

πŸ” Healthtech

🏒 Company: Urrly πŸ‘₯ 1-10 Β· Artificial Intelligence (AI) Β· Business Development Β· Sales Β· Information Technology

Requirements:
  • Proficiency in SQL and Python
  • Strong experience with PostgreSQL
  • Experience working with APIs and cloud-based ETL processes using Airflow
  • Power BI experience is a huge advantage
Responsibilities:
  • Design, develop, and maintain ETL pipelines
  • Build and manage cloud-based workflows using Airflow
  • Integrate data from multiple sources using APIs
  • Collaborate with cross-functional teams to support data-driven decision-making
  • Create insightful reports and dashboards using Power BI
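
Illustration only, not from the posting: an API-to-PostgreSQL ETL step of the kind listed might look like this; the endpoint, DSN, and table name are invented.

```python
# Sketch: fetch JSON from an API and upsert it into PostgreSQL.
# URL, DSN, and table name are hypothetical placeholders.
import psycopg2
import requests

rows = requests.get("https://api.example.com/orders", timeout=30).json()

conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    for row in rows:
        cur.execute(
            "INSERT INTO staging.orders (id, total) VALUES (%s, %s) "
            "ON CONFLICT (id) DO UPDATE SET total = EXCLUDED.total",
            (row["id"], row["total"]),
        )
```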

PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL

Posted 1 day ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: Docker πŸ‘₯ 251-500 πŸ’° $105,000,000 Series C almost 3 years ago Β· Developer Tools Β· Developer Platform Β· Information Technology Β· Software

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
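
As a hedged sketch of the "ETL scripts using Python and SQL" item above (the Snowflake Python connector is one option; account, credentials, and object names are invented):

```python
# Sketch: run a SQL transformation in Snowflake from Python.
# Account, credentials, and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS",
)
try:
    conn.cursor().execute(
        """
        MERGE INTO reporting.events AS t
        USING staging.events AS s ON t.event_id = s.event_id
        WHEN MATCHED THEN UPDATE SET t.payload = s.payload
        WHEN NOT MATCHED THEN INSERT (event_id, payload)
            VALUES (s.event_id, s.payload)
        """
    )
finally:
    conn.close()
```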

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 2 days ago
πŸ”₯ Data Engineer

πŸ“ Latin America

🧭 Full-Time

πŸ’Έ 35,000 - 40,000 USD per year

πŸ” Higher Education Technology

🏒 Company: Civitas Learning

Requirements:
  • Bachelor's degree plus 3 years of experience building complex ETL solutions.
  • Expertise in SQL with a focus on Postgres and Redshift.
  • Understanding of ETL best practices and tools.
  • Ability to work independently and in a team.
Responsibilities:
  • Collaborate directly with external customers to understand their student success goals.
  • Develop ETL transformations to map customer data systems.
  • Build and operationalize data science models on AWS.
  • Configure and troubleshoot API connections.
  • Develop data pipelines using AWS.
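
For illustration only: AWS pipelines feeding Redshift commonly stage files in S3 and load them with COPY. The cluster DSN, bucket, IAM role, and table below are placeholders.

```python
# Sketch: COPY a customer extract from S3 into Redshift.
# Cluster host, credentials, IAM role, bucket, and table are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com",
    port=5439, dbname="dev", user="etl", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(
        """
        COPY staging.enrollments
        FROM 's3://example-bucket/enrollments/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS CSV IGNOREHEADER 1
        """
    )
```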

AWS, SQL, ETL, API testing, Data engineering, Postgres

Posted 4 days ago

πŸ“ Argentina

🧭 Full-Time

πŸ” Data Engineering

🏒 Company: Finalis πŸ‘₯ 101-250 πŸ’° $10,700,000 Seed over 2 years ago Β· Financial Services Β· Information Technology Β· FinTech

Requirements:
  • 8+ years of experience as a Data Engineer or in a similar role
  • Proficiency in backend programming languages
  • Solid experience with PostgreSQL and MongoDB
  • Experience designing APIs, ideally with GraphQL
  • Familiarity with cloud platforms, preferably AWS
  • Strong integration experience with third-party tools
Responsibilities:
  • Build and maintain scalable data pipelines and ETL processes
  • Architect and optimize data warehouses and lakes
  • Collaborate with backend engineers for integration
  • Develop data models for platform capabilities
  • Optimize performance and security of data persistence
  • Integrate data from third-party sources
  • Design APIs for real-time data access
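
The posting says "ideally with GraphQL" but names no stack; as one hypothetical Python take, a read-only GraphQL endpoint could be defined with Ariadne. The schema and resolver here are invented.

```python
# Sketch: a minimal GraphQL read API using Ariadne.
# Types, fields, and the stubbed resolver are hypothetical.
from ariadne import QueryType, gql, make_executable_schema

type_defs = gql("""
    type Query {
        deal(id: ID!): Deal
    }

    type Deal {
        id: ID!
        status: String!
    }
""")

query = QueryType()

@query.field("deal")
def resolve_deal(_, info, id):
    # A real resolver would read from PostgreSQL/MongoDB; stubbed here.
    return {"id": id, "status": "open"}

schema = make_executable_schema(type_defs, query)
```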

AWS, GraphQL, Node.js, PostgreSQL, ETL, MongoDB, API testing, Data engineering

Posted 9 days ago

πŸ“ LatAm

🧭 Full-Time

πŸ” B2B data and intelligence

🏒 Company: Truelogic πŸ‘₯ 101-250 Β· Consulting Β· Web Development Β· Web Design Β· Software

Requirements:
  • 8+ years of experience as a Data/BI engineer.
  • Experience developing data pipelines with Airflow or equivalent code-based orchestration software.
  • Strong SQL abilities and hands-on experience with SQL and NoSQL databases, performing analysis and performance optimizations.
  • Hands-on experience in Python or an equivalent programming language.
  • Experience with data warehouse solutions (BigQuery, Redshift, or Snowflake).
  • Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
  • Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
  • Experience implementing data quality checks.
  • Experience with dbt.
  • eFront knowledge.
  • Strong and clear communication skills.
Responsibilities:
  • Build and continuously improve data gathering, modeling, and reporting capabilities and self-service data platforms.
  • Work closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.
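
Purely illustrative of the quality-checks requirement: a pipeline task might assert basic expectations on a pandas DataFrame before publishing. Column names and rules are hypothetical.

```python
# Sketch: simple data quality gates on a pandas DataFrame.
# Column names and thresholds are hypothetical.
import pandas as pd

def check_quality(df: pd.DataFrame) -> None:
    assert len(df) > 0, "empty extract"
    assert df["account_id"].notna().all(), "null account_id values"
    assert not df["account_id"].duplicated().any(), "duplicate account_id"

check_quality(pd.DataFrame({"account_id": [1, 2, 3]}))
```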

AWS, Python, SQL, Cloud Computing, ETL, Snowflake, Airflow, Data engineering, Communication Skills, Data modeling

Posted 13 days ago

πŸ“ LatAm

🧭 Full-Time

πŸ” Data Engineering

🏒 Company: Truelogic πŸ‘₯ 101-250 Β· Consulting Β· Web Development Β· Web Design Β· Software

Requirements:
  • 5+ years working in data engineering roles
  • Proficient in Python and data processing technologies like Pandas and PySpark
  • Experience with AWS, GCP, or Azure
  • Familiarity with data privacy regulations
Responsibilities:
  • Code in Python for data processing solutions
  • Work with relational and non-relational data stores
  • Design data ingestion and processing pipelines
  • Implement data warehouse solutions
  • Ensure scalable data models
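
A small PySpark sketch of the processing work described (paths and column names are invented):

```python
# Sketch: aggregate events with PySpark. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path
daily = events.groupBy("event_date").agg(F.countDistinct("user_id").alias("users"))
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_users/")
```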

AWS, Python, ETL, GCP, Hadoop, Kafka, Microsoft SQL Server, MongoDB, RabbitMQ, Snowflake, Tableau, Airflow, Azure, Cassandra, REST API, Pandas, Spark

Posted 13 days ago
πŸ”₯ Data Engineer

πŸ“ Americas

πŸ” Code analysis solutions

Requirements:
  • Strong functional programming background (e.g., Haskell, Clojure, Scala, F# or similar).
  • Experience in data engineering and ETL development.
  • Familiarity with AWS, Docker, and workflow systems.
  • Strong problem-solving skills with a proactive approach to development.
  • Excellent communication skills and ability to work in a high-expectation environment.
Responsibilities:
  • Design and implement modular ETL components for the code analysis platform.
  • Leverage functional programming principles to build scalable and maintainable solutions.
  • Optimize performance and handle edge cases in data processing workflows.
  • Work with AWS, Docker, and workflow orchestration tools to enhance system efficiency.
  • Collaborate with cross-functional teams to align development with business goals.
  • Maintain high coding standards and contribute to a culture of engineering excellence.
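
This role centers on functional languages; purely as a sketch (kept in Python for consistency with the other examples on this page), modular ETL components can be written as pure functions and composed into a pipeline. Record shapes and rules are hypothetical.

```python
# Sketch: ETL stages as pure functions composed into one pipeline.
# Input format and validation rules are hypothetical.
from functools import reduce

def parse(records):
    return [r.strip().split(",") for r in records]

def validate(rows):
    return [r for r in rows if len(r) == 2]

def transform(rows):
    return [(path, int(count)) for path, count in rows]

def pipeline(*stages):
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

run = pipeline(parse, validate, transform)
print(run(["src/main.hs,42", "bad-row"]))  # -> [('src/main.hs', 42)]
```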

AWS, Docker, ETL, Data engineering, Haskell, Scala

Posted 15 days ago

πŸ“ Latin America

πŸ” AI economy, workforce development

🏒 Company: Correlation One πŸ‘₯ 251-500 πŸ’° $5,000,000 Series A almost 7 years ago Β· Information Services Β· Analytics Β· Information Technology

Requirements:
  • 7+ years in a Data Engineering role with experience in data warehouses and ETL/ELT.
  • Advanced SQL experience and skills in database design.
  • Familiarity with pipeline monitoring and cloud environments (e.g., GCP).
  • Experience with APIs, Airflow, dbt, Git, and creating microservices.
  • Knowledge of implementing CDC with technologies like Kafka.
  • Solid understanding of software development practices and agile methodologies.
  • Proficiency in object-oriented scripting languages such as Python or Scala.
  • Experience with CI/CD processes and source control tools like GitHub.
Responsibilities:
  • Act as the data lake subject matter expert to develop technical vision.
  • Design the architecture for a well-architected data lakehouse.
  • Collaborate with architects to design the ELT process from data ingestion to analytics.
  • Create standard frameworks for software development.
  • Mentor junior engineers and support development teams.
  • Monitor database performance and adhere to data engineering best practices.
  • Develop schema design for reports and analytics.
  • Engage in hands-on development across the technical stack.
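
To illustrate only the CDC item: change events landed in Kafka might be consumed as below. Topic, brokers, and payload shape are hypothetical, and kafka-python is just one client choice.

```python
# Sketch: consume CDC change events from Kafka (kafka-python client).
# Topic, brokers, and payload shape are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "cdc.users",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    change = message.value
    # A real handler would upsert `change` into the lakehouse; printed here.
    print(change.get("op"), change.get("after"))
```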

PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Git, Kafka, MongoDB, Data engineering, CI/CD, Terraform, Microservices, Scala

Posted 21 days ago

πŸ“ Uruguay

🧭 Full-Time

πŸ” Data Engineering

🏒 Company: Neocol πŸ‘₯ 11-50 πŸ’° almost 2 years ago Β· Information Services Β· Software

Requirements:
  • 1-3+ years of data migration experience
  • Experience with SQL or other programming languages
  • Ability to develop SQL scripts with joins and functions
  • Understanding of basic database structure
  • Ability to resolve logical problems related to ETL processes
Responsibilities:
  • Participate in customer workshops for data migration
  • Aid in creating data migration plans
  • Develop migration scripts and troubleshoot issues
  • Verify data integrity and resolve errors
  • Create documentation of processes and results
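
A hedged sketch of the verification step: after a migration, a script might compare source and target row counts. The pyodbc driver string, schemas, and table names are placeholders.

```python
# Sketch: verify row counts after a migration (SQL Server via pyodbc).
# Connection string, schemas, and table names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example;DATABASE=mig;UID=etl;PWD=..."
)
cur = conn.cursor()
cur.execute(
    """
    SELECT s.cnt AS source_rows, t.cnt AS target_rows
    FROM (SELECT COUNT(*) AS cnt FROM src.accounts) s
    CROSS JOIN (SELECT COUNT(*) AS cnt FROM tgt.accounts) t
    """
)
source_rows, target_rows = cur.fetchone()
assert source_rows == target_rows, f"mismatch: {source_rows} vs {target_rows}"
```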

SQL, ETL, Microsoft SQL Server

Posted 24 days ago

πŸ“ Brazil, Argentina, Peru, Colombia, Uruguay

πŸ” AdTech

🏒 Company: Workana Premium

Requirements:
  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with relational databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
Responsibilities:
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data.
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing.
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.
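
Illustration only: event-driven ingestion on GCP might subscribe to Pub/Sub as below, mirroring the official google-cloud-pubsub client usage; the project and subscription names are invented.

```python
# Sketch: consume events from GCP Pub/Sub (google-cloud-pubsub client).
# Project and subscription names are hypothetical.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("example-project", "audience-events")

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real handler would write to BigQuery/Spanner before acking.
    print(message.data)
    message.ack()

future = subscriber.subscribe(subscription, callback=handle)
try:
    future.result(timeout=30)  # block briefly for demo purposes
except TimeoutError:
    future.cancel()
```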

Docker, Python, SQL, ETL, GCP, Java, Kafka, Kubernetes, Go, Data modeling

Posted 27 days ago