📍 LATAM
🧭 Full-Time
🔍 Software Development
Requirements:
- 7+ years of experience in data engineering, including ETL, big data, Spark/PySpark, and data warehousing.
- 5+ years working with SQL and relational/NoSQL databases (e.g., MySQL, PostgreSQL, DynamoDB).
- 5+ years working with cloud platforms (especially AWS).
- Strong experience with Python (PySpark, Pandas, etc.) and writing clean, maintainable code.
- Solid knowledge of OLAP/OLTP architectures and performance tuning.
- Experience preparing data for analytics and managing structured, semi-structured, and unstructured data.
- Hands-on experience with BI tools like Power BI or Tableau.
- Familiarity with Linux and command-line tools.
- Experience with infrastructure-as-code and CI/CD tools (e.g., Terraform, CircleCI).
Responsibilities:
- Build and maintain secure, high-performance data pipelines.
- Design data solutions that support both operational (OLTP) and analytical (OLAP) use cases.
- Contribute to our AI-driven data strategy.
- Integrate third-party and internal data sources.
- Collaborate with product teams on data contracts, SLAs, and transformations.
- Use infrastructure-as-code and CI/CD to automate and scale pipelines.
- Support and maintain dashboards and analytics tools (e.g., Power BI, Tableau).
- Monitor data systems to ensure reliability and data quality.
- Stay updated on new data technologies and share innovations with the team.
- Mentor junior engineers and champion best practices in data engineering.
AWS, Docker, PostgreSQL, Python, SQL, Data Analysis, DynamoDB, ETL, Git, MySQL, Tableau, Apache Kafka, Data engineering, Pandas, Spark, CI/CD, Linux, DevOps, Terraform, Microservices, Data modeling
Posted about 2 months ago