
Staff Data Engineer

Posted 5 days ago


💎 Seniority level: Staff, 7+ years

📍 Location: Latin America (EST)

🔍 Industry: AI economy, workforce development

🏢 Company: Correlation One 👥 251-500 💰 $5,000,000 Series A almost 7 years ago | Information Services · Analytics · Information Technology

⏳ Experience: 7+ years

🪄 Skills: PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Git, Kafka, MongoDB, Data engineering, CI/CD, Terraform, Microservices, Scala

Requirements:
  • 7+ years in a Data Engineering role with experience in data warehouses and ETL/ELT.
  • Advanced SQL experience and skills in database design.
  • Familiarity with pipeline monitoring and cloud environments (e.g., GCP).
  • Experience with APIs, Airflow, dbt, Git, and creating microservices.
  • Knowledge of implementing CDC with technologies like Kafka.
  • Solid understanding of software development practices and agile methodologies.
  • Proficiency in object-oriented scripting languages such as Python or Scala.
  • Experience with CI/CD processes and source control tools like GitHub.
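As a concrete illustration of the CDC requirement above, the sketch below applies a stream of change events (insert/update/delete) to an in-memory table keyed by primary key. The event shape is hypothetical; a real pipeline would consume Debezium-style messages from Kafka rather than a Python list.

```python
# Illustrative CDC sketch: apply insert/update/delete change events
# to a table keyed by "id". Event format is invented for this example;
# production pipelines would read such events from a Kafka topic.

def apply_cdc_events(table: dict, events: list) -> dict:
    """Apply change events in order; inserts/updates upsert the row image."""
    for ev in events:
        op, row = ev["op"], ev["row"]
        if op in ("insert", "update"):
            table[row["id"]] = row        # upsert the latest row image
        elif op == "delete":
            table.pop(row["id"], None)    # remove by primary key
    return table

events = [
    {"op": "insert", "row": {"id": 1, "name": "a"}},
    {"op": "update", "row": {"id": 1, "name": "b"}},
    {"op": "insert", "row": {"id": 2, "name": "c"}},
    {"op": "delete", "row": {"id": 2}},
]
state = apply_cdc_events({}, events)
```

The key design point, and the reason CDC ordering matters, is that later events overwrite earlier ones for the same key, so the table converges to the source's current state.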
Responsibilities:
  • Act as the data lake subject-matter expert, shaping the technical vision.
  • Design a well-architected data lakehouse.
  • Collaborate with architects to design the ELT process from data ingestion to analytics.
  • Create standard frameworks for software development.
  • Mentor junior engineers and support development teams.
  • Monitor database performance and adhere to data engineering best practices.
  • Develop schema design for reports and analytics.
  • Engage in hands-on development across the technical stack.

Related Jobs

🔥 Staff Data Engineer | GenAI
Posted about 2 months ago

📍 Brazil

🧭 Full-Time

🔍 Corporate wellness

Requirements:
  • Experience in creating robust and scalable data models from business requirements.
  • Collaborative skills for collecting, preparing, and analyzing complex business data.
  • Familiarity with machine learning and AI concepts, particularly in generative AI.
  • Knowledge of data pipeline tools and technologies like Airflow and Kafka.
  • Ability to collaborate with various teams to understand data needs and develop effective data pipelines.
  • Comfort with big data concepts to ingest, process, and provide data to business analysts and product teams.
  • Strong motivation to contribute to a data-driven culture.
Responsibilities:
  • Develop and maintain data models and structures for efficient querying and analysis.
  • Design, develop, and maintain data pipelines to transform and process large volumes of data from various sources.
  • Implement automated data quality checks to ensure data quality and consistency.
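The automated data quality checks mentioned above might look like the minimal sketch below: validating rows for missing or duplicate keys and out-of-range values before load. Column names (`id`, `amount`) and rules are invented for illustration.

```python
# Hypothetical data quality check: flag missing ids, duplicate ids,
# and negative amounts. Real pipelines would typically use a framework
# such as dbt tests or Great Expectations instead of hand-rolled checks.

def run_quality_checks(rows: list) -> list:
    """Return (row_index, problem) tuples; an empty list means clean data."""
    problems = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("id") is None:
            problems.append((i, "missing id"))
        elif row["id"] in seen_ids:
            problems.append((i, "duplicate id"))
        else:
            seen_ids.add(row["id"])
        if row.get("amount") is not None and row["amount"] < 0:
            problems.append((i, "negative amount"))
    return problems

issues = run_quality_checks([
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 1, "amount": -2.0},
])
```

Returning structured problem tuples rather than raising on the first failure lets a pipeline log every violation in one pass and decide per-batch whether to quarantine or fail.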

Python, SQL, Apache Airflow, Artificial Intelligence, Kafka, Machine Learning, Data modeling
