Senior Google Cloud Data Engineer

Brazil · Full-Time · Senior
Salary not disclosed

Job Details

Languages
Advanced English proficiency.
Required Skills
Python, SQL, GCP, CI/CD, Data modeling, BigQuery, dbt, Looker

Requirements

  • Strong hands-on experience in data engineering using Python for automation, pipeline development, and data processing.
  • Advanced SQL expertise, including complex queries, nested structures, and analytical functions.
  • Deep experience with Google Cloud Platform, especially BigQuery, Dataflow (Apache Beam), and Pub/Sub.
  • Proven ability to build scalable batch and streaming pipelines with high reliability and performance.
  • Strong understanding of data modeling, transformation frameworks, and modern data architecture principles.
  • Experience implementing dbt workflows, CI/CD pipelines, and data governance practices.
  • Expertise in Looker, including LookML, semantic modeling, explores, and dashboard development for diverse audiences.
  • Strong knowledge of data quality frameworks, monitoring systems, and production-grade data operations.
  • Experience optimizing cloud cost and performance in large-scale distributed data environments.
  • Ability to work independently while collaborating effectively in agile, cross-functional teams.
  • Strong communication skills with the ability to translate technical concepts into business insights.

Responsibilities

  • Design, build, and maintain scalable batch and real-time data pipelines using GCP services such as BigQuery, Dataflow, and Pub/Sub.
  • Develop robust ETL/ELT workflows, ensuring high availability, fault tolerance, and data accuracy across distributed systems.
  • Implement and maintain dbt transformation models, CI/CD pipelines, and structured data contracts for curated datasets and analytical marts.
  • Optimize BigQuery performance through advanced query tuning, partitioning, clustering, and cost-efficient architecture design.
  • Build and monitor data quality frameworks using tools such as Great Expectations, including freshness SLOs and reconciliation checks.
  • Develop event-driven architectures and streaming pipelines using windowing, triggers, and watermarking strategies.
  • Design and maintain LookML semantic models, ensuring consistent metrics and governance across the organization.
  • Build impactful dashboards in Looker for both operational monitoring and executive reporting use cases.
  • Perform root cause analysis to resolve data, pipeline, or performance issues and implement permanent fixes.
  • Implement platform reliability controls including retries, dead-letter queues, disaster recovery runbooks, and security validation.
  • Collaborate with cross-functional teams to ensure alignment between data engineering, analytics, and business needs.
  • Document systems, pipelines, and architectural decisions to ensure transparency and maintainability.