- 5+ years of experience in data engineering, big data, or similar roles.
- Strong SQL skills and hands-on experience with databases such as BigQuery, Spanner, or equivalents.
- Proficiency with GCP services (Dataflow, Pub/Sub, Cloud Storage).
- Experience building ETL/ELT pipelines and working with data for analytics or targeting use cases.
- Experience with container tools such as Docker and Kubernetes.
- Familiarity with event-streaming platforms (Kafka, Pub/Sub).
- Knowledge of data modeling, query optimization, and performance tuning.
- Proficiency in at least one programming language commonly used for data work (e.g., Python, Go, or Java).