- Understand, format, and prepare data for analytics and data-science processes.
- Design, build, and optimize scalable ETL/ELT pipelines for batch and streaming data.
- Collaborate with analysts to understand data needs and ensure accessible, well-modeled data sets.
- Dive deep into system metrics and usage patterns to identify opportunities for FinOps-driven cost savings.
- Manage data infrastructure on GCP (BigQuery, Cloud Composer, Vertex AI, Kubernetes, etc.).
- Automate infrastructure provisioning using Pulumi or Terraform.
- Set up data quality monitoring, alerting, and logging systems.
- Collaborate with data scientists and ML engineers to productionize models and build supporting pipelines.
- Continuously improve the performance, scalability, and cost-efficiency of data workflows.