- 4+ years in a data engineering role supporting incoming/outgoing products for internal and external customers
- 2+ years of demonstrated expertise designing end-to-end data pipelines in cloud frameworks (such as GCP, AWS, or Azure) against requirements from multiple stakeholders
- 2+ years of experience writing efficient, testable, and readable code in Python or a similar language
- 2+ years of experience building streaming data ingestion pipelines
- 1+ year of ML (Machine Learning) support and implementation, or MLOps
- Deep SQL experience with columnar databases such as Google BigQuery, Snowflake, or Amazon Redshift
- Demonstrated experience with AI coding assistants
- Experience building CI/CD pipelines for data processing using tools such as Docker, CircleCI, dbt, and git
- Ability to manage infrastructure using IaC tools such as Terraform or Pulumi
- Experience using data orchestration tools such as Apache Airflow to manage SLOs and processing dependencies
- Experience creating alerts and monitoring pipelines that contribute to overall data governance
- Experience with containerization and container orchestration technologies, including cloud architecture and implementation features (single- and multi-tenancy, orchestration, elastic scalability)
- Familiarity with standard IT security practices such as identity and access management (IAM), data protection, encryption, and certificate and key management