- 7+ years of experience in a data engineering or data-oriented software engineering role, building and shipping end-to-end data engineering pipelines.
- 3+ years of experience as a technical lead, providing mentorship and feedback to junior engineers.
- Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
- Extensive experience exposing real-time predictive model outputs to production-grade systems, leveraging large-scale distributed data processing and model training.
- SQL/NoSQL databases and warehouses: Postgres, BigQuery, Bigtable, Materialize, AlloyDB, etc.
- Replication/ELT services: Datastream, Hevo, etc.
- Data transformation services: Spark, Dataproc, etc.
- Scripting and query languages: SQL, Python, Go.
- GCP services and analogous systems: Cloud Storage, Compute Engine, Cloud Functions, Google Kubernetes Engine, etc.
- Data processing and messaging systems: Kafka, Pulsar, Flink.
- Code version control: Git.
- Data pipeline and workflow orchestration tools: Argo, Airflow, Cloud Composer.
- Monitoring and observability platforms: Prometheus, Grafana, ELK stack, Datadog.
- Infrastructure-as-code tools: Terraform, Google Cloud Deployment Manager.
- Other platform tools such as Redis, FastAPI, and Streamlit.
- Excellent organizational, communication, presentation, and collaboration skills when working with technical and non-technical teams.
- Graduate degree in Computer Science, Mathematics, Informatics, Information Systems, or another quantitative field.