- Bachelor's degree (or equivalent) in Computer Science or a related field.
- 5+ years of hands-on experience architecting distributed systems.
- Exceptional programming skills in Python.
- Fluency in SQL or SparkSQL.
- Experience with data stores such as Iceberg, Trino, BigQuery, StarRocks, and Citus.
- Proficiency with data pipeline and workflow orchestration tools such as Airflow and DBT.
- Expertise in data processing technologies and streaming workflows, including Spark, Kafka, and Flink.
- Competence in deploying and monitoring infrastructure on public cloud platforms, using tools such as Docker, Terraform, Kubernetes, and Datadog.
- Proven ability to load, query, and transform large datasets.