- A Bachelor's degree (or equivalent) in Computer Science or a related field.
- 8+ years of hands-on experience architecting distributed systems.
- Exceptional programming skills in Python and proficiency in SQL or Spark SQL.
- In-depth experience with data stores such as Iceberg, Trino, BigQuery, and StarRocks.
- Proficiency in tools like Airflow and dbt for data pipeline orchestration and transformation.
- Expertise in data processing technologies such as Spark, Kafka, and Flink.
- Competence in deploying and monitoring cloud infrastructure with tools like Docker and Terraform.
- Proven ability to manage large-scale datasets.