- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2–4 years of experience building and operating large-scale data systems supporting analytics and ML workloads.
- Proficiency in Python and SQL, with experience in PySpark, pandas, or similar libraries.
- Experience with dbt.
- Experience with modern data warehousing and lakehouse platforms, preferably Databricks.
- Hands-on experience with workflow orchestration tools such as Airflow, Dagster, or Prefect.
- Strong understanding of data modeling, ETL design, and distributed data systems.
- Experience with AWS data and compute services (S3, Lambda, ECS, CloudWatch, etc.) or equivalent services on other cloud platforms.
- Familiarity with MLOps concepts.
- Experience using Infrastructure as Code, preferably Terraform.
- Excellent problem-solving, collaboration, and communication skills.