- 5-8 years of data engineering experience
- Proven track record in building production-scale ETL/ELT pipelines, data warehousing, and analytics infrastructure
- Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field, or equivalent professional experience
- Professional experience with cloud data warehouses (ClickHouse, BigQuery, Snowflake, Redshift)
- Proficiency in Python and common libraries (pandas, dask, pyarrow, pytest)
- Experience with workflow orchestration tools (Dagster, Airflow, or similar)
- Experience with stream processing technologies (Kafka, Flink, or similar)
- Knowledge of data modeling, dimensional modeling, and schema design principles
- Experience managing cloud infrastructure, containerization, and CI/CD pipelines
- Comfortable with infrastructure as code and deployment automation
- Excellent communicator
- Self-motivated and able to work independently