- 5-8 years of professional experience building and operating production data systems.
- Strong hands-on expertise in Python; ability to write clean, idiomatic, and maintainable Python code.
- Strong foundation in software engineering best practices (code reviews, documentation, testing, CI/CD).
- 6+ years of experience designing and modeling data in relational and non-relational databases.
- Experience working with modern data warehouses (dimensional modeling, ELT, query optimization).
- Hands-on experience building, scheduling, and monitoring batch data pipelines using Airflow or comparable tools.
- Strong understanding of data architecture fundamentals in cloud-based systems.
- Practical experience with AWS (core data and infrastructure services).
- Production experience with Snowflake (schema design, performance tuning, cost-aware usage).
- Experience working with sensitive or regulated data and compliance requirements (HIPAA, GDPR, CCPA).
- Ability to independently own and deliver data engineering projects with minimal supervision.
- Comfortable supporting high-priority data requests and operational issues.
- Proficiency with Python, SQL (MySQL/Snowflake), Airflow, AWS, and Terraform.
- Nice-to-have: experience with distributed data processing tools such as Apache Spark.
- Nice-to-have: familiarity with serverless architectures (e.g., AWS Lambda).