- 7+ years' experience in data engineering or software development, with a focus on data architecture.
- Extensive experience (3-5+ years) with Databricks and the Medallion Architecture.
- Expertise in Python, including developing production-grade data pipelines and reusable automation scripts.
- Proficiency in PySpark and SQL, with experience integrating data sources into Databricks.
- Expertise in cloud platform services (AWS, Azure, GCP) and their data storage solutions.
- In-depth experience with Delta Lake for ACID transactions and data processing.
- Strong understanding of data pipelines and transformation workflows.
- Experience building automation tools or reusable scripts.
- Knowledge of Alteryx and Tableau integration is a plus.