- 8+ years of experience as a data engineer or in related roles
- Minimum 2-3 years of experience with the Databricks platform (clusters, workspaces, security, migrations, ETL, integrations)
- Strong knowledge of Apache Spark (PySpark, query optimization)
- Proficiency in Python for data engineering (designing and implementing ETL/ELT pipelines)
- Practical experience with Delta Lake and data governance concepts (Unity Catalog or similar)
- Practical experience with Delta Live Tables (DLT) or similar tools
- Experience with Microsoft Azure (Data Factory, Synapse, Logic Apps, Data Lake) and/or AWS (Redshift, Athena, Glue)
- Proficiency in SQL for schema design, query optimization, and business logic implementation
- Ability to take initiative and work independently
- English language proficiency for effective team communication