- Master’s degree in Data Engineering, Computer Science, or a related technical field.
- 3–5 years of hands-on experience designing and maintaining large-scale data pipelines.
- Proven experience working with structured and unstructured financial data or other complex data domains.
- Experience collaborating in finance, consulting, or private equity environments is a must.
- Advanced proficiency in Python and SQL for data processing and transformation.
- Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP, or Databricks) and data workflow orchestration tools (e.g., Airflow, Prefect, or Dagster).
- Strong understanding of data modeling, schema design, and API integration.
- Experience with big data frameworks (e.g., Spark, Kafka, Delta Lake) is highly desirable.
- Familiarity with version control (Git) and DevOps practices (CI/CD pipelines).
- Strong communication skills and the ability to work cross-functionally with non-technical stakeholders.
- Fluency in English; French is a plus.