- 3+ years of experience developing and deploying production-grade Python software.
- 3+ years of experience with Python and high-performance data libraries such as Polars and Pandas.
- Proficiency with JavaScript, SQL, and KQL.
- Experience with Extract, Transform, Load (ETL), data streaming, and reconciliation.
- Experience building and maintaining deployment pipelines, including DevOps tools such as Ansible and containerization with Docker.
- Proficiency with cloud platforms (AWS or Azure) for deploying and scaling data systems.
- Experience with Azure, particularly Lakehouse and Eventhouse architectures, is highly desired.
- Experience with relevant infrastructure and tools, including NATS, Power BI, Apache Spark/Databricks, and PySpark.
- Hands-on experience with data warehousing methodologies and optimization libraries (e.g., OR-Tools).
- Experience with log analysis, forensic debugging, and system performance tuning.
- Exposure to cloud-based systems.
- Familiarity with Agile/SCRUM methodologies in collaborative development workflows.