- Design, build, and maintain scalable ETL/ELT pipelines using Databricks and AWS
- Improve ingestion pipeline quality, reliability, scalability, and governance
- Develop and optimize core data models and foundational data tables
- Build analytics-ready datasets to support player insights and operational reporting
- Implement data governance, data quality, lineage, and observability practices
- Collaborate with product, analytics, and business stakeholders
- Optimize large-scale data processing workflows for performance and cost
- Contribute to the unification of fragmented data ecosystems
- Build and maintain reliable orchestration workflows and scheduling systems
- Participate in architectural discussions around scalability and modernization
Skills: AWS, Python, SQL, +5 more