- Build and maintain high-throughput data pipelines for batch and streaming workloads.
- Co-design data models with backend teams and validate them together during the kickoff phase.
- Ensure sustainable architecture and data integrity, and educate stakeholders on both.
- Work closely with ML engineers to provide clean, ML-ready datasets.
- Guarantee data quality, availability, and ordering at volumes of 300k+ database entries per second, and engineer ways to scale even further.
- Document pipelines, transformations, and architecture clearly.