Design, build, and maintain robust data pipelines for ingestion, transformation, and analytics using Spark, Airflow, and streaming technologies such as Kafka and Kinesis. Develop and support APIs, utilities, and microservices that enable data access for internal applications and machine learning products. Write reliable, production-ready code to transform and deliver data for both batch and real-time processing. Build and maintain data models, metrics, and feature stores that drive recommendations, personalization, and analytics. Ensure data quality, observability, and lineage while collaborating with cross-functional teams to deliver impactful, data-driven solutions.