- Design, build, and support scalable real-time and batch data pipelines using PySpark and Spark Structured Streaming on Databricks.
- Implement process automation and end-to-end workflows following the Bronze → Silver → Gold (medallion) architecture and Delta Lake best practices (see the illustrative sketch after this list).
- Handle event-driven ingestion with Kafka and integrate it into automated pipelines.
- Orchestrate workflows using Databricks Workflows/Jobs and CI/CD automation.
- Implement strong monitoring, observability, and alerting for reliability and performance (Databricks metrics, dashboards).
- Collaborate cross-functionally in agile sprints with Product, Analytics, and Data Science teams.
- Translate enterprise logical data models into optimized, performance-tuned physical implementations.
- Write modular, version-controlled code in Git; contribute to code reviews and enforce quality standards.
- Implement robust logging, error handling, and data quality validation across automation layers.
- Use relevant AWS services (S3, IAM, Secrets Manager) and DevOps practices.
- Promote best practices through documentation, knowledge sharing, tech talks, and training.
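
For orientation, the following is a minimal sketch of the kind of Kafka → Bronze → Silver streaming flow described above, using Spark Structured Streaming and Delta Lake. The broker address, topic name, storage paths, and event schema are illustrative assumptions only, not project-specific values.

```python
# Minimal medallion-flow sketch: Kafka -> Bronze -> Silver on Delta Lake.
# Broker, topic, paths, and schema are assumed placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw Kafka events as-is, adding ingestion metadata.
bronze_stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                      # assumed topic
    .load()
    .withColumn("ingest_ts", F.current_timestamp())
)

bronze_query = (
    bronze_stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/bronze_events")  # assumed path
    .outputMode("append")
    .start("/mnt/delta/bronze/events")                               # assumed path
)

# Silver: parse raw payloads against an expected schema and apply a basic
# data quality gate before writing the curated table.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

silver_query = (
    spark.readStream.format("delta").load("/mnt/delta/bronze/events")
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"), "ingest_ts")
    .select("e.*", "ingest_ts")
    .where(F.col("event_id").isNotNull())  # drop records failing validation
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/silver_events")  # assumed path
    .outputMode("append")
    .start("/mnt/delta/silver/events")                               # assumed path
)
```

In practice these streams would be deployed and scheduled via Databricks Workflows/Jobs, with checkpoint locations, metrics, and alerting wired into the monitoring and CI/CD practices listed above.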