- Design, develop, and maintain high-performance data pipelines using Python, SQL, dbt, and Snowflake on GCP.
- Advance the code quality, test coverage, and maintainability of data pipelines.
- Champion and implement software engineering best practices, including code reviews, testing methodology, CI/CD, and documentation.
- Support the adoption of data quality tools and practices.
- Research, evaluate, and recommend new technologies for the data platform.
- Contribute to the data architecture and design of the data warehouse.
- Collaborate with software engineering teams to define data structures and streamline ingestion.
- Work with stakeholders to understand their data needs and translate them into technical requirements.
- Troubleshoot and resolve complex data pipeline issues.
- Contribute to the development and maintenance of CI/CD pipelines for data infrastructure.
- Participate in an on-call rotation supporting critical data pipelines.
- Identify and address inefficiencies in data engineering processes.