- Translate data models and product requirements into scalable data solutions.
- Develop and maintain ETL/ELT pipelines for data ingestion, processing, validation, and loading across AWS.
- Build and manage data workflows using AWS Glue, Lambda, and Step Functions.
- Work within structured S3 Raw, Curated, and Consumption zones.
- Participate in establishing data standards, naming conventions, and metadata practices.
- Incorporate AI-assisted development tools for data scrubbing, anomaly detection, and transformation.
- Use Terraform to define and manage data infrastructure.
- Troubleshoot data issues, performance bottlenecks, and pipeline failures.
- Implement data validation, monitoring, and quality checks (see the Raw-to-Curated sketch after this list).
- Load, maintain, and optimize datasets in Redshift Serverless, Aurora MySQL, DynamoDB, and Timestream.
- Write performant SQL queries, stored procedures, and database schemas, and tune them as data volumes grow (see the Redshift Data API sketch below).
- Monitor, manage, and optimize alerting systems (Sentry, Slack integrations); a minimal Slack webhook sketch also follows the list.
- Create, manage, and improve infrastructure-as-code scripts and Terraform templates.
- Collaborate with the Data Team to mature data practices and integration pipeline strategy.
- Collaborate with the Product and Engineering Teams to implement system improvements.
- Help manage and coordinate production deployments and production support.
- Conduct code reviews, support Engineering Teams with backend best practices, and maintain documentation.
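
As a concrete illustration of the validation and zone-promotion duties above, here is a minimal Python sketch of a check that gates an object's move from the Raw zone to the Curated zone. The bucket names, file format, and required columns are assumptions for illustration; a production Glue job would more likely use DynamicFrames, partitioning, and job bookmarks rather than plain boto3.

```python
"""Minimal sketch: promote a CSV object from Raw to Curated only if it
passes basic structural checks. Bucket and column names are hypothetical."""
import csv
import io

import boto3

s3 = boto3.client("s3")

REQUIRED_COLUMNS = {"event_id", "event_time", "device_id"}  # assumed schema


def promote_if_valid(raw_bucket: str, curated_bucket: str, key: str) -> bool:
    """Copy an object to the Curated zone if it validates; otherwise
    leave it in Raw for triage and return False."""
    body = s3.get_object(Bucket=raw_bucket, Key=key)["Body"].read()
    reader = csv.DictReader(io.StringIO(body.decode("utf-8")))

    # Structural check: all required columns must be present in the header.
    header = set(reader.fieldnames or [])
    if not REQUIRED_COLUMNS.issubset(header):
        return False

    # Row-level check: reject files containing rows with an empty key.
    for row in reader:
        if not row["event_id"]:
            return False

    s3.put_object(Bucket=curated_bucket, Key=key, Body=body)
    return True
```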
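
For the Redshift Serverless responsibilities, the sketch below submits a parameterized query through the Redshift Data API, which is one common pattern when querying a serverless workgroup from Lambda or Step Functions. The workgroup, database, table, and column names here are hypothetical.

```python
"""Sketch: run a parameterized query against Redshift Serverless via the
Redshift Data API. Workgroup, database, and table names are assumptions."""
import boto3

rsd = boto3.client("redshift-data")


def recent_failures(workgroup: str, since: str) -> str:
    """Submit the query asynchronously and return the statement id;
    callers poll describe_statement and fetch get_statement_result."""
    resp = rsd.execute_statement(
        WorkgroupName=workgroup,      # Redshift Serverless workgroup
        Database="analytics",         # assumed database name
        Sql=(
            "SELECT pipeline, count(*) AS failures "
            "FROM pipeline_runs "
            "WHERE status = 'FAILED' AND run_date >= :since "
            "GROUP BY pipeline"
        ),
        Parameters=[{"name": "since", "value": since}],
    )
    return resp["Id"]
```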
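
And for the alerting item, a small standard-library sketch of posting a pipeline-failure notice to a Slack incoming webhook. The webhook URL and message wording are configuration choices, not anything prescribed by the role; Sentry integration would typically go through its own SDK instead.

```python
"""Sketch: send a pipeline-failure alert to a Slack incoming webhook.
The webhook URL is supplied by configuration; payload follows Slack's
documented incoming-webhook JSON format."""
import json
import urllib.request


def alert_failure(webhook_url: str, pipeline: str, error: str) -> None:
    """POST a short failure message; Slack responds with plain 'ok'."""
    payload = {"text": f":rotating_light: {pipeline} failed: {error}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```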