- Translate data models and product requirements into scalable data solutions.
- Develop and maintain ETL/ELT pipelines for data ingestion, processing, validation, and loading across AWS.
- Build and manage data workflows using AWS Glue, Lambda, and Step Functions.
- Work within the S3 Raw, Curated, and Consumption zones.
- Establish data standards, naming conventions, and metadata practices.
- Incorporate AI-assisted development tools for data scrubbing and anomaly detection.
- Use Terraform to define and manage data infrastructure.
- Troubleshoot data issues, performance bottlenecks, and pipeline failures.
- Implement data validation, monitoring, and quality checks.
- Load, maintain, and optimize datasets in Redshift Serverless, Aurora MySQL, DynamoDB, and Timestream.
- Write performant SQL queries, stored procedures, and database schemas, and optimize existing ones.
- Monitor, manage, and optimize alerting systems (Sentry, Slack).
- Create, manage, and improve infrastructure-as-code scripts and Terraform templates.
- Collaborate with the Data Team on integration pipeline strategy.
- Collaborate with Product and Engineering Teams on system improvements.
- Participate in managing and coordinating production deployments and support.
- Conduct code reviews, support Engineering Teams with backend best practices, and maintain documentation.
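To illustrate the data-validation and quality-check duties above, here is a minimal sketch of a row-level check such as might run in a Lambda or Glue step before records are promoted from the Raw zone to the Curated zone. The field names (`device_id`, `reading`, `ts`) and the quarantine split are hypothetical assumptions for illustration, not details from this posting.

```python
from datetime import datetime

# Hypothetical required schema for an incoming record; the real
# pipeline would derive this from the team's data standards.
REQUIRED_FIELDS = {"device_id", "reading", "ts"}

def validate_record(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if not isinstance(record["reading"], (int, float)):
        errors.append("reading is not numeric")
    try:
        # Accept ISO-8601 timestamps only.
        datetime.fromisoformat(record["ts"])
    except (TypeError, ValueError):
        errors.append("ts is not ISO-8601")
    return errors

def split_valid_invalid(records: list) -> tuple:
    """Partition a batch: valid records move on to Curated, invalid ones to quarantine."""
    valid, invalid = [], []
    for r in records:
        (valid if not validate_record(r) else invalid).append(r)
    return valid, invalid
```

In practice, a check like this would feed the monitoring and alerting duties as well: the size of the quarantine partition is a natural metric to alert on (e.g. via Sentry or Slack) when it spikes.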