- Designing and implementing scalable ETL/ELT data pipelines that handle large volumes of data, supporting the global expansion of our patent-pending forensics products used by thousands of customers.
- Driving the expansion and management of our data processing infrastructure across multiple AWS regions.
- Analyzing the current data architecture for security, scalability, performance, and data quality, and implementing appropriate solutions.
- Developing and deploying serverless data processing solutions on AWS Lambda, orchestrated with Apache Airflow (see the first sketch below).
- Designing and optimizing data architecture in relational databases (e.g., AWS RDS) and serverless query engines (e.g., AWS Athena), as illustrated in the second sketch below.
- Collaborating with cross-functional teams, including product managers, UX designers, and other engineers, to understand data processing requirements and translate them into technical solutions.
- Writing clean, maintainable, and well-documented code that follows best practices and coding standards, with data quality and security top of mind.
- Implementing observability and alerting for data pipelines to proactively identify and resolve issues (see the third sketch below).
- Conducting code and data pipeline reviews, providing constructive feedback, and mentoring peers to ensure data quality and continuous improvement.
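A minimal sketch of the Lambda-plus-Airflow orchestration pattern referenced above, assuming an Airflow 2.x deployment (TaskFlow API) with boto3 credentials configured. The DAG name, Lambda function name `forensics-artifact-normalizer`, and S3 prefix are hypothetical placeholders, not the actual production pipeline.

```python
import json

import boto3
import pendulum
from airflow.decorators import dag, task


@dag(
    schedule="@hourly",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def forensics_etl():
    @task
    def invoke_normalizer(batch_prefix: str) -> dict:
        """Hand a batch of raw artifacts to a Lambda function for normalization."""
        client = boto3.client("lambda")
        response = client.invoke(
            FunctionName="forensics-artifact-normalizer",  # hypothetical name
            Payload=json.dumps({"prefix": batch_prefix}).encode("utf-8"),
        )
        # Lambda returns its result as a streaming body; decode it for downstream tasks.
        return json.loads(response["Payload"].read())

    invoke_normalizer("raw/hourly/")  # hypothetical S3 prefix


forensics_etl()
```

Keeping the heavy lifting in Lambda and the scheduling/retry logic in Airflow separates compute scaling from orchestration, which is one common way to structure this kind of serverless pipeline.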
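A second sketch, illustrating one common Athena optimization: a partition-projected external table over S3, created via boto3. The database, table, and bucket names are hypothetical, and this assumes the underlying data is laid out as `dt=YYYY-MM-DD/` Parquet prefixes.

```python
import boto3

# Partition projection lets Athena derive partitions from the key layout,
# avoiding per-partition metadata updates as new daily data arrives.
DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS forensics.case_events (
    case_id string,
    event_type string,
    payload string
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://example-forensics-lake/case_events/'
TBLPROPERTIES (
    'projection.enabled' = 'true',
    'projection.dt.type' = 'date',
    'projection.dt.range' = '2024-01-01,NOW',
    'projection.dt.format' = 'yyyy-MM-dd',
    'storage.location.template' = 's3://example-forensics-lake/case_events/dt=${dt}/'
)
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=DDL,
    QueryExecutionContext={"Database": "forensics"},  # hypothetical database
    ResultConfiguration={
        "OutputLocation": "s3://example-forensics-lake/athena-results/"
    },
)
```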
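A third sketch of the observability-and-alerting item, assuming an Airflow 2.x deployment and an existing SNS topic; the topic ARN is a hypothetical placeholder.

```python
import boto3


def notify_on_failure(context: dict) -> None:
    """Airflow failure callback: publish task failure details to an SNS topic."""
    ti = context["task_instance"]
    message = (
        f"Pipeline failure: dag={ti.dag_id} task={ti.task_id} "
        f"run={context['run_id']} try={ti.try_number}"
    )
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:data-pipeline-alerts",  # hypothetical
        Subject="Airflow task failure",
        Message=message,
    )


# Attach via default_args so every task in a DAG inherits the callback:
# default_args = {"on_failure_callback": notify_on_failure}
```

Routing the alert through SNS keeps the pipeline code decoupled from the notification channel (email, PagerDuty, Slack, etc.), which can be swapped at the subscription level.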