- Bachelor's degree (or equivalent) in Computer Science, Engineering, or a related field
- 4+ years of hands-on experience building and deploying data pipelines in Python
- Proven expertise with Apache Airflow (DAG development, scheduler tuning, custom operators)
- Strong knowledge of Apache Spark (Spark SQL, DataFrames, performance tuning)
- Deep SQL skills
- Professional experience deploying cloud-native architectures on AWS, including services like S3, EMR, EKS, IAM, and Redshift
- Familiarity with secure cloud environments and experience implementing FedRAMP/FISMA controls
- Experience deploying applications and data workflows on Kubernetes, preferably EKS
- Infrastructure-as-Code proficiency with Terraform or CloudFormation
- Skilled in GitOps and CI/CD practices using Jenkins, GitLab CI, or similar tools
- Excellent verbal and written communication skills
- Willingness and ability to travel up to 25% to client sites as needed
- Active TS/SCI clearance required