- Bachelor’s degree in data analytics, data science, statistics, information technology, or a related field.
- At least 4 years of SQL experience, writing complex queries and building scalable ETL solutions.
- At least 3 years of Python experience, building and automating data pipelines and processes.
- At least 2 years of experience in data modeling, including building production-ready data structures and transformations.
- Experience with Snowflake, dbt, or similar data warehousing and modeling platforms.
- Experience developing test automation and workflow orchestration using tools such as Airflow, Dagster, GitLab CI/CD, or Jenkins.
- Experience reporting on and querying cross-functional HR, People Analytics, or operational data.
- Experience with Git for version control and collaborative development.
- Familiarity with cloud environments (AWS preferred) and modern data engineering tools, operating in a managed DevOps framework with primary responsibility for the application and data layers.
- Knowledge of payroll and benefits modules preferred.
- Strong verbal and written communication skills, with the ability to influence and manage cross-functional stakeholders.
- Self-starter with the ability to work without constant supervision.
- Ability to work quickly and think logically, especially under pressure.
- Thrives on learning and growth opportunities; desires new challenges and enjoys a good puzzle.
- Detail-oriented, but able to pivot and re-prioritize efforts as required.
- Ability to use diplomacy and tact when handling problems.
- Flexible and able to adapt to change in a positive manner.