Degree in Computer Science, Engineering, Mathematics, or equivalent experience
Experience managing stakeholders and collaborating with customers
Strong written and verbal communication skills required
3+ years working with relational databases and query languages
3+ years building data pipelines in production, with the ability to work across structured, semi-structured, and unstructured data
3+ years of data modeling (e.g., star schema, entity-relationship)
3+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar programming languages
Ability to manage an individual workstream independently
Expertise in software engineering concepts and best practices
DevOps experience preferred
Experience working with cloud data warehouses (Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse) preferred
Experience working with cloud ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.) preferred
Experience working with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) preferred
Experience working with Apache Spark preferred
Experience preparing data for analytics and following a data science workflow to drive business results preferred
Consulting experience strongly preferred
Willingness to travel