
Data Engineer

Posted about 4 hours ago


💎 Seniority level: Senior, 3+ years

📍 Location: United States

💸 Salary: 130,000 - 200,000 USD per year

🔍 Industry: Healthcare

🏢 Company: Datavant

⏳ Experience: 3+ years

🪄 Skills: Python, SQL, Apache Airflow, ETL, Snowflake, Data engineering, Data modeling

Requirements:
  • 3+ years of experience as a data engineer, analytics engineer, or data scientist.
  • 1+ year of experience building and maintaining an enterprise-scale data lake and/or data warehouse.
  • Strong collaborative and communication skills.
  • Mastery of ANSI SQL and data modeling best practices.
  • Deep experience with data warehouse technologies like Snowflake, BigQuery, or Redshift.
  • Expertise in Python.
Responsibilities:
  • Deliver a world-class data platform from the ground up.
  • Plan and delegate complex projects with broad scope.
  • Mentor and grow early career developers or engineers.
  • Facilitate technical discussions to solve problems effectively.
  • Engage with stakeholders to meet their needs.
  • Build, upgrade, and maintain data-related infrastructure and monitoring across multiple clouds (see the pipeline sketch after this list).
  • Write performant, readable, and reusable code.
  • Review code to ensure high technical quality.
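
The skills list above centers on Python, SQL, Airflow, ETL, and Snowflake. As a minimal sketch of what such a pipeline can look like, assuming Airflow 2.x; the DAG id, callables, and staging logic below are invented placeholders, not Datavant's actual stack:

```python
# A minimal daily ETL DAG (Airflow 2.x); all names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: a real task would pull from an API or object storage
    # and stage the file for loading.
    print("extracting raw orders for", context["ds"])


def load_to_snowflake(**context):
    # Placeholder: a real task would issue a COPY INTO via
    # snowflake-connector-python or a provider hook.
    print("loading staged file into Snowflake")


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load
```

In practice, the load step typically stages files to cloud storage and issues a COPY INTO against Snowflake rather than printing.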

Related Jobs

🔥 Data Engineer
Posted about 9 hours ago

📍 United States

💸 150,000 - 190,000 USD per year

🔍 Healthcare

🏢 Company: Oshi Health 👥 51-100 💰 $60,000,000 Series C 4 months ago (Medical, Mobile, Health Care)

Requirements:
  • Hold a BS/BA degree in Computer Science, Math, Physics, or a related field, or equivalent experience.
  • 3+ years of data development experience in startup environments.
  • Ability to understand complex requirements and develop scalable solutions.
  • Advanced SQL skills and knowledge of data warehousing standards.
  • Proficiency in programming languages such as Golang or Python.
  • Familiarity with dbt (Data Build Tool) for warehouse transformations.
  • Knowledge of cloud environments and FHIR standards is a plus.
  • Understanding of data security and HIPAA compliance is advantageous.
Responsibilities:
  • Contribute to Oshi's existing data warehouse for product, clinical, and strategy teams.
  • Collaborate with marketing and growth teams to build supporting data pipelines.
  • Develop reusable queries, data quality tests, and insights for reporting (see the check sketch after this list).
  • Design and implement complex data models, including real-time analytics.
  • Work across the data stack, including CI/CD pipelines and platform integrations.
  • Support and standardize data governance structures for sensitive client data.
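
A minimal sketch of a reusable data-quality check of the kind listed above, assuming a DB-API-style warehouse connection (e.g., snowflake-connector-python); the table and column names are invented for illustration:

```python
# Reusable warehouse data-quality checks; a non-zero count means a failure.
# Table and column names below are hypothetical.
CHECKS = {
    "no_null_member_ids": "SELECT COUNT(*) FROM visits WHERE member_id IS NULL",
    "no_future_visit_dates": "SELECT COUNT(*) FROM visits WHERE visit_date > CURRENT_DATE",
}


def run_checks(conn):
    """Run each check query against the warehouse; return failing checks."""
    failures = {}
    cur = conn.cursor()
    for name, sql in CHECKS.items():
        cur.execute(sql)
        bad_rows = cur.fetchone()[0]
        if bad_rows:
            failures[name] = bad_rows
    return failures
```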

🪄 Skills: Python, SQL, Cloud Computing, ETL, Data engineering, CI/CD, Data modeling

🔥 Staff Data Engineer
Posted 1 day ago

📍 United States

💸 131,414 - 197,100 USD per year

🔍 Mental healthcare

🏢 Company: Headspace 👥 11-50 (Wellness, Health Care, Child Care)

Requirements:
  • 10+ years of success in enterprise data solutions and high-impact initiatives.
  • Expertise in platforms like Databricks, Snowflake, dbt, and Redshift.
  • Experience designing and optimizing real-time and batch ETL pipelines.
  • Demonstrated leadership and mentorship abilities in engineering.
  • Strong collaboration skills with product and analytics stakeholders.
  • Bachelor’s or advanced degree in Computer Science, Engineering, or a related field.
Responsibilities:
  • Drive the architecture and implementation of PySpark data pipelines (a sketch follows this list).
  • Create and enforce design patterns in code and schema.
  • Design and lead secure and compliant data warehousing platforms.
  • Partner with analytics and product leaders for actionable insights.
  • Mentor team members on dbt architecture and foster a data-first culture.
  • Act as a thought leader on data strategy and cross-functional roadmaps.
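
As a rough illustration of a batch PySpark pipeline like the one this role owns; the bucket paths and column names are invented for the example, not Headspace's schema:

```python
# Minimal PySpark batch pipeline: read raw events, clean, aggregate, write.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_session_rollup").getOrCreate()

# Hypothetical raw-event location and schema.
events = spark.read.parquet("s3://example-bucket/raw/app_events/")

daily = (
    events
    .filter(F.col("user_id").isNotNull())          # drop anonymous rows
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_user_events/"
)
```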

🪄 Skills: SQL, Cloud Computing, ETL, Snowflake, Data engineering, Data modeling, Data analytics

🔥 Senior Data Engineer
Posted 6 days ago

📍 United States, Canada

🧭 Regular

💸 125,000 - 160,000 USD per year

🔍 Digital driver assistance services

🏢 Company: Agero 👥 1001-5000 💰 $4,750,000 over 2 years ago (Automotive, InsurTech, Information Technology, Insurance)

Requirements:
  • Bachelor's degree in a technical field and 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, and Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures, including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark (see the upsert sketch after this list).
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
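
A sketch of the dimension-table upsert pattern that dimensional modeling in Snowflake usually implies, using snowflake-connector-python and a standard MERGE; the credentials, table, and column names are placeholders:

```python
# Upsert a staged dimension into its target table via Snowflake MERGE.
import snowflake.connector

MERGE_SQL = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email,
  tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

conn = snowflake.connector.connect(
    account="example_account",   # placeholder credentials throughout
    user="example_user",
    password="...",
    warehouse="transform_wh",
    database="analytics",
    schema="marts",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```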

🪄 Skills: AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

🔥 Data Engineer
Posted 10 days ago

📍 California

🧭 Full-Time

💸 145,000 USD per year

🔍 Health Insurance

🏢 Company: Sidecar Health 👥 101-250 💰 $165,000,000 Series D 7 months ago 🫂 Last layoff over 2 years ago (Health Insurance, InsurTech, Insurance, Health Care, FinTech)

Requirements:
  • Master’s degree or foreign degree equivalent in Computer Science or a related field.
  • 1+ years of experience in Data Engineer or Software Engineer roles.
  • Proficiency in SQL and Python, with the ability to write complex SQL statements.
  • Hands-on experience with ETL processes and real-time and batch data processing.
  • Familiarity with Spark, Athena, Docker, and version control systems like Git.
  • Knowledge of secure, scalable, cloud-based architectures compliant with HIPAA or PCI.
  • Experience in creating data visualizations using Tableau or ThoughtSpot.
  • Ability to translate business requirements into scalable software solutions.
Responsibilities:
  • Use SQL and Python on AWS to build ETL jobs and data pipelines for data integration into Snowflake.
  • Leverage DBT to transform data, consolidate records, and create clean data models.
  • Utilize AWS technologies to send reports and support business teams.
  • Containerize and orchestrate data pipelines with Docker and Airflow (see the sketch after this list).
  • Perform data quality checks and ensure data reliability.
  • Develop reports and dashboards using Tableau and ThoughtSpot.
  • Participate in agile development activities.
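
One common way to combine Docker and Airflow is to run each containerized pipeline step with the DockerOperator from the apache-airflow-providers-docker package, sketched below; the image name and dbt command are hypothetical:

```python
# Run a containerized transformation step from an Airflow DAG.
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(
    dag_id="claims_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = DockerOperator(
        task_id="run_dbt_models",
        image="example/dbt-runner:latest",   # placeholder image
        command="dbt run --select claims",   # placeholder command
        docker_url="unix://var/run/docker.sock",
    )
```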

🪄 Skills: AWS, Docker, Python, SQL, ETL, Snowflake, Tableau, Airflow, Spark


📍 United States

💸 124,300 - 186,500 USD per year

🔍 Technology

🏢 Company: SMX 👥 1001-5000 (Cloud Computing, Analytics, Cloud Security, Information Technology, Cyber Security)

Requirements:
  • 2+ years of experience in a related field.
  • Expertise in complex SQL.
  • Knowledge of AWS technologies.
  • Solid understanding of RDBMS concepts (Postgres, Redshift, SQL Server), logical data modeling, and database/query optimization.
  • Familiarity with AWS data migration tools (DMS).
  • Scripting knowledge in Python/Lambda.
  • Ability to obtain and maintain a Public Trust clearance; US Citizenship is required.
  • Strong team collaboration and communication skills.
Responsibilities:
  • Assist the Data Architect and customer in collecting requirements and documenting tasks for maintaining and enhancing the data loading platform (ETL/data pipelines).
  • Implement data loading and quality control activities based on project requirements and customer tickets.
  • Implement CI/CD pipelines related to data warehouse maintenance.
  • Code and implement unique data migration requirements using AWS technologies like DMS and Lambda/Python (see the handler sketch after this list).
  • Implement and resolve issues for user identity and access management to various datasets.
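
A sketch of a small Lambda/Python step of the kind this role describes, assuming an S3 object-created trigger; the bucket layout and tagging scheme are invented, and a production DMS workflow would be more involved:

```python
# Lambda handler: tag each object landed by a (hypothetical) DMS full-load
# task so downstream loaders can find migrated files.
import boto3

s3 = boto3.client("s3")


def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "migration_status", "Value": "loaded"}]},
        )
    return {"processed": len(records)}
```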

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, CI/CD

Posted 11 days ago

📍 United States

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and Analytics

Requirements:
  • Master's degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field plus 3 years of experience in analytics software solutions.
  • A Bachelor's degree in a similar field plus 5 years of experience is also acceptable.
  • 3 years of experience with Python and associated technologies, including Spark, AWS, S3, Java, JavaScript, and Adobe Analytics.
  • Proficiency in SQL for querying and managing data.
  • Experience in analytics programming languages such as Python (with Pandas).
  • Experience in handling large volumes of data and code management tools like Git.
  • 2 years of experience managing computer program orchestrations and using open-source management platforms like Airflow.
Responsibilities:
  • Develop, test, and orchestrate econometric, statistical, and machine learning modules.
  • Conduct unit, integration, and regression testing (see the test sketch after this list).
  • Create data processing systems for analytic research and development.
  • Design, document, and present process flows for analytical systems.
  • Partner with Software Engineering for cloud-based solutions.
  • Orchestrate modules via directed acyclic graphs using workflow management systems.
  • Work in an agile development environment.
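
A minimal pytest-style regression test of the kind listed above; the moving_average function stands in for a real statistical module and is invented for illustration:

```python
# Regression tests pin known outputs so refactors can't silently change them.
import pytest


def moving_average(values, window):
    """Toy stand-in for a statistical module under test."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]


def test_moving_average_known_values():
    assert moving_average([1, 2, 3, 4], window=2) == pytest.approx([1.5, 2.5, 3.5])


def test_moving_average_short_input_yields_nothing():
    assert moving_average([1], window=2) == []
```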

🪄 Skills: AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Data engineering, Regression testing, Pandas, Spark

Posted 11 days ago

📍 U.S.

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and analytics

Requirements:
  • Master’s degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field plus 3 years of experience delivering analytics software solutions, or a Bachelor’s degree plus 5 years.
  • 3 years of experience with Python (and associated packages, including Spark and AWS) and SQL for data management.
  • Experience with analytics programming languages, parallel processing, and code management tools like Git.
  • 2 years of experience managing program orchestrations and working with open-source management platforms such as Airflow.
  • Modern analytics programming: developing, testing, and orchestrating econometric, statistical, and machine learning modules.
  • Unit, integration, and regression testing.
  • Understanding of the deployment of econometric models and learning methods.
Responsibilities:
  • Create data processing systems for analytics research and development.
  • Design, write, and test modules for Nielsen analytics cloud-based platforms.
  • Extract data using SQL and orchestrate modules via workflow management platforms.
  • Design, document, and present process flows for analytical systems.
  • Partner with software engineering to build analytical solutions in an agile environment.

🪄 Skills: AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Spark

Posted 11 days ago

📍 United States of America

🧭 Full-Time

💸 110,000 - 160,000 USD per year

🔍 Insurance industry

🏢 Company: Verikai_External

Requirements:
  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficiency in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, Dynamo, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
Responsibilities:
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets (see the sketch after this list).
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.
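
A sketch of the deduplication step using a Spark window function, keeping only the newest record per member; the paths and column names are assumptions for illustration, not Verikai's schema:

```python
# Deduplicate member records: rank rows per member by recency, keep rank 1.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("dedupe_members").getOrCreate()

members = spark.read.parquet("s3://example-bucket/raw/members/")

latest_first = Window.partitionBy("member_id").orderBy(F.col("updated_at").desc())

deduped = (
    members
    .withColumn("row_num", F.row_number().over(latest_first))
    .filter(F.col("row_num") == 1)
    .drop("row_num")
)

deduped.write.mode("overwrite").parquet("s3://example-bucket/clean/members/")
```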

🪄 Skills: AWS, Python, SQL, DynamoDB, ETL, Spark

Posted 13 days ago

📍 Colorado

💸 106,000 - 139,000 USD per year

🔍 Legal services

🏢 Company: Rocket Lawyer 👥 251-500 💰 $223,000,000 Debt Financing almost 4 years ago (Legal Tech, Law Enforcement, Legal)

Requirements:
  • 5+ years of Python experience.
  • 3+ years leveraging technologies such as Airflow and Apache Spark.
  • Experience working with large language models, diffusion models, or other generative models.
  • Experience with MLOps tools and practices.
  • Strong understanding of data architectures and patterns.
  • Experience with containerization technologies like Docker and Kubernetes.
  • Contributions to open-source projects.
  • Experience in DataOps and MLOps implementation and support.
  • Experience in building and supporting AI/ML platforms.
Responsibilities:
  • Design, develop, and maintain robust, scalable data pipelines for processing large datasets for AI model training.
  • Perform data cleaning, normalization, transformation, and feature engineering, including handling unstructured data (see the sketch after this list).
  • Build and manage data infrastructure like data lakes and warehouses, optimized for AI workloads.
  • Implement data quality checks and monitoring systems for data accuracy and consistency.
  • Contribute to MLOps best practices for data management and model deployment.
  • Work with GCP and Snowflake for data and AI offerings.
  • Optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness.
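
A minimal pandas sketch of the cleaning and normalization step that precedes model training; the median imputation and z-score scaling choices are illustrative assumptions, not a prescribed method:

```python
# Clean and standardize numeric features ahead of AI model training.
import pandas as pd


def prepare_features(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, impute missing numerics, z-score normalize."""
    out = df.copy().drop_duplicates()
    numeric_cols = out.select_dtypes("number").columns
    # Impute missing values with each column's median, then standardize.
    out[numeric_cols] = out[numeric_cols].fillna(out[numeric_cols].median())
    out[numeric_cols] = (
        out[numeric_cols] - out[numeric_cols].mean()
    ) / out[numeric_cols].std()
    return out
```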

🪄 Skills: Docker, Python, Apache Airflow, GCP, Kubernetes, Snowflake, Data engineering

Posted 13 days ago
🔥 Staff Data Engineer
Posted 14 days ago

📍 United States

🧭 Full-Time

💸 170,000 - 195,000 USD per year

🔍 Healthcare

🏢 Company: Parachute Health 👥 101-250 💰 $1,000 about 5 years ago (Medical, Health Care, Software)

Requirements:
  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, a driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
Responsibilities:
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems.
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.

🪄 Skills: Python, ETL, Airflow, Data engineering, Data visualization
