Senior Data Engineer

Posted 6 months ago

💎 Seniority level: Senior, 5+ years

📍 Location: US, Pacific Time

💸 Salary: 117,725 - 162,900 USD per year

🔍 Industry: Mental Health Benefits

🏢 Company: Modern Health 👥 251-500 💰 $74,000,000 Series D about 4 years ago | Mental Health, Therapeutics, mHealth, Wellness, Health Care, Software

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, ETL, Java, Snowflake, Data engineering, Collaboration, Scala

Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering in a modern tech stack.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Strong experience with big data technologies and SQL.
  • Experience with relational and NoSQL databases.
  • Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
  • Familiarity with data warehousing solutions.
  • Knowledge of data modeling and data governance principles.
  • Experience with IaaS technologies.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines and ETL processes (a minimal sketch follows this list).
  • Architect and implement data storage solutions, such as data warehouses and databases.
  • Collaborate with data scientists and analysts to meet data requirements.
  • Optimize data systems for performance and scalability.
  • Ensure data quality through testing and monitoring.
  • Develop and enforce data governance policies.
  • Stay current with data technologies and troubleshoot issues.
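
For illustration only, here is a minimal sketch of the extract-transform-load pattern these responsibilities describe. It is not Modern Health's actual stack: the records, table name, and quality rule are invented, and an in-memory SQLite database stands in for a production warehouse such as Snowflake.

```python
# Hypothetical extract-transform-load sketch. An in-memory CSV stands in
# for the source system and an in-memory SQLite table for the warehouse.
import csv
import io
import sqlite3

RAW_CSV = """user_id,session_minutes,plan
1,34,premium
2,,free
3,12,premium
"""

def extract(raw: str) -> list[dict]:
    """Read raw records from the source (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Clean and type the records; drop rows that fail a quality rule."""
    cleaned = []
    for row in rows:
        if not row["session_minutes"]:
            continue  # quality rule: skip records missing the metric
        cleaned.append((int(row["user_id"]), int(row["session_minutes"]), row["plan"]))
    return cleaned

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Write the cleaned records to the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sessions (user_id INT, session_minutes INT, plan TEXT)"
    )
    conn.executemany("INSERT INTO sessions VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW_CSV)), conn)
    print(conn.execute("SELECT COUNT(*) FROM sessions").fetchone()[0], "rows loaded")
```

The same three-stage shape scales up: swap the CSV for an API or database extract, and the SQLite target for a warehouse loader.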
Apply

Related Jobs

🔥 Senior Data Engineer
Posted 3 days ago

📍 United States, Canada

🧭 Full-Time

🔍 B2B SaaS

🏢 Company: Sanity

Requirements:
  • 4+ years of experience building data pipelines at scale
  • Deep expertise in SQL, Python, and Node.js/TypeScript
  • Production experience with Airflow and RudderStack
  • Track record of building reliable data infrastructure
Responsibilities:
  • Design, develop, and maintain scalable ETL/ELT pipelines (a minimal Airflow sketch follows this list)
  • Collaborate to implement and scale product telemetry
  • Establish best practices for data ingestion and transformation
  • Monitor and optimize data pipeline performance
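
As a rough illustration of the Airflow piece, a minimal DAG sketch assuming Airflow 2.x; the DAG id, schedule, and task bodies are placeholders, not Sanity's pipeline.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform -> load chain.
# The dag_id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("clean and aggregate the extracted events")

def load():
    print("write the results to the warehouse")

with DAG(
    dag_id="example_daily_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```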

Node.js, Python, SQL, Apache Airflow, ETL, TypeScript

Apply
🔥 Senior Data Engineer
Posted 3 days ago

📍 United States, Canada

🧭 Full-Time

🔍 E-commerce

Requirements:
  • Bachelor's or Master's degree in Computer Science or related field
  • 5+ years of experience in data engineering
  • Strong proficiency in SQL and database technologies
  • Experience with data pipeline orchestration tools
  • Proficiency in programming languages like Python and Scala
  • Hands-on experience with AWS cloud data services
  • Familiarity with big data frameworks like Apache Spark
  • Knowledge of data modeling and warehousing
  • Experience implementing CI/CD for data pipelines
  • Experience with real-time data processing architectures
Responsibilities:
  • Design, develop, and maintain ETL/ELT pipelines
  • Optimize data architecture and storage solutions
  • Work with AWS for scalable data solutions
  • Ensure data quality, integrity, and security
  • Collaborate with cross-functional teams
  • Monitor and troubleshoot data workflows
  • Create APIs for analytical information

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kafka, MySQL, Snowflake, CI/CD, Scala

Apply
🔥 Senior Data Engineer
Posted 6 days ago

📍 United States, Europe

🧭 Full-Time

🔍 Blockchain Analytics

Requirements:
  • Experience with orchestration tools like DBT and Prefect
  • Proficiency with data warehouses like Trino, Snowflake, or Clickhouse
  • Deployment experience in public cloud platforms
Responsibilities:
  • Build data pipelines that power popular datasets on Dune
  • Support analysts in creating new datasets
  • Orchestrate robust transformation pipelines

Python, SQL, ETL, Kubernetes, Snowflake, Clickhouse, Data engineering

Apply

📍 United States, Canada

🧭 Full-Time

🔍 Data Engineering

🏢 Company: VTEX 👥 1001-5000 💰 over 3 years ago 🫂 Last layoff over 2 years ago | E-Commerce, SaaS, Information Technology, Software

Requirements:
  • Extensive experience with cloud-based data platforms (e.g., AWS, GCP, Azure)
  • Proficiency in Python with data processing libraries
  • Experience with both relational and NoSQL databases
  • Strong understanding of data modeling concepts
Responsibilities:
  • Design and implement data architectures including data warehouses and lakes
  • Build and optimize ETL pipelines
  • Ensure the reliability, performance, and security of data platforms

AWS, Docker, Python, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Azure, Data engineering, NoSQL, Terraform, Data modeling

Posted 7 days ago
Apply
🔥 Senior Data Engineer
Posted 7 days ago

📍 United States, Canada

🧭 Full-Time

🔍 Software Development

🏢 Company: BioRender 👥 101-250 💰 $15,319,133 Series A almost 2 years ago | Life Science, Graphic Design, Software

Requirements:
  • 7+ years of relevant data engineering industry experience
  • Expertise working with Data Warehousing platforms (AWS Redshift or Snowflake preferred) and data lake / lakehouse architectures
  • Experience with Data Streaming platforms (AWS Kinesis / Firehose preferred)
  • Expertise with SQL and programming languages commonly used in data platforms (Python, Spark, etc.)
  • Experience with data pipeline orchestration (e.g., Airflow) and data pipeline integrations (e.g., Airbyte, Stitch)
Responsibilities:
  • Build and maintain the right architecture and tooling to support our data science, analytics, product, and machine learning initiatives
  • Solve complex architectural problems
  • Translate deeply technical designs into business-appropriate representations, and analyze business needs and requirements to ensure data services directly support the strategy and growth of the business

AWS, Python, SQL, Apache Airflow, Snowflake, Data engineering, Spark, Data modeling

Apply

📍 United States

🧭 Full-Time

💸 175,000 - 205,000 USD per year

🔍 Software Development

🏢 Company: CoreWeave 💰 $642,000,000 Secondary Market about 1 year ago | Cloud Computing, Machine Learning, Information Technology, Cloud Infrastructure

Requirements:
  • Hands-on experience applying Kimball Dimensional Data Modeling principles to large datasets.
  • Expertise in working with analytical table/file formats, including Iceberg, Parquet, Avro, and ORC.
  • Proven experience optimizing MPP databases (StarRocks, Snowflake, BigQuery, Redshift).
  • 5+ years of programming experience in Python or Scala.
  • Advanced SQL skills, with a strong ability to write, optimize, and debug complex queries.
  • Hands-on experience with Airflow for batch orchestration and with distributed computing frameworks like Spark or Flink.
Responsibilities:
  • Develop and maintain data models, including star and snowflake schemas, to support analytical needs across the organization (a toy star-schema example follows this list).
  • Establish and enforce best practices for dimensional modeling in our Lakehouse.
  • Engineer and optimize data storage using analytical table/file formats (e.g., Iceberg, Parquet, Avro, ORC).
  • Partner with BI, analytics, and data science teams to design datasets that accurately reflect business metrics.
  • Tune and optimize data in MPP databases such as StarRocks, Snowflake, BigQuery, or Redshift.
  • Collaborate on data workflows using Airflow, building and managing pipelines that power our analytical infrastructure.
  • Ensure efficient processing of large datasets through distributed computing frameworks like Spark or Flink.
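
To make the dimensional-modeling bullet concrete, here is a toy star-schema join in pandas. The tables, keys, and metric are invented; in production these would live as Iceberg/Parquet tables queried through an MPP engine rather than as DataFrames.

```python
# Toy star schema in pandas: one narrow fact table foreign-keyed to one
# descriptive dimension table. All data here is invented.
import pandas as pd

# Dimension: one row per customer, descriptive attributes only.
dim_customer = pd.DataFrame({
    "customer_key": [1, 2, 3],
    "region": ["us-east", "us-west", "eu-central"],
    "tier": ["enterprise", "startup", "enterprise"],
})

# Fact: measurements only, keyed to the dimension.
fact_usage = pd.DataFrame({
    "customer_key": [1, 1, 2, 3, 3, 3],
    "gpu_hours": [40.0, 55.5, 12.0, 80.0, 64.0, 71.5],
})

# A typical analytical query: join fact to dimension, then aggregate a
# metric by a dimension attribute.
report = (
    fact_usage.merge(dim_customer, on="customer_key")
    .groupby("tier", as_index=False)["gpu_hours"]
    .sum()
)
print(report)
```

Keeping facts narrow and attributes in dimensions is the design choice that lets MPP engines scan less data per query.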

AWS, Docker, Python, SQL, Cloud Computing, ETL, Kubernetes, Snowflake, Airflow, Algorithms, Apache Kafka, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, RESTful APIs, DevOps, Terraform, Problem-solving skills, JSON, Scala, Data visualization, Ansible, Data modeling, Data analytics, Debugging

Posted 12 days ago
Apply

📍 OR, WA, CA, CO, TX, IL

🧭 Contract

💸 65 - 75 USD per hour

🔍 Music industry

🏢 Company: Discogs 👥 51-100 💰 $2,500,000 about 7 years ago | Database, Communities, Music

Requirements:
  • Proficiency in data integration and ETL processes.
  • Knowledge of programming languages such as Python, Java, or JavaScript.
  • Familiarity with cloud platforms and services (e.g., AWS, GCP, Azure).
  • Understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills to work effectively with cross-functional teams.
  • Experience with marketing automation platforms.
  • Experience with data warehouses in a marketing context.
  • Knowledge of API integration and data exchange formats such as JSON, XML, and CSV.
Responsibilities:
  • Design, develop, and maintain data pipelines to ingest, process, and store data.
  • Implement data validation and quality checks to maintain the integrity of incoming data.
  • Optimize and automate data workflows to improve efficiency and reduce manual intervention.
  • Work closely with the product, engineering, marketing, and analytics teams to support data-driven decision-making.
  • Develop and maintain documentation related to data processes, workflows, and system architecture.
  • Troubleshoot and resolve data-related issues promptly to minimize disruptions.
  • Monitor and enhance the performance of data infrastructure, ensuring scalability and reliability.
  • Stay updated with industry trends and best practices in data engineering to apply improvements.

AWS, Python, Apache Airflow, ETL, GCP, MySQL, Snowflake, Apache Kafka, Azure, JSON

Posted 17 days ago
Apply
🔥 Senior Data Engineer
Posted 22 days ago

📍 United States

💸 104,981 - 157,476 USD per year

🔍 Mental healthcare

🏢 Company: Headspace 👥 11-50 | Wellness, Health Care, Child Care

Requirements:
  • 7+ years of proven success designing and implementing large-scale enterprise data systems.
  • Deep experience with industry-leading tools such as Databricks, Snowflake, and Redshift.
  • Demonstrated expertise in architectural patterns for building high-volume real-time and batch ETL pipelines.
  • Proven ability to partner effectively with product teams to drive alignment and deliver solutions.
  • Exceptional oral and written communication abilities.
  • Experience in coaching and mentoring team members.
Responsibilities:
  • Architect and implement robust data pipelines to ingest, aggregate, and index diverse data sources into the organization's data lake.
  • Lead the creation of a secure, compliant, and privacy-focused data warehousing solution tailored to healthcare industry requirements.
  • Partner with the data analytics team to deliver a data platform that supports accurate reporting on business metrics.
  • Collaborate with data science and machine learning teams to build tools for rapid experimentation and innovation.
  • Mentor and coach data engineers while promoting a culture valuing data as a strategic asset.

AWS, ETL, Snowflake, Data engineering, Data modeling

Apply
🔥 Senior Data Engineer
Posted 24 days ago

📍 United States, Canada

🧭 Regular

💸 125,000 - 160,000 USD per year

🔍 Digital driver assistance services

🏢 Company: Agero 👥 1001-5000 💰 $4,750,000 over 2 years ago | Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions (a minimal check sketch follows this list).
  • Collaborate cross-functionally and document data flows, processes, and architecture.
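
As a sketch of what the data-quality bullet can mean in practice, here are a few assertion-style checks over a toy SQLite table. The table, columns, and rules are invented; a framework like dbt tests or Great Expectations formalizes the same idea at scale.

```python
# Minimal data-quality check sketch: each check counts offending rows,
# and a passing check returns zero. Table and rules are hypothetical.
import sqlite3

CHECKS = {
    "no_null_ids": "SELECT COUNT(*) FROM trips WHERE trip_id IS NULL",
    "no_duplicate_ids": """
        SELECT COUNT(*) FROM (
            SELECT trip_id FROM trips GROUP BY trip_id HAVING COUNT(*) > 1
        )
    """,
    "no_negative_fares": "SELECT COUNT(*) FROM trips WHERE fare_usd < 0",
}

def run_checks(conn: sqlite3.Connection) -> dict[str, int]:
    """Run every check and return the count of failing rows per check."""
    return {name: conn.execute(sql).fetchone()[0] for name, sql in CHECKS.items()}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trips (trip_id INT, fare_usd REAL)")
    conn.executemany("INSERT INTO trips VALUES (?, ?)", [(1, 12.5), (2, 30.0), (2, -4.0)])
    for name, failures in run_checks(conn).items():
        print(f"{name}: {'PASS' if failures == 0 else f'FAIL ({failures} rows)'}")
```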

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

Apply
🔥 Senior Data Engineer - Verikai
Posted about 1 month ago

📍 United States of America

🧭 Full-Time

💸 110,000 - 160,000 USD per year

🔍 Insurance industry

🏢 Company: Verikai_External

Requirements:
  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficiency in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, Dynamo, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
Responsibilities:
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets (a toy dedup sketch follows this list).
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.
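
A toy illustration of the deduplication responsibility, assuming a normalized-email match key; the records are invented, and real customer-data pipelines typically need fuzzier entity resolution than this.

```python
# Toy record-deduplication sketch in pandas: normalize a match key, then
# keep the most recently updated row per key. All data is invented.
import pandas as pd

records = pd.DataFrame({
    "name": ["Ann Lee", "ann lee", "Bo Chen"],
    "email": ["Ann.Lee@example.com", "ann.lee@example.com", "bo@example.com"],
    "updated_at": ["2024-01-03", "2024-02-10", "2024-01-15"],
})

# Normalize the key so casing/whitespace variants match each other.
records["email_norm"] = records["email"].str.lower().str.strip()

deduped = (
    records.sort_values("updated_at")               # oldest first
    .drop_duplicates(subset="email_norm", keep="last")  # keep freshest row
    .drop(columns="email_norm")
)
print(deduped)
```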

AWS, Python, SQL, DynamoDB, ETL, Spark

Apply