Apply

Staff Data Engineer

Posted 1 day ago


💎 Seniority level: Staff, 10+ years

📍 Location: United States

💸 Salary: 131,414 - 197,100 USD per year

🔍 Industry: Mental healthcare

🏢 Company: Headspace 👥 11-50 · Wellness · Health Care · Child Care

⏳ Experience: 10+ years

🪄 Skills: SQL · Cloud Computing · ETL · Snowflake · Data engineering · Data modeling · Data analytics

Requirements:
  • 10+ years of success in enterprise data solutions and high-impact initiatives.
  • Expertise in platforms like Databricks, Snowflake, dbt, and Redshift.
  • Experience designing and optimizing real-time and batch ETL pipelines.
  • Demonstrated leadership and mentorship abilities in engineering.
  • Strong collaboration skills with product and analytics stakeholders.
  • Bachelor's or advanced degree in Computer Science, Engineering, or a related field.
Responsibilities:
  • Drive the architecture and implementation of pySpark data pipelines (see the sketch after this list).
  • Create and enforce design patterns in code and schema.
  • Design and lead secure and compliant data warehousing platforms.
  • Partner with analytics and product leaders for actionable insights.
  • Mentor team members on dbt architecture and foster a data-first culture.
  • Act as a thought leader on data strategy and cross-functional roadmaps.
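
As a rough illustration of the pySpark pipeline work this role describes, the sketch below reads raw event data, aggregates it by day, and writes a partitioned table. The bucket paths, column names, and rollup logic are hypothetical placeholders, not details from the posting.

```python
# Minimal pySpark batch pipeline sketch. Paths, column names, and the
# daily rollup logic are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical source

daily = (
    raw.filter(F.col("event_type").isNotNull())              # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(
           F.countDistinct("user_id").alias("unique_users"),
           F.count("*").alias("event_count"),
       )
)

# Write a partitioned table that downstream dbt models could build on.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/warehouse/events_daily/"))
```
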
Apply

Related Jobs

Apply

๐Ÿ“ United States

๐Ÿงญ Full-Time

๐Ÿ’ธ 170000.0 - 195000.0 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: Parachute Health๐Ÿ‘ฅ 101-250๐Ÿ’ฐ $1,000 about 5 years agoMedicalHealth CareSoftware

Requirements:
  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
Responsibilities:
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems (see the sketch after this list).
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.
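
For a sense of the Python/Airflow data engineering this posting mentions, here is a minimal Airflow DAG wiring an extract → transform → load sequence. The DAG id, schedule, and task bodies are assumptions for illustration only.

```python
# Minimal Airflow DAG sketch: extract -> transform -> load for a reporting
# refresh. DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    """Pull source records, e.g. from an operational database or API."""


def transform(**context):
    """Clean and reshape the records for reporting."""


def load(**context):
    """Write the results to the reporting store."""


with DAG(
    dag_id="reporting_refresh",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```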

Python · ETL · Airflow · Data engineering · Data visualization

Posted 14 days ago
Apply
Apply

๐Ÿ“ United States

๐Ÿ” Cyber security

๐Ÿข Company: BeyondTrust๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ Private over 3 years agoCloud ComputingSecurityCloud SecurityCyber SecuritySoftware

Requirements:
  • Strong programming and technology knowledge in cloud data processing.
  • Previous experience working in mature data lakes.
  • Strong data modelling skills for analytical workloads.
  • Spark (or equivalent parallel processing framework) experience is needed; existing Databricks knowledge is a plus.
  • Interest and aptitude for cybersecurity; interest in identity security is highly preferred.
  • Technical understanding of underlying systems and low-level computation details.
  • Experience working with distributed systems and data processing on object stores.
  • Ability to work autonomously.
Responsibilities:
  • Optimize data workloads at a software level by improving processing efficiency (see the sketch after this list).
  • Develop new data processing routes to remove redundancy or reduce transformation overhead.
  • Monitor and maintain existing data workflows.
  • Use observability best practices to ensure pipeline performance.
  • Perform complex transformations on both real time and batch data assets.
  • Create new ML/Engineering solutions to tackle existing issues in the cybersecurity space.
  • Leverage CI/CD best practices to effectively develop and release source code.
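
As an illustrative sketch of optimizing data workloads at the software level, the snippet below prunes columns and rows before the scan leaves the object store and broadcasts a small dimension so the large fact table is not shuffled for the join. All paths, table names, and columns are placeholders.

```python
# PySpark sketch of software-level workload optimization: prune early,
# broadcast the small dimension, aggregate last. Placeholders throughout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("auth_outcome_rollup").getOrCreate()

events = (
    spark.read.parquet("s3://example-lake/auth_events/")   # hypothetical path
         .select("tenant_id", "event_ts", "outcome")        # column pruning
         .filter(F.col("event_ts") >= "2024-01-01")         # pushed-down predicate
)

tenants = (
    spark.read.parquet("s3://example-lake/dim_tenants/")
         .select("tenant_id", "plan")
)

joined = events.join(F.broadcast(tenants), "tenant_id")     # broadcast join avoids a large shuffle
rollup = joined.groupBy("plan", "outcome").count()

rollup.write.mode("overwrite").parquet("s3://example-lake/rollups/auth_outcomes/")
```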

Python · Spark · CI/CD · Data modeling

Posted 28 days ago
Apply
Apply

๐Ÿ“ US

๐Ÿงญ Full-Time

๐Ÿ’ธ 206700.0 - 289400.0 USD per year

๐Ÿ” Social Media / Technology

Requirements:
  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems and building clean, maintainable code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and patterns for data governance.
  • Experience with data workflows, data modeling, and engineering.
  • Experience in data visualization and dashboard design using tools like Looker, Tableau, and D3.
  • Deep understanding of relational and MPP database designs.
  • Proven track record of cross-functional collaboration and excellent communication skills.
Responsibilities:
  • Act as the analytics engineering lead within the Ads DS team, contributing to data science data quality and automation initiatives (see the data-quality sketch after this list).
  • Work on ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal team use, streamlining analysis and reporting processes.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide technical guidance and mentorship to data analysts.
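
A minimal sketch of the kind of automated data-quality gate such a lead might put in front of a reporting aggregate; the DataFrame shape and the specific checks are assumptions, not requirements from the posting.

```python
# Sketch of a data-quality gate run before publishing an ads rollup.
# The DataFrame columns and checks are assumptions for illustration.
import pandas as pd


def check_daily_ads_rollup(df: pd.DataFrame) -> list[str]:
    """Return descriptions of failed checks; an empty list means the table passes."""
    failures = []
    if df.empty:
        failures.append("rollup is empty")
    if df["impressions"].lt(0).any():
        failures.append("negative impression counts")
    if df["spend"].isna().any():
        failures.append("null spend values")
    if df.duplicated(subset=["date", "campaign_id"]).any():
        failures.append("duplicate (date, campaign_id) rows")
    return failures


if __name__ == "__main__":
    sample = pd.DataFrame({
        "date": ["2024-01-01", "2024-01-01"],
        "campaign_id": [1, 2],
        "impressions": [1000, 2500],
        "spend": [12.5, 40.0],
    })
    print(check_daily_ads_rollup(sample) or "all checks passed")
```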

Python · SQL · ETL · Airflow · Spark · Scala · Data visualization · Data modeling

Posted 29 days ago
Apply
Apply
🔥 Staff Data Engineer

๐Ÿ“ USA

๐Ÿงญ Full-Time

๐Ÿ’ธ 165000.0 - 210000.0 USD per year

๐Ÿ” E-commerce and AI technologies

๐Ÿข Company: Wizard๐Ÿ‘ฅ 11-50Customer ServiceManufacturing

Requirements:
  • 5+ years of professional experience in software development with a focus on data engineering.
  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
  • Proficiency in Python with software engineering best practices.
  • Strong expertise in building ETL pipelines using tools like Apache Spark.
  • Hands-on experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB.
  • Proficiency in real-time stream processing systems such as Kafka or AWS Kinesis.
  • Experience with cloud platforms (AWS, GCP, Azure) and technologies like Delta Lake and Parquet files.
Responsibilities:
  • Develop and maintain scalable data infrastructure for batch and real-time processing (see the streaming sketch after this list).
  • Build and optimize ETL pipelines for efficient data flow.
  • Collaborate with data scientists and cross-functional teams to ensure accurate monitoring.
  • Design backend data solutions for microservices architecture.
  • Implement and manage integrations with third-party e-commerce platforms.
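
To illustrate the real-time stream processing this posting references, here is a minimal Spark Structured Streaming sketch that consumes order events from Kafka and appends them to a Delta table. The broker address, topic, schema, and paths are placeholders, and running it requires the Kafka and Delta Lake Spark packages.

```python
# Structured Streaming sketch: Kafka order events -> Delta table.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("merchant_id", StringType()),
    StructField("amount", DoubleType()),
])

orders = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "orders")                       # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), order_schema).alias("o"))
         .select("o.*")
)

query = (
    orders.writeStream.format("delta")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .start("s3://example-bucket/delta/orders/")
)
query.awaitTermination()
```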

AWS · Python · DynamoDB · ElasticSearch · ETL · GCP · Git · Hadoop · Kafka · MongoDB · RabbitMQ · Azure · Cassandra · Redis

Posted about 2 months ago
Apply
Apply

๐Ÿ“ United States

๐Ÿงญ Full-Time

๐Ÿ’ธ 179000.0 - 277000.0 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: Komodo Health๐Ÿ‘ฅ 100-500๐Ÿ’ฐ $200,000,000 about 2 years ago๐Ÿซ‚ Last layoff about 2 years agoPredictive AnalyticsInformation TechnologyHealth CareSoftware

Requirements:
  • Deep expertise in software and data or related fields in healthcare and technology.
  • US Healthcare claims data experience.
  • Extensive experience building scalable, best-in-class solutions.
  • Demonstrated record of thought leadership and solution design.
  • Strong ability to communicate clearly with both technical and non-technical teams.
  • Knowledge of large-scale data and computational technologies.
  • Experience with SQL and query design on large, complex datasets.
  • Ability to use a variety of databases, ideally Snowflake on AWS.
Responsibilities:
  • Partnering with Engineering team members, Product Managers, and Data Scientists to understand complex health data use cases.
  • Building foundational pieces of the data platform architecture, pipelines, analytics, and services.
  • Architecting and developing reliable data pipelines that transform data at scale using SQL and Python in Snowflake (see the sketch after this list).
  • Contributing to Python packages in GitHub and APIs following current best practices.
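
As a hedged illustration of transforming data with SQL and Python in Snowflake, the sketch below runs one aggregation via snowflake-connector-python. The account, credentials, and the claims table and columns are invented placeholders.

```python
# Sketch of a Snowflake transformation driven from Python. Account,
# credentials, table, and columns are placeholders, not posting details.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",       # hypothetical connection details
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="HEALTH_DATA",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    cur.execute("""
        CREATE OR REPLACE TABLE CLAIMS_MONTHLY AS
        SELECT DATE_TRUNC('month', service_date) AS service_month,
               payer_id,
               COUNT(*)           AS claim_count,
               SUM(billed_amount) AS total_billed
        FROM RAW_CLAIMS
        GROUP BY 1, 2
    """)
    cur.close()
finally:
    conn.close()
```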

Python · SQL · Snowflake · Airflow · Algorithms · Data engineering · Data modeling

Posted 2 months ago
Apply
Apply

๐Ÿ“ US

๐Ÿงญ Full-Time

๐Ÿ’ธ 206700 - 289400 USD per year

๐Ÿ” Social media / Online community

Requirements:
  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems, building clean, maintainable, object-oriented code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and manipulating large structured and unstructured data.
  • Experience with data workflows (e.g., Airflow) and data visualization tools (e.g., Looker, Tableau).
  • Deep understanding of technical and functional designs for relational and MPP databases.
  • Proven track record of collaboration and excellent communication skills.
  • Experience in mentoring junior data scientists and analytics engineers.
Responsibilities:
  • Act as the analytics engineering lead within the Ads DS team and contribute to data science data quality and automation initiatives.
  • Ensure high-quality data through ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines and workflows for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal use across Data Science and cross-functional teams (see the sketch after this list).
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide mentorship and coaching to data analysts and act as a thought partner for data teams.
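
A small sketch of the kind of self-service helper this role might build for internal teams: it assembles a whitelisted aggregation query from a few parameters so analysts do not hand-write SQL each time. The table, columns, and metric names are assumptions for illustration.

```python
# Self-service helper sketch: build a whitelisted aggregation query.
# Table, columns, and metric names are assumptions.
from datetime import date


def daily_metric_sql(metric: str, start: date, end: date, group_by: str = "campaign_id") -> str:
    """Build an aggregation query for a whitelisted metric and grouping column."""
    allowed_metrics = {"impressions", "clicks", "spend"}
    allowed_groups = {"campaign_id", "advertiser_id", "country"}
    if metric not in allowed_metrics or group_by not in allowed_groups:
        raise ValueError("metric or group_by is not in the supported whitelist")
    return (
        f"SELECT event_date, {group_by}, SUM({metric}) AS {metric}\n"
        f"FROM ads_events\n"
        f"WHERE event_date BETWEEN '{start.isoformat()}' AND '{end.isoformat()}'\n"
        f"GROUP BY 1, 2\n"
        f"ORDER BY 1"
    )


print(daily_metric_sql("clicks", date(2024, 1, 1), date(2024, 1, 31)))
```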

Leadership · Python · SQL · Data Analysis · ETL · Tableau · Strategy · Airflow · Data engineering · Data science · Spark · Communication Skills · Collaboration · Mentoring · Coaching · Data visualization · Data modeling

Posted 3 months ago
Apply
Apply

๐Ÿ“ AR, CA, CO, FL, GA, IL, KY, MA, MI, MT, MO, NV, NJ, NY, NC, OR, PA, TX, WA, WI

๐Ÿ” Food waste reduction and grocery technology

๐Ÿข Company: Afresh๐Ÿ‘ฅ 51-100๐Ÿ’ฐ $115,000,000 Series B over 2 years agoArtificial Intelligence (AI)LogisticsFood and BeverageMachine LearningAgricultureSupply Chain ManagementSoftware

Requirements:
  • 6+ years of experience as a data engineer, analytics engineer, or similar role.
  • Strong understanding of advanced SQL concepts.
  • Exceptional communication and leadership skills.
  • 1+ years of experience with SQL-driven transform libraries supporting ELT, including CI/CD pipelines.
  • Expert knowledge of OLTP and OLAP database design.
  • Familiarity with data engineering concepts like Data Mesh, Data Lake, Data Warehouse.
  • Experience with semantic layer setup defined with code (LookML, Cube.dev, etc.).
  • Technologies: SQL, Python, Airflow, dbt, Snowflake/Databricks/BigQuery, Spark.
Responsibilities:
  • Improve and extend data analytics architecture for reliable data across use cases.
  • Collaborate with engineers, product managers, and data scientists to understand data needs.
  • Build dimensional models and metrics for consistent insights (see the sketch after this list).
  • Evolve existing data quality and governance processes.
  • Mentor and up-skill other engineers.
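
As a rough illustration of dimensional modeling, the sketch below splits raw order lines into a product dimension and a sales fact with a surrogate key, using pandas. In practice this posting points at SQL/dbt models, and all column names and sample data here are assumptions.

```python
# Dimensional-modeling sketch: raw order lines -> dim_product + fact_sales.
# Column names and in-memory data are assumptions; real models would be SQL/dbt.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "sku": ["APL-01", "BAN-02", "APL-01"],
    "product_name": ["Apples", "Bananas", "Apples"],
    "quantity": [10, 5, 7],
    "unit_price": [0.50, 0.25, 0.50],
})

# Product dimension: one row per SKU, with a surrogate key.
dim_product = (
    raw[["sku", "product_name"]]
      .drop_duplicates()
      .reset_index(drop=True)
      .assign(product_key=lambda d: d.index + 1)
)

# Sales fact: measures plus the foreign key into dim_product.
fact_sales = (
    raw.merge(dim_product[["sku", "product_key"]], on="sku")
       .assign(net_amount=lambda d: d["quantity"] * d["unit_price"])
       [["order_id", "product_key", "quantity", "net_amount"]]
)

print(dim_product)
print(fact_sales)
```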

Leadership · Python · SQL · Snowflake · Airflow · Data engineering · Spark · Collaboration · CI/CD

Posted 4 months ago
Apply