
Staff Data Engineer

Posted 6 days ago


💎 Seniority level: Staff, 10+ years

💸 Salary: 200,000 - 228,000 USD per year

🔍 Industry: Software Development

🏢 Company: Later 👥 1-10 · Consumer Electronics · iOS · Apps · Software

⏳ Experience: 10+ years

Requirements:
  • 10+ years of experience in data engineering, software engineering, or related fields.
  • Proven experience leading the technical strategy and execution of large-scale data platforms.
  • Expertise in cloud technologies (Google Cloud Platform, AWS, Azure) with a focus on scalable data solutions (BigQuery, Snowflake, Redshift, etc.).
  • Strong proficiency in SQL, Python, and distributed data processing frameworks (Apache Spark, Flink, Beam, etc.).
  • Extensive experience with streaming data architectures using Kafka, Flink, Pub/Sub, Kinesis, or similar technologies.
  • Expertise in data modeling, schema design, indexing, partitioning, and performance tuning for analytical workloads, including data governance (security, access control, and compliance: GDPR, CCPA, SOC 2).
  • Strong experience designing and optimizing scalable, fault-tolerant data pipelines using workflow orchestration tools like Airflow, Dagster, or Dataflow.
  • Ability to lead and influence engineering teams, drive cross-functional projects, and align stakeholders towards a common data vision.
  • Experience mentoring senior and mid-level data engineers to enhance team performance and skill development.
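The orchestration tools named above (Airflow, Dagster, Dataflow) share one core idea: a pipeline is a DAG of tasks executed in dependency order. A toy sketch of that idea in plain Python, with no orchestrator installed — the extract/transform/load task names are invented for illustration:

```python
# Toy sketch of DAG-ordered task execution, the concept behind
# orchestrators like Airflow or Dagster. Not a real Airflow API.

def run_dag(tasks, deps):
    """Run callables in dependency order (a naive topological execution)."""
    done, order = set(), []
    while len(done) < len(tasks):
        progressed = False
        for name, fn in tasks.items():
            if name not in done and deps.get(name, set()) <= done:
                fn()
                done.add(name)
                order.append(name)
                progressed = True
        if not progressed:
            raise ValueError("cycle detected in task graph")
    return order

log = []
tasks = {
    "load": lambda: log.append("load"),
    "extract": lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
order = run_dag(tasks, deps)
print(order)  # ['extract', 'transform', 'load']
```

Real orchestrators add to this core exactly what the bullet above asks for: retries, scheduling, and fault tolerance around each task.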
Responsibilities:
  • Lead the design and evolution of a scalable data architecture that meets analytical, machine learning, and operational needs.
  • Architect and optimize data pipelines for batch and real-time data processing, ensuring efficiency and reliability.
  • Implement best practices for distributed data processing, ensuring scalability, performance, and cost-effectiveness of data workflows.
  • Define and enforce data governance policies, implement automated validation checks, and establish monitoring frameworks to maintain data integrity.
  • Ensure data security and compliance with industry regulations by designing appropriate access controls, encryption mechanisms, and auditing processes.
  • Drive innovation in data engineering practices by researching and implementing new technologies, tools, and methodologies.
  • Work closely with data scientists, engineers, analysts, and business stakeholders to understand data requirements and deliver impactful solutions.
  • Develop reusable frameworks, libraries, and automation tools to improve efficiency, reliability, and maintainability of data infrastructure.
  • Guide and mentor data engineers, fostering a high-performing engineering culture through best practices, peer reviews, and knowledge sharing.
  • Establish and monitor SLAs for data pipelines, proactively identifying and mitigating risks to ensure high availability and reliability.
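As a minimal illustration of the SLA monitoring described in the last bullet, a freshness check can compare a pipeline's last successful run against its agreed threshold. The 6-hour SLA and the timestamps below are invented for the sketch:

```python
# Hedged sketch of a pipeline-freshness SLA check; the SLA value
# and example timestamps are assumptions, not from the posting.
from datetime import datetime, timedelta, timezone

def breaches_sla(last_success: datetime, sla: timedelta, now: datetime) -> bool:
    """True if the pipeline's last successful run is older than its SLA."""
    return now - last_success > sla

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = now - timedelta(hours=2)
stale = now - timedelta(hours=9)
sla = timedelta(hours=6)
print(breaches_sla(fresh, sla, now))  # False: within the 6-hour SLA
print(breaches_sla(stale, sla, now))  # True: 9 hours since last success
```

In practice a check like this would run on a schedule and page the on-call engineer, which is the "proactively identifying and mitigating risks" part of the bullet.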

Related Jobs


๐Ÿ“ United States

๐Ÿ” Software Development

  • Have built scalable web scraping platforms from the ground up
  • Experience juggling multiple projects with shifting priorities while continuing to deliver value to the business
  • Be a curious, detail-oriented self-starter who wants to take full ownership of high-impact projects with visibility throughout the organization
  • Play a key role in the implementation and evolution of our web scraping and data platform
  • Craft and implement a best-in-class web scraping strategy and infrastructure
  • Build and scale pipelines that gather millions of records across hundreds of sites, stored as measurable data that enables insights for our Analytics team and our customers
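The extraction step of a scraping pipeline like the one described above can be sketched with only the standard library. The sample HTML and the `/jobs/` paths are invented for illustration:

```python
# Minimal sketch of HTML link extraction, the core parsing step of a
# scraping pipeline; real platforms add fetching, retries, and storage.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag encountered.
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<ul><li><a href="/jobs/1">Job 1</a></li><li><a href="/jobs/2">Job 2</a></li></ul>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```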

AWS · Backend Development · PostgreSQL · Python · SQL · Apache Airflow · ETL · Data engineering · REST API · NodeJS · Software Engineering · Data analytics

Posted 6 days ago

๐Ÿ“ United States

๐Ÿงญ Full-Time

๐Ÿ” Software Development

๐Ÿข Company: Rula๐Ÿ‘ฅ 251-500๐Ÿ’ฐ Series C 7 months agoPersonal HealthMental HealthAddiction TreatmentHealth InsuranceWellnessHealth CareHome Health Care

  • Experience building and maintaining ETL/ELT pipelines using tools like AWS Glue, DBT, Dagster, or similar orchestration frameworks.
  • Experience in Python and SQL for data processing and transformation.
  • Experience in AWS services such as Redshift, S3, Glue, and IAM.
  • Experience designing and optimizing data warehouses (Redshift, Snowflake) and managing S3 data lakes.
  • Experience implementing data validation, quality checks, and error-handling mechanisms.
  • Familiarity with data governance practices, including metadata management and documentation.
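A hedged sketch of the row-level validation and quality checks this posting mentions, collecting errors rather than failing on the first bad row. The field names (`id`, `amount`) and the rules are assumptions for illustration:

```python
# Illustrative row-level validation with error collection, in the spirit
# of the "data validation, quality checks, and error-handling" bullet.

def validate(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty if valid)."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
        errors.append("amount must be a non-negative number")
    return errors

rows = [{"id": "a1", "amount": 10.5}, {"id": "", "amount": -3}]
bad = {r["id"] or "<blank>": validate(r) for r in rows if validate(r)}
print(bad)  # {'<blank>': ['missing id', 'amount must be a non-negative number']}
```

In a real pipeline the collected errors would feed a quarantine table or alerting, rather than just being printed.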

AWS · Python · SQL · Data Analysis · ETL · Amazon Web Services · Apache Kafka · Data engineering · CI/CD · Terraform · Data modeling

Posted 7 days ago

๐Ÿ“ North America

๐Ÿ” Software Development


AWS · Backend Development · GraphQL · SQL · ElasticSearch · ETL · Kafka · Ruby on Rails · Software Architecture · Algorithms · Data engineering · Data Structures · Go · Redis · CI/CD · RESTful APIs · Microservices · Data modeling

Posted 8 days ago

๐Ÿ“ United States

๐Ÿ’ธ 131414.0 - 197100.0 USD per year

๐Ÿ” Mental healthcare

๐Ÿข Company: Headspace๐Ÿ‘ฅ 11-50WellnessHealth CareChild Care

  • 10+ years of success in enterprise data solutions and high-impact initiatives.
  • Expertise in platforms like Databricks, Snowflake, dbt, and Redshift.
  • Experience designing and optimizing real-time and batch ETL pipelines.
  • Demonstrated leadership and mentorship abilities in engineering.
  • Strong collaboration skills with product and analytics stakeholders.
  • Bachelor's or advanced degree in Computer Science, Engineering, or a related field.
  • Drive the architecture and implementation of pySpark data pipelines.
  • Create and enforce design patterns in code and schema.
  • Design and lead secure and compliant data warehousing platforms.
  • Partner with analytics and product leaders for actionable insights.
  • Mentor team members on dbt architecture and foster a data-first culture.
  • Act as a thought leader on data strategy and cross-functional roadmaps.

SQL · Cloud Computing · ETL · Snowflake · Data engineering · Data modeling · Data analytics

Posted 18 days ago

๐Ÿ“ Latin America

๐Ÿ” AI economy, workforce development

๐Ÿข Company: Correlation One๐Ÿ‘ฅ 251-500๐Ÿ’ฐ $5,000,000 Series A almost 7 years agoInformation ServicesAnalyticsInformation Technology

  • 7+ years in a Data Engineering role with experience in data warehouses and ETL/ELT.
  • Advanced SQL experience and skills in database design.
  • Familiarity with pipeline monitoring and cloud environments (e.g., GCP).
  • Experience with APIs, Airflow, dbt, Git, and creating microservices.
  • Knowledge of implementing CDC with technologies like Kafka.
  • Solid understanding of software development practices and agile methodologies.
  • Proficiency in object-oriented scripting languages such as Python or Scala.
  • Experience with CI/CD processes and source control tools like GitHub.
  • Act as the data lake subject matter expert to develop technical vision.
  • Design the architecture for a well-architected data lakehouse.
  • Collaborate with architects to design the ELT process from data ingestion to analytics.
  • Create standard frameworks for software development.
  • Mentor junior engineers and support development teams.
  • Monitor database performance and adhere to data engineering best practices.
  • Develop schema design for reports and analytics.
  • Engage in hands-on development across the technical stack.
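The CDC (change data capture) requirement above boils down to replaying a stream of change events against a target table. A toy, Kafka-free sketch of that replay logic — the event shape (`op`, `id`, `data`) is an assumption for illustration:

```python
# Toy sketch of applying CDC events to a target table; in production the
# events would arrive via a log like Kafka rather than an in-memory list.

def apply_cdc(state: dict, events: list[dict]) -> dict:
    """Replay change events (insert/update/delete) onto a keyed table."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            state[ev["id"]] = ev["data"]
        elif ev["op"] == "delete":
            state.pop(ev["id"], None)
    return state

events = [
    {"op": "insert", "id": 1, "data": {"name": "Ada"}},
    {"op": "update", "id": 1, "data": {"name": "Ada L."}},
    {"op": "insert", "id": 2, "data": {"name": "Grace"}},
    {"op": "delete", "id": 2, "data": None},
]
result = apply_cdc({}, events)
print(result)  # {1: {'name': 'Ada L.'}}
```

Because the replay is deterministic, re-running the same event log reproduces the same table state — the property that makes log-based CDC pipelines recoverable.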

PostgreSQL · Python · SQL · Apache Airflow · ETL · GCP · Git · Kafka · MongoDB · Data engineering · CI/CD · Terraform · Microservices · Scala

Posted 21 days ago

๐Ÿ“ United States

๐Ÿงญ Full-Time

๐Ÿ’ธ 130000.0 - 170000.0 USD per year

๐Ÿ” Data Engineering

  • 8+ years of experience in a data engineering role
  • Strong knowledge of REST-based APIs and cloud technologies (AWS, Azure, GCP)
  • Experience with Python/SQL for building data pipelines
  • Bachelor's degree in computer science or related field
  • Design and build data pipelines across various source systems
  • Collaborate with teams to develop data acquisition and integration strategies
  • Coach and guide others in scalable pipeline building
  • Deploy to cloud-based platforms and troubleshoot issues
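The "Python/SQL for building data pipelines" requirement above can be illustrated with a minimal extract-transform-load sketch using the stdlib `sqlite3` module. The sample records and the `clicks` table are invented for illustration:

```python
# Minimal ETL sketch: extract raw records, cast types, load via SQL.
import sqlite3

def extract():
    # Stand-in for pulling records from a REST API or source system.
    return [{"user": "a", "clicks": "3"}, {"user": "b", "clicks": "7"}]

def transform(rows):
    # Cast string counts to integers before loading.
    return [(r["user"], int(r["clicks"])) for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS clicks (user TEXT, n INTEGER)")
    conn.executemany("INSERT INTO clicks VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(n) FROM clicks").fetchone()[0]
print(total)  # 10
```

A warehouse deployment would swap the `sqlite3` connection for Snowflake or a similar target, but the extract/transform/load shape is the same.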

AWS · Docker · Python · SQL · Apache Airflow · Cloud Computing · ETL · GCP · Machine Learning · Snowflake · Data engineering · REST API · Data modeling

Posted 23 days ago
🔥 Staff Data Engineer
Posted about 1 month ago

๐Ÿ“ United States

๐Ÿงญ Full-Time

๐Ÿ’ธ 170000.0 - 195000.0 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: Parachute Health๐Ÿ‘ฅ 101-250๐Ÿ’ฐ $1,000 about 5 years agoMedicalHealth CareSoftware

  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, a driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems.
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.

Python · ETL · Airflow · Data engineering · Data visualization

Posted about 1 month ago
🔥 Staff Data Engineer
Posted about 1 month ago

๐Ÿ“ Europe

๐Ÿงญ Full-Time

๐Ÿ” Supply Chain Risk Analytics

๐Ÿข Company: Everstream Analytics๐Ÿ‘ฅ 251-500๐Ÿ’ฐ $50,000,000 Series B almost 2 years agoProductivity ToolsArtificial Intelligence (AI)LogisticsMachine LearningRisk ManagementAnalyticsSupply Chain ManagementProcurement

  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, OpenSearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar.
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.

AWS · PostgreSQL · Python · SQL · ETL · Apache Kafka · NoSQL · Spark · Data modeling

Posted about 1 month ago
🔥 Staff Data Engineer
Posted about 1 month ago

๐Ÿ“ United States

๐Ÿ” Cyber security

๐Ÿข Company: BeyondTrust๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ Private over 3 years agoCloud ComputingSecurityCloud SecurityCyber SecuritySoftware

  • Strong programming and technology knowledge in cloud data processing.
  • Previous experience working in mature data lakes.
  • Strong data modeling skills for analytical workloads.
  • Spark (or equivalent parallel processing framework) experience is needed; existing Databricks knowledge is a plus.
  • Interest and aptitude for cybersecurity; interest in identity security is highly preferred.
  • Technical understanding of underlying systems and computation minutiae.
  • Experience working with distributed systems and data processing on object stores.
  • Ability to work autonomously.
  • Optimize data workloads at a software level by improving processing efficiency.
  • Develop new data processing routes to remove redundancy or reduce transformation overhead.
  • Monitor and maintain existing data workflows.
  • Use observability best practices to ensure pipeline performance.
  • Perform complex transformations on both real-time and batch data assets.
  • Create new ML/Engineering solutions to tackle existing issues in the cybersecurity space.
  • Leverage CI/CD best practices to effectively develop and release source code.

Python · Spark · CI/CD · Data modeling

Posted about 1 month ago

๐Ÿ“ United States, Canada

๐Ÿงญ Full-Time

๐Ÿ’ธ 170000.0 - 205000.0 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: Wellth

  • 7+ years in analytics engineering or data analysis in healthcare
  • Hands-on experience with healthcare data sets
  • Proficiency in SQL, Python, and dbt
  • Lead design and implementation of data pipelines for healthcare data
  • Create foundational data layers for analytics
  • Ensure data quality and consistency

Python · SQL · Apache Airflow · ETL · Git · Data engineering · Data visualization · Data modeling

Posted 2 months ago

Related Articles

Posted 6 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 6 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 6 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 6 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 6 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.