Staff Data Engineer

Posted about 1 month ago

💎 Seniority level: Staff, 8+ years

📍 Location: United States

💸 Salary: 130,000 - 170,000 USD per year

🔍 Industry: Data Engineering

🗣️ Languages: English

⏳ Experience: 8+ years

🪄 Skills: AWS, Docker, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Machine Learning, Snowflake, Data engineering, REST API, Data modeling

Requirements:
  • 8+ years of experience in a data engineering role
  • Strong knowledge of REST-based APIs and cloud technologies (AWS, Azure, GCP)
  • Experience with Python/SQL for building data pipelines
  • Bachelor's degree in computer science or related field
Responsibilities:
  • Design and build data pipelines across various source systems
  • Collaborate with teams to develop data acquisition and integration strategies
  • Coach and guide others in scalable pipeline building
  • Deploy to cloud-based platforms and troubleshoot issues

Related Jobs

🔥 Staff Data Engineer
Posted 2 days ago

📍 United States

🧭 Full-Time

🔍 Software Development

🏢 Company: Life360 · 👥 251-500 · 💰 $33,038,258 Post-IPO Equity over 2 years ago · 🫂 Last layoff about 2 years ago · Android, Family, Apps, Mobile Apps, Mobile

Requirements:
  • Minimum 7 years of experience working with high volume data infrastructure.
  • Experience with Databricks and AWS.
  • Experience with dbt.
  • Experience with job orchestration tooling like Airflow.
  • Proficient programming in Python.
  • Proficient with SQL and the ability to optimize complex queries.
  • Proficient with large-scale data processing using Spark and/or Presto/Trino.
  • Proficient in data modeling and database design.
  • Experience with streaming data with a tool like Kinesis or Kafka.
  • Experience working with high-volume, event-based data architectures like Amplitude and Braze.
  • Experience in modern development lifecycle including Agile methodology, CI/CD, automated deployments using Terraform, GitHub Actions, etc.
  • Knowledge and proficiency in the latest open source and data frameworks, modern data platform tech stacks and tools.
  • Always learning and staying up to speed with the fast moving data world.
  • You have good communication and collaboration skills and can work independently.
  • BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.
Responsibilities:
  • Design, implement, and manage scalable data processing platforms used for real-time analytics and exploratory data analysis.
  • Manage our financial data from ingestion through ETL to storage and batch processing.
  • Automate, test and harden all data workflows.
  • Architect logical and physical data models to ensure the needs of the business are met.
  • Collaborate across the data teams, engineering, data science, and analytics, to understand their needs, while applying engineering best practices.
  • Architect and develop systems and algorithms for distributed real-time analytics and data processing.
  • Implement strategies for acquiring data to develop new insights.
  • Mentor junior engineers, imparting best practices and institutionalizing efficient processes to foster growth and innovation within the team.
  • Champion data engineering best practices and institutionalize efficient processes to foster growth and innovation within the team.

🪄 Skills: AWS, Project Management, Python, SQL, Apache Airflow, ETL, Kafka, Algorithms, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Agile methodologies, Mentoring, Terraform, Data visualization, Technical support, Data modeling, Data analytics, Data management, Debugging

🔥 Staff Data Engineer
Posted 23 days ago

📍 United States, Canada

💸 200,000 - 228,000 USD per year

🔍 Software Development

🏢 Company: Later · 👥 1-10 · Consumer Electronics, iOS, Apps, Software

Requirements:
  • 10+ years of experience in data engineering, software engineering, or related fields.
  • Proven experience leading the technical strategy and execution of large-scale data platforms.
  • Expertise in cloud technologies (Google Cloud Platform, AWS, Azure) with a focus on scalable data solutions (BigQuery, Snowflake, Redshift, etc.).
  • Strong proficiency in SQL, Python, and distributed data processing frameworks (Apache Spark, Flink, Beam, etc.).
  • Extensive experience with streaming data architectures using Kafka, Flink, Pub/Sub, Kinesis, or similar technologies.
  • Expertise in data modeling, schema design, indexing, partitioning, and performance tuning for analytical workloads, including data governance (security, access control, compliance: GDPR, CCPA, SOC 2).
  • Strong experience designing and optimizing scalable, fault-tolerant data pipelines using workflow orchestration tools like Airflow, Dagster, or Dataflow.
  • Ability to lead and influence engineering teams, drive cross-functional projects, and align stakeholders towards a common data vision.
  • Experience mentoring senior and mid-level data engineers to enhance team performance and skill development.
Responsibilities:
  • Lead the design and evolution of a scalable data architecture that meets analytical, machine learning, and operational needs.
  • Architect and optimize data pipelines for batch and real-time data processing, ensuring efficiency and reliability.
  • Implement best practices for distributed data processing, ensuring scalability, performance, and cost-effectiveness of data workflows.
  • Define and enforce data governance policies, implement automated validation checks, and establish monitoring frameworks to maintain data integrity.
  • Ensure data security and compliance with industry regulations by designing appropriate access controls, encryption mechanisms, and auditing processes.
  • Drive innovation in data engineering practices by researching and implementing new technologies, tools, and methodologies.
  • Work closely with data scientists, engineers, analysts, and business stakeholders to understand data requirements and deliver impactful solutions.
  • Develop reusable frameworks, libraries, and automation tools to improve efficiency, reliability, and maintainability of data infrastructure.
  • Guide and mentor data engineers, fostering a high-performing engineering culture through best practices, peer reviews, and knowledge sharing.
  • Establish and monitor SLAs for data pipelines, proactively identifying and mitigating risks to ensure high availability and reliability.

🪄 Skills: AWS, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, GCP, Kafka, Machine Learning, Snowflake, Data engineering, Data modeling, Data management

📍 North America

🔍 Advertising

Requirements: Not stated
Responsibilities: Not stated

🪄 Skills: AWS, Backend Development, GraphQL, SQL, ElasticSearch, ETL, Kafka, Ruby on Rails, Software Architecture, Algorithms, Data engineering, Data Structures, Go, Redis, CI/CD, RESTful APIs, Microservices, Data modeling

Posted 24 days ago
🔥 Staff Data Engineer
Posted about 1 month ago

📍 United States

💸 131,414 - 197,100 USD per year

🔍 Mental healthcare

🏢 Company: Headspace · 👥 11-50 · Wellness, Health Care, Child Care

Requirements:
  • 10+ years of success in enterprise data solutions and high-impact initiatives.
  • Expertise in platforms like Databricks, Snowflake, dbt, and Redshift.
  • Experience designing and optimizing real-time and batch ETL pipelines.
  • Demonstrated leadership and mentorship abilities in engineering.
  • Strong collaboration skills with product and analytics stakeholders.
  • Bachelor’s or advanced degree in Computer Science, Engineering, or a related field.
Responsibilities:
  • Drive the architecture and implementation of PySpark data pipelines.
  • Create and enforce design patterns in code and schema.
  • Design and lead secure and compliant data warehousing platforms.
  • Partner with analytics and product leaders for actionable insights.
  • Mentor team members on dbt architecture and foster a data-first culture.
  • Act as a thought leader on data strategy and cross-functional roadmaps.

🪄 Skills: SQL, Cloud Computing, ETL, Snowflake, Data engineering, Data modeling, Data analytics

🔥 Staff Data Engineer
Posted about 2 months ago

📍 United States

🧭 Full-Time

💸 170,000 - 195,000 USD per year

🔍 Healthcare

🏢 Company: Parachute Health · 👥 101-250 · 💰 $1,000 about 5 years ago · Medical, Health Care, Software

Requirements:
  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, a driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
Responsibilities:
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems.
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.

🪄 Skills: Python, ETL, Airflow, Data engineering, Data visualization

🔥 Staff Data Engineer
Posted 2 months ago

📍 United States

🔍 Cyber security

🏢 Company: BeyondTrust · 👥 1001-5000 · 💰 Private almost 4 years ago · Cloud Computing, Security, Cloud Security, Cyber Security, Software

Requirements:
  • Strong programming and technology knowledge in cloud data processing.
  • Previous experience working with mature data lakes.
  • Strong data modeling skills for analytical workloads.
  • Spark (or equivalent parallel processing framework) experience is needed; existing Databricks knowledge is a plus.
  • Interest and aptitude for cybersecurity; interest in identity security is highly preferred.
  • Technical understanding of underlying systems and computation minutiae.
  • Experience working with distributed systems and data processing on object stores.
  • Ability to work autonomously.
Responsibilities:
  • Optimize data workloads at a software level by improving processing efficiency.
  • Develop new data processing routes to remove redundancy or reduce transformation overhead.
  • Monitor and maintain existing data workflows.
  • Use observability best practices to ensure pipeline performance.
  • Perform complex transformations on both real time and batch data assets.
  • Create new ML/Engineering solutions to tackle existing issues in the cybersecurity space.
  • Leverage CI/CD best practices to effectively develop and release source code.

🪄 Skills: Python, Spark, CI/CD, Data modeling

🔥 Staff Data Engineer
Posted 3 months ago

📍 United States, Canada

🧭 Full-Time

💸 170,000 - 205,000 USD per year

🔍 Healthcare

🏢 Company: Wellth

Requirements:
  • 7+ years in analytics engineering or data analysis in healthcare
  • Hands-on experience with healthcare data sets
  • Proficiency in SQL, Python, and dbt
Responsibilities:
  • Lead design and implementation of data pipelines for healthcare data
  • Create foundational data layers for analytics
  • Ensure data quality and consistency

🪄 Skills: Python, SQL, Apache Airflow, ETL, Git, Data engineering, Data visualization, Data modeling

📍 Paris, New York, San Francisco, Sydney, Madrid, London, Berlin

🔍 Communication technology

Requirements:
  • Passionate about data engineering.
  • Experience in designing and developing data infrastructure.
  • Technical skills to solve complex challenges.
Responsibilities:
  • Play a crucial role in designing, developing, and maintaining data infrastructure.
  • Collaborate with teams across the company to solve complex challenges.
  • Improve operational efficiency and lead business towards strategic goals.
  • Contribute to engineering efforts that enhance customer journey.

🪄 Skills: AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Data engineering

Posted 4 months ago