
Senior Data Engineer

Posted 6 days ago


💎 Seniority level: Senior, 7+ years

๐Ÿ“ Location: Denver, CO

๐Ÿ” Industry: Construction

๐Ÿข Company: EquipmentShare๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ $400,000,000 Debt Financing over 1 year agoConstruction

๐Ÿ—ฃ๏ธ Languages: English

โณ Experience: 7+ years

🪄 Skills: AWS, Python, SQL, Apache Airflow, Kafka, MLflow, Snowflake, Spark, CI/CD

Requirements:
  • 7+ years of relevant data platform development experience building production-grade solutions.
  • Proficient with SQL and a high-level object-oriented language (e.g., Python).
  • Experience with designing and building distributed data architecture.
  • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow.
  • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, and/or Flink.
  • Familiarity with event data streaming at scale.
  • Proven track record learning new technologies and applying that learning quickly.
  • Experience building observability and monitoring into data products.
  • Motivated to identify opportunities for automation to reduce manual toil.
Responsibilities:
  • Collaborate with Product Managers, Designers, Engineers, Data Scientists, and Data Analysts to take ideas from concept to production at scale.
  • Design, build and maintain a data platform to enable automation and self-service for data scientists, machine learning engineers, and analysts.
  • Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
  • Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
  • Develop data monitoring and alerting capabilities.
  • Document architecture, processes, and procedures for knowledge sharing and cross-team collaboration.
  • Mentor peers to help them build their skills.

Related Jobs


๐Ÿ“ USA

💸 152,960 - 183,552 USD per year

๐Ÿ” Data Engineering and Observability Solutions

  • Software development skills in Python, Java, Scala, or Go.
  • High proficiency in SQL.
  • Experience with workflow orchestration systems like Prefect, Dagster, or Airflow.
  • Knowledge of MLOps best practices.
  • Familiarity with dbt or similar data transformation tools.
  • Excellent communication skills for technical topics.

  • Build and maintain production quality data pipelines between operational systems and BigQuery.
  • Implement data quality and freshness checks to ensure data accuracy and consistency.
  • Build and maintain machine learning pipelines for automated model validation and deployment.
  • Create and maintain documentation for data engineering processes and workflows.
  • Maintain observability and monitoring of internal data pipelines.
  • Troubleshoot data pipeline issues to ensure data availability.
  • Contribute to dbt systems ensuring efficiency and availability.

Python, SQL, ETL, GCP, Machine Learning, Data engineering

Posted 4 days ago

๐Ÿ“ United States, United Kingdom, Spain, Estonia

๐Ÿ” Identity verification

๐Ÿข Company: Veriff๐Ÿ‘ฅ 501-1000๐Ÿ’ฐ $100,000,000 Series C almost 3 years ago๐Ÿซ‚ Last layoff over 1 year agoArtificial Intelligence (AI)Fraud DetectionInformation TechnologyCyber SecurityIdentity Management

  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi.
  • Experience with data from diverse sources including RDBMS and APIs.

  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights.
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.

Python, SQL, Apache Airflow, ETL, Data engineering, JSON, Data modeling

Posted 11 days ago

๐Ÿ“ TX, MN, FL

💸 130,000 - 195,000 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: NeueHealth

  • Bachelor's degree in Computer Science, Computer Engineering, Information Systems, or equivalent.
  • Around five years of experience in an enterprise data engineering role in an Azure environment.
  • Healthcare IT background preferred.
  • Experience coding in Scala and building batch and streaming data pipelines.
  • Experience with API design.
  • Extensive experience developing data solutions in Azure Cloud.
  • Experience with event sourcing and/or Big Data architectures.

  • Write traditional code and server-less functions mainly in Scala.
  • Build APIs, data microservices, and ETL pipelines for data sharing and analytics.
  • Develop and optimize processes for large language models and AI enhancements.
  • Support Data Ingestion frameworks deployed in Azure.
  • Participate in cultivating a culture of DevOps and Quality Assurance.
  • Act as tech lead and mentor junior engineers.
  • Continuously document code and team processes.

ETL, C#, Azure, Spark, CI/CD, DevOps, Microservices, Scala

Posted 11 days ago

๐Ÿ“ USA

🧭 Full-Time

💸 165,000 - 210,000 USD per year

๐Ÿ” E-commerce and AI technologies

๐Ÿข Company: Wizard๐Ÿ‘ฅ 11-50Customer ServiceManufacturing

  • 5+ years of professional experience in software development with a focus on data engineering.
  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
  • Proficiency in Python with software engineering best practices.
  • Strong expertise in building ETL pipelines using tools like Apache Spark.
  • Hands-on experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB.
  • Proficiency in real-time stream processing systems such as Kafka or AWS Kinesis.
  • Experience with cloud platforms (AWS, GCP, Azure) and technologies like Delta Lake and Parquet files.

  • Develop and maintain scalable data infrastructure for batch and real-time processing.
  • Build and optimize ETL pipelines for efficient data flow.
  • Collaborate with data scientists and cross-functional teams for accurate monitoring.
  • Design backend data solutions for microservices architecture.
  • Implement and manage integrations with third-party e-commerce platforms.

AWS, Python, DynamoDB, Elasticsearch, ETL, GCP, Git, Hadoop, Kafka, MongoDB, RabbitMQ, Azure, Cassandra, Redis

Posted 12 days ago

๐Ÿ“ United States

🧭 Full-Time

๐Ÿข Company: Avalore, LLC

  • Master's or PhD in statistics, mathematics, computer science, or related field.
  • 8+ years of experience as a Data Engineer within the Intelligence Community (IC).
  • Outstanding communication skills, influencing abilities, and client focus.
  • Professional proficiency in English is required.
  • Current, active Top Secret security clearance.
  • Applicants must be currently authorized to work in the United States on a full-time basis.

  • Develops and documents data pipelines for ingest, transformation, and preparation of data for AI applications.
  • Designs scalable technologies such as streaming and transformation, joining disparate data sets for predictive analytics.
  • Develops API interfaces for accessibility.
  • Leads technical efforts and guides development teams.

Python, SQL, Apache Airflow, Artificial Intelligence, ETL, Machine Learning, API testing, Data engineering

Posted 14 days ago

๐Ÿ“ USA

🧭 Full-Time

💸 190,000 - 220,000 USD per year

๐Ÿ” B2B data / Data as a Service (DaaS)

๐Ÿข Company: People Data Labs๐Ÿ‘ฅ 101-250๐Ÿ’ฐ $45,000,000 Series B about 3 years agoDatabaseArtificial Intelligence (AI)Developer APIsMachine LearningAnalyticsB2BSoftware

  • 5-7+ years industry experience with strategic technical problem-solving.
  • Strong software development fundamentals.
  • Experience with Python.
  • Expertise in Apache Spark (Java, Scala, or Python-based).
  • Proficiency in SQL.
  • Experience building scalable data processing systems.
  • Familiarity with data pipeline orchestration tools (e.g., Airflow, dbt).
  • Knowledge of modern data design and storage patterns.
  • Experience working in Databricks.
  • Familiarity with cloud computing services (e.g., AWS, GCP, Azure).
  • Experience in data warehousing technologies.
  • Understanding of modern data storage formats and tools.

  • Build infrastructure for ingestion, transformation, and loading of data using Spark, SQL, AWS, and Databricks.
  • Create an entity resolution framework for merging billions of entities into clean datasets.
  • Develop CI/CD pipelines and anomaly detection systems to enhance data quality.
  • Provide solutions to undefined data engineering problems.
  • Assist Engineering and Product teams with data-related technical issues.

AWS, Python, SQL, Kafka, Airflow, Data engineering, Pandas, CI/CD

Posted 16 days ago
🔥 Senior Data Engineer
Posted about 1 month ago

๐Ÿ“ United States, United Kingdom, Singapore, Indonesia, Germany, France, Japan, Australia

๐Ÿ” Customer engagement platform

๐Ÿข Company: Braze๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ Grant over 1 year agoCRMAnalyticsMarketingMarketing AutomationSoftware

  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development.
  • Proven expertise in designing and optimizing data pipelines and architectures.
  • Strong proficiency in advanced SQL and data modeling techniques.
  • Experience leading impactful data projects from conception to deployment.
  • Effective collaboration skills with cross-functional teams and stakeholders.
  • In-depth understanding of technical architecture and data flow in a cloud-based environment.
  • Ability to mentor and guide junior team members.
  • Passion for building scalable data solutions.
  • Strong analytical and problem-solving skills with attention to detail.
  • Experience with large event-level data aggregation.
  • Familiarity with data governance principles.

  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt.
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage.
  • Collaborate with data scientists, analysts, and engineers to implement data solutions for customer engagement.
  • Optimize and manage data flows across various platforms and applications.
  • Ensure data quality, consistency, and governance through best practices.
  • Work with large-scale event-level data to support business intelligence and analytics.
  • Implement and maintain data products using advanced techniques.
  • Collaborate with cross-functional teams to deliver valuable data solutions.
  • Evaluate and integrate new data technologies to enhance data infrastructure.

SQL, Business Intelligence, ETL, Snowflake, Data engineering, Collaboration, Compliance

🔥 Senior Data Engineer
Posted about 1 month ago

๐Ÿ“ USA

🧭 Full-Time

💸 140,000 - 160,000 USD per year

๐Ÿ” Health tech

๐Ÿข Company: Carrum Health

  • 10+ years professional experience as a data engineer, including ownership of data products.
  • Proficiency with data engineering technologies including Python, PostgreSQL, AWS Athena, and Docker.
  • History of designing systems focused on data quality and scalability.
  • Experience in the healthcare space or another highly-regulated industry.
  • Strong interpersonal skills and the ability to work collaboratively.

  • Develop and maintain creative solutions to evolving challenges that span data ingestion, processing, and modeling.
  • Collaborate closely with internal and external stakeholders to understand data needs.
  • Grow and maintain pipelines that power analytics, machine learning, and data products.
  • Participate early in the development of data foundation and infrastructure, implementing quality control and reporting systems.
  • Understand HIPAA compliance and support Carrum's success in this area.

AWS, Docker, PostgreSQL, Python, Bash, ETL, Machine Learning, Ruby, Tableau, Data engineering, Data science, DevOps, Compliance

🔥 Senior Data Engineer
Posted about 1 month ago

๐Ÿ“ United States, Canada

๐Ÿ” Advanced analytics consulting

๐Ÿข Company: Tiger Analytics๐Ÿ‘ฅ 1001-5000AdvertisingConsultingBig DataNewsMachine LearningAnalytics

  • Bachelor's degree in Computer Science or a similar field.
  • 8+ years of experience in a Data Engineer role.
  • Experience with relational SQL and NoSQL databases such as MySQL and Postgres.
  • Strong analytical skills and advanced SQL knowledge.
  • Development of ETL pipelines using Python & SQL.
  • Good experience with Customer Data Platforms (CDP).
  • Experience in SQL optimization and performance tuning.
  • Data modeling and building high-volume ETL pipelines.
  • Working experience with any cloud platform.
  • Experience with Google Tag Manager and Power BI is a plus.
  • Experience with object-oriented scripting languages: Python, Java, Scala, etc.
  • Experience extracting/querying/joining large data sets at scale.
  • Strong communication and organizational skills.

  • Designing, building, and maintaining scalable data pipelines on cloud infrastructure.
  • Working closely with cross-functional teams.
  • Supporting data analytics, machine learning, and business intelligence initiatives.

Python, SQL, Business Intelligence, ETL, Java, MySQL, Postgres, NoSQL, Analytical Skills, Organizational skills, Data modeling


๐Ÿ“ ANY STATE

๐Ÿ” Data and technology

  • 5+ years of experience making contributions in the form of code.
  • Experience with algorithms and data structures and knowing when to apply them.
  • Experience with machine learning techniques to develop better predictive and clustering models.
  • Experience working with high-scale systems.
  • Experience creating powerful machine learning tools for experimentation and productionization at scale.
  • Experience in data engineering and warehousing to develop ingestion engines, ETL pipelines, and organizing data for consumption.

  • Be a senior member of the team by contributing to the architecture, design, and implementation of EMS systems.
  • Mentor junior engineers and promote their growth.
  • Lead technical projects and manage planning, execution, and success of complex technical projects.
  • Collaborate with other engineering, product, and data science teams to ensure optimal product development.

Python, SQL, ETL, GCP, Kubeflow, Machine Learning, Algorithms, Data engineering, Data science, Data Structures, TensorFlow, Collaboration, Scala

Posted about 1 month ago