
Senior Data Engineer

Posted about 5 hours ago


💎 Seniority level: Senior, 5+ years

📍 Location: United States

💸 Salary: 144,000 - 180,000 USD per year

🔍 Industry: Software Development

🏢 Company: Hungryroot 👥 101-250 💰 $40,000,000 Series C almost 4 years ago · Artificial Intelligence (AI), Food and Beverage, E-Commerce, Retail, Consumer Goods, Software

⏳ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Requirements:
  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
Responsibilities:
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline

Related Jobs

🔥 Lead/Senior Data Engineer
Posted about 4 hours ago

📍 United States, Latin America, India

🔍 Software Development

🏢 Company: phData 👥 501-1000 💰 $2,499,997 Seed about 7 years ago · Information Services, Analytics, Information Technology

  • 4+ years as a hands-on Data Engineer and/or Software Engineer
  • Experience with software development life cycle, including unit and integration testing
  • Programming expertise in Java, Python and/or Scala
  • Experience with core cloud data platforms including Snowflake, AWS, Azure, Databricks and GCP
  • Experience using SQL and the ability to write, debug, and optimize SQL queries
  • Client-facing written and verbal communication skills
  • Design and implement data solutions
  • Help ensure performance, security, scalability, and robust data integration
  • Develop end-to-end technical solutions into production
  • Multitask, prioritize, and work across multiple projects at once
  • Create and deliver detailed presentations
  • Produce detailed solution documentation (e.g. POCs, roadmaps, sequence diagrams, class hierarchies, logical system views)

AWS, Python, Software Development, SQL, Cloud Computing, Data Analysis, ETL, GCP, Java, Kafka, Snowflake, Azure, Data engineering, Spark, Communication Skills, CI/CD, Problem Solving, Agile methodologies, RESTful APIs, Documentation, Scala, Data modeling


📍 United States

💸 135,000 - 155,000 USD per year

🔍 Software Development

🏢 Company: Jobgether 👥 11-50 💰 $1,493,585 Seed about 2 years ago · Internet

  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration and frameworks like Informatica and Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services.
  • Integrate third-party data sources and APIs into the data ecosystem.

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted about 7 hours ago

📍 United States

🔍 Software Development

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Apache Kafka, Data engineering, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago

📍 United States, Canada

🧭 Full-Time

💸 105,825 - 136,950 CAD per year

🔍 Data Engineering

🏢 Company: Samsara 👥 1001-5000 💰 Secondary Market over 4 years ago 🫂 Last layoff almost 5 years ago · Cloud Data Services, Business Intelligence, Internet of Things, SaaS, Software

  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline
  • 6+ years of experience in a data engineering and data science-focused role
  • Proficiency in data manipulation and processing in SQL and Python
  • Expertise in building data pipelines against new API endpoints, working from their documentation
  • Proficiency in building ETL pipelines to handle large volumes of data
  • Demonstrated experience in designing data models at scale
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data
  • Access, manipulate, and integrate external datasets with internal data
  • Build analytical and statistical models to identify patterns, anomalies, and root causes
  • Leverage SQL and Python to shape and aggregate data
  • Incorporate generative AI tools (ChatGPT Enterprise) into production data pipelines and automated workflows
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top quality analytic products
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices

Python, SQL, ETL, Tableau, API testing, Data engineering, Data science, Spark, Communication Skills, Analytical Skills, Data visualization, Data modeling

Posted 6 days ago

📍 United States, Canada

🔍 Software Development

🏢 Company: Overstory 👥 1-10 · E-Commerce

  • Approximately 5 years of experience in Data Engineering, including at least one role in a startup environment
  • Product-minded and able to demonstrate significant impact you have had on a business through the application of technology
  • Proven data engineering experience across the following (or similar) technologies: Python, data orchestration platforms (Airflow, Luigi, Dagster, etc.), data quality frameworks, data lakes/warehouses
  • Ability to design and implement scalable and resilient data systems
  • Excellent communication skills and ability to collaborate effectively in a cross-functional team environment
  • Passion for learning and staying updated with evolving technologies and industry trends
  • Owning day-to-day operational responsibility for delivering our analysis to customers
  • Developing data-driven solutions to customer problems that our products aren’t solving for yet
  • Building new and improving existing technologies such as:
  • Automation of the analysis for all customers, leading to faster implementation of Overstory’s recommendations
  • Metrics that identify time bottlenecks in the current analysis flow, helping all Overstory teams find areas of improvement
  • Visualization of status and progress of the analysis for internal use
  • Working on performance and scalability of our pipelines, ensuring that our tech can handle our growth

Python, SQL, Cloud Computing, GCP, Amazon Web Services, Data engineering, Communication Skills, Analytical Skills, RESTful APIs, Data visualization, Data modeling

Posted 10 days ago
🔥 Senior Data Engineer
Posted 12 days ago

📍 United States

🧭 Full-Time

🔍 Software Development

🏢 Company: Mangomint 👥 51-100 💰 $35,000,000 Series B 6 months ago · Management Information Systems, Beauty, Software

  • 3+ years of experience in data engineering or a related role
  • Proficiency in SQL and Python for data pipelines and automation
  • Experience with dbt (or similar data modeling tools)
  • Familiarity with Snowflake (or other cloud data warehouses)
  • Knowledge of APIs and experience integrating various data sources
  • Experience with CRM and business systems (Salesforce, Outreach, Stripe, etc.)
  • Strong problem-solving skills and ability to take ownership of projects
  • Ability to work independently in a small, fast-paced startup environment
  • Effective communication skills to translate business needs into technical solutions
  • Design, develop, and maintain ETL/ELT data pipelines using Snowflake, dbt, Prefect, and other modern data tools
  • Automate data workflows to improve efficiency and reliability
  • Integrate CRM and other business systems to support cross-functional needs
  • Develop data enrichment pipelines to power our sales process
  • Build internal data tools to drive data-driven decision making
  • Work directly with stakeholders to define requirements and implement data solutions that support business objectives
  • Ensure data integrity, governance, and security best practices are upheld
  • Support analytics and reporting efforts by building dashboards and data models in dbt and Sigma

AWS, Python, SQL, ETL, Snowflake, CRM, Data modeling

🔥 Senior Data Engineer
Posted 12 days ago

📍 United States

🧭 Full-Time

💸 160,000 - 182,000 USD per year

🔍 Adtech

🏢 Company: tvScientific 👥 11-50 💰 $9,400,000 Convertible Note about 1 year ago · Internet, Advertising

  • 7+ years of experience in data engineering.
  • Proven experience building data infrastructure using Spark with Scala.
  • Familiarity with data lakes, cloud warehouses, and storage formats.
  • Strong proficiency in AWS services.
  • Expertise in SQL for data manipulation and extraction.
  • Design and implement robust data infrastructure using Spark with Scala.
  • Collaborate with our cross-functional teams to design data solutions that meet business needs.
  • Build out our core data pipelines, store data in optimal engines and formats, and feed our machine learning models.
  • Leverage and optimize AWS resources.
  • Collaborate closely with the Data Science team.

AWS, SQL, Cloud Computing, ETL, Machine Learning, Data engineering, Spark, Scala, Data modeling

🔥 Senior Data Engineer
Posted 12 days ago

📍 United States, Canada

🧭 Full-Time

💸 110,000 - 130,000 USD per year

🔍 Software Development

🏢 Company: Ceros 👥 101-250 💰 $100,000,000 Private over 4 years ago · Advertising, Content Creators, Content Marketing, Graphic Design, Software

  • 5+ years of experience in data engineering, focusing on AWS Redshift and ETL pipeline development.
  • Strong expertise in SQL performance tuning, schema management, and query optimization.
  • Experience designing and maintaining ETL pipelines using AWS Glue, Matillion, or similar tools.
  • Proficiency in JavaScript/TypeScript, with experience building custom ETL workflows and integrations.
  • Hands-on experience with Python for data automation and scripting.
  • Strong understanding of data warehousing best practices, ensuring high-quality, scalable data models.
  • Experience with data monitoring and alerting tools such as AWS CloudWatch and New Relic.
  • Ability to work independently in a fast-paced environment, collaborating across teams to support data-driven initiatives.
  • Own and lead the management of AWS Redshift, ensuring optimal performance, disk usage, and cost efficiency.
  • Design and maintain scalable ETL pipelines using AWS Glue, Lambda, and Matillion to integrate data from Mixpanel, CRM platforms, and customer engagement tools.
  • Optimize SQL-based data transformations and Redshift queries to improve performance and reliability.
  • Automate data offloading and partition management, leveraging AWS services like S3 and external schemas.
  • Ensure version control and documentation of all Redshift queries, ETL processes, and AWS configurations through a centralized GitHub repository.
  • Develop monitoring and alerting for data pipelines using CloudWatch and other observability tools to ensure high availability and early issue detection.
  • Implement and maintain data quality checks and governance processes to ensure accuracy and consistency across foundational tables.
  • Collaborate with AI engineers and business stakeholders to enhance data accessibility and reporting for internal teams.
  • Maintain and optimize BI dashboards in Metabase and HubSpot, ensuring accuracy and efficiency of business reporting.
  • Manage key integrations between Redshift and external platforms, including Mixpanel, HubSpot, and Census, optimizing data accessibility and performance.
  • Administer AWS infrastructure supporting Redshift, ensuring efficient resource utilization, IAM security, and cost management.
  • Automate repetitive data tasks using Python and scripting to enhance data processes and improve team efficiency.

AWS, Python, SQL, ETL, Git, JavaScript, TypeScript, Amazon Web Services, API testing, Data engineering, REST API, CI/CD, Ansible, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted 14 days ago

📍 United States, Canada

🧭 Full-Time

💸 113,000 - 130,000 USD per year

🔍 Software Development

🏢 Company: Later 👥 1-10 · Consumer Electronics, iOS, Apps, Software

  • Minimum of 5 years in data engineering or related fields
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Strong focus on building data infrastructure and pipelines
  • Design and build a robust data warehouse architecture
  • Design, build, and maintain scalable data pipelines
  • Develop reliable transformation layers and data pipelines
  • Establish optimized data architectures using cloud technologies
  • Enforce data quality checks and governance practices
  • Collaborate with cross-functional teams to deliver insights
  • Analyze and optimize data pipelines for performance

SQL, Data engineering

🔥 Senior Data Engineer
Posted 15 days ago

📍 United States

🧭 Full-Time

💸 150,000 - 190,000 USD per year

🔍 Game Development

🏢 Company: Second Dinner

  • Demonstrated experience in large-scale distributed data systems such as Spark and Flink
  • Deep expertise in analytical database technologies, including SQL and NoSQL
  • Experience with database technologies and ETL/ELT processes
  • Experience with orchestration and automation tools like Airflow and Beam
  • Experience with Databricks and AWS-based data/analytics solutions
  • Develop and operate data infrastructure and pipelines for analytics and reporting
  • Empower Marketing team with high-quality data for user acquisition and retention
  • Collaborate with teams to gain insights from analytics

AWS, Python, SQL, Apache Airflow, ETL, Data engineering, NoSQL, Spark
