
Senior Data Engineer

Posted 4 months ago

πŸ’Ž Seniority level: Senior, 5+ years

πŸ“ Location: United States

πŸ” Industry: Advertising software for Connected TV

🏒 Company: MNTN πŸ‘₯ 251-500 πŸ’° $2,000,000 Seed about 2 years ago | Advertising, Real Time, Marketing, Software

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years

πŸͺ„ Skills: AWS, Python, SQL, Cloud Computing, ETL, GCP, Git, Airflow, Algorithms, Azure, Data engineering, Go, Spark, Communication Skills, CI/CD, Linux, Data modeling

Requirements:
  • 5+ years of experience in data engineering, analysis, and modeling of complex data.
  • Experience with distributed processing engines such as Spark.
  • Strong experience with programming languages like Python and familiarity with algorithms.
  • Experience in SQL, data modeling, and manipulating large data sets.
  • Hands-on experience with data warehousing and building data pipelines.
  • Familiarity with software processes and tools such as Git, CI/CD, Linux, and Airflow.
  • Experience with cloud computing environments like AWS, Azure, or GCP.
  • Strong written and verbal communication skills for conveying technical topics.
Responsibilities:
  • Become the expert on MNTN data pipelines, infrastructure, and processes.
  • Design architecture with observability to maintain high quality data pipelines.
  • Create and manage ETL/ELT workflows for transforming large data sets (a minimal sketch follows this list).
  • Organize data and metrics for ad buying features and client performance.
  • Organize visualizations, reporting, and alerting for performance and trends.
  • Investigate critical incidents and ensure issues are resolved.
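
A rough, hypothetical sketch of the kind of ETL/ELT workflow described above, written as a minimal Airflow DAG in Python (the DAG id, task logic, and data are placeholders, not MNTN's actual pipelines):

# Minimal illustrative Airflow DAG; all names, logic, and data are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw impression events from an upstream source (placeholder data).
    return [{"campaign_id": 1, "impressions": 1000}, {"campaign_id": 2, "impressions": 250}]


def transform(ti, **context):
    # Aggregate the extracted rows into a single daily total (placeholder logic).
    rows = ti.xcom_pull(task_ids="extract")
    return sum(row["impressions"] for row in rows)


def load(ti, **context):
    # A real pipeline would write to the warehouse; this sketch only logs the result.
    total = ti.xcom_pull(task_ids="transform")
    print(f"Would load daily total of {total} impressions")


with DAG(
    dag_id="campaign_metrics_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # schedule= requires Airflow 2.4+; older releases use schedule_interval=
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task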

Related Jobs


πŸ“ United States, Canada

🧭 Full-Time

πŸ” B2B SaaS

🏒 Company: Sanity

  • 4+ years of experience building data pipelines at scale
  • Deep expertise in SQL, Python, and Node.js/TypeScript
  • Production experience with Airflow and RudderStack
  • Track record of building reliable data infrastructure
  • Design, develop, and maintain scalable ETL/ELT pipelines
  • Collaborate to implement and scale product telemetry
  • Establish best practices for data ingestion and transformation
  • Monitor and optimize data pipeline performance

Node.js, Python, SQL, Apache Airflow, ETL, TypeScript

Posted 1 day ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” E-commerce

  • Bachelor's or Master's degree in Computer Science or related field
  • 5+ years of experience in data engineering
  • Strong proficiency in SQL and database technologies
  • Experience with data pipeline orchestration tools
  • Proficiency in programming languages like Python and Scala
  • Hands-on experience with AWS cloud data services
  • Familiarity with big data frameworks like Apache Spark
  • Knowledge of data modeling and warehousing
  • Experience implementing CI/CD for data pipelines
  • Real-time data processing architectures experience
  • Design, develop, and maintain ETL/ELT pipelines
  • Optimize data architecture and storage solutions
  • Work with AWS for scalable data solutions
  • Ensure data quality, integrity, and security
  • Collaborate with cross-functional teams
  • Monitor and troubleshoot data workflows
  • Create APIs for analytical information

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kafka, MySQL, Snowflake, CI/CD, Scala

Posted 1 day ago

πŸ“ United States

🧭 Full-Time

πŸ” Health and Wellness Solutions

🏒 Company: Panasonic Well

  • 5+ years technology industry experience
  • Proficiency in building data pipelines in Python and/or Kotlin
  • Deep understanding of relational and non-relational database solutions
  • Experience with large-scale data pipeline construction
  • Familiarity with PCI, CCPA, GDPR compliance
  • Design, develop, and optimize automated data pipelines
  • Identify improvements for data reliability and quality
  • Own and evolve data architecture with a focus on privacy
  • Drive continuous improvement in data workflows
  • Collaborate with Data Scientists, AI Engineers, and Product Managers

Python, ETL, Kafka, Kotlin, Snowflake, Data engineering, Compliance, Data modeling

Posted 5 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: BioRender πŸ‘₯ 101-250 πŸ’° $15,319,133 Series A almost 2 years ago | Life Science, Graphic Design, Software

  • 7+ years of relevant industry experience in data engineering
  • Expertise working with Data Warehousing platforms (AWS RedShift or Snowflake preferred) and data lake / lakehouse architectures
  • Experience with Data Streaming platforms (AWS Kinesis / Firehose preferred)
  • Expertise with SQL and programming languages commonly used in data platforms (Python, Spark, etc.)
NOT STATED

AWS, Python, SQL, Apache Airflow, Snowflake, Data engineering, Spark, Data modeling

Posted 6 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 175,000 - 205,000 USD per year

πŸ” Software Development

🏒 Company: CoreWeave πŸ’° $642,000,000 Secondary Market about 1 year ago | Cloud Computing, Machine Learning, Information Technology, Cloud Infrastructure

  • Hands-on experience applying Kimball Dimensional Data Modeling principles to large datasets.
  • Expertise in working with analytical table/file formats, including Iceberg, Parquet, Avro, and ORC.
  • Proven experience optimizing MPP databases (StarRocks, Snowflake, BigQuery, Redshift).
  • 5+ years of programming experience in Python or Scala.
  • Advanced SQL skills, with a strong ability to write, optimize, and debug complex queries.
  • Hands-on experience with Airflow for batch orchestration and with distributed computing frameworks like Spark or Flink.
  • Develop and maintain data models, including star and snowflake schemas, to support analytical needs across the organization (a minimal sketch follows this list).
  • Establish and enforce best practices for dimensional modeling in our Lakehouse.
  • Engineer and optimize data storage using analytical table/file formats (e.g., Iceberg, Parquet, Avro, ORC).
  • Partner with BI, analytics, and data science teams to design datasets that accurately reflect business metrics.
  • Tune and optimize data in MPP databases such as StarRocks, Snowflake, BigQuery, or Redshift.
  • Collaborate on data workflows using Airflow, building and managing pipelines that power our analytical infrastructure.
  • Ensure efficient processing of large datasets through distributed computing frameworks like Spark or Flink.
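
A loose illustration of the dimensional-modeling and columnar-format work listed above, as a minimal PySpark sketch (the tables, columns, and output path are hypothetical, not CoreWeave's actual schema):

# Illustrative PySpark sketch: a small star-schema fact table written as
# date-partitioned Parquet. All names and data are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dimensional_model_sketch").getOrCreate()

# Hypothetical raw usage events.
events = spark.createDataFrame(
    [("2024-01-01", "cust-1", 4.2), ("2024-01-01", "cust-2", 1.5)],
    ["event_date", "customer_id", "gpu_hours"],
)

# Hypothetical customer dimension carrying a surrogate key.
dim_customer = spark.createDataFrame(
    [(1, "cust-1", "enterprise"), (2, "cust-2", "startup")],
    ["customer_key", "customer_id", "segment"],
)

# Fact table: one row per customer per day, joined to the dimension's surrogate key.
fact_usage = (
    events.groupBy("event_date", "customer_id")
    .agg(F.sum("gpu_hours").alias("total_gpu_hours"))
    .join(dim_customer.select("customer_key", "customer_id"), on="customer_id")
    .drop("customer_id")
)

# Persist as Parquet partitioned by date; an Iceberg or ORC writer would be analogous.
fact_usage.write.mode("overwrite").partitionBy("event_date").parquet("/tmp/fact_usage")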

AWS, Docker, Python, SQL, Cloud Computing, ETL, Kubernetes, Snowflake, Airflow, Algorithms, Apache Kafka, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, RESTful APIs, DevOps, Terraform, Problem-solving skills, JSON, Scala, Data visualization, Ansible, Data modeling, Data analytics, Debugging

Posted 11 days ago

πŸ“ OR, WA, CA, CO, TX, IL

🧭 Contract

πŸ’Έ 65 - 75 USD per hour

πŸ” Music industry

🏒 Company: Discogs πŸ‘₯ 51-100 πŸ’° $2,500,000 about 7 years ago | Database, Communities, Music

  • Proficiency in data integration and ETL processes.
  • Knowledge of programming languages such as Python, Java, or JavaScript.
  • Familiarity with cloud platforms and services (e.g., AWS, GCP, Azure).
  • Understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills to work effectively with cross-functional teams.
  • Experience with marketing automation platforms.
  • Experience with data warehouses in a marketing context.
  • Knowledge of API integration and data exchange formats such as JSON, XML, and CSV.
  • Design, develop, and maintain data pipelines to ingest, process, and store data.
  • Implement data validation and quality checks to maintain the integrity of incoming data.
  • Optimize and automate data workflows to improve efficiency and reduce manual intervention.
  • Work closely with the product, engineering, marketing and analytics teams to support data-driven decision-making.
  • Develop and maintain documentation related to data processes, workflows, and system architecture.
  • Troubleshoot and resolve data-related issues promptly to minimize disruptions.
  • Monitor and enhance the performance of data infrastructure, ensuring scalability and reliability.
  • Stay updated with industry trends and best practices in data engineering to apply improvements.

AWS, Python, Apache Airflow, ETL, GCP, MySQL, Snowflake, Apache Kafka, Azure, JSON

Posted 16 days ago

πŸ“ United States

πŸ’Έ 104,981 - 157,476 USD per year

πŸ” Mental healthcare

🏒 Company: Headspace πŸ‘₯ 11-50 | Wellness, Health Care, Child Care

  • 7+ years of proven success designing and implementing large-scale enterprise data systems.
  • Deep experience with industry-leading tools such as Databricks, Snowflake, and Redshift.
  • Demonstrated expertise in architectural patterns for building high-volume real-time and batch ETL pipelines.
  • Proven ability to partner effectively with product teams to drive alignment and deliver solutions.
  • Exceptional oral and written communication abilities.
  • Experience in coaching and mentoring team members.
  • Architect and implement robust data pipelines to ingest, aggregate, and index diverse data sources into the organization’s data lake.
  • Lead the creation of a secure, compliant, and privacy-focused data warehousing solution tailored to healthcare industry requirements.
  • Partner with the data analytics team to deliver a data platform that supports accurate reporting on business metrics.
  • Collaborate with data science and machine learning teams to build tools for rapid experimentation and innovation.
  • Mentor and coach data engineers while promoting a culture valuing data as a strategic asset.

AWS, ETL, Snowflake, Data engineering, Data modeling

Posted 20 days ago

πŸ“ United States, Canada

🧭 Regular

πŸ’Έ 125,000 - 160,000 USD per year

πŸ” Digital driver assistance services

🏒 Company: Agero πŸ‘₯ 1001-5000 πŸ’° $4,750,000 over 2 years ago | Automotive, InsurTech, Information Technology, Insurance

  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

Posted 23 days ago

πŸ“ United States of America

🧭 Full-Time

πŸ’Έ 110,000 - 160,000 USD per year

πŸ” Insurance industry

🏒 Company: Verikai

  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficient in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, DynamoDB, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets.
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.

AWS, Python, SQL, DynamoDB, ETL, Spark

Posted 29 days ago
πŸ”₯ Senior Data Engineer

πŸ“ United States

πŸ’Έ 229,500 - 280,500 USD per year

πŸ” Event analytics

🏒 Company: Mixpanel πŸ‘₯ 251-500 πŸ’° $200,000,000 Series C over 3 years ago | Web Apps, SaaS, Analytics, Mobile Apps

  • A strong background in both data and software engineering, with at least 5 years of professional experience.
  • Proficiency with at least one programming language (Python, Java, etc.) and SQL.
  • Excellent debugging and technical investigation skills.
  • Excellent technical communication skills, ideally in a remote environment.
  • Familiarity with reverse ETL (rETL) tools (e.g., Hightouch, Census, RudderStack).
  • Experience with modern data storage technologies (e.g., BigQuery SQL, Airflow, DBT or similar).
  • Build and maintain software and data pipelines across backend and data orchestration systems.
  • Design and build data architecture spanning a wide range of complex data.
  • Create and maintain foundational datasets to support analytics, modeling, and product/business needs.
  • Collaborate with, teach, and learn from engineers across the organization.
  • Work with Finance and Data Science to ensure relevance and understanding.
  • Participate in team on-call rotation to maintain system health.
  • Build testing and alerting features for data hygiene.
  • Write internal technical documentation for systems designed and maintained.

Python, SQL, Apache Airflow, ETL, Data engineering, Debugging

Posted about 1 month ago