Senior Data Engineer

Posted 2 days ago

πŸ’Ž Seniority level: Senior, 5+ years

πŸ“ Location: United States, Canada

πŸ” Industry: E-commerce

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years

πŸͺ„ Skills: AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kafka, MySQL, Snowflake, CI/CD, Scala

Requirements:
  • Bachelor's or Master's degree in Computer Science or related field
  • 5+ years of experience in data engineering
  • Strong proficiency in SQL and database technologies
  • Experience with data pipeline orchestration tools
  • Proficiency in programming languages like Python and Scala
  • Hands-on experience with AWS cloud data services
  • Familiarity with big data frameworks like Apache Spark
  • Knowledge of data modeling and warehousing
  • Experience implementing CI/CD for data pipelines
  • Experience with real-time data processing architectures
Responsibilities:
  • Design, develop, and maintain ETL/ELT pipelines (see the sketch after this list)
  • Optimize data architecture and storage solutions
  • Work with AWS for scalable data solutions
  • Ensure data quality, integrity, and security
  • Collaborate with cross-functional teams
  • Monitor and troubleshoot data workflows
  • Create APIs that expose analytical information
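
For illustration only: a minimal sketch of the kind of ETL/ELT orchestration this listing describes, using Python and Apache Airflow (TaskFlow API, Airflow 2.x) from the skills list. The DAG, table, and field names are hypothetical and not part of the posting.

    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def orders_etl():
        @task
        def extract() -> list[dict]:
            # Stand-in for reading from an operational store such as MySQL or PostgreSQL.
            return [{"order_id": 1, "amount_usd": 42.0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Basic data-quality gate: drop rows that fail validation.
            return [r for r in rows if r["amount_usd"] >= 0]

        @task
        def load(rows: list[dict]) -> None:
            # Stand-in for a warehouse write (e.g. Snowflake) via an Airflow hook.
            print(f"loaded {len(rows)} rows")

        load(transform(extract()))

    orders_etl()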

Related Jobs

πŸ”₯ Senior Data Engineer

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: Float.com

Requirements:
  • Expertise in machine learning and advanced algorithms
  • Proficient in Python or Java, comfortable with SQL and JavaScript/TypeScript
  • Experience with large-scale data pipelines and stream processing
  • Skilled in data integration, cleaning, and validation
  • Familiar with vector and graph databases
Responsibilities:
  • Lead technical viability discussions
  • Develop and test proof-of-concepts for the Resource Recommendation Engine
  • Conduct comprehensive analysis of existing data
  • Implement and maintain the data streaming pipeline
  • Develop and implement advanced algorithms
  • Design, implement, and maintain streaming data architecture
  • Establish best practices for optimization and AI
  • Mentor and train team members

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted about 6 hours ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” B2B SaaS

🏒 Company: Sanity

Requirements:
  • 4+ years of experience building data pipelines at scale
  • Deep expertise in SQL, Python, and Node.js/TypeScript
  • Production experience with Airflow and RudderStack
  • Track record of building reliable data infrastructure
Responsibilities:
  • Design, develop, and maintain scalable ETL/ELT pipelines
  • Collaborate to implement and scale product telemetry
  • Establish best practices for data ingestion and transformation
  • Monitor and optimize data pipeline performance

Node.js, Python, SQL, Apache Airflow, ETL, TypeScript

Posted 2 days ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: Docker | πŸ‘₯ 251-500 | πŸ’° $105,000,000 Series C almost 3 years ago | Developer Tools, Developer Platform, Information Technology, Software

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 3 days ago

πŸ“ United States

🧭 Full-Time

πŸ” Health and Wellness Solutions

🏒 Company: Panasonic Well

Requirements:
  • 5+ years of technology industry experience
  • Proficiency in building data pipelines in Python and/or Kotlin
  • Deep understanding of relational and non-relational database solutions
  • Experience with large-scale data pipeline construction
  • Familiarity with PCI, CCPA, and GDPR compliance
Responsibilities:
  • Design, develop, and optimize automated data pipelines
  • Identify improvements for data reliability and quality
  • Own and evolve data architecture with a focus on privacy
  • Drive continuous improvement in data workflows
  • Collaborate with Data Scientists, AI Engineers, and Product Managers

Python, ETL, Kafka, Kotlin, Snowflake, Data engineering, Compliance, Data modeling

Posted 6 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: BioRender | πŸ‘₯ 101-250 | πŸ’° $15,319,133 Series A almost 2 years ago | Life Science, Graphic Design, Software

Requirements:
  • 7+ years of relevant data engineering industry experience
  • Expertise working with Data Warehousing platforms (AWS Redshift or Snowflake preferred) and data lake / lakehouse architectures
  • Experience with Data Streaming platforms (AWS Kinesis / Firehose preferred)
  • Expertise with SQL and programming languages commonly used in data platforms (Python, Spark, etc.)
  • Experience with data pipeline orchestration (e.g., Airflow) and data pipeline integrations (e.g., Airbyte, Stitch)
Responsibilities:
  • Build and maintain the right architecture and tooling to support our data science, analytics, product, and machine learning initiatives
  • Solve complex architectural problems
  • Translate deeply technical designs into business-appropriate representations, and analyze business needs and requirements to ensure the implementation of data services directly supports the strategy and growth of the business

AWS, Python, SQL, Apache Airflow, Snowflake, Data engineering, Spark, Data modeling

Posted 6 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 175,000 - 205,000 USD per year

πŸ” Software Development

🏒 Company: CoreWeave | πŸ’° $642,000,000 Secondary Market about 1 year ago | Cloud Computing, Machine Learning, Information Technology, Cloud Infrastructure

Requirements:
  • Hands-on experience applying Kimball Dimensional Data Modeling principles to large datasets.
  • Expertise in working with analytical table/file formats, including Iceberg, Parquet, Avro, and ORC.
  • Proven experience optimizing MPP databases (StarRocks, Snowflake, BigQuery, Redshift).
  • 5+ years of programming experience in Python or Scala.
  • Advanced SQL skills, with a strong ability to write, optimize, and debug complex queries.
  • Hands-on experience with Airflow for batch orchestration and with distributed computing frameworks like Spark or Flink.
Responsibilities:
  • Develop and maintain data models, including star and snowflake schemas, to support analytical needs across the organization (see the sketch after this listing).
  • Establish and enforce best practices for dimensional modeling in our Lakehouse.
  • Engineer and optimize data storage using analytical table/file formats (e.g., Iceberg, Parquet, Avro, ORC).
  • Partner with BI, analytics, and data science teams to design datasets that accurately reflect business metrics.
  • Tune and optimize data in MPP databases such as StarRocks, Snowflake, BigQuery, or Redshift.
  • Collaborate on data workflows using Airflow, building and managing pipelines that power our analytical infrastructure.
  • Ensure efficient processing of large datasets through distributed computing frameworks like Spark or Flink.
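
As a hedged illustration of the star-schema and columnar-format work named above, a minimal PySpark sketch (Spark appears in this listing's stack): a fact table joined to a dimension on a surrogate key, aggregated, and written as partitioned Parquet. All table, column, and path names are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

    # Hypothetical staged fact rows and a conformed customer dimension.
    fact_orders = spark.createDataFrame(
        [(1, 101, "2024-01-01", 42.0)],
        ["order_id", "customer_key", "order_date", "amount_usd"],
    )
    dim_customer = spark.createDataFrame(
        [(101, "Acme", "US")],
        ["customer_key", "customer_name", "country"],
    )

    # Kimball-style query shape: facts join dimensions on surrogate keys.
    daily_revenue = (
        fact_orders.join(dim_customer, "customer_key")
        .groupBy("order_date", "country")
        .agg(F.sum("amount_usd").alias("revenue_usd"))
    )

    # Columnar, partitioned storage; an Iceberg or ORC writer follows the same pattern.
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/tmp/daily_revenue")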

AWS, Docker, Python, SQL, Cloud Computing, ETL, Kubernetes, Snowflake, Airflow, Algorithms, Apache Kafka, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, RESTful APIs, DevOps, Terraform, Problem-solving skills, JSON, Scala, Data visualization, Ansible, Data modeling, Data analytics, Debugging

Posted 11 days ago

πŸ“ OR, WA, CA, CO, TX, IL

🧭 Contract

πŸ’Έ 65 - 75 USD per hour

πŸ” Music industry

🏒 Company: Discogs | πŸ‘₯ 51-100 | πŸ’° $2,500,000 about 7 years ago | Database, Communities, Music

Requirements:
  • Proficiency in data integration and ETL processes.
  • Knowledge of programming languages such as Python, Java, or JavaScript.
  • Familiarity with cloud platforms and services (e.g., AWS, GCP, Azure).
  • Understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills to work effectively with cross-functional teams.
  • Experience with marketing automation platforms.
  • Experience with data warehouses in a marketing context.
  • Knowledge of API integration and data exchange formats such as JSON, XML, and CSV.
Responsibilities:
  • Design, develop, and maintain data pipelines to ingest, process, and store data.
  • Implement data validation and quality checks to maintain the integrity of incoming data.
  • Optimize and automate data workflows to improve efficiency and reduce manual intervention.
  • Work closely with the product, engineering, marketing, and analytics teams to support data-driven decision-making.
  • Develop and maintain documentation related to data processes, workflows, and system architecture.
  • Troubleshoot and resolve data-related issues promptly to minimize disruptions.
  • Monitor and enhance the performance of data infrastructure, ensuring scalability and reliability.
  • Stay updated with industry trends and best practices in data engineering to apply improvements.

AWS, Python, Apache Airflow, ETL, GCP, MySQL, Snowflake, Apache Kafka, Azure, JSON

Posted 16 days ago

πŸ“ US, Canada

🧭 Full-Time

πŸ’Έ 95,795 - 128,800 USD per year

πŸ” Internet of Things (IoT)

Requirements:
  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline.
  • 6+ years of experience in a data engineering and data science-focused role.
  • Proficiency in data manipulation and processing in SQL and Python.
  • Expertise in building data pipelines against new API endpoints, working from their documentation.
  • Proficiency in building ETL pipelines to handle large volumes of data.
  • Demonstrated experience in designing data models at scale.
Responsibilities:
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data.
  • Access, manipulate, and integrate external datasets with internal data.
  • Build analytical and statistical models to identify patterns, anomalies, and root causes.
  • Leverage SQL and Python to shape and aggregate data.
  • Incorporate generative AI tools into production data pipelines and automated workflows.
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top-quality analytic products.
  • Champion, role model, and embed Samsara’s cultural principles.

Python, SQL, ETL, Data engineering, Data science, Data modeling

Posted 20 days ago

πŸ“ United States

πŸ’Έ 104,981 - 157,476 USD per year

πŸ” Mental healthcare

🏒 Company: Headspace | πŸ‘₯ 11-50 | Wellness, Health Care, Child Care

Requirements:
  • 7+ years of proven success designing and implementing large-scale enterprise data systems.
  • Deep experience with industry-leading tools such as Databricks, Snowflake, and Redshift.
  • Demonstrated expertise in architectural patterns for building high-volume real-time and batch ETL pipelines.
  • Proven ability to partner effectively with product teams to drive alignment and deliver solutions.
  • Exceptional oral and written communication abilities.
  • Experience in coaching and mentoring team members.
Responsibilities:
  • Architect and implement robust data pipelines to ingest, aggregate, and index diverse data sources into the organization’s data lake.
  • Lead the creation of a secure, compliant, and privacy-focused data warehousing solution tailored to healthcare industry requirements.
  • Partner with the data analytics team to deliver a data platform that supports accurate reporting on business metrics.
  • Collaborate with data science and machine learning teams to build tools for rapid experimentation and innovation.
  • Mentor and coach data engineers while promoting a culture valuing data as a strategic asset.

AWS, ETL, Snowflake, Data engineering, Data modeling

Posted 21 days ago

πŸ“ United States, Canada

🧭 Regular

πŸ’Έ 125,000 - 160,000 USD per year

πŸ” Digital driver assistance services

🏒 Company: Agero | πŸ‘₯ 1001-5000 | πŸ’° $4,750,000 over 2 years ago | Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, and Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake (see the sketch after this listing).
  • Establish modern data architectures, including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
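
A hedged sketch of the cost-monitoring responsibility above, using the snowflake-connector-python package to aggregate per-warehouse credit consumption from Snowflake's ACCOUNT_USAGE share. The connection parameters are placeholders; in practice credentials would come from a secrets manager.

    import snowflake.connector

    # Placeholder connection details, not a real account.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="***", warehouse="ADMIN_WH"
    )

    # WAREHOUSE_METERING_HISTORY records credits consumed per warehouse; summing the
    # trailing week surfaces the most expensive warehouses to target for optimization.
    query = """
        SELECT warehouse_name, SUM(credits_used) AS credits_7d
        FROM snowflake.account_usage.warehouse_metering_history
        WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
        GROUP BY warehouse_name
        ORDER BY credits_7d DESC
    """
    for name, credits in conn.cursor().execute(query):
        print(f"{name}: {credits:.1f} credits over the last 7 days")
    conn.close()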

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

Posted 23 days ago