
Senior Data Engineer (Remote)

Posted 6 months ago


📍 Location: United States

💸 Salary: 141,000 - 174,000 USD per year

🔍 Industry: Catering technology

🏢 Company: ezCater, Inc

🗣️ Languages: English

🪄 Skills: AWS, Python, SQL, ETL, Snowflake, CI/CD

Requirements:
  • Strong experience with data warehousing, data lakes, and ELT processes across enterprise platforms like Snowflake, Redshift, or BigQuery.
  • Proficient in building performant data pipelines across disparate systems.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Expertise in SQL and experience with Python.
  • Ability to work both independently and collaboratively.
  • Willingness to adapt to a large and complex business landscape.
  • Experience with technologies such as Snowflake, dbt, Fivetran, Airflow, AWS, SageMaker, MLflow, Kubernetes, Docker, and Python for ETL and data science is advantageous.
Responsibilities:
  • Write and ship code mainly using dbt (SQL, Jinja).
  • Collaborate closely with analysts and stakeholders to refine requirements and debug data sets.
  • Design and develop high-performance data pipelines while adhering to software development lifecycle best practices.
  • Identify optimization opportunities within the existing data stack.
  • Utilize automation to enhance developer efficiency.
  • Monitor data systems for quality and availability while aiming to reduce costs (a brief sketch of such a check follows this list).
  • Contribute to team processes and community, and mentor other Data Engineers.
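To ground the monitoring responsibility above, here is a minimal sketch of the kind of data-quality check a pipeline might run against Snowflake from Python. The table, warehouse, credentials scheme, and staleness threshold are illustrative assumptions, not details from the posting.

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical freshness/row-count check; table and warehouse names are placeholders.
FRESHNESS_SQL = """
    SELECT COUNT(*) AS row_count,
           DATEDIFF('hour', MAX(loaded_at), CURRENT_TIMESTAMP()) AS hours_stale
    FROM analytics.orders_fct
"""

def check_orders_freshness(max_hours_stale: int = 6) -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="REPORTING_WH",  # placeholder warehouse
    )
    try:
        row_count, hours_stale = conn.cursor().execute(FRESHNESS_SQL).fetchone()
        if row_count == 0 or hours_stale > max_hours_stale:
            # In practice this would alert via Slack/PagerDuty rather than just raise.
            raise RuntimeError(f"orders_fct stale: {hours_stale}h old, {row_count} rows")
        print(f"orders_fct OK: {row_count} rows, {hours_stale}h since last load")
    finally:
        conn.close()

if __name__ == "__main__":
    check_orders_freshness()
```

A check like this would typically run on a schedule (for example from Airflow, or as a dbt test) rather than ad hoc.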

Related Jobs


📍 United States

💸 135,000 - 155,000 USD per year

🔍 Software Development

🏢 Company: Jobgether (👥 11-50 employees, 💰 $1,493,585 Seed raised about 2 years ago, Internet)

  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration, using frameworks such as Informatica as well as Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services (see the sketch after this list).
  • Integrate third-party data sources and APIs into the data ecosystem.
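As a hedged illustration of the Lambda-based ETL work described above, the sketch below shows a small Python handler that cleans a CSV dropped into S3 and writes the result to a second bucket. The bucket names, the S3-trigger event shape, and the `amount` column are assumptions for the example only.

```python
import csv
import io
import boto3

s3 = boto3.client("s3")

# Bucket names and the "amount" column are illustrative placeholders.
RAW_BUCKET = "acme-raw-events"
CLEAN_BUCKET = "acme-clean-events"

def handler(event, context):
    """Triggered by an S3 put event; drops malformed rows and re-uploads the file."""
    key = event["Records"][0]["s3"]["object"]["key"]

    body = s3.get_object(Bucket=RAW_BUCKET, Key=key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(body))

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    kept = 0
    for row in reader:
        # Keep only rows whose amount parses as a number.
        try:
            float(row["amount"])
        except (KeyError, ValueError):
            continue
        writer.writerow(row)
        kept += 1

    s3.put_object(Bucket=CLEAN_BUCKET, Key=key, Body=out.getvalue().encode("utf-8"))
    return {"key": key, "rows_kept": kept}
```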

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted 21 days ago

📍 United States

🧭 Full-Time

💸 170,000 - 210,000 USD per year

🔍 Health and Fitness

  • Minimum of 6 years of experience working in data engineering
  • Expertise in both SQL and Python for data cleansing, transformation, modeling, pipelining, etc.
  • Proficient in working with other stakeholders and converting requirements into detailed technical specifications; owning and leading projects from inception to completion
  • Proficiency in working with high volume datasets in SQL-based warehouses such as BigQuery
  • Proficiency with parallelized python-based data processing frameworks such as Google Dataflow (Apache Beam), Apache Spark, etc.
  • Experience using ELT tools like Dataform or dbt
  • Professional experience maintaining data systems in GCP and AWS
  • Deep understanding of data modeling, access, storage, caching, replication, and optimization techniques
  • Experienced with orchestrating data pipelines and Kubernetes-based jobs with Apache Airflow
  • Understanding of the software development lifecycle and CI/CD
  • Experience with monitoring and metrics-gathering tools (e.g. Datadog, New Relic, CloudWatch)
  • Willingness to participate in a weekly on-call support rotation (currently each engineer's turn comes up about once a month)
  • Proficiency with git and working collaboratively in a shared codebase
  • Excellent documentation skills
  • Self-motivation and a deep sense of pride in your work
  • Passion for the outdoors
  • Comfort with ambiguity, and an instinct for moving quickly
  • Humility, empathy and open-mindedness - no egos
  • Work cross-functionally to ensure data scientists have access to clean, reliable, and secure data, which forms the backbone of new algorithmic product features
  • Build, deploy, and orchestrate large-scale batch and stream data pipelines to transform and move data to/from our data warehouse and other systems (a minimal orchestration sketch follows this list)
  • Deliver scalable, testable, maintainable, and high-quality code
  • Investigate, test for, monitor, and alert on inconsistencies in our data, data systems, or processing costs
  • Create tools to improve data and model discoverability and documentation
  • Ensure data collection and storage adheres to GDPR and other privacy and legal compliance requirements
  • Uphold best data-quality standards and practices, promoting such knowledge throughout the organization
  • Deploy and build systems that enable machine learning and artificial intelligence product solutions
  • Mentor others on industry best practices
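To make the orchestration work concrete, here is a minimal Apache Airflow DAG sketch, assuming a recent Airflow 2.x deployment; the DAG id, schedule, script paths, and dbt target are placeholders rather than this team's actual setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Schedule, paths, and dbt target below are illustrative assumptions.
with DAG(
    dag_id="nightly_warehouse_build",
    start_date=datetime(2024, 1, 1),
    schedule="0 4 * * *",  # 04:00 UTC daily
    catchup=False,
    default_args={"retries": 2},
) as dag:
    extract = BashOperator(
        task_id="extract_events",
        bash_command="python /opt/pipelines/extract_events.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="run_transformations",
        bash_command="dbt build --project-dir /opt/dbt --target prod",
    )
    validate = BashOperator(
        task_id="validate_outputs",
        bash_command="python /opt/pipelines/validate_outputs.py --date {{ ds }}",
    )

    extract >> transform >> validate
```

The same pattern extends to the Kubernetes-based jobs mentioned in the requirements by swapping BashOperator for a Kubernetes pod operator.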

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Apache Kafka, Data engineering, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 26 days ago

📍 United States

🔍 Software Development

  • 5+ years of data engineering experience
  • Able to leverage abstraction to solve complex problems
  • Excellent SQL, Python, and data modeling skills
  • Experience with dbt and Snowflake
  • Experience designing and building scalable data systems
  • Strong communication and collaboration skills
  • Experience managing projects and delivering results within set timelines.
  • Can work iteratively, defining requirements as needed
  • Develop and optimize data models to support various business needs, ensuring data consistency and accuracy across the organization
  • Design, build, and maintain a simple, effective, and scalable data warehouse infrastructure using Snowflake
  • Implement and manage ELT processes with dbt to ensure reliable data transformation pipelines
  • Manage and monitor data ingestion, including API integrations and event streams (a small ingestion sketch follows this list)
  • Work closely with data analysts, product engineers, and business stakeholders to understand their requirements and deliver actionable insights.
  • Mentor and support analysts and peers, fostering a culture of learning and continuous improvement
  • Proactively identify and resolve operational issues, recommending incremental improvements along the way
  • Ensure testing and validation mechanisms are in place so that data transformations are verified, complete, documented, and meet SLAs
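As a rough sketch of the API-ingestion work above, the Python script below pulls paginated records from a hypothetical REST endpoint and writes newline-delimited JSON, a format that Snowflake stages and most ELT loaders ingest directly. The endpoint, pagination parameters, and output path are invented for illustration.

```python
import json
import sys
from datetime import datetime, timezone

import requests  # pip install requests

# Endpoint, pagination scheme, and output path are illustrative placeholders.
API_URL = "https://api.example.com/v1/orders"

def pull_orders(since: str, out_path: str, page_size: int = 500) -> int:
    """Pull records updated since `since` and write them as newline-delimited JSON."""
    written = 0
    page = 1
    with open(out_path, "w", encoding="utf-8") as fh:
        while True:
            resp = requests.get(
                API_URL,
                params={"updated_since": since, "page": page, "per_page": page_size},
                timeout=30,
            )
            resp.raise_for_status()
            batch = resp.json()
            if not batch:  # assumed pagination contract: empty page means done
                break
            for record in batch:
                record["_extracted_at"] = datetime.now(timezone.utc).isoformat()
                fh.write(json.dumps(record) + "\n")
                written += 1
            page += 1
    return written

if __name__ == "__main__":
    count = pull_orders(since=sys.argv[1], out_path="orders.ndjson")
    print(f"wrote {count} records")
```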

AWS, PostgreSQL, Python, SQL, Apache Airflow, Data Analysis, ETL, Snowflake, Apache Kafka, Data engineering, RESTful APIs, Data visualization, Marketing, Digital Marketing, Data modeling, Data management

Posted 28 days ago

📍 United States

🧭 Full-Time

🔍 Software Development

  • Experience in data engineering and analytics
  • Familiarity with data structures and algorithms
  • Build the graph underlying Sayari's products
  • Collaborate with product and software engineering teams

AWS, GraphQL, PostgreSQL, SQL, ETL, Data engineering

Posted 3 months ago

📍 Any state

🔍 Data and technology

  • 5+ years of experience making contributions in the form of code.
  • Experience with algorithms and data structures and knowing when to apply them.
  • Experience with machine learning techniques to develop better predictive and clustering models (a brief sketch follows this list).
  • Experience working with high-scale systems.
  • Experience creating powerful machine learning tools for experimentation and productionization at scale.
  • Experience in data engineering and warehousing to develop ingestion engines, ETL pipelines, and organizing data for consumption.
  • Be a senior member of the team by contributing to the architecture, design, and implementation of EMS systems.
  • Mentor junior engineers and promote their growth.
  • Lead technical projects and manage planning, execution, and success of complex technical projects.
  • Collaborate with other engineering, product, and data science teams to ensure optimal product development.
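As a brief, hedged sketch of the clustering work mentioned in the requirements, the Python snippet below fits a k-means model with scikit-learn on synthetic features; the feature matrix, cluster count, and downstream write-back are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder features; in practice these would come from the warehouse via an ETL pipeline.
rng = np.random.default_rng(42)
features = rng.normal(size=(1_000, 4))  # e.g. per-user engagement metrics

scaled = StandardScaler().fit_transform(features)
model = KMeans(n_clusters=5, n_init=10, random_state=42).fit(scaled)

# Cluster labels could be written back to the warehouse for downstream consumption.
print(np.bincount(model.labels_))
```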

Python, SQL, ETL, GCP, Kubeflow, Machine Learning, Algorithms, Data engineering, Data science, Data Structures, TensorFlow, Collaboration, Scala

Posted 5 months ago