Senior Data Engineer (Remote)

Posted 2024-10-19

📍 Location: United States

💸 Salary: $141,000 - $174,000 per year

🔍 Industry: Catering technology

🏢 Company: ezCater, Inc.

🗣️ Languages: English

🪄 Skills: AWS, Python, SQL, ETL, Snowflake, CI/CD

Requirements:
  • Strong experience with data warehousing, data lakes, and ELT processes across enterprise platforms like Snowflake, Redshift, or BigQuery.
  • Proficient in building performant data pipelines across disparate systems.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Expertise in SQL and experience with Python.
  • Ability to work both independently and collaboratively.
  • Willingness to adapt to a large and complex business landscape.
  • Experience with technologies such as Snowflake, dbt, Fivetran, Airflow, AWS, SageMaker, MLflow, Kubernetes, Docker, and Python for ETL and data science is advantageous.
Responsibilities:
  • Write and ship code mainly using dbt (SQL, Jinja).
  • Collaborate closely with analysts and stakeholders to refine requirements and debug data sets.
  • Design and develop high-performance data pipelines while adhering to software development lifecycle best practices (see the sketch after this list).
  • Identify optimization opportunities within the existing data stack.
  • Utilize automation to enhance developer efficiency.
  • Monitor data systems for quality and availability while aiming to reduce costs.
  • Contribute to team processes and community, and mentor other Data Engineers.
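
The pipeline and automation bullets above are tool-agnostic, but the posting names dbt and Airflow among its preferred technologies, so here is a minimal sketch of how such a pipeline is often orchestrated: an Airflow DAG that runs and then tests a dbt project. The DAG id, project path, and schedule are illustrative assumptions, not details from the posting.

```python
# A minimal sketch, assuming Airflow 2.4+ and a dbt project checked out at
# /opt/dbt; the DAG id, path, and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_models",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # dbt compiles SQL/Jinja model files into warehouse-specific SQL and
    # executes them; tests run afterwards to catch data-quality regressions.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt && dbt test",
    )
    dbt_run >> dbt_test
```

Running `dbt test` as a downstream task keeps data-quality checks inside the same pipeline, which also speaks to the monitoring bullet above.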

Related Jobs

📍 United States

🧭 Full-Time

💸 $135,000 - $165,000 per year

🔍 Energy Solutions

Requirements:
  • A bachelor's degree in computer science or information technology and a minimum of 8 years of relevant experience.
  • High proficiency in programming languages commonly used in ETL development, such as PL/SQL, SQL, and Python.
  • Expertise in AWS services, including but not limited to Amazon S3, Glue, Data Catalog, Amazon Redshift, Redshift Spectrum, and Amazon Athena (see the sketch after this list).
  • Proficiency in working with relational databases such as Postgres, Oracle, MySQL, or SQL Server.
  • Experience in performance tuning and optimizing database operations.
  • Familiarity with data governance frameworks and data security best practices.
  • Passion for learning new technologies and staying up to date with industry trends.
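
The AWS bullet above spans several services; as one concrete illustration, here is a minimal boto3 sketch that runs a SQL query over S3-resident data through Amazon Athena. The database, table, and output bucket names are placeholders, not details from the posting.

```python
# A minimal sketch of querying data in S3 through Amazon Athena with boto3.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = "SELECT order_date, COUNT(*) AS orders FROM orders GROUP BY order_date"
resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},  # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
query_id = resp["QueryExecutionId"]

# Poll until Athena finishes; production code would add backoff and timeouts.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])
```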

Responsibilities:
  • Build, automate, and manage near-real-time, scalable data ingestion pipelines for master data management, deep learning, and predictive analytics.
  • Build and maintain cloud-native big data environments on AWS that are highly secure, scalable, flexible, and high-performing using SQL, NoSQL, and NewSQL technologies.
  • Lead data governance and data profiling efforts to ensure data quality and proper metadata documentation for data lineage.
  • Provide technical input into build/buy/partner decisions for all components of the data infrastructure.
  • Partner closely with Data Scientists, BI developers, and Product Managers to design and implement data models, database schemas, data structures, and processing logic.
  • Design and develop ETL processes to validate and transform data, calculate metrics, and model features using Spark, Python, SQL, and AWS technologies (see the sketch after this list).
  • Lead by example, demonstrating best practices for code development and optimization.
  • Define SLAs for data availability and correctness and automate monitoring.
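
The Spark ETL bullet above is generic; here is a minimal PySpark sketch of the validate/transform/metric pattern it describes. The S3 paths and column names are illustrative assumptions, not details from the posting.

```python
# A minimal PySpark sketch of an extract/validate/transform/load job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw events from S3 (placeholder path).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Validate: drop rows missing required keys, keep only positive amounts.
valid = raw.dropna(subset=["order_id", "customer_id"]).filter(F.col("amount") > 0)

# Transform: compute a simple daily revenue metric.
daily_revenue = (
    valid.withColumn("order_date", F.to_date("ordered_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Load: write back to S3, partitioned by date (placeholder path).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```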

🪄 Skills: AWS, Python, SQL, ETL, MySQL, Oracle, Data Structures, Postgres, NoSQL, Spark, CI/CD

Posted 2024-09-20