Apply

Senior Data Engineer

Posted over 1 year ago


๐Ÿ“ Location: Anywhere in the united states

๐Ÿ’ธ Salary: $170k to $195k

๐Ÿ—ฃ๏ธ Languages: English

Requirements:
  • Experience designing and implementing scalable solutions.
  • Ability to refactor and simplify existing processes.
  • Experience with relational and non-relational databases.
  • Proficiency in Python.
  • Experience handling and working with large datasets.
  • Experience with ETL pipelines and data warehousing best practices.
  • Value collaboration and feedback.
  • Excellent documentation and verbal communication skills.

Apply

Related Jobs

Apply

๐Ÿ“ Pakistan

๐Ÿข Company: CodeNinja๐Ÿ‘ฅ 51-100E-CommerceMobile AppsSoftware

  • Degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience with data engineering tools such as Apache Spark, Hadoop, or similar.
  • Experience with SQL/NoSQL databases, data warehousing solutions, and cloud platforms.
  • Strong Python/Java skills for data processing and workflow automation.

  • Design and build scalable data pipelines to support AI/ML workflows (see the sketch after this list).
  • Ensure data is accessible, clean, and ready for analysis by the AI/ML team.
  • Manage and optimize databases for storing and retrieving large datasets.
  • Collaborate with AI engineers to integrate model outputs into existing data structures.
  • Work with infrastructure teams to ensure seamless integration of data processing tools.
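
As a loose illustration of the pipeline work this listing describes, here is a minimal PySpark sketch that reads raw events, cleans them, and writes an analysis-ready table for an AI/ML team. The paths, column names, and schema are hypothetical placeholders, not this employer's actual stack.

```python
# Minimal PySpark sketch: raw JSON events -> cleaned, analysis-ready Parquet.
# Paths, column names, and schema are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ml_feature_pipeline").getOrCreate()

# Ingest raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Clean: drop rows without a user, normalize timestamps, deduplicate.
cleaned = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write a partitioned table the AI/ML team can query directly.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```
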
Posted 2 days ago
Apply

๐Ÿ“ USA

๐Ÿ’ธ 152960.0 - 183552.0 USD per year

๐Ÿ” Data Engineering and Observability Solutions

  • Software development skills in Python, Java, Scala, or Go.
  • High proficiency in SQL.
  • Experience with workflow orchestration systems like Prefect, Dagster, or Airflow.
  • Knowledge of MLOps best practices.
  • Familiarity with dbt or similar data transformation tools.
  • Excellent communication skills for technical topics.

  • Build and maintain production quality data pipelines between operational systems and BigQuery.
  • Implement data quality and freshness checks to ensure data accuracy and consistency (a minimal sketch follows this list).
  • Build and maintain machine learning pipelines for automated model validation and deployment.
  • Create and maintain documentation for data engineering processes and workflows.
  • Maintain observability and monitoring of internal data pipelines.
  • Troubleshoot data pipeline issues to ensure data availability.
  • Contribute to dbt systems ensuring efficiency and availability.
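
A hedged illustration of the data quality and freshness checks mentioned above: a minimal Python sketch that fails loudly when a BigQuery table has not been loaded recently. The project, dataset, table, column, and threshold are hypothetical; this is one common pattern, not the employer's actual implementation.

```python
# Minimal data-freshness check against BigQuery.
# Project, dataset, table, column, and the 2-hour threshold are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

FRESHNESS_SQL = """
    SELECT TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(loaded_at), MINUTE) AS lag_minutes
    FROM `example-project.analytics.orders`
"""

def check_freshness(max_lag_minutes: int = 120) -> None:
    """Raise if the table has not received rows within the threshold."""
    row = next(iter(client.query(FRESHNESS_SQL).result()))
    if row.lag_minutes is None or row.lag_minutes > max_lag_minutes:
        raise RuntimeError(
            f"orders table is stale: lag={row.lag_minutes} min "
            f"(threshold {max_lag_minutes} min)"
        )

if __name__ == "__main__":
    check_freshness()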

Python · SQL · ETL · GCP · Machine Learning · Data Engineering

Posted 7 days ago
Apply

๐Ÿ“ Colombia, Spain, Ecuador, Venezuela, Argentina

๐Ÿ” HR Tech

๐Ÿข Company: Jobgether๐Ÿ‘ฅ 11-50๐Ÿ’ฐ $1,493,585 Seed almost 2 years agoInternet

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • Minimum of 5 years of experience in data engineering.
  • 5 years of experience in Python programming.
  • Hands-on experience with big data technologies like Hadoop, Spark, or Kafka.
  • Proficiency with MySQL, PostgreSQL, and MongoDB.
  • Experience with AWS cloud platforms.
  • Strong understanding of data modeling and warehousing concepts.
  • Excellent analytical and problem-solving skills.
  • Fluency in English and Spanish.

  • Design, build, and maintain scalable data pipelines and ETL processes.
  • Develop and optimize data scraping and extraction solutions (see the sketch after this list).
  • Collaborate with data scientists to implement AI-driven algorithms.
  • Ensure data integrity and reliability with validation mechanisms.
  • Analyze and optimize system performance.
  • Deploy machine learning models into production.
  • Stay updated on emergent technologies in data engineering.
  • Work with cross-functional teams to define data requirements.
  • Develop and maintain comprehensive documentation.
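
To make the extraction and validation bullets above concrete, here is a minimal Python sketch that pulls records from a hypothetical JSON API, keeps only rows that pass basic validation, and upserts them into PostgreSQL. The URL, fields, and table are placeholders for illustration only.

```python
# Minimal extract -> validate -> load sketch.
# The API URL, record fields, and target table are hypothetical.
import requests
import psycopg2

API_URL = "https://api.example.com/candidates"  # hypothetical endpoint

def extract() -> list[dict]:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()

def validate(records: list[dict]) -> list[dict]:
    # Keep only rows carrying the fields downstream models rely on.
    required = {"id", "email", "updated_at"}
    return [r for r in records if required <= r.keys() and r["email"]]

def load(records: list[dict]) -> None:
    conn = psycopg2.connect("dbname=example")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO candidates (id, email, updated_at) "
            "VALUES (%(id)s, %(email)s, %(updated_at)s) "
            "ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email",
            records,
        )
    conn.close()

if __name__ == "__main__":
    load(validate(extract()))
```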

AWS · Docker · PostgreSQL · Python · ETL · Hadoop · Kafka · Kubernetes · Machine Learning · MongoDB · MySQL · Spark · CI/CD · Data Modeling

Posted 7 days ago
Apply

🧭 Full-Time

💸 167,200 - 209,000 USD per year

🔍 Entertainment, Anime, Streaming

🏢 Company: Crunchyroll, LLC

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 8+ years of experience in data engineering, focusing on data pipelines and architectures.
  • Extensive experience with AWS cloud platform and related data services.
  • Proficiency in automation frameworks (e.g., Terraform, CloudFormation).
  • Proficiency in data lake and pipeline tools (e.g., Databricks, Apache Kafka/Kinesis, AWS Glue).
  • Proficiency in programming languages (e.g., Python, Java).
  • Strong understanding of SQL (e.g., Redshift, Snowflake) and NoSQL databases (e.g., DynamoDB).
  • Experience with search optimization, CI/CD pipelines, and best practices in data engineering.
  • Strong problem-solving skills and ability to mentor junior engineers.

  • Play a pivotal role in designing, implementing, and optimizing data services and pipelines.
  • Collaborate with cross-functional teams to enable operational data analysis and enhance eventing systems (a minimal eventing sketch follows this list).
  • Develop tools to empower data and services teams.
  • Drive 100% automation and establish best practices.
  • Create scalable architectures and optimize data environments.
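
As a loose illustration of the eventing systems mentioned above, here is a minimal boto3 sketch that publishes application events to an Amazon Kinesis stream. The stream name, region, and event shape are hypothetical; this shows a general pattern, not Crunchyroll's actual architecture.

```python
# Minimal event producer for Amazon Kinesis via boto3.
# Stream name, region, and event payload are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")

def publish_event(user_id: str, event_type: str, payload: dict) -> None:
    record = {"user_id": user_id, "type": event_type, "payload": payload}
    kinesis.put_record(
        StreamName="example-app-events",        # hypothetical stream
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=user_id,                   # keeps a user's events ordered
    )

if __name__ == "__main__":
    publish_event("user-123", "playback_started", {"title_id": "abc"})
```
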
Posted 8 days ago
Apply

🔍 Technology and software development

  • Minimum 5 years of experience in data engineering or a related field.
  • Strong proficiency in Python programming language.
  • Deep understanding of AWS services, including Kinesis, S3, Athena, Redshift, DynamoDB, and Lambda.
  • Experience with data ingestion pipelines, ETL processes, and data warehousing concepts.
  • Proficiency in SQL and NoSQL databases.
  • Experience with data modeling and schema design.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.
  • Experience with data security and privacy best practices.

  • Develop, maintain, and enhance ETL pipelines (a minimal sketch follows this list).
  • Craft code that is efficient, performant, testable, scalable, and secure.
  • Provide accurate status tracking, reporting, and estimation using project methods and tools.
  • Gather requirements, validate understanding, and create and maintain documentation.
  • Execute activities within current methodology, upholding quality standards.
  • Collaborate with engineers, designers, and managers to comprehend user pain points and iterate on solutions.
  • Take ownership of projects from technical design to successful launch.
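
Because this role leans on Lambda and Kinesis (see the requirements above), here is a hedged sketch of an AWS Lambda handler that decodes Kinesis records and lands them in S3 as newline-delimited JSON. The bucket name, key layout, and record shape are hypothetical.

```python
# Minimal AWS Lambda handler: Kinesis records -> newline-delimited JSON in S3.
# Bucket name and key layout are hypothetical.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-landing-bucket"  # hypothetical bucket

def handler(event, context):
    # Kinesis delivers base64-encoded payloads inside event["Records"].
    rows = [
        json.loads(base64.b64decode(r["kinesis"]["data"]))
        for r in event["Records"]
    ]
    body = "\n".join(json.dumps(row) for row in rows)
    key = f"raw/events/{uuid.uuid4()}.jsonl"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(rows), "key": key}
```
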
Posted 8 days ago
Apply

🔍 Software development

  • Minimum 5 years of experience in data engineering or a related field.
  • Strong proficiency in Python programming language.
  • Deep understanding of AWS services including Kinesis, S3, Athena, Redshift, DynamoDB, and Lambda.
  • Experience with data ingestion pipelines, ETL processes, and data warehousing concepts.
  • Proficiency in SQL and NoSQL databases.
  • Experience with data modeling and schema design.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.
  • Experience with data security and privacy best practices.

  • Develop, maintain, and enhance ETL pipelines.
  • Craft efficient, performant, testable, scalable, and secure client code.
  • Provide accurate status tracking, reporting, and estimation using project methods and tools.
  • Gather requirements, validate understanding amongst the team, and create and maintain documentation.
  • Execute activities within current methodology and uphold the highest quality standards.
  • Foster collaboration with engineers, designers, and managers to understand user pain points.
  • Take ownership of projects from technical design to successful launch.
Posted 8 days ago
Apply

🔍 Software Development

  • Minimum 5 years of experience in data engineering or a related field.
  • Strong proficiency in Python programming language.
  • Deep understanding of AWS services, including Kinesis, S3, Athena, Redshift, DynamoDB, and Lambda.
  • Experience with data ingestion pipelines, ETL processes, and data warehousing concepts.
  • Proficiency in SQL and NoSQL databases.
  • Experience with data modeling and schema design.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.
  • Experience with data security and privacy best practices.

  • Develop, maintain, and enhance ETL pipelines.
  • Craft efficient, performant, testable, scalable, and secure client code.
  • Provide accurate status tracking, reporting, and estimation using project methods and tools.
  • Gather requirements, validate understanding among the team, and maintain documentation.
  • Foster collaboration with engineers, designers, and managers to understand user pain points.
  • Take ownership of projects from design to launch.
Posted 8 days ago
Apply

๐Ÿ“ Denver, CO

๐Ÿงญ Full-Time

๐Ÿ” Construction

๐Ÿข Company: EquipmentShare๐Ÿ‘ฅ 1001-5000๐Ÿ’ฐ $400,000,000 Debt Financing over 1 year agoConstruction

  • 7+ years of relevant data platform development experience building production-grade solutions.
  • Proficient with SQL and a high-level, object-oriented language (e.g., Python).
  • Experience with designing and building distributed data architecture.
  • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow.
  • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, and Flink.
  • Familiarity with event data streaming at scale.
  • Proven track record of learning new technologies and applying that learning quickly.
  • Experience building observability and monitoring into data products.
  • Motivated to identify opportunities for automation to reduce manual toil.

  • Collaborate with Product Managers, Designers, Engineers, Data Scientists, and Data Analysts to take ideas from concept to production at scale.
  • Design, build and maintain a data platform to enable automation and self-service for data scientists, machine learning engineers, and analysts.
  • Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
  • Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
  • Develop data monitoring and alerting capabilities (a minimal sketch follows this list).
  • Document architecture, processes, and procedures for knowledge sharing and cross-team collaboration.
  • Mentor peers to help them build their skills.
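
To ground the monitoring-and-alerting bullet above, here is a minimal kafka-python sketch that watches a stream and logs an alert when the share of malformed records crosses a threshold. The topic, brokers, and 5% threshold are hypothetical, and alerting is simplified to a log message.

```python
# Minimal stream-quality monitor using kafka-python.
# Topic, brokers, and the 5% malformed-record threshold are hypothetical.
import json
import logging

from kafka import KafkaConsumer

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("stream-monitor")

consumer = KafkaConsumer(
    "example-telemetry",                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="quality-monitor",
)

seen = bad = 0
for message in consumer:
    seen += 1
    try:
        json.loads(message.value)
    except (ValueError, TypeError, UnicodeDecodeError):
        bad += 1
    if seen % 1000 == 0:                   # evaluate every 1,000 records
        rate = bad / seen
        if rate > 0.05:
            log.error("malformed-record rate %.1f%% exceeds threshold", rate * 100)
        seen = bad = 0
```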

AWS · Python · SQL · Apache Airflow · Kafka · MLflow · Snowflake · Spark · CI/CD

Posted 9 days ago
Apply

๐Ÿ“ United States, United Kingdom, Spain, Estonia

๐Ÿ” Identity verification

๐Ÿข Company: Veriff๐Ÿ‘ฅ 501-1000๐Ÿ’ฐ $100,000,000 Series C almost 3 years ago๐Ÿซ‚ Last layoff over 1 year agoArtificial Intelligence (AI)Fraud DetectionInformation TechnologyCyber SecurityIdentity Management

  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi.
  • Experience with data from diverse sources including RDBMS and APIs.

  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights (a minimal orchestration sketch follows this list).
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.
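
Since this listing names Apache Airflow as a likely orchestrator, here is a minimal, hedged Airflow 2.x DAG chaining an extract step, a transform step, and a quality check, as referenced in the pipeline bullet above. The task bodies, schedule, and dag_id are placeholders.

```python
# Minimal Airflow 2.x DAG: extract -> transform -> quality check.
# Task bodies, schedule, and dag_id are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")       # placeholder

def transform():
    print("build dimensional models, e.g. via dbt")  # placeholder

def quality_check():
    print("assert row counts and null rates")        # placeholder

with DAG(
    dag_id="example_warehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="quality_check", python_callable=quality_check)
    t1 >> t2 >> t3
```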

Python · SQL · Apache Airflow · ETL · Data Engineering · JSON · Data Modeling

Posted 14 days ago
Apply

๐Ÿ“ TX, MN, FL

๐Ÿ’ธ 130000.0 - 195000.0 USD per year

๐Ÿ” Healthcare

๐Ÿข Company: NeueHealth

  • Bachelor's degree in Computer Science, Computer Engineering, Information Systems, or equivalent.
  • Around five years of experience in an enterprise data engineering role in an Azure environment.
  • Healthcare IT background preferred.
  • Experience coding in Scala and building batch and streaming data pipelines.
  • Experience with API design.
  • Extensive experience developing data solutions in Azure Cloud.
  • Experience with event sourcing and/or Big Data architectures.

  • Write traditional code and serverless functions, mainly in Scala.
  • Build APIs, data microservices, and ETL pipelines for data sharing and analytics.
  • Develop and optimize processes for large language models and AI enhancements.
  • Support Data Ingestion frameworks deployed in Azure.
  • Participate in cultivating a culture of DevOps and Quality Assurance.
  • Act as tech lead and mentor junior engineers.
  • Continuously document code and team processes.

ETL · C# · Azure · Spark · CI/CD · DevOps · Microservices · Scala

Posted 14 days ago
Apply

Related Articles

Posted 4 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 4 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 4 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 4 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 4 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.