Data Ops Senior Data Engineer

Posted 6 days ago

💎 Seniority level: Senior

📍 Location: United States

🧭 Full-Time

🔍 Industry: Software Development

🏢 Company: Fetch

🗣️ Languages: English

🪄 Skills: AWS, Python, SQL, ETL, Java, Apache Kafka, Data engineering, Go, Rust, CI/CD, DevOps, Terraform, Data visualization, Data modeling, Data analytics, Data management

Requirements:
  • 5+ years of experience in data engineering.
  • Experience with Infrastructure as Code tools such as Terraform or CloudFormation. Ability to automate the deployment and management of data infrastructure.
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD) processes. Experience setting up and maintaining CI/CD pipelines for data applications.
  • Proficiency in the software development lifecycle; able to release fast and improve incrementally.
  • Experience with tools and frameworks for ensuring data quality, such as data validation, anomaly detection, and monitoring. Ability to design systems to track and enforce data quality standards (a minimal sketch follows this list).
  • Proven experience in designing, building, and maintaining scalable data pipelines capable of processing terabytes of data daily using modern data processing frameworks (e.g., Apache Spark, Apache Kafka, Flink, Open Table Formats, modern OLAP databases).
  • Strong foundation in data architecture principles and the ability to evaluate emerging technologies.
  • Proficient in at least one modern programming language (Go, Python, Java, Rust) and SQL.
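The posting names data validation and anomaly detection as requirements without prescribing a framework, so the following is only a minimal plain-Python sketch of what such a quality gate might look like. The record fields (receipt_id, amount), the check names, and the 1% threshold are all hypothetical, invented for illustration:

```python
from collections import Counter

REQUIRED_FIELDS = ("receipt_id", "amount")  # hypothetical record shape

def validate_record(record: dict, failures: Counter) -> bool:
    """Per-record checks: required fields present, amount is a non-negative number."""
    ok = True
    for f in REQUIRED_FIELDS:
        if record.get(f) is None:
            failures[f"missing_{f}"] += 1
            ok = False
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        failures["bad_amount"] += 1
        ok = False
    return ok

def quality_gate(records: list[dict], max_failure_rate: float = 0.01) -> list[dict]:
    """Drops bad records; raises when the batch failure rate exceeds the threshold,
    which is one simple form of the anomaly detection the posting asks about."""
    failures: Counter = Counter()
    valid = [r for r in records if validate_record(r, failures)]
    rate = 1 - len(valid) / len(records) if records else 0.0
    if rate > max_failure_rate:
        raise ValueError(f"quality gate tripped: {rate:.1%} bad records, {dict(failures)}")
    return valid

if __name__ == "__main__":
    batch = [
        {"receipt_id": "r-1", "amount": 12.50},
        {"receipt_id": None, "amount": -3.0},  # fails both checks
    ]
    print(quality_gate(batch, max_failure_rate=0.6))
```

In a real pipeline the same pattern would typically run as a pipeline stage with the failure counters exported to monitoring, so thresholds can be enforced and tracked over time rather than per batch.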
Responsibilities:
  • Design and implement both real-time and batch data processing pipelines, leveraging technologies like Apache Kafka, Apache Flink, or managed cloud streaming services to ensure scalability and resilience (see the streaming sketch after this list).
  • Create data pipelines that efficiently process terabytes of data daily, leveraging data lakes and data warehouses within the AWS cloud, using technologies like Apache Spark to handle large-scale data processing.
  • Implement robust schema management practices and lay the groundwork for future data contracts. Ensure pipeline integrity by establishing and enforcing data quality checks, improving overall data reliability and consistency.
  • Design, implement, and maintain data governance frameworks and best practices to ensure data quality, security, compliance, and accessibility across the organization.
  • Develop tools to support the rapid development of data products and establish recommended patterns for data pipeline deployments. Mentor and guide junior engineers, fostering their growth in best practices and efficient development processes.
  • Collaborate with the DevOps team to integrate data needs into DevOps tooling.
  • Champion DataOps practices within the organization, promoting a culture of collaboration, automation, and continuous improvement in data engineering processes.
  • Stay abreast of emerging technologies, tools and trends in data processing and analytics, and evaluate their potential impact and relevance to Fetch’s strategy.
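The posting includes no code, so purely as an illustration of the real-time pattern it describes (Kafka in, explicit schema enforced, a basic quality filter, results landing in an AWS data lake), here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, S3 paths, and field names are all hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

# Requires the Kafka connector, e.g.
# --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0
spark = SparkSession.builder.appName("receipts-stream").getOrCreate()

# Explicit schema: the enforcement point for the schema-management responsibility.
schema = StructType([
    StructField("receipt_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("scanned_at", TimestampType(), nullable=True),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "receipts")                   # hypothetical topic
    .load()
)

# Parse JSON payloads against the schema; malformed rows come out as nulls.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("r"))
    .select("r.*")
)

# In-pipeline quality check before anything lands in the lake
# (null amounts are also dropped, since the comparison evaluates to null).
valid = parsed.filter(col("receipt_id").isNotNull() & (col("amount") >= 0))

query = (
    valid.writeStream.format("parquet")
    .option("path", "s3a://example-lake/receipts/")            # hypothetical bucket
    .option("checkpointLocation", "s3a://example-lake/_chk/receipts/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the stream its resilience: on restart, Spark resumes from committed Kafka offsets rather than reprocessing or dropping data.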