Senior Data Engineer - Streaming Platform

Posted 7 days ago

💎 Seniority level: Senior, 2+ years

📍 Location: EMEA countries

🔍 Industry: Mobile Games and Apps

🏢 Company: Voodoo

⏳ Experience: 2+ years

🪄 Skills: Backend Development, Python, SQL, GCP, Java, Kafka, Kubernetes, Algorithms, Data Engineering, Data Structures, Spark, CI/CD, RESTful APIs, Linux, Terraform, Microservices, Scala, Data Modeling, Debugging

Requirements:
  • Extensive experience in data or backend engineering, including 2+ years building real-time data pipelines.
  • Proficiency with stream processing frameworks like Flink, Spark Structured Streaming, Beam, or similar.
  • Strong programming experience in Java, Scala, or Python, with a focus on distributed systems.
  • Deep understanding of event streaming and messaging platforms such as GCP Pub/Sub, AWS Kinesis, Apache Pulsar, or Kafka — including performance tuning, delivery guarantees, and schema management.
  • Solid experience operating data services in Kubernetes, including Helm, resource tuning, and service discovery.
  • Experience with Protobuf/Avro, and best practices around schema evolution in streaming environments.
  • Familiarity with CI/CD workflows and infrastructure-as-code (e.g., Terraform, ArgoCD, CircleCI).
  • Strong debugging skills and a bias for building reliable, self-healing systems.
Responsibilities:
  • Design, implement, and optimize real-time data pipelines handling billions of events per day with strict SLAs.
  • Architect data flows for bidstream data, auction logs, impression tracking, and user behavior data.
  • Build scalable and reliable event ingestion and processing systems using Kafka, Flink, Spark Structured Streaming, or similar technologies.
  • Operate data infrastructure on Kubernetes, managing deployments, autoscaling, resource limits, and high availability.
  • Collaborate with backend teams to integrate OpenRTB signals into our data platform in near real-time.
  • Ensure high throughput, low latency, and system resilience in our streaming infrastructure.
  • Design and manage event schemas (Avro, Protobuf), schema evolution strategies, and metadata tracking.
  • Implement observability, alerting, and performance monitoring for critical data services.
  • Contribute to decisions on data modeling and data retention strategies for real-time use cases.
  • Mentor other engineers and advocate for best practices in streaming architecture, reliability, and performance.
  • Continuously evaluate new tools, trends, and techniques to evolve our modern streaming stack.