Staff, Data Engineer - Messaging Data Platform

Posted 2024-11-12

💎 Seniority level: Staff, 5+ years of experience

📍 Location: Estonia

🔍 Industry: Communications

⏳ Experience: 5+ years of experience

🪄 Skills: AWS, Leadership, DynamoDB, Elasticsearch, ETL, Java, Apache Kafka, Data engineering, Spark, Microservices

Requirements:
  • 3+ years of Java development experience.
  • 5+ years of experience with Big Data processing tools and frameworks such as Apache Spark and Spark SQL.
  • Experience with Lakehouse technologies such as Apache Hudi, Apache Iceberg, or Databricks Delta Lake.
  • Experience in building AI/ML pipelines.
  • Deep technical understanding of ETL tools, low-latency data stores, multiple data warehouses, and data catalogs.
  • Familiarity with data testing and verification tooling and best practices.
  • Experience with cloud services (AWS preferred; Google Cloud, Azure, etc.).
  • Proficiency with key-value, streaming, and search database technologies, including AWS DynamoDB, Apache Kafka, and Elasticsearch.
  • Readiness to participate in the on-call rotation.
Responsibilities:
  • Oversee the design, construction, testing, and maintenance of advanced, scalable data architectures and pipelines.
  • Drive the development of innovative data solutions that meet complex business requirements.
  • Create and enforce best practices for data architecture, ensuring scalability, reliability, and performance.
  • Provide architectural guidance and mentorship to junior engineers.
  • Tackle the most challenging technical issues and provide advanced troubleshooting support.
  • Collaborate with senior leadership to align data engineering strategies with organizational goals.
  • Participate in long-term planning for data infrastructure and analytics initiatives.
  • Lead cross-functional projects, ensuring timely delivery and alignment with business objectives.
  • Coordinate with product managers, analysts, and other stakeholders to define project requirements and scope.
  • Continuously monitor and enhance the performance of data systems and pipelines.