
Big Data Engineer

Posted 13 days ago


πŸ“ Location: UK, US, Europe

🏒 Company: RNRS Solutions

πŸ—£οΈ Languages: English

πŸͺ„ Skills: AWS, PostgreSQL, Apache Airflow, ETL, Hadoop, Java, Kafka, Kubernetes, Data engineering, Scala

Requirements:
NOT STATED
Responsibilities:
  • Build, optimize, and maintain scalable Big Data pipelines.
  • Design and implement real-time data processing solutions.
  • Work with large-scale distributed systems to manage structured and unstructured data.
  • Develop ETL processes and improve data ingestion workflows.
  • Collaborate with software engineers, analysts, and data scientists.

Related Jobs


πŸ“ Spain

πŸ” Software Development

🏒 Company: Plain Concepts πŸ‘₯ 251-500 | Consulting, Apps, Mobile Apps, Information Technology, Mobile

Requirements:
  • 3 years of experience in data engineering.
  • Strong experience with Python or Scala and Spark, processing large datasets.
  • Solid experience in Cloud platforms (Azure or AWS).
  • Hands-on experience building data pipelines (CI/CD).
  • Experience with testing (unit, integration, etc.).
  • Knowledge of SQL and NoSQL databases.
Responsibilities:
  • Participate in the design and development of data solutions for challenging projects.
  • Develop projects from scratch with minimal supervision and strong team collaboration.
  • Be a key player in fostering best practices, clean, and reusable code.
  • Develop ETLs using Spark (Python/Scala).
  • Work on cloud-based projects (Azure/AWS).
  • Build scalable pipelines using a variety of technologies.

AWS, Python, SQL, Agile, Cloud Computing, ETL, Azure, Data engineering, NoSQL, Spark, CI/CD, Scala

Posted 1 day ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

Requirements:
  • 5+ years of experience in customer-facing software/technology or consulting.
  • 5+ years of experience with β€œon-premises to cloud” migrations or IT transformations.
  • 5+ years of experience building and operating solutions on GCP.
  • Proficiency in Oozie and Pig.
  • Proficiency in Java or Python
Responsibilities:
  • Design and develop scalable batch processing systems using technologies like Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python.
  • Write clean, efficient, and production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks.
  • Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling.
  • Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable and cloud-native big data solutions.
  • Implement DevOps and automation best practices, including CI/CD pipelines, infrastructure as code (IaC), and performance tuning across distributed systems.
  • Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment.

SQL, GCP, Hadoop, Java, Airflow, Algorithms, Data engineering, Data Structures, Spark, CI/CD, DevOps, Terraform

Posted about 1 month ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 116,100 - 198,440 USD per year

πŸ” Software Development

Requirements:
  • 5+ years of experience in customer-facing software/technology or consulting.
  • 5+ years of experience with β€œon-premises to cloud” migrations or IT transformations.
  • 5+ years of experience building and operating solutions on GCP.
  • Proficiency in Oozie and Pig.
  • Proficiency in Java or Python
  • Experience with managed cloud services and understanding of cloud-based batch processing systems are critical.
  • Strong programming skills with Java (specifically Spark), Python, Pig, and SQL.
  • Expertise in public cloud services, particularly in GCP.
  • Proficiency in the Apache Hadoop ecosystem, including Oozie, Pig, Hive, and MapReduce.
  • Familiarity with BigTable and Redis.
  • Experienced in applying infrastructure and DevOps principles in daily work: uses continuous integration and continuous deployment (CI/CD) tooling and Infrastructure as Code (IaC), such as Terraform, to automate and improve development and release processes.
  • Proven experience in engineering batch processing systems at scale.
  • Bachelor's degree in Computer Science, Software Engineering, or a related field of study.
Responsibilities:
  • Design and develop scalable batch processing systems using technologies like Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python.
  • Write clean, efficient, and production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks.
  • Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling.
  • Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable and cloud-native big data solutions.
  • Implement DevOps and automation best practices, including CI/CD pipelines, infrastructure as code (IaC), and performance tuning across distributed systems.
  • Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment.

Python, SQL, Apache Hadoop, GCP, Hadoop HDFS, Java, Algorithms, Data Structures, Spark, CI/CD, DevOps, Terraform

Posted about 1 month ago

πŸ“ United States, Canada

🧭 Temporary

πŸ” Big Data

Requirements:
  • 6+ years of experience in cloud environments with Azure services
  • Strong experience with SQL Server and DB2
  • Experience with ETL/ELT tools
  • Proficiency in Python for automation
  • Knowledge of Kafka for real-time streaming
Responsibilities:
  • Design and deploy big data solutions
  • Manage SQL Server and DB2 databases
  • Optimize database performance
  • Use ETL/ELT tools for data processing
  • Implement CI/CD practices

AWS, Python, ETL, GCP, Kafka, Azure, CI/CD, Data modeling

Posted 5 months ago