Sr Big Data Engineer - Oozie and Pig (GCP)

Posted about 2 months ago
Location: US-Remote, Canada-Remote
Type: Full-Time
Category: Big Data Engineering
Company:
Languages: English
Seniority level: Senior
Experience: 5+ years
Skills:
Python, SQL, Apache Airflow, Apache Hadoop, GCP, Java, Jira, Algorithms, Data Structures, Redis, Spark, CI/CD, DevOps, Terraform
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field of study.
- Experience with managed cloud services and a solid understanding of cloud-based batch processing systems.
- Ability to lead Jira Epics (must-have).
- Proficiency in Oozie, Airflow, MapReduce, and Java (must-haves).
- Strong programming skills in Java (specifically Spark), Python, Pig, and SQL.
- Expertise in public cloud services, particularly GCP.
- Proficiency in the Apache Hadoop ecosystem: Oozie, Pig, Hive, MapReduce.
- Familiarity with BigTable and Redis.
- Experience applying infrastructure and DevOps principles in daily work: continuous integration and continuous deployment (CI/CD) tooling and Infrastructure as Code (IaC) such as Terraform to automate and improve development and release processes.
- Proven experience engineering batch processing systems at scale.
- 5+ years of experience in customer-facing software/technology or consulting.
- 5+ years of experience with "on-premises to cloud" migrations or IT transformations.
- 5+ years of experience building and operating solutions on GCP.
Responsibilities:
- Design and develop scalable batch processing systems using technologies such as Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python (Java is a must).
- Lead Jira Epics.
- Write clean, efficient, production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks.
- Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling.
- Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable, cloud-native big data solutions.
- Implement DevOps and automation best practices, including CI/CD pipelines, Infrastructure as Code (IaC), and performance tuning across distributed systems.
- Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment.