Big Data Engineer

Posted 2024-10-15

πŸ’Ž Seniority level: Mid-level, several years of experience

πŸ“ Location: Romania

πŸ” Industry: Big Data

🏒 Company: CREATEQ

πŸ—£οΈ Languages: English

⏳ Experience: Several years

πŸͺ„ Skills: Python, SQL, Java, Apache Kafka, Airflow, NoSQL, Spark

Requirements:
  • Several years of experience developing in a modern programming language, preferably Java or Python.
  • Significant experience with developing and maintaining distributed big data systems with production quality deployment and monitoring.
  • Exposure to high-performance data pipelines, preferably with Apache Kafka & Spark.
  • Experience with scheduling systems such as Airflow, and SQL/NoSQL databases.
  • Experience with cloud data platforms is a plus.
  • Exposure to Docker and/or Kubernetes is preferred.
  • Good command of spoken and written English.
  • University degree in computer science or equivalent professional experience.
Responsibilities:
  • Develop new data pipelines and maintain the data ecosystem, focusing on fault-tolerant ingestion, storage, data lifecycle management, and metrics computation.
  • Communicate efficiently with team members to develop software and creative solutions to meet customer needs.
  • Write high-quality, reusable code, test it, and deploy it to production.
  • Apply best practices according to industry standards while promoting a culture of agility and excellence.