Ukraine
Requirements:
- 4+ years of experience in software/data engineering, data architecture, or a related field.
- Strong programming skills in at least one language: Java, Scala, Python, or Go.
- Experience with SQL and data modeling.
- Hands-on experience with big data frameworks such as Apache Hadoop, Hive, Spark, and Airflow.
- Proficiency in AWS cloud services.
- Strong understanding of distributed systems, large-scale data processing, and data storage/retrieval.
- Experience with data governance, security, and compliance is a plus.
- Familiarity with CI/CD and DevOps practices is a plus.
- Excellent communication and problem-solving skills.
Responsibilities:
- Design, build, and maintain scalable and reliable data storage solutions.
- Optimize and scale the platform for increasing data volumes and user requests.
- Improve data storage, retrieval, query performance, and overall system performance.
- Collaborate with data scientists, analysts, and stakeholders to deliver tailored data solutions.
- Ensure proper integration of data pipelines, analytics tools, and ETL processes.
- Troubleshoot and resolve platform issues in a timely manner.
- Develop monitoring and alerting systems to ensure platform reliability.
- Participate in code reviews and design discussions.
- Evaluate new technologies to enhance the data platform.
Skills: AWS, Python, SQL, Apache Airflow, Apache Hadoop, Kafka, Kubernetes, Data engineering, Scala, Data modeling
Posted 8 days ago