Data Platform Engineer

Posted 15 days ago


📍 Location: United Kingdom, EU

🔍 Industry: FinTech

🏢 Company: Vitesse PSP

🗣️ Languages: English

🪄 Skills: AWS, Python, SQL, Apache Airflow, ETL, GCP, Azure, CI/CD, Microservices, Scala, Data modeling

Requirements:
  • Strong software engineering foundation (e.g., microservices, automated testing, containerization)
  • Strong experience with building and maintaining data pipelines and platforms
  • Proficiency in programming languages (e.g., Python, Java, Scala)
  • General knowledge of data engineering tools (e.g., Databricks, Apache Spark, dbt, Airflow)
  • Knowledge of semantic layer concepts and tools (e.g., LookML, Cube.js, dbt); a tool-agnostic sketch of the idea follows this list
  • Experience with relational and non-relational databases
  • Understanding of data modeling, ETL processes, and data governance
  • Familiarity with cloud platforms like AWS, Azure, or GCP
  • Strong problem-solving skills and ability to work collaboratively across teams
  • Experience with CI/CD practices and tools, including GitHub
  • Experience in Agile development methodologies
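
The semantic-layer requirement above (LookML, Cube.js, dbt) is central to the role. As a tool-agnostic sketch of the concept only, not any of those tools' actual APIs, the Python snippet below defines a metric once and compiles it into SQL on request; the metric name, table, and SQL fragments are hypothetical.

```python
# Tool-agnostic sketch of a semantic layer: metrics are defined once, centrally,
# and consumers request them by name instead of rewriting SQL.
# The metric name, table, filter, and SQL below are hypothetical placeholders.
METRICS = {
    "settled_payment_volume": {
        "sql": "SUM(amount)",
        "table": "payments",
        "filter": "status = 'settled'",
    },
}


def compile_metric(name: str, dimension: str) -> str:
    """Expand a named metric plus a grouping dimension into a concrete SQL query."""
    m = METRICS[name]
    return (
        f"SELECT {dimension}, {m['sql']} AS {name} "
        f"FROM {m['table']} WHERE {m['filter']} GROUP BY {dimension}"
    )


if __name__ == "__main__":
    # e.g. SELECT currency, SUM(amount) AS settled_payment_volume FROM payments ...
    print(compile_metric("settled_payment_volume", "currency"))
```

Tools such as dbt's semantic layer, Cube.js, or LookML generalize this same idea: one central metric definition that many dashboards and data-heavy features can reuse.
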
Responsibilities:
  • Design and implement a semantic layer or data framework to streamline the development of data-heavy features within the engineering department
  • Collaborate with engineering teams to understand their data requirements and ensure the framework meets their needs
  • Integrate the framework with existing tools, including Databricks, and ensure seamless interoperability
  • Build scalable and efficient pipelines to support data-driven applications (an orchestration sketch follows this list)
  • Develop and enforce best practices for data access, storage, and processing across the organization
  • Provide technical guidance and support to teams using the framework to build dashboards and features
  • Stay updated with industry trends and recommend tools, frameworks, or technologies that align with our goals
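
As a rough illustration of the pipeline-building responsibility, the sketch below assumes Apache Airflow 2.x (listed in the skills) as the orchestrator; the DAG id, task names, and extract/load helpers are hypothetical placeholders, not Vitesse's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract-then-load pipeline.
# All names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_payments(**context):
    # Hypothetical extract step: pull the day's payment events from a source system.
    pass


def load_to_warehouse(**context):
    # Hypothetical load step: write transformed records to the analytics warehouse.
    pass


with DAG(
    dag_id="payments_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # Airflow 2.x scheduling argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_payments", python_callable=extract_payments)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load  # simple linear dependency: extract, then load
```
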

Related Jobs

📍 Spain

🔍 HealthTech and AI

🏢 Company: Idoven

Requirements:
  • 3-4 years of experience in a similar ML platform engineering role, ideally with production model deployment experience.
  • Strong passion for building robust and scalable ML platforms.
  • Solid understanding of optimization techniques, multithreading, and distributed system concepts.
  • Foundation in computer science principles, including data structures, algorithms, and complexity analysis.
  • Experience building and maintaining software systems, preferably in a cloud environment (e.g., AWS, GCP, Azure).
  • Experience managing GPU resources, including driver management, access control, allocation, and memory management (NVIDIA, CUDA).
  • Familiarity with machine learning frameworks such as TensorFlow or PyTorch.
  • Experience with experiment tracking and model management tools (e.g., MLflow, TensorBoard).
  • Experience with containerization technologies (Docker, Kubernetes) and version control systems (e.g., GitHub).
  • Excellent problem-solving, communication, and collaboration skills.
  • Ability to work independently and as part of a team.
  • Comfortable with CI/CD practices, code reviews, and collaborative development.
Responsibilities:
  • Design, develop, and maintain tools and infrastructure for ML model training, experimentation, and deployment.
  • Develop systems for efficient access to and management of large datasets.
  • Create solutions for optimizing GPU utilization and resource allocation.
  • Integrate and maintain experiment tracking and monitoring tools (e.g., MLflow, TensorBoard); a minimal tracking sketch follows this list.
  • Develop processes for deploying ML models to production environments.
  • Collaborate closely with ML engineers to understand their needs and provide effective solutions.
  • Contribute to improving ML development lifecycle and best practices.
  • Troubleshoot and resolve ML platform-related issues.
  • Stay current with advancements in ML platform technologies and best practices.
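
The experiment-tracking item above calls for tools like MLflow; the sketch below is a minimal example of MLflow's tracking API, with a hypothetical experiment name, parameter, and metric value.

```python
# Minimal MLflow tracking sketch; experiment name and values are hypothetical.
import mlflow

mlflow.set_experiment("ecg-classifier-baseline")  # hypothetical experiment name

with mlflow.start_run(run_name="demo-run"):
    mlflow.log_param("learning_rate", 1e-3)  # record a hyperparameter
    mlflow.log_metric("val_auc", 0.91)       # record a validation metric
    # A real training run would also log the model artifact here,
    # e.g. mlflow.pytorch.log_model(...) for a PyTorch model.
```
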

AWS, Docker, Python, GCP, Git, Kubernetes, Machine Learning, MLflow, PyTorch, Algorithms, Azure, Data Structures, TensorFlow, CI/CD

Posted 3 months ago
🔥 Data Platform Engineer
Posted 5 months ago

📍 Cyprus

🧭 Full-Time

🔍 FinTech

🏢 Company: Zeal Group

Requirements:
  • Experience in Data Engineering (Data Pipelines, ETL, Data Quality, OLAP, Big data frameworks)
  • Experience in Infrastructure and DevOps (Cloud infrastructure, Terraform, k8s, Docker, Prometheus)
  • Experience in Analytics Engineering (Data Modelling, Data warehouses, Workflow Managers)
  • Experience in Backend Engineering (API, Micro-services, OLTP storages, Stream processing)
  • Experience with Continuous Integration tools (GitLab CI, TeamCity, Jenkins)
  • Experience with Python, SQL
Responsibilities:
  • Develop and manage robust data pipelines, ensuring high data quality (a minimal quality-check sketch follows this list)
  • Develop and own the data Infrastructure using IaC and DevOps practices
  • Develop and own the data lakehouse system through effective data modeling and data governance
  • Develop and maintain real-time data applications (microservices and stream processing systems)
  • Develop and maintain CI/CD pipelines for streamlining code integration, testing, and deployment
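
As an illustration of the data-quality responsibility, the sketch below assumes PySpark (Spark appears in the skills tags); the input path and column names are hypothetical placeholders, not Zeal Group's actual data.

```python
# Minimal PySpark batch data-quality check; path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trades_quality_check").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/trades/")  # hypothetical source path

# Basic quality rule: every trade must have a positive, non-null amount.
bad_rows = trades.filter(F.col("amount").isNull() | (F.col("amount") <= 0)).count()

if bad_rows:
    raise ValueError(f"Data quality check failed: {bad_rows} invalid trade rows")

spark.stop()
```
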

Docker, Python, SQL, Business Intelligence, ETL, Jenkins, Data engineering, Prometheus, Spark, CI/CD
