Senior Data Scientist

Location: EU (Remote) — must reside in and be legally permitted to work in the EU
Not eligible from: Australia, Canada, New Zealand, United Kingdom, United States
Remote-First Flexibility – Work from anywhere in the EU
Full-Time · Senior
Salary not disclosed

Job Details

Experience
5+ years of experience in Python, with ~10 years of overall software engineering or data experience
Required Skills
Docker · GraphQL · Python · Kubernetes · MLflow · FastAPI · CI/CD · RESTful APIs · GitHub Actions · Prompt Engineering · MLOps · LangChain

Requirements

  • 5+ years of experience in Python
  • ~10 years of overall software engineering or data experience
  • Proven experience building and deploying production-grade APIs (preferably with FastAPI; REST/GraphQL experience is a plus)
  • Hands-on experience working with large language models (LLMs) and LLMOps, including prompt engineering, fine-tuning, and evaluation (e.g., GPT, Claude)
  • Strong experience fine-tuning open-source models (e.g., Hugging Face ecosystem)
  • Practical experience designing and working with vector databases (e.g., Pinecone, Weaviate, Chroma, pgvector)
  • Experience building AI agents using frameworks such as LangGraph, LangChain, CrewAI, or similar
  • Solid understanding of model deployment and serving (e.g., vLLM, TGI, or managed endpoints)
  • Experience with CI/CD pipelines and modern deployment practices (Docker, Kubernetes, GitHub Actions)
  • Strong experience working with and processing large-scale text datasets

Responsibilities

  • Develop, deploy, and maintain production-grade AI systems, including scalable APIs that serve AI/ML models in production environments
  • Own end-to-end delivery of AI solutions, from prototyping to fully productionized systems
  • Design and implement Retrieval-Augmented Generation (RAG) systems using vector databases
  • Build and orchestrate AI agents using frameworks such as LangGraph, CrewAI, or similar
  • Evaluate and select appropriate large language models (LLMs) and foundation models based on specific use cases
  • Optimize model inference for latency, cost efficiency, and throughput at scale
  • Implement robust monitoring, logging, and alerting for deployed models and services
  • Build and maintain CI/CD pipelines for seamless testing, deployment, and iteration of AI systems
  • Work closely with product and engineering teams to integrate AI capabilities into core products