Machine Learning Engineer - Training Optimization

Remote (worldwide) · Full-Time · Mid-Level
Salary not disclosed

Job Details

Required Skills
PyTorch, Distributed Systems

Requirements

  • Strong experience training large neural networks (LLMs or similarly large models)
  • Hands-on experience with training optimization (not just model usage)
  • Solid understanding of backpropagation, optimization algorithms, and training dynamics
  • Solid understanding of distributed systems for ML training
  • Experience with PyTorch (required)
  • Comfort working close to hardware (GPUs, memory, networking constraints)
  • Ability to move fluidly between research ideas and production-ready code
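To give a flavor of the "solid understanding of backpropagation and optimization algorithms" expected here, below is a toy illustration (not from the posting, and far simpler than LLM-scale training): manual backpropagation and an SGD step on a one-parameter linear model, with the chain rule written out by hand.

```python
# Toy sketch of backpropagation + SGD on y = w * x + b with MSE loss.
# All names and numbers are illustrative assumptions, not from the posting.

def sgd_step(w, b, x, y_true, lr=0.1):
    """One forward/backward pass and one SGD update for a single sample."""
    y_pred = w * x + b                   # forward pass
    loss = (y_pred - y_true) ** 2        # mean-squared-error loss
    # Backpropagation: apply the chain rule from the loss backward.
    grad_out = 2.0 * (y_pred - y_true)   # dL/dy_pred
    grad_w = grad_out * x                # dL/dw
    grad_b = grad_out                    # dL/db
    # SGD update: step parameters against their gradients.
    return w - lr * grad_w, b - lr * grad_b, loss

# Fit the model to the single point (x=1, y=2); w + b converges to 2.
w, b = 0.0, 0.0
for _ in range(50):
    w, b, loss = sgd_step(w, b, x=1.0, y_true=2.0)
```

In production training the same forward/backward/update cycle runs through an autograd engine (e.g. PyTorch) across billions of parameters, but the underlying dynamics being tuned are these.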

Responsibilities

  • Optimize large-scale model training pipelines (throughput, convergence, stability, and cost)
  • Improve distributed training strategies (data, model, and pipeline parallelism)
  • Tune optimizers, schedulers, batch sizing, and precision (bf16 / fp16 / fp8)
  • Reduce training time and compute cost via profiling, bottleneck analysis, and systems-level improvements
  • Collaborate with researchers on architecture-aware training strategies
  • Build and maintain robust training infrastructure (checkpointing, fault tolerance, reproducibility)
  • Evaluate and integrate new training techniques (e.g., gradient checkpointing, ZeRO, FSDP, custom kernels)
  • Own training performance metrics and continuously push them forward
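The last two bullets, owning throughput metrics and pushing them via profiling, can be made concrete with a back-of-envelope sketch. The function names and numbers below are illustrative assumptions, not anything specified by the employer.

```python
# Hypothetical throughput bookkeeping for a data-parallel training job.
# Names and figures are assumptions for illustration only.

def tokens_per_second(global_batch, seq_len, step_time_s):
    """Tokens processed per second across all data-parallel ranks."""
    return global_batch * seq_len / step_time_s

def step_time_after_speedup(step_time_s, speedup_pct):
    """Step time after a profiling-driven optimization (e.g. a fused kernel)."""
    return step_time_s * (1.0 - speedup_pct / 100.0)

# Baseline: 512-sequence global batch, 2048-token context, 4 s per step.
base = tokens_per_second(global_batch=512, seq_len=2048, step_time_s=4.0)
# A 20% step-time reduction raises throughput by 25% (1 / 0.8 = 1.25),
# which translates directly into lower compute cost per token.
faster = tokens_per_second(512, 2048, step_time_after_speedup(4.0, 20.0))
```

Tracking a metric like this per optimization is one simple way to "own training performance metrics and continuously push them forward."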