- Build end-to-end training pipelines: data → training → eval → inference
- Design new model architectures or adapt open-source frontier models
- Fine-tune models using state-of-the-art methods (LoRA/QLoRA, SFT, DPO, distillation); a minimal sketch follows this list
- Architect scalable inference systems using vLLM / TensorRT-LLM / DeepSpeed
- Build data systems for high-quality synthetic and real-world training data
- Develop alignment, safety, and guardrail strategies
- Design evaluation frameworks across performance, robustness, safety, and bias
- Own deployment: GPU optimization, latency reduction, scaling policies
- Shape early product direction, experiment with new use cases, and build AI-powered experiences from zero
- Explore frontier techniques: retrieval-augmented training, mixture-of-experts, distillation, multi-agent orchestration, multimodal models
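To make the fine-tuning item concrete, here is a minimal sketch of LoRA-based SFT using Hugging Face PEFT and TRL. The base model, dataset, and hyperparameters are illustrative assumptions, not requirements of the role or a prescribed stack.

```python
# Minimal LoRA SFT sketch (illustrative only; model, dataset, and hyperparameters are assumptions).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Small public instruction dataset, subsampled for a quick illustration.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")

# LoRA adapter configuration: low-rank updates on the attention projections.
peft_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # which weight matrices receive adapters
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",       # assumed base model for the example
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="lora-sft-out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()
```

The same adapter-based pattern extends to QLoRA (quantized base weights) and to preference tuning such as DPO by swapping in the corresponding trainer and dataset format.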