- Strong software engineering background, with experience in AI/ML frameworks such as TensorFlow and PyTorch and their application to generative models.
- Proven experience integrating AI/ML models into real-world applications, particularly production use cases of LLMs.
- Deep understanding of model behavior and context windows, along with strong practical experience implementing and optimizing RAG systems.
- Hands-on experience with cloud infrastructure and deployment tools, particularly Terraform, and with deploying services on platforms such as AWS, GCP, or Azure.
- Proficiency in unit, integration, and system testing, with a focus on the robustness, reliability, and performance of AI-powered systems.
- Comfort working in cross-functional teams, collaborating with Product Managers, Designers, and Lead Engineers to build cohesive, user-centered solutions.
- Experience building and maintaining MCP (Model Context Protocol) or similar internal/external server infrastructure is a strong plus.