- 5+ years of professional experience in Applied AI Engineering, ML Engineering, or a related role.
- Advanced proficiency in Python and hands-on experience with AI/ML frameworks (e.g., LangChain, LlamaIndex, CrewAI, PyTorch).
- Expertise with data and ML libraries (e.g., Pandas, spaCy).
- Demonstrated success deploying and maintaining LLM-powered applications in production.
- Deep, hands-on experience designing, building, and optimizing RAG pipelines.
- Expertise in vector databases (e.g., Qdrant, Pinecone, Weaviate), embedding strategies, and chunking techniques.
- Demonstrable experience with modern evaluation techniques for multi-step AI agents.
- Demonstrable skill in designing, testing, and optimizing complex prompts and few-shot examples.
- Experience fine-tuning foundation models for specific downstream tasks.
- Advanced proficiency with API-driven frameworks for accessing and serving self-hosted foundation models (e.g., AWS SageMaker/Bedrock, Databricks Model Serving, TGI, vLLM).
- Proven ability to optimize AI systems for low latency and high throughput.
- Intermediate proficiency with MLOps tooling (e.g., MLflow, Arize) and CI/CD best practices for AI systems.
- Bachelor's Degree in Computer Science, Engineering, Statistics, or a related field (or equivalent practical experience).