- Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field
- Minimum 5 years of experience in software engineering, data engineering, or data science
- At least 2 years of experience working with large language models
- Demonstrated ability to design, develop, and deploy ML and generative AI solutions at scale
- Proficiency in Python data manipulation and analysis libraries (NumPy, pandas, scikit-learn)
- Strong engineering fundamentals (OOP, data structures, algorithms, clean and testable code)
- Strong knowledge of SQL and experience with data warehousing, wrangling, preprocessing, and data pipelines
- Experience building AI tooling and frameworks (Cortex, OpenAI API, Anthropic API, LangChain, semantic search, vector databases)
- Familiarity with automation software (e.g., Zapier, UiPath, Gumloop)
- Familiarity with the Model Context Protocol (MCP) and agent frameworks (LangChain, PydanticAI, n8n)
- Experience building applications on AWS services and infrastructure (AWS Bedrock and SageMaker a plus)
- Strong communication skills, with the ability to convey complex information to non-technical audiences
- Ability to think strategically about AI solutions and business outcomes
- Ability to work in a remote-first, fast-paced environment and manage multiple projects