- Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
- Minimum of 5 years of experience in software engineering, data engineering, or data science.
- At least 2 years of experience working with large language models (LLMs).
- Proven ability to design, develop, and deploy machine learning and generative AI solutions at scale.
- Strong proficiency in Python and data libraries such as NumPy, Pandas, and scikit-learn.
- Solid engineering fundamentals: OOP, data structures, algorithms, and production-ready coding practices.
- Experience with SQL, data warehousing, data wrangling, and pipeline development.
- Familiarity with AI tooling and frameworks (Cortex, OpenAI API, Anthropic API, LangChain, vector databases).
- Knowledge of automation tools (e.g., Zapier, UiPath, Gumloop) and agent frameworks (e.g., LangChain, PydanticAI, n8n).
- Experience building applications on AWS infrastructure; experience with AWS Bedrock and SageMaker is a plus.
- Strong communication skills, with the ability to explain complex concepts to non-technical audiences.
- Must be located in the United States and legally eligible to work there, able to work core U.S. business hours, and available for occasional travel or after-hours support.