Applied AI Engineer - Agentic Systems & Reputation Intelligence
India · Full-Time · Mid-level
Salary not disclosed
Job Details
- Required Skills
- Python, C# / .NET, React, AWS Lambda
Requirements
- Deep experience designing and shipping autonomous systems using ReAct, Planning, Reflection, and Multi-Agent Orchestration.
- Experience moving agents from demo to production, including failure handling, state management, and non-deterministic outputs.
- Expert-level implementation experience with frontier model APIs from OpenAI, Google, and Perplexity, including function calling, structured outputs, and streaming.
- Proficiency with agentic developer tools (e.g., Claude Code, Cursor, or custom agentic CLI workflows).
- Demonstrated ability to manage multiple autonomous agents across a codebase: writing, testing, and auditing code with minimal manual intervention.
- Strong proficiency in Python for AI orchestration.
- Working knowledge of C# / .NET for integration with the core platform.
- Practical experience deploying and operating serverless architectures on AWS (Lambda, Step Functions, EventBridge) for high-frequency, agentic workloads.
- Experience building custom evaluation frameworks (LLM-as-a-Judge, trace-based testing, regression suites) to maintain and improve agent output quality.
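To illustrate the kind of evaluation framework the last requirement describes, here is a minimal sketch of a trace-based regression suite with an LLM-as-a-Judge scoring step. The `call_judge_model` function is a hypothetical stand-in for a real frontier-model API call and is stubbed with keyword matching so the sketch runs; the `TraceCase` fields and the 0.8 threshold are illustrative assumptions, not the employer's actual pipeline.

```python
# Sketch: trace-based regression check scored by an LLM-as-a-Judge.
from dataclasses import dataclass


@dataclass
class TraceCase:
    prompt: str          # input that produced the recorded agent trace
    agent_output: str    # recorded agent answer under regression test
    rubric: str          # criteria the judge scores against


def call_judge_model(judge_prompt: str) -> float:
    """Hypothetical judge call. A real system would hit a model API and
    parse a structured score; stubbed here so the sketch is runnable."""
    return 1.0 if "refund" in judge_prompt.lower() else 0.0


def judge_score(case: TraceCase) -> float:
    """Build the judge prompt from the trace case and return its score."""
    judge_prompt = (
        f"Rubric: {case.rubric}\n"
        f"Question: {case.prompt}\n"
        f"Answer: {case.agent_output}\n"
        "Score the answer from 0 to 1."
    )
    return call_judge_model(judge_prompt)


def run_regression(cases: list[TraceCase], threshold: float = 0.8) -> list[str]:
    """Return the prompts of cases whose judge score fell below threshold."""
    return [c.prompt for c in cases if judge_score(c) < threshold]
```

In a production setup the failing prompts would feed a quality dashboard or block a deploy, closing the feedback loop the posting asks for.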
Responsibilities
- Architect Agentic Systems: Design and implement production-grade agentic workflows (Reflection, Self-Correction, Planning, Multi-Agent Orchestration) for AI-driven product capabilities.
- Operationalize Agents in Production: Own the full lifecycle from rapid prototype to production deployment, considering token budgets, latency targets, graceful degradation, and cost observability.
- Evolve Multi-Model Intelligence: Develop systems that synthesize outputs from ChatGPT, Gemini, and Perplexity to identify sentiment discrepancies, detect brand hallucinations, and surface actionable intelligence.
- Build Scalable Cloud Infrastructure: Design and maintain serverless backend services using Python and AWS Lambda, ensuring efficient and observable agent performance at scale.
- Drive Evaluation and Quality: Build LLM-as-a-Judge evaluation frameworks, trace-based testing pipelines, and quality feedback loops for reliable agent output.
- Collaborate Across Teams: Work with Software Engineers and Product Managers to integrate agentic behaviors into production frameworks, ensuring stateful, observable, and resilient AI systems.
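The multi-model responsibility above can be sketched as a small cross-model comparison: each model's answer about a brand is mapped to a sentiment score, and model pairs that disagree beyond a tolerance are flagged as discrepancies. The keyword-based `sentiment_score` and the 0.5 tolerance are illustrative assumptions; a real system would score sentiment with a model call or classifier.

```python
# Sketch: flag sentiment discrepancies between answers from different models.
from itertools import combinations


def sentiment_score(text: str) -> float:
    """Hypothetical scorer mapping text to -1 (negative) .. +1 (positive).
    Stubbed with keyword counts so the sketch runs without a model call."""
    positives = sum(w in text.lower() for w in ("great", "reliable", "trusted"))
    negatives = sum(w in text.lower() for w in ("scam", "poor", "unreliable"))
    total = positives + negatives
    return 0.0 if total == 0 else (positives - negatives) / total


def find_discrepancies(
    answers: dict[str, str], tolerance: float = 0.5
) -> list[tuple[str, str]]:
    """Return model pairs whose sentiment about the same brand diverges
    by more than `tolerance`."""
    scores = {model: sentiment_score(text) for model, text in answers.items()}
    return [
        (a, b)
        for a, b in combinations(sorted(scores), 2)
        if abs(scores[a] - scores[b]) > tolerance
    ]
```

Flagged pairs are the "actionable intelligence" hook: a large gap between two models' answers about the same brand is exactly where a hallucination or stale answer is worth human review.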