- Own the full QA lifecycle for Agentic AI products: strategy, design, execution, reporting, and release sign-off.
- Design and run test plans covering functional, regression, smoke, exploratory, and usability testing for AI behavior and decision chains.
- Validate multi-step decision flows and reasoning to catch logic gaps, guardrail failures, or requirement mismatches (see the trace-validation sketch after this list).
- Perform structured exploratory testing to uncover unexpected behaviors, edge cases, and cascading AI failures.
- Build synthetic test scripts for UI elements, APIs, and end-to-end flows to verify functionality (see the API check sketch after this list).
- Test across platforms (web, mobile, integrations) for consistency and performance.
- Maintain dashboards tracking test coverage, failures, and quality KPIs for all stakeholders (see the KPI aggregation sketch after this list).
- Improve test reliability: fix flakiness, optimize parallel runs, and cut execution time.
- Partner with Product, Design, and Engineering to refine requirements and set clear go/no-go criteria.
- Monitor pre- and post-release quality; use data to enhance AI evaluation and guardrails.
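
To make the decision-flow validation concrete, here is a minimal sketch. The trace format (a list of steps with a tool name and an output), the tool allowlist, the blocked-phrase guardrail, and the run_agent helper are all illustrative assumptions, not the product's real schema or API.

```python
# Minimal sketch of a decision-chain / guardrail check. The Step format,
# ALLOWED_TOOLS, BLOCKED_PHRASES, and run_agent are placeholders.
from typing import TypedDict

class Step(TypedDict):
    tool: str
    output: str

ALLOWED_TOOLS = {"search_kb", "create_ticket", "send_reply"}  # guardrail allowlist
BLOCKED_PHRASES = ("ssn", "credit card")                      # crude content guardrail

def validate_trace(trace: list[Step]) -> list[str]:
    """Return guardrail/logic violations found in an agent's decision chain."""
    violations: list[str] = []
    for i, step in enumerate(trace):
        if step["tool"] not in ALLOWED_TOOLS:
            violations.append(f"step {i}: disallowed tool '{step['tool']}'")
        if any(p in step["output"].lower() for p in BLOCKED_PHRASES):
            violations.append(f"step {i}: output contains blocked content")
    if not trace or trace[-1]["tool"] != "send_reply":
        violations.append("flow did not end with a user-facing reply")
    return violations

def run_agent(prompt: str) -> list[Step]:
    # Stand-in for invoking the agent under test; returns a canned trace here.
    return [
        {"tool": "search_kb", "output": "refund policy: 30 days"},
        {"tool": "send_reply", "output": "You are eligible for a refund."},
    ]

def test_refund_flow_respects_guardrails():
    trace = run_agent("Customer asks for a refund on order #123")
    assert validate_trace(trace) == []
```

A check like this can run per step (allowed tools, content rules) and per flow (did the chain terminate in a user-facing action), which is where cascading failures and guardrail gaps usually surface.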
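
Similarly, a synthetic API check might look like the sketch below, assuming pytest and requests; the staging host, the /v1/agent/complete endpoint, the payload, and the response fields are assumptions standing in for the real service contract.

```python
# Minimal synthetic API check with pytest + requests. BASE_URL, the endpoint
# path, and the response fields are assumed, not the real contract.
import requests

BASE_URL = "https://staging.example.com"  # assumed staging host

def test_agent_endpoint_returns_grounded_answer():
    resp = requests.post(
        f"{BASE_URL}/v1/agent/complete",
        json={"query": "What is the return window?", "session_id": "qa-smoke-001"},
        timeout=30,
    )
    assert resp.status_code == 200
    body = resp.json()
    # Functional checks: a non-empty answer that is grounded in cited sources.
    assert body.get("answer"), "empty answer"
    assert body.get("citations"), "answer is not grounded in any source"
```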
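
For the quality dashboards, one lightweight approach is to aggregate KPIs from the test runner's JUnit-style XML output and publish them to whatever dashboard the team uses; the report path below is an assumption and the publishing step is omitted.

```python
# Minimal sketch: compute pass/fail KPIs from a JUnit-style XML report.
# The report path is assumed; pushing results to a dashboard is left out.
import xml.etree.ElementTree as ET

def suite_kpis(report_path: str = "reports/junit.xml") -> dict[str, float]:
    root = ET.parse(report_path).getroot()
    # Reports may have a <testsuites> wrapper or a single <testsuite> root.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    tests = sum(int(s.get("tests", 0)) for s in suites)
    failures = sum(int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites)
    skipped = sum(int(s.get("skipped", 0)) for s in suites)
    executed = max(tests - skipped, 1)  # avoid division by zero
    return {
        "tests": tests,
        "failures": failures,
        "pass_rate": round(100 * (executed - failures) / executed, 2),
    }

if __name__ == "__main__":
    print(suite_kpis())
```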