Conduct QA and validation on outputs from AI models, data pipelines, and automation workflows.
Identify and document errors, inconsistencies, and anomalies in large-scale datasets or model-generated outputs.
Collaborate with data scientists, engineers, and product teams to triage and investigate data quality issues.
Develop and maintain SQL-based validation scripts and reports to track data integrity across systems.
Perform manual and exploratory testing of AI features, classification systems, and automation logic.
Create clear documentation and QA reports that describe findings, reproducible test cases, and impact assessments.
Support continuous improvement of model evaluation frameworks and QA processes.
Participate in cross-functional reviews to ensure AI systems meet quality and performance standards prior to deployment.
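The SQL-based validation work mentioned above could be sketched roughly as follows. This is a minimal illustration, not the team's actual tooling: the table name (`events`), its columns (`id`, `ts`), and the specific checks are all hypothetical placeholders, and SQLite stands in for whatever database the real pipelines use.

```python
import sqlite3

# Hypothetical data-integrity checks; each query returns a count of
# offending rows (0 means the check passes). Table/column names are
# placeholders for illustration only.
VALIDATION_CHECKS = {
    "null_ids": "SELECT COUNT(*) FROM events WHERE id IS NULL",
    "duplicate_ids": (
        "SELECT COUNT(*) FROM "
        "(SELECT id FROM events GROUP BY id HAVING COUNT(*) > 1)"
    ),
    "future_timestamps": "SELECT COUNT(*) FROM events WHERE ts > CURRENT_TIMESTAMP",
}

def run_validation(conn):
    """Run each check and return {check_name: offending_row_count}."""
    return {name: conn.execute(query).fetchone()[0]
            for name, query in VALIDATION_CHECKS.items()}

if __name__ == "__main__":
    # Demo with an in-memory database seeded with deliberately bad rows:
    # a duplicated id and a NULL id.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, ts TEXT)")
    conn.executemany(
        "INSERT INTO events VALUES (?, ?)",
        [(1, "2024-01-01"), (1, "2024-01-02"), (None, "2024-01-03")],
    )
    for name, count in run_validation(conn).items():
        status = "PASS" if count == 0 else f"FAIL ({count} rows)"
        print(f"{name}: {status}")
```

In practice a script like this would run on a schedule against production systems and feed its pass/fail counts into the QA reports described above.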