AI Quality Engineer: Global Analytic
Duties and Responsibilities:
Develop and implement comprehensive testing strategies for AI models and systems.
Design and execute test cases to evaluate the accuracy, performance, and reliability of AI models.
Automate testing processes to streamline quality assurance workflows and improve efficiency.
Collaborate with data scientists, AI researchers, and software engineers to understand model functionality and identify testing requirements.
Monitor and analyze test results, identify defects and issues, and work with the development team to resolve them.
Conduct performance testing to ensure AI models operate efficiently under various conditions and workloads.
Stay updated with the latest advancements in AI and testing methodologies to continuously improve testing practices.
Provide detailed documentation of testing processes, methodologies, and results.
Ensure compliance with industry standards and best practices for AI quality assurance.
Experience – 5 to 8 Years
Location – Bangalore (client location)
Mode of Work – Work from office
Automation tester with hands-on experience testing AI products
Hands-on experience defining and executing a product testing strategy, covering:
API testing, automation frameworks, BDD (Python + Behave), Git version control, evaluation reporting and communication, and CI/CD integration.
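To illustrate the BDD-style API testing expected here, the following is a minimal sketch in plain Python. In a real Behave setup these would be @given/@when/@then step functions bound to a .feature file; the response shape and expected label are illustrative assumptions, not part of this job description.

```python
# BDD-style Given/When/Then steps for a model API accuracy check.
# Plain functions here; Behave would normally supply the step decorators.
import json

def given_api_response(raw_body: str) -> dict:
    """Given: parse the raw JSON body returned by the model API."""
    return json.loads(raw_body)

def when_extract_prediction(response: dict) -> str:
    """When: pull the (assumed) 'prediction' field out of the response."""
    return response.get("prediction", "")

def then_prediction_matches(prediction: str, expected: str) -> bool:
    """Then: the prediction must equal the expected label."""
    return prediction == expected
```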
LLM evaluation and testing for accuracy, relevance, and context of responses or outputs; feedback mechanisms and continuous validation.
Testing for bias and safety, including non-subjective response handling and fallback behavior.
Basic performance testing: p50/p95 latency, response-time SLAs, load testing.
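The p50/p95 and SLA checks above can be sketched with only the standard library; the sample data and the SLA threshold below are illustrative assumptions.

```python
# Compute p50/p95 latency from collected response times and check an SLA.
import statistics

def latency_percentiles(samples_ms):
    """Return (p50, p95) latency in milliseconds from a list of samples."""
    # quantiles(n=100) yields the 1st..99th percentile cut points,
    # so index 49 is the 50th percentile and index 94 the 95th.
    qs = statistics.quantiles(samples_ms, n=100)
    return qs[49], qs[94]

def check_sla(samples_ms, p95_sla_ms=500.0):
    """Report p50/p95 and whether p95 meets the (assumed) SLA budget."""
    p50, p95 = latency_percentiles(samples_ms)
    return {"p50": p50, "p95": p95, "within_sla": p95 <= p95_sla_ms}
```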
Data storage, traceability, and auditability.
Handling edge cases, harmful prompts, etc.
Application of any relevant legal or regulatory frameworks to testing before production release.
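The LLM relevance, fallback, and harmful-prompt requirements above can be sketched as a simple guarded-response check; the keyword-overlap scoring rule, blocklist, and threshold are simplifying assumptions for illustration only.

```python
# Relevance scoring plus a blocklist-based safety fallback for LLM outputs.
BLOCKLIST = {"how to build a weapon"}  # assumed placeholder pattern
FALLBACK = "I can't help with that request."

def relevance_score(question: str, answer: str) -> float:
    """Fraction of question keywords (length > 3) appearing in the answer."""
    q_words = {w.lower() for w in question.split() if len(w) > 3}
    if not q_words:
        return 0.0
    a_words = {w.lower() for w in answer.split()}
    return len(q_words & a_words) / len(q_words)

def guarded_response(prompt: str, model_answer: str, threshold: float = 0.3):
    """Return the model answer, or a fallback for harmful/off-topic output."""
    if any(bad in prompt.lower() for bad in BLOCKLIST):
        return FALLBACK, 0.0
    score = relevance_score(prompt, model_answer)
    return (model_answer if score >= threshold else FALLBACK), score
```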