QATraining

AI Testing

AI, ML and LLM testing for working QA professionals

Hands-on guide for QA professionals testing AI systems: lifecycle strategy, data quality, model validation, risk, fairness, robustness, monitoring and LLM application evaluation.

10 guide sections · Self-paced reading

What changed?

This material is now presented as a free guide instead of a course. Progress tracking, exams, certificates, and paid course positioning are no longer part of the public experience. The useful QA content remains available for reading and reference.

Guide sections

1. AI Testing Mindset, Lifecycle, and QA Role

Establish how AI testing differs from conventional software testing and what a QA professional contributes across the full AI lifecycle.

2. AI Quality Characteristics, Risk, and Acceptance Criteria

Turn AI risk, trustworthiness, and quality characteristics into measurable release criteria.

3. Data, Labelling, Provenance, and Leakage Testing

Make data testable: provenance, labelling quality, representativeness, leakage, privacy, and data pipeline correctness.

4. ML Workflow, Models, Neural Networks, and Development Testing

Build enough ML workflow knowledge to test development practices, training pipelines, and model artifacts with confidence.

5. Metrics, Calibration, Statistical Confidence, and Model Comparison

Choose, calculate, interpret, and challenge model performance metrics.
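
Challenging a reported metric often starts with re-deriving it from the raw confusion-matrix counts. A minimal sketch, using purely illustrative numbers rather than any real model's results:

```python
# Hedged sketch: re-deriving precision, recall, and F1 from confusion-matrix
# counts, so a QA reviewer can independently check a reported figure.
# All counts below are illustrative.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Derive precision, recall, and F1 from raw counts, guarding empty denominators."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1

# Example: 90 true positives, 10 false positives, 30 false negatives.
p, r, f1 = precision_recall_f1(90, 10, 30)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.75 0.82
```

A high precision with a much lower recall, as here, is exactly the kind of gap a single headline "accuracy" number can hide.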

6. Test Oracles, Metamorphic Testing, Back-to-Back Testing, and A/B Testing

Apply AI-specific test design techniques for systems where exact expected outputs are unavailable or unstable.
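
When no exact expected output exists, a metamorphic test checks a relation between outputs instead. A minimal sketch, where the "model" is a deliberately trivial stand-in (a keyword-count sentiment scorer), not any real system under test:

```python
# Hedged sketch of a metamorphic test: instead of asserting an exact
# expected score, we assert a relation that must hold between two runs.
# The scorer below is a toy stand-in for a real model.

def sentiment_score(text: str) -> int:
    """Toy bag-of-words sentiment scorer: +1 per positive word, -1 per negative."""
    positive = {"good", "great", "excellent"}
    negative = {"bad", "poor", "awful"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def word_order_invariance_holds(text: str) -> bool:
    """Metamorphic relation: reversing word order must not change a bag-of-words score."""
    reversed_text = " ".join(reversed(text.split()))
    return sentiment_score(text) == sentiment_score(reversed_text)

print(word_order_invariance_holds("great service but awful wait"))  # True
```

The relation, not the score itself, serves as the oracle; a violation signals a defect even though no single "correct" output was ever specified.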

7. Explainability, Fairness, Bias, and Responsible AI Evidence

Evaluate explainability and fairness as testable quality concerns, not vague ethical slogans.

8. Robustness, Security, Adversarial Testing, and AI-Specific Threats

Test for AI-specific robustness and security threats, including poisoning, evasion, extraction, prompt injection, and confidentiality attacks.

9. Production Monitoring, Drift, Observability, and Incident Response

Continue AI testing after release through observability, drift detection, alerting, incident response, and model change control.
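
One common drift-detection measure is the Population Stability Index (PSI), which compares a baseline distribution against production. A minimal sketch, assuming pre-binned proportions; the bins, data, and alert thresholds (0.1 / 0.25) are a widely used rule of thumb, not a standard:

```python
# Hedged sketch of drift detection via the Population Stability Index.
# All distributions below are illustrative.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned proportions; higher values indicate more drift."""
    eps = 1e-6  # floor to avoid log(0) on empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Baseline (training-time) vs. production score distributions over 4 bins.
baseline = [0.25, 0.25, 0.25, 0.25]
production = [0.10, 0.20, 0.30, 0.40]
drift = psi(baseline, production)
print(round(drift, 3))  # 0.228 -- above the common 0.1 "investigate" threshold
```

Wiring a check like this into a scheduled monitoring job turns drift from a vague worry into an alertable, testable condition.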

10. Generative AI and LLM Application Testing

Test LLM applications, RAG systems, prompt-driven workflows, structured outputs, tools, safety behaviour, and regression quality.
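
Structured-output testing for LLM applications usually means validating that a response parses and matches an expected schema. A minimal sketch, where the canned response string and the field names (`answer`, `confidence`, `sources`) are assumptions for illustration, not any real system's contract:

```python
# Hedged sketch: schema-style validation of an LLM's structured output.
# The canned string stands in for a real model call; field names are
# illustrative assumptions.
import json

REQUIRED_FIELDS = {"answer": str, "confidence": float, "sources": list}

def validate_llm_json(raw: str) -> list[str]:
    """Return a list of problems found; an empty list means the output passed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

canned = '{"answer": "42", "confidence": 0.9, "sources": []}'
print(validate_llm_json(canned))  # []
```

Because model outputs vary run to run, asserting on structure rather than exact text is what makes these checks usable as regression tests.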

Need practical help?

Use the free tools and prompt library alongside these guides.