ai-literacy · intermediate · unit-6

Evaluation Criteria

Definition

The standards used to judge whether an AI response is good enough for its purpose.

In Plain English

Evaluation criteria are like the grading rubric for AI output.

Real-World Example

A proposal-review prompt may grade answers on accuracy, completeness, risk identification, clarity, and source grounding.
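A rubric like this can be sketched as a weighted checklist. The criterion names below come from the example above; the weights, the 0-5 rating scale, and the function name are illustrative assumptions, not a standard:

```python
# Illustrative sketch of evaluation criteria as a weighted rubric.
# Criterion names follow the proposal-review example; the weights and
# the 0-5 scale are assumptions for demonstration only.

CRITERIA = {
    "accuracy": 0.30,
    "completeness": 0.25,
    "risk_identification": 0.20,
    "clarity": 0.15,
    "source_grounding": 0.10,
}

def score_response(ratings):
    """Combine per-criterion ratings (0-5) into one weighted score out of 5.

    `ratings` maps each criterion name to a reviewer's 0-5 rating.
    Raises ValueError if any criterion was left unrated.
    """
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

# A polished but weakly sourced answer: high clarity, low source grounding.
ratings = {
    "accuracy": 4,
    "completeness": 3,
    "risk_identification": 2,
    "clarity": 5,
    "source_grounding": 1,
}
print(score_response(ratings))  # 3.2 out of 5
```

Making the weights explicit forces a team to agree on what matters most, and the weighted total shows how a polished answer (clarity 5) can still score poorly when it lacks grounding.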

Why It Matters for Your Work

Clear criteria make AI output easier to review, compare, and improve, and safer to use in business workflows.

Common Mistake

Calling an AI response good because it sounds polished, even if it is incomplete or unsupported.

Related Terms

Human Review

Having a person verify AI output before using or publishing it.

Source Checking

Verifying AI claims against reliable sources.

Prompt Engineering

Designing instructions and context so an AI model produces better, more consistent results.
