Hallucination

Safety

When an AI model generates plausible-sounding but factually incorrect or fabricated output. A key challenge of LLMs, commonly mitigated through retrieval-augmented generation (RAG), grounding in ground-truth data, and output verification.
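For illustration, a minimal sketch of RAG-style grounding: retrieved passages are injected into the prompt, and the model is instructed to answer only from them. The corpus, the keyword-overlap retriever, and the `generate` function are hypothetical placeholders for illustration, not any specific library's API.

```python
# Minimal RAG-style grounding sketch. The corpus, the toy retriever,
# and `generate` are hypothetical placeholders, not a real API.

CORPUS = [
    "The Eiffel Tower is 330 metres tall.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank passages by keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an API client)."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Constrain the model to the retrieved evidence to reduce hallucination.
    prompt = (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

Grounding narrows but does not eliminate hallucination; verification, such as checking generated claims against the retrieved sources, is typically layered on top.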
