Module 7 — Data Safety & Common Mistakes

Section 3: What Hallucinations Look Like

Purpose of This Section

This section explains what AI hallucinations are, why they occur, and why they are one of the most common and dangerous failure modes when using AI at work.

Hallucinations are not rare. They are a known limitation.

The Core Idea

A hallucination occurs when an AI produces information that sounds correct but is factually wrong.

AI systems generate responses by predicting plausible language from patterns in their training data; they do not verify facts. As a result, they may confidently invent details, references, or explanations that do not exist.

Confidence does not equal accuracy.

Why Hallucinations Are Dangerous

Hallucinations are difficult to detect because they are often:

  • fluent and well-structured
  • logically explained
  • supported by plausible-sounding references

They do not usually appear obviously broken. They appear almost right, which makes them easy to trust and repeat.

Common Examples

Hallucinations often take forms such as:

  • incorrect dates or timelines
  • laws or policies that were never enacted
  • fabricated studies or citations
  • features or capabilities that do not exist
  • misattributed quotes or actions

These errors can propagate quickly if not caught.

When Hallucinations Are More Likely

Hallucinations are more likely when:

  • information is recent or rapidly changing
  • topics are niche or poorly documented
  • prompts request definitive answers without sources
  • prompts push beyond the model's established knowledge

Recency, obscurity, and ambiguity all increase risk.

How to Use AI Safely

AI should be used as an orientation and exploration tool, not as a final authority.

When accuracy matters:

  • verify dates, names, and figures
  • check original sources
  • confirm claims independently

Human judgment is required to catch errors.

Common Failure Mode

A common mistake is assuming that fluent, confident responses are reliable by default.

Another failure mode is repeating AI-generated information without verification, allowing errors to scale across teams or documents.

Errors become more costly as they spread.

The Conjugo Rule

If it matters, verify it.

AI can help you think faster, but it cannot guarantee correctness.

Best Practices

Managing hallucination risk works best when:

  • critical information is cross-checked
  • sources are explicitly requested
  • outputs are reviewed skeptically
  • verification is built into workflows

Accuracy requires intention.

Section Takeaway

  • Hallucinations are confident errors
  • They often sound polished and complete
  • Risk increases with novelty and ambiguity
  • Verification remains human responsibility

Hallucinations are a limitation to manage, not a flaw to ignore.

This concludes Section 3.