
The Good Hallucinations
You can't avoid AI hallucinations. Learn to love them: they force better engineering, documentation, types, and tests. Build systems that catch bad ones automatically.
January 20, 2026 • 7 min read • By Chris Hartwig
LLM hallucinations are a measurement of your code's unpredictability. If an AI struggles to work in your codebase, that is a signal of a quality problem in the code itself, not just in the model. And the same improvements that make the code predictable for a model (clearer documentation, stronger types, more tests) make it predictable for the humans who maintain it.
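
As a minimal sketch of that idea, the TypeScript below shows how a strictly typed API surface turns a plausible hallucination into a compile-time error instead of a production bug. The `createInvoice` function, its field names, and the hallucinated call are all hypothetical examples invented for illustration, not a real library.

```typescript
// A narrowly typed API surface: every valid call shape is spelled out.
type Currency = "USD" | "EUR";

interface InvoiceInput {
  customerId: string;
  amountCents: number; // integer cents avoid floating-point drift
  currency: Currency;
}

interface Invoice extends InvoiceInput {
  id: string;
  createdAt: Date;
}

function createInvoice(input: InvoiceInput): Invoice {
  return {
    ...input,
    id: `inv_${Math.random().toString(36).slice(2, 10)}`,
    createdAt: new Date(),
  };
}

// A plausible hallucination: the model invents a field name ("amount")
// and a currency ("GBP") that the API never defined. With strict types,
// the compiler rejects both before anything ships. Uncomment to see:
// createInvoice({ customerId: "c_1", amount: 19.99, currency: "GBP" });

// The correct call compiles, and a small assertion pins the behavior down.
const invoice = createInvoice({
  customerId: "c_1",
  amountCents: 1999,
  currency: "USD",
});
console.assert(invoice.amountCents === 1999, "amount should round-trip");
```

Run this through `tsc --strict` and the commented-out call fails type checking. Wire that same check into CI, and a whole class of hallucinated fields, flags, and enum values never reaches code review.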