Are You Fighting Hallucinations?
I see LLM hallucinations as a measure of how unpredictable your code actually is.
When an AI consistently struggles to understand or work with your codebase, it’s rarely an AI problem. It’s a code quality problem. The factors that increase LLM hallucinations are the same ones that make code hard for humans to reason about. Untyped languages remove constraints that help both AI and developers understand data flow. Bad naming creates ambiguity that forces the AI to guess at intent. Poor project organization and scattered files destroy context. Obscure frameworks reduce the AI’s training coverage, and bad context management means the right information isn’t available at the right time.
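To make that concrete, here is a minimal TypeScript sketch of the problem, with hypothetical names. The function typechecks, yet nothing in it constrains interpretation, so a model (like any reader) can only guess:

```typescript
// Everything here compiles, but what is `d`? A document, a deal,
// a dataset? Is `rate` a tax rate, an exchange rate, a sampling
// rate? An LLM completing code against this has to guess, and
// guesses are hallucinations.
function proc(d: any, f: any): any {
  return d.items.filter(f).map((x: any) => x.val * d.rate);
}
```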
If you find yourself fighting LLM hallucinations often, your code is unpredictable, and the fix is on your side, not the LLM's. Use a typed language and apply good semantic naming throughout your codebase. Restructure your project for clarity and embrace modern, well-documented frameworks. Manage context more effectively when working with AI tools: use Plan Mode for complex tasks, or, better yet, OpenSpec or SpecKit for larger projects.
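Here is the same function after applying those fixes, a sketch with hypothetical invoice types standing in for your actual domain:

```typescript
// Hypothetical domain types: every constraint the model previously
// had to guess is now spelled out in the code.
interface LineItem {
  description: string;
  netAmountCents: number; // monetary values as integer cents
  taxable: boolean;
}

interface Invoice {
  items: LineItem[];
  taxRate: number; // e.g. 0.19 for 19% VAT
}

// Semantic names plus explicit types narrow the space of plausible
// completions to the ones that are actually correct.
function taxPerTaxableItem(invoice: Invoice): number[] {
  return invoice.items
    .filter((item) => item.taxable)
    .map((item) => item.netAmountCents * invoice.taxRate);
}
```

With the types in place, an agent that invents a field name gets a compiler error instead of shipping a silent hallucination.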
The predictability that AI agents need is the same predictability new team members need to become productive quickly. The better your codebase, the better the AI can help you, and the sooner new developers can contribute.