The Good Hallucinations
TL;DR: you can’t avoid AI hallucinations. Learn to love them: they lead to better projects.
You can’t avoid AI hallucinations. That’s how AI works. Some hallucinations you like, some you don’t. Every token is a probabilistic guess, so you should control the odds.
When AI invents a clever solution to your problem, that’s a hallucination. When it dreams up a clean architecture, that’s a hallucination. When it calls a function that doesn’t exist? Also a hallucination.
The difference is whether the hallucination survives in your codebase.
You Can Prevent Bad Hallucinations
Documentation provides facts. Give the model actual API references, real function signatures, concrete examples. It’ll still hallucinate, but now it’s hallucinating from a factual base instead of pure imagination. I’m talking about documentation for your project, your tools, your coding style and preferences. Use a /docs folder and get into the habit of adding documentation to it. Then refer the AI to it and get better results.
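A small, boring layout works fine; the file names here are just a sketch, not a standard:

```
docs/
  architecture.md   # how the pieces fit together
  conventions.md    # naming, error handling, commit style
  api.md            # endpoints, payloads, auth
  tooling.md        # how to build, test, and deploy
```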
Web access provides facts. Let the model search for what it needs. It finds the real API, the actual documentation, the current syntax. The hallucination self-corrects before it becomes code. Don’t hesitate to give it actual URLs to the documentation of the tools and libraries you’re using. Some projects even ship an llms.txt file meant to be read by the AI.
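An llms.txt file is just markdown: a title, a one-line summary, and links the model can follow. A minimal invented example:

```markdown
# Acme SDK

> TypeScript SDK for the Acme payments API.

## Docs

- [Quickstart](https://acme.example/docs/quickstart.md): install and first request
- [API reference](https://acme.example/docs/api.md): every method and type
```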
Good naming prevents hallucinations. They say the 3 hardest problems in coding are naming things and off-by-one errors. Use clear names for functions, variables, and classes in your code. Functions called getUserEmailAndValidate() tell the model exactly what they do. Functions called process() force the model to guess. Clear names are executable documentation.
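A minimal TypeScript sketch of the contrast, with lookupEmail as an invented helper:

```ts
// Forces the model to guess: what data? processed how? returning what?
function process(data: any): any {
  return data;
}

// Hypothetical helper, stubbed so the example runs.
function lookupEmail(userId: string): string {
  return `${userId}@example.com`;
}

// The name alone tells the model the input, the behavior, and the result.
function getUserEmailAndValidate(userId: string): string {
  const email = lookupEmail(userId);
  if (!email.includes("@")) throw new Error(`invalid email for user ${userId}`);
  return email;
}
```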
Clear APIs prevent hallucinations. Simple, obvious interfaces leave less room for creative interpretation. The model can’t hallucinate five different ways to use your function if there’s only one way that makes sense. The clearer your API, the less likely the model is to hallucinate.
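A sketch with invented names: the commented-out signature invites guessing, while the options object leaves one obvious call:

```ts
// Invites hallucination: which boolean is which? Can cc be undefined?
// function send(to: string, cc: string, retry: boolean, async: boolean): void

// Leaves one way that makes sense: the call site reads like documentation.
interface SendOptions {
  to: string;
  cc?: string[];
  retries?: number; // assumed default: 0
}

function sendEmail(body: string, options: SendOptions): void {
  const { to, cc = [], retries = 0 } = options;
  console.log(`to=${to} cc=${cc.join(",")} retries=${retries} body=${body}`);
}

sendEmail("Hello!", { to: "ada@example.com", retries: 2 });
```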
Strongly semantic code prevents hallucinations. Types that encode meaning, structures that enforce rules, APIs that make invalid states impossible: these constrain what the model can hallucinate into existence. The more direct the code, the less likely the model is to hallucinate.
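In TypeScript, a discriminated union is one way to do this; the Order type here is invented for illustration:

```ts
// Stringly typed: nothing stops status = "payed", or a paid order with no
// transaction id. The model is free to hallucinate both.
// type Order = { status: string; transactionId?: string };

// Semantic: invalid states don't type-check, so they can't exist.
type Order =
  | { status: "pending" }
  | { status: "paid"; transactionId: string }
  | { status: "cancelled"; reason: string };

function describe(order: Order): string {
  switch (order.status) {
    case "pending":
      return "awaiting payment";
    case "paid":
      return `paid (tx ${order.transactionId})`;
    case "cancelled":
      return `cancelled: ${order.reason}`;
  }
}
```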
Conventions prevent hallucinations. Ruby on Rails and Elixir work far better with AI than their dynamically typed nature would suggest. I think it’s because they put so much emphasis on convention and documentation. Models learn conventions during training, while types require reasoning at generation time. So when the AI doesn’t respect your conventions, document them better in your codebase.
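A convention note can be short; a few hypothetical lines in the docs/conventions.md sketched earlier go a long way:

```markdown
## Error handling

- Services never throw: they return Result<T, AppError>.
- New AppError codes go in src/errors.ts first.
- HTTP handlers map AppError codes to status codes in src/http/errors.ts only.
```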
Use the AI itself to prevent hallucinations. Models are very good at producing documentation they can refer to later, and at documenting the actual code. All the things you know about good engineering with humans also work with AI. You’re asking your developers to write documentation, right? Ask the AI too. And ask it to refactor your codebase to make the structure and the intent clearer for you: the AI will benefit as well.
If you catch the model hallucinating, read the above again and find what you can improve in the codebase you’re feeding the AI.
Help the AI Correct Its Hallucinations
Type checking filters out hallucinations. The compiler rejects code that invents non-existent functions, uses wrong types, calls methods that don’t exist. Bad hallucinations die before they compile.
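A deliberately broken TypeScript sketch: the method below sounds plausible but was never written, and tsc stops it cold:

```ts
interface User {
  id: string;
  email: string;
}

// A typical hallucination: a method that sounds right but doesn't exist.
// tsc rejects it with roughly:
//   error TS2339: Property 'getEmailAddress' does not exist on type 'User'.
function notify(user: User): string {
  return user.getEmailAddress(); // dies at compile time, not in production
}
```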
Testing filters out hallucinations. The model might generate code that compiles but does the wrong thing. Tests catch that. Bad hallucinations die before they pass tests.
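A minimal sketch with Node’s built-in test runner; applyDiscount is an invented example of code that compiles either way:

```ts
import test from "node:test";
import assert from "node:assert/strict";

// This compiles whether it's right or wrong: only a test notices if the
// model hallucinated "percent" as a fraction, or as cents, or anything else.
function applyDiscount(price: number, percent: number): number {
  return price - (price * percent) / 100;
}

test("a 20% discount on 50 gives 40", () => {
  assert.equal(applyDiscount(50, 20), 40);
});
```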
This is the key insight: you don’t need to prevent hallucinations if the AI can catch them automatically.
If Nothing Works, Rethink Your Engineering
If you’re still fighting hallucinations, your codebase doesn’t play well with AI.
You might have an exotic codebase organization (“legacy in-house framework”), rely on code that isn’t very semantic (“generic framework”), have a lot of implicit behavior (“the business logic is the code”), or have loose types (any as generic magic). If so, you’ll have to invest time and effort to make your codebase easier for an AI to understand.
For instance, if you’re using JavaScript instead of TypeScript, you can ask the AI to document types with JSDoc comments, for its own later use! If your code looks idiomatic only within your team, it’s time to document what you expect precisely, the way you’d explain it to a new hire. If you’re using Go’s gRPC-Gateway as your HTTP REST implementation, you’ll have to document that exotic architecture choice before the model tries to add an HTTP handler to an architecture not made for it. Last example: if you’re running a 100% pure Cloudflare Workers codebase, the AI will benefit from links to the documentation of each Cloudflare Workers SDK, like D1, KV, Workflows, etc.
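A sketch of what that JSDoc looks like on an invented function; both the model and tsc (with checkJs enabled) can read the annotations:

```js
/**
 * Deletes the user's account unless options.dryRun is set.
 *
 * @param {string} userId
 * @param {{ dryRun?: boolean }} [options]
 * @returns {Promise<boolean>} true if the account was actually deleted
 */
async function deleteAccount(userId, options = {}) {
  // ... real deletion logic would go here ...
  return !options.dryRun;
}
```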
If a human who doesn’t know your codebase can guess wrong, assume that the AI will guess wrong too. Hallucinations are an engineering problem that AI is exposing.
Cheap Models Work Better
If you’re thinking hallucinations are an AI problem, then maybe you’re waiting for AI to get better.
Stop waiting.
Better models won’t fix poor engineering.
Give an AI good documentation, clear idiomatic code, strong types, and decent tests, and it doesn’t matter whether it’s an expensive or a cheap model. It works.
In that sense, the model’s cost becomes a proxy for your engineering skills: the more you invest in your codebase, the less you need to invest in the model.
Cheap models force you to build better systems. They need clear context. They need good structure. They need proper documentation. And when you hit a real issue, you can always upgrade to a more expensive model to fix it. That’s very comparable to human tech teams: the more a rookie developer can do on their first day, the more confident you can be in your engineering skills. If only the most senior team member understands the codebase, you’re in trouble.
Conclusion
Think about it: if hallucinations force you to come up with better engineering, better documentation, better types, better tests, you’ll undeniably end up with a better project. One that’s nicer to work on, easier to maintain, and allows higher velocity.
Do you still think hallucinations are a problem for AI coding? Continue the discussion on HackerNews.
Photo by Marko Korb on Unsplash