LLM hallucinations aren’t bugs; they’re compression artefacts. And we just figured out how to predict them before they happen.
When your LLM confidently states that “Napoleon won the Battle of Waterloo,” it’s not broken. It’s doing exactly what it was trained to do: compress the entire internet into model weights, then decompress on demand. Sometimes, there isn’t enough information to perfectly reconstruct rare facts, so it fills gaps with statistically plausible but wrong content […]
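The compression-then-decompression framing can be illustrated with a toy sketch (this is my own illustrative example, not the method described here): a tiny bigram model "compresses" a corpus into co-occurrence counts, and greedy decoding "decompresses" on demand. A fact seen only once gets outvoted by the statistically dominant pattern, producing a plausible but wrong reconstruction.

```python
from collections import Counter, defaultdict

# Toy corpus: a common pattern dominates; one rare fact diverges from it.
corpus = [
    "napoleon won the battle of austerlitz",
    "napoleon won the battle of jena",
    "napoleon won the battle of wagram",
    "napoleon lost the battle of waterloo",  # the rare fact
]

# "Compress" the corpus into bigram counts (a crude stand-in for model weights).
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def decompress(word):
    # Greedy decoding: always pick the most frequent continuation.
    return counts[word].most_common(1)[0][0]

# "napoleon lost" appeared once, "napoleon won" three times, so the
# reconstruction fills the gap with the plausible-but-wrong majority pattern.
print(decompress("napoleon"))  # -> "won"
```

The single occurrence of "lost" is still in the counts, just outweighed; the same dynamic at scale is what makes rare facts hard to reconstruct faithfully from the weights.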