AI Hallucination and Human Coherence

What is an AI hallucination, really? What machine fiction reveals about human confusion

Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.

AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.


We’ve all seen the headlines: ChatGPT makes things up. AI hallucinates. These large language models sometimes fabricate facts, invent sources, and spin up events that never happened. People call these “hallucinations,” as if the machine has gone rogue—drifting off into its own little dream world.

But maybe that’s not quite it. Maybe what we’re calling hallucination is actually a reflection—of us.

See, the problem isn’t always with the machine. More often than not, it starts with what we give it: muddy prompts, tangled expectations, missing context. If AI is a mirror, then hallucination is what happens when the reflection can’t find anything solid to anchor to.

When the signal is scrambled: Coherence as cause

These models aren’t thinking the way we do. They’re just recognizing patterns—guessing the most likely next word based on all the words that came before. That’s it. They don’t know “truth,” but they’re really good at simulating what truth might sound like.

So if you ask about a book that doesn’t exist, it’ll take a swing. It’s not lying. It’s guessing, based on all the patterns it’s seen. Give it mixed instructions or a made-up premise, and it’ll try to stitch something coherent together, the way we fill in the blanks on instinct when we’re half-listening to a conversation.
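
To make “guessing the most likely next word” concrete, here’s a toy sketch in Python. It isn’t how a real model is built (an LLM learns billions of these conditional patterns from text, and the contexts and numbers below are invented for illustration), but it shows the mechanism: given a context, pick a statistically likely continuation, whether or not any fact stands behind it.

```python
import random

# Toy sketch of next-word prediction -- not a real language model, just the idea.
# A real LLM learns these conditional probabilities from vast amounts of text;
# here a handful are hard-coded to show the mechanism.
next_word_probs = {
    "The capital of France is": {"Paris": 0.95, "Lyon": 0.05},
    # A context about a book that may not exist: the table has no notion of
    # "true" or "false," only of which words tend to follow words like these.
    "The novel is a sweeping story of": {
        "love": 0.4, "loss": 0.3, "war": 0.2, "redemption": 0.1
    },
}

def next_word(context: str) -> str:
    """Sample a likely next word for the given context."""
    probs = next_word_probs.get(context, {"[no strong pattern]": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("The capital of France is"))          # grounded pattern
print(next_word("The novel is a sweeping story of"))  # fluent guess, no fact behind it
```

The second call will always produce something fluent, because fluency is all the table encodes. That, in miniature, is the “take a swing” behavior.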

Hallucinations aren’t signs of insanity. They’re what happens when the signal’s scrambled, and the machine does what it was trained to do: improvise a likely continuation.

Examples of human incoherence mirrored by AI errors

Let’s say you ask it to “summarize The Eternal Sea by Margaret Holloway.” No context, no reference, and the book doesn’t exist. The model obliges—with a rich story full of tragic seafaring and postwar reflection. Is that a hallucination? Or just what happens when a machine tries to meet an unclear human halfway?

We do this, too. People wing it in meetings. Students BS their way through essays. We invent meaning to sound informed. We fill gaps with whatever fits. The AI just learned that from us.

Or ask it to write a conversation between Plato and Beyoncé about justice. It’ll do it. Not because it thinks they’ve met—but because your prompt implies you want something imaginative. The output isn’t a mistake—it’s an echo of what you hinted at, maybe without knowing it.

The mirror metaphor again: Garbage in, fiction out

You’ve heard it before: garbage in, garbage out. With language models, it’s more like: foggy in, fiction out. AI will echo back whatever clarity—or confusion—you offer. It doesn’t just parrot your words; it magnifies the structure beneath them.

If your prompt is loaded with assumptions, the AI will try to fulfill them. If you lead the witness, so to speak, it’ll follow—even if it’s wrong. That’s the thing: hallucination isn’t just about broken code. It’s the reflection of a human intention that was never all that clear to begin with.

Case in point: someone once asked a bot for the legal precedent for time travel in U.S. law. It delivered. Made-up cases, confident tone, logical structure. Completely fictional, yet eerily plausible. Why? Because we taught it to mimic confidence and coherence, not to fact-check our fantasies.

Can we prompt our way out of hallucinations?

Often, yes. If hallucinations are symptoms of muddled input, then clearer prompts are a big part of the antidote. Prompt engineering isn’t just clever syntax; it’s clarity of thought. It’s communication with intention.

Try this: instead of “Tell me about the book Shadow River,” ask, “Is Shadow River a real book? If so, who wrote it?”

Or swap “Explain quantum gravity like I’m five” with: “In 150 words or less, explain quantum gravity using a simple analogy that a 5-year-old could understand.”

These aren’t just tweaks—they’re reflections of a clearer mind behind the keyboard. Prompting, done well, is thinking made visible. And when it fails? That’s a diagnostic moment—not of the model, but of our own unexamined assumptions.
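
Here’s what that difference can look like in practice. This is a minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and a stand-in model name; the particular API doesn’t matter, only the contrast between a foggy prompt and a grounded one.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment
MODEL = "gpt-4o-mini"  # stand-in model name; swap in whichever chat model you use

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Foggy prompt: quietly assumes the book exists, inviting an invented summary.
vague = ask("Tell me about the book Shadow River.")

# Grounded prompt: surfaces the uncertainty and gives explicit permission to say "I don't know."
grounded = ask(
    "Is Shadow River a real, published book? If you are not sure, say so. "
    "If it is real, name the author and publication year; do not invent details."
)

print("Foggy prompt:\n", vague, "\n")
print("Grounded prompt:\n", grounded)
```

The second prompt doesn’t make the model smarter. It just removes the hidden assumption that there is something true to summarize.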

Conclusion: The hallucination isn’t the bug, it’s the clue

We’re so quick to blame the machine. “It made it up!” we say. But what if that fiction is showing us something? What if the hallucination is a spotlight—not on AI’s weakness, but on our own fuzzy thinking?

When we ask unclear questions, we get murky answers. When we embed assumptions, we get elegant-sounding falsehoods. But when we aim for clarity, we get something powerful: a tool that helps us hear ourselves think.

So next time the model hallucinates, don’t just roll your eyes. Ask what it’s really reflecting. Because every hallucination is a mirror. And what it’s showing you—might just be you.