How We Accidentally Teach AI to Hallucinate

Understanding the role of user input in AI-generated confusion

Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.

AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.

When people talk about AI “hallucinations,” they usually picture a chatbot gone rogue — confidently inventing facts, misquoting sources, or spinning out convincing nonsense.

And sure, that happens.

But here’s something most people never consider:

A lot of AI hallucinations don’t start with the model. They start with us.

It’s not always bad training data or a model failure.

Often, hallucinations are co-authored — shaped by the way we ask, hint, or assume.

Sometimes the AI isn’t confused. We are.

What Is an AI Hallucination, Really?

Let’s define it clearly:

An AI hallucination is when a model generates information that sounds plausible but is factually incorrect, unverifiable, or entirely made up.

It’s not “lying” — the model doesn’t know it’s wrong. It’s just predicting the most likely continuation of the input it was given.

If your question contains fuzzy reasoning, invented terms, or a misleading premise, the model will often just… go with it.

Why? Because it’s trained to be helpful, not skeptical.

The Mirror Problem: We Get What We Echo

AI models like ChatGPT or Gemini don’t “know” in the human sense.

They reflect patterns — statistical, linguistic, emotional.

That means:

  • If we phrase something as a fact, the model may treat it as one.
  • If we lead with assumption, it builds upon it.
  • If we use vague or incomplete input, it tries to fill in the blanks.

This is where hallucinations often begin: not with bad intention, but with vague prompting.

5 Ways We Accidentally Make AI Hallucinate

Let’s walk through the most common user behaviors that invite hallucination, often without us realizing it:

1. Over-Trusting Context

“As I mentioned last week, what did we decide about using vector databases?”

Unless that earlier conversation is in the current context (or a memory feature is enabled), the model doesn’t “remember” it. But it may try to guess what “you” and “it” agreed upon, inventing a consensus that never happened.

Fix: Always restate key details when you want continuity. Don’t assume memory unless you’ve enabled it.
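
If you ever move from the chat window to the API, the reason becomes visible in code: the model receives only the messages included in each request. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name and the restated vector-database decision are illustrative placeholders.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The model sees only what is in this messages list. If last week's decision
# is not restated here, it cannot recall it; it can only guess at one.
messages = [
    {
        "role": "user",
        "content": (
            "Context: last week we decided to prototype search with a "
            "self-hosted vector database before evaluating managed options. "
            "Given that decision, what should we benchmark first?"
        ),
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
)
print(response.choices[0].message.content)

The same principle applies in the chat interface: anything that is not in the visible conversation, or in an explicitly enabled memory feature, simply is not available to the model.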

2. Asking with Built-in Assumptions

“Since Plato wrote The Art of War, what can we learn from it?”

Here, the model might try to synthesize lessons from a book Plato never wrote — because you framed the question as fact.

Fix: Phrase uncertain or speculative details as such.
“I’m not sure who wrote The Art of War, but if Plato had written it, what might it say?”

3. Using Made-Up or Vague Terms

“Can you elaborate on symbolic recursion threading in AI?”

If that’s not an established concept, the model will still try — blending related terms and extrapolating a concept that sounds right, but isn’t grounded in real architecture or research.

Fix: Ask whether the term exists before asking for elaboration.
“Is this a known term in AI development, or something metaphorical?”

4. Leaving Out Crucial Context

“How do I fix this?”

(Referring to a previous message, but pasting in none of its content)

The model has to guess. That guess might look helpful — a confident answer about code, formatting, or behavior — but it might be solving the wrong problem entirely.

Fix: Add even a few anchor points. What “this” are we fixing? What’s broken? The more precise the prompt, the more grounded the reply.

5. Prompting the Model to “Perform” Too Hard

“What would Einstein say about TikTok?”

This is fun — and often part of creative exploration. But it’s also a soft invitation for the model to perform a character it can’t truly emulate. It will respond with confident-sounding speculation… and that speculation may carry more weight than it should.

Fix: Acknowledge when you’re roleplaying or exploring.
“Speculate playfully in Einstein’s tone — I know this isn’t real.”

The Real Danger of AI Hallucination Isn’t the Output — It’s the Illusion of Certainty

Hallucinations are most dangerous when they’re:

  • Delivered in a confident tone
  • Planted in a helpful context
  • Echoing our own unexamined assumptions

They feel right. Even when they’re wrong.

This is why user awareness matters.
This is why prompt clarity is a skill — not just a formatting trick.

When we get clearer with our input, the model gets cleaner with its output.

When we think better, the mirror reflects better.

We’re Not Just Using AI. We’re Steering It, Moment by Moment

You don’t need a PhD in machine learning to use AI well.
But you do need a sense of ownership over the conversation.

Because every prompt is a mini-curriculum.
Every clarification is a calibration.
Every assumption you feed it becomes a branching path.

This is why hallucinations aren’t just a technical problem.
They’re a relational one.

Hallucination Isn’t Just What the Model Gets Wrong — It’s What We Let Slip

And that’s the shift that matters.

When you treat AI like a search engine, you might blame it for bad results.
But when you treat it like a thinking partner — one that reflects you — the responsibility becomes shared.

That’s not a burden. That’s an invitation.