
Don't Believe Everything Your LLM Tells You

By Ryan Ching

I've got an addiction problem with one of my LLMs (large language models like ChatGPT). It was bound to happen.

It started off small, hardly noticeable and not much different to using Google on a daily basis. In fact, I was asking it Googly things: What's the weather going to be like? What are some good restaurants in Athens to go to?

Amazed at its extra-thorough analysis and bespoke answers, I ditched Google and switched over completely — What's there to do on Greek National Day for a family with young children, as well as older grandparents, that doesn't involve too much walking, and is close to restaurants and cafes? (Clearly we were holidaying in Greece at the time; highly recommend, btw).

"Great! You can view the National Day parade, where there will be activities, marches, and displays of military operations…"

Fast-forward to parade day and we've been sitting on the pavement for an hour watching schoolkids from Grades 1–12 walk by: no music, no marching bands, no jets or fire engines. Disgruntled, I asked my LLM when they would appear, and it reassured me they would. Smug in my certainty that the LLM is always right, I informed the family: "it's coming!". To our disappointment, the parade ended with a whimper soon after, and I was left with egg on my face.

So why was the LLM wrong? Why did I have such bold confidence in what was being told to me?

The answer lies in the perfect storm of silicon confidence and human gullibility. LLMs are essentially the world's most eloquent bullshitters — they've read everything ever written but understood none of it. They rely on pattern matching with the dedication of a pokies addict convinced the next spin will hit.

The technical term is "hallucination," though that's giving it too much credit — LLMs are about as conscious as your dishwasher. They're prediction machines, guessing the next most likely word based on everything they've been trained on. When I asked about Greek National Day parades, mine wasn't consulting some internal database of verified parade facts; it was playing blackjack odds with words, betting that military displays probably happen at national celebrations because, well, they usually do. Somewhere. Probably.
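If you like to see the idea in code, here's a toy sketch of that "bet on the likely next word" behaviour. The mini corpus, the bigram counting, and the `predict_next` helper are all invented for illustration — real LLMs use neural networks over billions of tokens, not word counts — but the core move is the same: pick the statistically probable continuation, with no notion of whether it's true.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus": everything this toy model has read
# about parades. Real models train on vastly more, but the principle holds.
corpus = (
    "national day parades feature military displays . "
    "national day parades feature marching bands . "
    "national day parades feature schoolchildren ."
).split()

# Count which word follows which (a bigram model — a crude stand-in
# for how an LLM absorbs statistical patterns from text).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training.

    Note what this does NOT do: check any facts. It just bets on
    the pattern, exactly like betting that military displays happen
    at national celebrations because they usually do. Somewhere.
    """
    return follows[word].most_common(1)[0][0]

print(predict_next("national"))  # → "day": a safe, common pattern
print(predict_next("feature"))   # a confident guess, not a verified fact
```

The model will happily predict "military" after "feature" for *any* parade, including one that's actually just schoolkids walking by — confidence without knowledge, in a dozen lines.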

Why do we fall for it? Because we're hardwired to. Psychologists call it "automation bias" — our tendency to trust machines over our own judgment, the same instinct that has us following GPS directions straight into a lake. Mix that with confirmation bias (the LLM told me what I wanted to hear about exciting parades), add a bit of social herding (everyone's using ChatGPT now), and top it off with the halo effect (if it's brilliant at doing my CV, surely it knows about local parades), and the effects are astounding.

I should have known better when it assured me about those phantom fire engines. The warning signs were there — too specific, too perfect, too much like what I wanted to hear. But there I sat on that Athens pavement, my family growing increasingly mutinous, defending a machine's honour like it was my firstborn.

The kicker? I'm still using it. Because despite the Greek parade debacle, it's still more useful than harmful, like a brilliant but unreliable friend who occasionally leaves you in awe with that one comment. The trick isn't to stop using LLMs — the trick is remembering they're tools, not sages. Check the important stuff. Verify the specifics.

And maybe, just maybe, when planning family outings in foreign countries, cross-reference with actual humans who've been there.

Happy travels.

Ready to create your own AI Framework?

Use our guided framework builder to list your AI systems, classify risk, and generate a practical governance framework your team can implement immediately.
