

I ran a test with Perplexity. I asked it a complicated question (which it botched, because despite being “search-driven” it searches like a grandma using Google for the first time, and I mean the current slop-based Google). After giving it more information to finally get it onto the right topic, I started asking questions designed to elicit a conclusion, which it gave. And it shows you the little box saying what steps it’s supposedly following while it works.
Then I asked it to describe the processes it used to reach its conclusion.
Guess which of these occurred:
- The box describing the steps it was following matched the description of the process at the end.
- The two items were so badly mismatched it was like two different AIs were describing a process they’d heard about over a broken phone line.
Edited to add:
I had run out of the “advanced searches” allowed on the free tier, so I did this manually.
Here is a conversation illustrating what I’m talking about.
Note that I asked it twice directly, and once indirectly, to explain its thinking processes. Also note that:
- Each time it gave a different explanation (radically different!).
- Each time it came to similar, but not identical, conclusions.
- When I called it out at the end, it once again described the “process” it used … but, as you can likely guess from how much the previous descriptions differed, it’s making even that part up!
“Reasoning” AI is absolutely lying and absolutely hallucinating even its own processes. It can’t be trusted any more than autocorrect. It cannot understand anything, which means it cannot reason.
There’s an easy sentence to learn: “AI doesn’t help.”
It’s only three words long (or four if you want to be pedantic). And it is true so often that the few times it isn’t are a trivial statistical quirk.