Thursday, June 26, 2025
The illusion of thinking
This month, Apple published a scientific paper about Artificial Intelligence titled “The Illusion of Thinking”. While some people pointed out that Apple wasn’t doing so well with AI, and concluded that the paper was “sour grapes”, the paper resonated with me, as it describes what I believe: that AI isn’t actually thinking, but rather just matching patterns. That makes AI good at solving problems that have frequently been solved before, but bad at original thinking. For example, the researchers showed that AI could solve the Tower of Hanoi puzzle only up to a certain number of discs, while a thinking human who has found the solution for a small number of discs can extrapolate it to any number of discs, as the same simple procedure works at every size. This coincides with other research showing that the new “reasoning” LLM models are even more prone to hallucinations, that is, making stuff up when they don’t know the real answer.
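To illustrate why a human (or a short program) can extrapolate the Tower of Hanoi solution to any number of discs, here is a minimal Python sketch of the classic recursive solution. It is not taken from the paper, just an illustration of how one short rule covers every problem size.

def hanoi(n, source="A", target="C", spare="B"):
    """Print the moves that transfer n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the top n-1 discs onto the spare peg
    print(f"move disc {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # stack those n-1 discs back on top

hanoi(3)  # prints 2**3 - 1 = 7 moves; the same function handles any disc count

The puzzle requires 2^n - 1 moves for n discs, so the move sequence grows exponentially while the rule that generates it stays this short.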
While Apple might have some self-interest in showing that AI isn’t that easy, the companies that push AI have a much bigger interest in keeping the hype up, to levels that might be described as a scam. Companies like Nvidia have a share price that depends directly on the belief that you just need to scale up LLM models far enough to reach artificial general intelligence. I consider it likely that this premise isn’t true, which might at some point burst the AI bubble and destroy billions in investment.
At the same time, AI is growing closer to becoming smarter than humans by the clever trick of making humans less smart. The first studies of the effect of the introduction of ChatGPT on education are devastating: students who use AI to write their homework become worse at performing similar tasks themselves, to the point where a diminished number of connections was evident in their brain scans. As a large percentage of students use AI, we will need to fundamentally rethink our education systems, and fast. “Homework”, where teachers rely on their students to, let’s say, write an essay on their own, might become a thing of the past, replaced by more educational tools like supervised exams, where the students can’t use AI.
Comments:
I just find it astonishing that anyone believes the current AIs are doing anything other than pattern matching. That's literally what the developers of the software always said they were doing. Anyone who has ever read even an article in the popular press about LLMs knows that's how they work. Of course, the way they actually do it is still a mystery even to those developers, who long since lost control of the process, which is now self-iterating and oblique, which is why there are research projects going on just trying to figure out what the software is doing and how it's doing it.
No-one who's paid the least attention to the developing story believes the AIs are actually "thinking", though. It's weird that anyone still believes they are.
I disagree with you (and Bhagpuss) that LLMs are obviously not thinking. One reason is that "thinking" is very ill-defined.
An LLM has been able to learn the structure of Checkers purely from a list of moves, deducing an internal representation of the board and the rules. For a neural network trained only to complete sentences, that is incredible. It matches at least some definitions of "thinking". Clearly LLMs are able to 'think' far more deeply than any of their creators would have guessed.
Another example is that the 'ability to reason' has been greatly enhanced by training LLMs on code. You train an LLM to complete software code, and it becomes better at reasoning in other situations.
Now, I agree that 'general intelligence' is out of reach of LLMs. As pointed out, their main limitation is their inability to understand the concept of truth or the real world. And they have difficulty conducting multi-step reasoning. But two- or three-step reasoning: yes, they can do that.
LLMs do not think. That isn't my opinion; it's a fact. They try to produce the most likely response based on their training data and user inputs. There is no thinking or reasoning involved in that process.
If I train an LLM on data that exclusively says the sky is brown, the LLM will tell me the sky is brown when I ask what color the sky is.
I personally detest how the term AI has become the buzzword for LLMs and other generative creation processes. It causes confusion and is largely why people equate these things with having, or striving for, some sort of actual intelligence.
LLMs are as much "AI" as enemies in a video game who are scripted to do certain things in certain situations are "AI", which is to say not at all.
I think there's a huge divide between applying the concept of "thought" to the processes these AI models are going through, as we imagine that concept, and the practical question of what they can actually do. Whether they are thinking in any manner we consider germane to actual sentience is irrelevant to the fact that a lot of the doomsday scenarios could readily happen without the AI ever meeting the "thinking, sentient being" criteria.
Thinking could be thought of as an iterative process of pattern matching. If you prompt your LLM to do chain of thought, that's exactly what it is doing. You can improve the thinking process with agent-like feedback loops that control that process in more detail, i.e. you define the method of how to think in the prompts the agent uses. A rough sketch of such a loop is shown below.
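As an illustration of the idea in the comment above, here is a hedged Python sketch of chain-of-thought prompting wrapped in an agent-like feedback loop. The function call_llm is a hypothetical placeholder for whatever model API you actually use, and the loop structure is only an assumption about how such an agent could be wired, not any particular framework.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: send the prompt to a language model and return its reply.
    raise NotImplementedError("plug in your model API here")

def solve_with_feedback(task: str, max_rounds: int = 3) -> str:
    # Ask for explicit step-by-step reasoning (chain of thought).
    answer = call_llm(f"Think step by step and solve:\n{task}")
    for _ in range(max_rounds):
        # Feed the model's own answer back for critique, agent-style.
        critique = call_llm(f"Task: {task}\nProposed answer:\n{answer}\n"
                            "List any mistakes, or reply OK if it is correct.")
        if critique.strip() == "OK":
            break
        # Revise the answer using the critique, then loop again.
        answer = call_llm(f"Task: {task}\nPrevious answer:\n{answer}\n"
                          f"Critique:\n{critique}\nWrite an improved answer.")
    return answer

The point of the sketch is that the "thinking" is whatever the prompts define it to be, exactly as the comment describes.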
As Ettesiun says, we've actually detected mental models of systems (such as game boards) inside neural networks trained only on moves. That is the beginning of intelligence. Rats and mice can solve only simple mazes, but who can doubt that some of the mechanisms of their intelligence also underpin ours?
I agree that LLMs alone will not produce an Artificial General Intelligence, unless some are modified into components very different from our idea of an LLM. And I suppose it's true that the amazing recent successes of neural models in some fields may have given rise to a bit of overenthusiasm in the field (which has periodically gone through 'winters' too). All the same, I think that AGI is not too far around the corner now.