Apple just killed the AGI myth

The hidden costs of humanity's most expensive delusion

Arnaud Bertrand
Jun 09, 2025

About two months ago I was having an argument on Twitter with someone who told me they were “really disappointed with my take” and that I was “completely wrong” for saying that AI was “just an extremely gifted parrot that repeats what it's been trained on” and that this wasn’t remotely intelligence.

Fast forward to today and the argument is now authoritatively settled: I was right, yeah! 🎉

How so? It was settled by none other than Apple, specifically their Machine Learning Research department, in a seminal research paper entitled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” that you can find here.

What does the paper say? Exactly what I was arguing: AI models, even the most cutting-edge Large Reasoning Models (LRMs), are no more than very gifted parrots with basically no actual reasoning capability. They’re not “intelligent” in the slightest, at least not if you understand intelligence as involving genuine problem-solving rather than simply parroting what you’ve been told before without comprehending it.

That’s exactly what the Apple paper was trying to understand: can “reasoning” models actually reason? Can they solve problems that they haven’t been trained on but that would normally be easily solvable with their “knowledge”? The answer, it turns out, is an unequivocal “no”.

A particularly damning example from the paper was this river crossing puzzle: imagine 3 people and their 3 agents need to cross a river using a small boat that can only carry 2 people at a time. The catch? A person can never be left alone with someone else's agent, and the boat can't cross empty - someone always has to row it back.

This is the kind of logic puzzle you might find in a children's brain-teaser book: figure out the right sequence of trips to get everyone across the river. The solution requires only 11 steps.
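
To see just how mechanical the puzzle is, here's a minimal breadth-first-search sketch (my own illustration, not code from the Apple paper; the labels and the exact rule encoding are assumptions based on the description above) that blindly enumerates legal crossings and finds the shortest sequence:

```python
from collections import deque
from itertools import combinations

# Hypothetical labels: people a1..a3 and their agents A1..A3.
PEOPLE = frozenset("a1 a2 a3 A1 A2 A3".split())

def bank_ok(bank):
    """The article's rule: a person may not be on a bank with someone
    else's agent unless their own agent is also there."""
    agents = {p[1] for p in bank if p.startswith("A")}
    return all(p[1] in agents for p in bank if p.startswith("a") and agents)

def solve():
    # State: (people on the left bank, side the boat is on).
    start = (PEOPLE, "left")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, boat), path = queue.popleft()
        if not left:                        # everyone has crossed
            return path
        side = left if boat == "left" else PEOPLE - left
        for k in (1, 2):                    # the boat never crosses empty
            for group in combinations(sorted(side), k):
                new_left = left - set(group) if boat == "left" else left | set(group)
                if not (bank_ok(new_left) and bank_ok(PEOPLE - new_left)):
                    continue
                state = (new_left, "right" if boat == "left" else "left")
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [group]))

print(len(solve()), "crossings")  # shortest legal solution: 11 crossings
```

A few dozen lines of blind search settle the puzzle in milliseconds, which is what makes a “reasoning” model's failure on it so striking.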

It turns out this simple brain teaser was impossible for Claude 3.7 Sonnet, one of the most advanced “reasoning” AIs, to solve. It couldn't even get past the 4th move before it started making illegal moves that broke the rules.

Yet the exact same AI could flawlessly solve the Tower of Hanoi puzzle with 5 disks - a much more complex challenge requiring 31 perfect moves in sequence.
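
Tower of Hanoi, for reference, is a textbook recursion: moving n disks takes 2^n - 1 moves, so 5 disks means 31. A minimal sketch (mine, not the paper's):

```python
def hanoi(n, source, target, spare, moves):
    """Append the move sequence that transfers n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller disks on top

moves = []
hanoi(5, "A", "C", "B", moves)
print(len(moves))  # 2**5 - 1 = 31
```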

Why the massive difference? The Apple researchers figured it out: Tower of Hanoi is a classic computer science puzzle that appears all over the internet, so the AI had memorized thousands of examples during training. But a river crossing puzzle with 3 people? Apparently too rare online for the AI to have memorized the patterns.

This is devastating evidence that these models aren't reasoning at all. A genuinely reasoning system would recognize that both puzzles involve the same kind of logical thinking (following rules and constraints), just dressed up in different scenarios. But since the AI had never learned the river crossing pattern by heart, it was completely lost.

This wasn’t a question of compute either: the researchers gave the models generous token budgets to work with. The really bizarre part is that on the puzzles or questions they couldn’t solve - like the river crossing puzzle - the models actually started thinking less, not more: they used fewer tokens and gave up faster. A human facing a tougher puzzle would typically spend more time thinking it through, but these “reasoning” models did the opposite. They essentially “understood” they had nothing to parrot, so they just gave up - the opposite of what you'd expect from genuine reasoning.

Conclusion: they’re indeed just gifted parrots, or incredibly sophisticated copy-paste machines, if you will.

This has profound implications for the AI future we’re all sold. Some good, some more worrying.

The first one: no, AGI isn’t around the corner. That's all hype. In truth we’re still light-years away.

The good news is that we don’t need to worry about “AI overlords” anytime soon. The bad news is that we may have trillions in misallocated capital.
