I don't think that the scaling hypothesis gets recognized enough for how radical it is. For decades, AI sought some kind of master algorithm of intelligence. The scaling hypothesis says that there is none: intelligence is the ability to use more compute on more data.
Joscha Bach • x.com
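A compact way to read "more compute on more data" is through empirical scaling laws. A hedged sketch, using the Chinchilla-style loss fit (the functional form follows Hoffmann et al., 2022; E, A, B, α, β are fitted constants and are assumptions here, not anything the quote specifies):

$$ L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where N is parameter count and D is training tokens. Under a fit like this, loss falls predictably as N and D grow together with compute, which is the operational content of the scaling hypothesis: no special algorithm, just more of both.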
LeCun points to four essential characteristics of human intelligence that current AI systems, including LLMs, can’t replicate: reasoning, planning, persistent memory, and understanding the physical world. He stresses that LLMs’ reliance on textual data severely limits their understanding of reality: “We’re easily fooled into thinking they are intel...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling

I wrote a new essay called "The Problem with Reasoners," where I discuss why I doubt o1-like models will scale beyond narrow domains like math and coding (link below).
Yes. A few miscellaneous thoughts.
(1) First, the new bottleneck on AI is prompting and verifying, since AI does tasks middle-to-middle, not end-to-end. So business spend migrates toward the edges of prompting and verifying, even as AI speeds up the middle.
(2) Second, AI really means amplified intelligence, not agentic intelligence. The smarter...
Balaji • x.com