I don't think that the scaling hypothesis gets recognized enough for how radical it is. For decades, AI sought some kind of master algorithm of intelligence. The scaling hypothesis says that there is none: intelligence is the ability to use more compute on more data.
Joscha Bach • x.com
LeCun points to four essential characteristics of human intelligence that current AI systems, including LLMs, can’t replicate: reasoning, planning, persistent memory, and understanding the physical world. He stresses that LLMs’ reliance on textual data severely limits their understanding of reality: “We’re easily fooled into thinking they are...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling

I don’t find Amodei’s "country of geniuses in a datacenter" view particularly insightful. It focuses too much on narrow applications—ways that AI can be applied to specific problems in biology or neuroscience.
The Industrial Revolution wasn’t driven by progress in one industry alone. Simultaneous and complementary...

I still haven't heard a good answer to this question, on or off the podcast.
AI researchers often tell me, "Don't worry about it, scale solves this."
But what is the rebuttal to someone who argues that this indicates a fundamental limitation? https://t.co/nl24bOCwM3