🧠 AI’s $100bn question: The scaling ceiling
LeCun points to four essential characteristics of human intelligence that current AI systems, including LLMs, can’t replicate: reasoning, planning, persistent memory, and understanding the physical world. He stresses that LLMs’ reliance on textual data severely limits their understanding of reality: “We’re easily fooled into thinking they are...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling

I don’t find Amodei’s “country of geniuses in a datacenter” view particularly insightful. It focuses too much on narrow applications—ways that AI can be applied to specific problems in biology or neuroscience.
The Industrial Revolution wasn’t driven by progress in one industry alone. Simultaneous and complementary...
I’ve got bad news.
The AI cycle is over—for now.
I’ve been an unapologetic AI maximalist since the first time I tricked GPT-4 into writing a working Python back-test for a volatility strategy back in early 2023. I’m still convinced it will take the wider economy years—maybe decades—to fully digest the...
Adam Butler • x.com

Sam Altman asks: why are scaling laws a property of the universe?
Daniel Selsam says intelligence emerges from compression—and the universe’s knowledge is a fractal you can keep mining.
In other words, scaling laws keep working because important concepts are sparse but inexhaustible—and...
vitrupo • x.com
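
A minimal toy sketch of that compression argument (my illustration, not from the clips above; the Zipf exponent and concept count are arbitrary assumptions): if “concepts” occur with Zipfian frequencies, the important ones are few but the tail never runs out, and the probability mass a learner has not yet seen after D samples decays as a rough power law in D. That is one intuition for why scaling curves keep bending down rather than flatlining.

```python
import numpy as np

# Toy model (illustrative assumptions, not from the source): "concepts"
# occur with Zipfian probabilities p_i ~ i^(-s). A learner that has drawn
# D i.i.d. samples "knows" every concept it has seen at least once; its
# residual "loss" is the expected probability mass of still-unseen concepts:
#     E[unseen mass] = sum_i p_i * (1 - p_i)^D
# With a heavy tail (sparse but inexhaustible), this decays as a power law.

s = 1.5                 # Zipf exponent (assumed)
n_concepts = 1_000_000  # size of the concept universe (assumed)

ranks = np.arange(1, n_concepts + 1)
p = ranks ** (-s)
p /= p.sum()

for D in [10**3, 10**4, 10**5, 10**6]:
    unseen_mass = np.sum(p * (1.0 - p) ** D)
    print(f"D = {D:>9,}  residual mass = {unseen_mass:.4f}")
```

Each 10× increase in D shrinks the residual mass by a roughly constant factor (a straight line on a log-log plot), which is the “fractal you can keep mining” picture: the curve keeps falling, it just never reaches zero.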
Ilya Sutskever, perhaps the most influential proponent of the AI "scaling hypothesis," just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI's in-development Orion model had disappointing results. 🧵 https://t.co/DiRb9sOHt8