Yann LeCun: AI one-percenters seizing power forever is real doomsday scenario | Hacker News
That scenario could create awful social disruption.
The Great Decoupling
Hendrik added
A quote from 2014, even more relevant today
a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories – “Baptists” and “Bootleggers” – drawing on the historical example of the prohibition of alcohol in the United States in the 1920s:
Baptists
“Baptists” are the true believer social reformers who legitimate...
Marc Andreessen • Why AI Will Save the World
David Cahn • AI’s $600B Question
Abie Cohen added
But if the next breakthrough on the scale of deep learning occurs soon, and it happens within a hermetically sealed corporate environment, all bets are off. It could give one company an insurmountable advantage over the other Seven Giants and return us to an age of discovery in which elite expertise tips the balance of power in favor of the United...
Kai-Fu Lee • AI Superpowers: China, Silicon Valley, and the New World Order
How did we get to the doorstep of the next leap in prosperity?
In three words: deep learning worked.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
samaltman.com • The Intelligence Age
Such as?
I guess things like recursive self-improvement. You wouldn’t want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity—you know, just like fo...