Sublime
An inspiration engine for ideas


Will scaling reasoning models like o1, o3 and R1 unlock superhuman reasoning?
I asked Gwern + former OpenAI/DeepMind researchers.
Warning: long post.
As we scale up training and inference compute for reasoning models, will they show:
A) Strong...


Virtually nobody is pricing in what's coming in AI.
I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project.
SITUATIONAL AWARENESS: The Decade Ahead https://t.co/8NWDkTprj5

YUDKOWSKY + WOLFRAM ON AI RISK.
youtube.com
'Superintelligent AI will, by default, cause human extinction.'
Eliezer Yudkowsky spent 20+ years researching AI alignment and reached this conclusion.
He bases his entire conclusion on two theories: Orthogonality and Instrumental convergence. Let...

Google's Chief AGI Scientist Shane Legg: 50% chance of AGI within 3 years, then 5-50% chance of extinction ONE YEAR LATER
Russell: "[AI CEOs] are playing Russian Roulette with the entire human race, without our permission."
"Why are we letting them to do... See more
AI Notkilleveryoneism Memes ⏸️ · x.com

Here’s the truth: my secret plan is for Don’t Die to become the world’s most influential ideology by 2027. Our existence depends upon it.
For 2,500 years, human history has been shaped by ideologies: Capitalism, Democracy, Socialism, Christianity, Islam, Buddhism, etc. But none of these systems are built for this...
Bryan Johnson · x.com