
'Superintelligent AI will, by default, cause human extinction.'
Eliezer Yudkowsky spent 20+ years researching AI alignment and reached this conclusion.
He bases this entire conclusion on two theses: orthogonality and instrumental convergence.
YUDKOWSKY + WOLFRAM ON AI RISK.
youtube.com • Eliezer Yudkowsky
So rationality is about forming true beliefs and making winning decisions.
Eliezer Yudkowsky • Rationality
Epistemic rationality: systematically improving the accuracy of your beliefs.
Eliezer Yudkowsky • Rationality
To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates. If you want to argue against transhumanism or the intelligence explosion, you have to directly challenge the arguments of Nick...
Eliezer Yudkowsky • Rationality

what yann lecun said: "the more tokens an llm generates, the more likely it is to go off the rails and get everything wrong"
what actually happened: "we get extremely high accuracy on arc-agi by generating billions of tokens, the more tokens we throw at it the better it gets" https://t.co/2bUkl4udmK
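For context on the disagreement: LeCun's "off the rails" claim is usually read as an error-compounding argument. A rough sketch, assuming (as an idealization, not LeCun's exact words) an independent per-token error probability ε:

P(\text{an } n\text{-token chain is fully correct}) = (1 - \varepsilon)^n \approx e^{-n\varepsilon}

which decays exponentially in n; e.g. ε = 0.01 and n = 1000 gives (0.99)^{1000} ≈ 0.004%. The ARC-AGI result quoted above is the empirical pushback: accuracy kept improving as more tokens were generated, which suggests per-token errors don't simply compound independently in practice.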
