'Superintelligent AI will, by default, cause human extinction.'
Eliezer Yudkowsky spent 20+ years researching AI alignment and reached this conclusion.
He bases his entire conclusion on two theories: Orthogonality and Instrumental convergence.
Let...
YUDKOWSKY + WOLFRAM ON AI RISK.
youtube.com

Eliezer Yudkowsky
rationalwiki.org

So rationality is about forming true beliefs and making winning decisions.
Eliezer Yudkowsky • Rationality
Epistemic rationality: systematically improving the accuracy of your beliefs.
Eliezer Yudkowsky • Rationality
To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates. If you want to argue against transhumanism or the intelligence explosion, you have to directly challenge the arguments of Nick...
Eliezer Yudkowsky • Rationality

A gem from the LessWrong community: 'Humans are not automatically strategic.'
"A large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped." https://t.co/5eR7t5YXLN
Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.