Sublime
An inspiration engine for ideas

'Superintelligent AI will, by default, cause human extinction.'
Eliezer Yudkowsky spent 20+ years researching AI alignment and reached this conclusion.
He bases his entire conclusion on two theories: Orthogonality and Instrumental Convergence.
Let...
Epistemic rationality: systematically improving the accuracy of your beliefs.
Eliezer Yudkowsky • Rationality
Finally had time to read & process this great post. I run into the pattern quite often, it goes:
" is good actually, because "
Galaxy brain reasoning is the best way to justify anything while looking / feeling good about it.
From this perspective, for example, ...
Andrej Karpathy • x.com

YUDKOWSKY + WOLFRAM ON AI RISK.
youtube.com

To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates. If you want to argue against transhumanism or the intelligence explosion, you have to directly challenge the arguments of Nick...
Eliezer Yudkowsky • Rationality

"it may be that today's large neural networks are slightly conscious"
- Ilya Sutskever https://t.co/V7Mh7zbvfa
IMO — Fei-Fei is spot on... and this is why #SyntheticData for LLMs is a weak strategy for AGI
Consider the following example:
- Step 1: take a large LLM training dataset (e.g. "the whole internet")
- Step 2: remove every sentence with the words "Kim" or "Kardashian" in it...
⿻ Andrew Trask • x.com
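
A minimal Python sketch of the Step 1 / Step 2 filtering described in that card; the toy corpus and the regex are illustrative assumptions, not taken from the thread:

import re

# Step 1 (stand-in): a toy corpus in place of "the whole internet".
corpus = [
    "Kim Kardashian attended the gala last night.",
    "The transformer architecture scales with data and compute.",
    "Kardashian family news dominated the feed.",
    "Synthetic data is generated by models trained on real data.",
]

# Step 2: drop every sentence mentioning "Kim" or "Kardashian".
banned = re.compile(r"\b(kim|kardashian)\b", re.IGNORECASE)
filtered_corpus = [s for s in corpus if not banned.search(s)]

print(filtered_corpus)
# A model trained only on filtered_corpus (or on synthetic text generated by
# such a model) has no way to recover the removed facts, which is the point
# of the thought experiment.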