Sublime
An inspiration engine for ideas
Eliezer Yudkowsky
en.wikipedia.org
Here is "AI Consciousness: A Centrist Manifesto". I've been working on this feverishly because the issue seems to me so urgent - and I'm worried extreme positions on both sides are becoming locked in, when the best way forward is in the centre. Please read it! (1/2) https://t.co/mWAn9Rq0Da
I suggest that an analogous bias in psychologically realistic search is motivated stopping and motivated continuation: when we have a hidden motive for choosing the “best” current option, we have a hidden motive to stop, and choose, and reject consideration of any more options. When we have a hidden motive to reject the current best option, we have a hidden motive to suspend judgment pending additional evidence, to generate more options, to find something, anything, to do instead of coming to a conclusion.
Eliezer Yudkowsky • Rationality
The raw intelligence of models is good enough for 90% of use cases.
What we need now is scaffolding:
* Model routing
* Agentic frameworks
* AI coding frameworks
* Memory management
* Guardrails
* Computer/browser use
* …
Matthew Berman
x.com
I have increased confidence that the core AGI algorithm will be less than 10k loc (intelligence is a process, not a model).
Mike Knoop
x.com
Thesis: one should optimize for beliefs to anticipate the future maximally accurately.
Antithesis: one should optimize for holding beliefs which lead to good outcomes, accurate or not.
Fristonian synthesis: chadyes.png https://t.co/3ccNYrN4be
If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.
Lauren McCann • Super Thinking
François Chollet says some people in San Francisco have a Messiah complex about building AGI first and becoming gods, which is closely tied to the quest for eternal life https://t.co/UGe36uxoDS
Tsarathustra
x.com
AI Alignment is the trust problem between humans and AI
It is the most important techno-social challenge of our time. By building trust infrastructure as a public good, we aim to scale trust and safety at the pace of AI progress
This is just the beginning https://t.co/ZKzWRffUtL
kirill avery
x.com