AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself.
The nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.
Marc Andreessen • Why AI Will Save the World
There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.
LessWrong • Superintelligence FAQ
Will Douglas Heaven • DeepMind’s Cofounder: Generative AI Is Just a Phase. What’s Next Is Interactive AI.
The AI-risk community has also learned that novel corporate-governance structures cannot constrain executives who are hell-bent on acceleration. That was the big lesson of OpenAI's boardroom fiasco. “The governance model at OpenAI was supposed to prevent financial pressures from overrunning things,” Ord said. “It didn't work.”
Ross Andersen • AI Doomers Had Their Big Moment
Zach Tratar • Tweet
OpenAI CEO Sam Altman • AI for the Next Era
Generally, though, consider containment more as a set of guardrails, a way to keep humanity in the driver’s seat when a technology risks causing more harm than good. Picture those guardrails operating at different levels and with different modes of implementation. In the next chapter we’ll consider what they might look like at a more granular level.