AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself.
Does that mean I see nothing but steady material progress and glorious human flourishing in our AI future? Not at all. Instead, I believe that civilization will soon face a different kind of AI-induced crisis. This crisis will lack the apocalyptic drama of a Hollywood blockbuster, but it will disrupt our economic and political systems all the same ...
Kai-Fu Lee • AI Superpowers: China, Silicon Valley, and the New World Order
Understanding and integrating these diverse perspectives is essential for the development of AI systems that are not only technologically advanced but also socially responsible.
Samuel Thorpe • The Essential Beginner’s Guide to AI
Something is deeply wrong with a society where people are no longer excited about miraculous technology like AI.
There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.
Kelsey Piper • vox.com
ASI existential risk: reconsidering alignment as a goal
michaelnotebook.com