AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself.
Saffron Huang • Value Beyond Instrumentalization — Letters to a Young Technologist
The potential threat of superintelligence is not down to the intelligence of the machines. It’s down to our own stupidity, our intelligence being blinded by our arrogance, greed and political agendas.
Mo Gawdat • Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
the nature of the AI societal risk claim is baked into its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.
Marc Andreessen • Why AI Will Save the World
Today, we don’t have the same level of risk tolerance. People want an extremely high level of safety, but they don’t realize we can be too conservative. Being too conservative on safety actually leads to systemic risk. Systemic risk happens when you stop taking risks and get stuck with a system that no longer improves.
Eric Jorgenson • The Anthology of Balaji: A Guide to Technology, Truth, and Building the Future
an off-policy (“possibilist”) agent could get itself into trouble, precisely by always trying to do the “best thing possible.”
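The “possibilist” trouble is concrete in reinforcement learning: an off-policy learner such as Q-learning backs up the best possible next action, while an on-policy learner such as SARSA backs up the action it will actually take under its exploratory policy. Below is a minimal sketch on the classic cliff-walking gridworld (after Sutton and Barto); every name and parameter is illustrative, not taken from the quoted source.

import random

ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
CLIFF = {(3, c) for c in range(1, 11)}          # bottom row between start and goal
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def step(state, a):
    r = max(0, min(ROWS - 1, state[0] + ACTIONS[a][0]))
    c = max(0, min(COLS - 1, state[1] + ACTIONS[a][1]))
    if (r, c) in CLIFF:                          # fall: big penalty, back to start
        return START, -100, False
    return (r, c), -1, (r, c) == GOAL

def eps_greedy(Q, s, eps):
    if random.random() < eps:
        return random.randrange(4)
    qs = [Q[(s, a)] for a in range(4)]
    return qs.index(max(qs))

def train(off_policy, episodes=500, alpha=0.5, eps=0.1):
    # Undiscounted (gamma = 1), as is standard for this episodic task.
    Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}
    for _ in range(episodes):
        s, a = START, eps_greedy(Q, START, eps)
        done = False
        while not done:
            s2, reward, done = step(s, a)
            a2 = eps_greedy(Q, s2, eps)
            if off_policy:   # Q-learning ("possibilist"): assume the best possible next action
                target = reward + max(Q[(s2, b)] for b in range(4))
            else:            # SARSA ("actualist"): use the action actually taken next
                target = reward + Q[(s2, a2)]
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q

def greedy_path(Q):
    s, path = START, [START]
    for _ in range(50):                          # cap to avoid infinite loops
        a = max(range(4), key=lambda b: Q[(s, b)])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path

random.seed(0)
print("Q-learning:", greedy_path(train(off_policy=True)))    # hugs the cliff edge
print("SARSA:     ", greedy_path(train(off_policy=False)))   # safer upper detour

Under ε-greedy exploration the off-policy agent keeps walking the edge it will occasionally stumble off, which is exactly the trouble the excerpt describes: planning around the “best thing possible” rather than around what it will actually do.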