
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

But one thing that is predictable is that AI companies won’t get what they trained for. They’ll get AIs that want weird and surprising stuff instead.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Betting that humanity can solve this problem with their current level of understanding seems like betting that alchemists from the year 1100 could build a working nuclear reactor. One that worked in the depths of space. On the first try.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
It’s much easier to grow artificial intelligence that steers somewhere than it is to grow AIs that steer exactly where you want.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
When it comes to AI, the challenge humanity is facing is not surmountable with anything like humanity’s current level of knowledge and skill. It isn’t close. Attempting to solve a problem like that, with the lives of everyone on Earth at stake, would be an insane and stupid gamble that NOBODY SHOULD BE ALLOWED TO TRY.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
In our view, intelligence is about two fundamental types of work: the work of predicting the world, and the work of steering it.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Problems like this are why we say that if anyone builds it, everyone dies. If all the complications were visible early, and had easy solutions, then we’d be saying that if any fool builds it, everyone dies, and that would be a different situation. But when some of the problems stay out of sight? When some complications inevitably go unforeseen?
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
When it comes to AI alignment, companies are still in the alchemy phase. They’re still at the level of high-minded philosophical ideals, not at the level of engineering designs. At the level of wishful grand dreams, not carefully crafted grand realities. They also do not seem to realize why that is a problem.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
The way humanity finally got to the level of ChatGPT was not by finally comprehending intelligence well enough to craft an intelligent mind. Instead, computers became powerful enough that AIs can be churned out by gradient descent, without any human needing to understand the cognitions that grow inside. Which is to say: Engineers failed at crafting…
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All