
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
When it comes to AI alignment, companies are still in the alchemy phase. They’re still at the level of high-minded philosophical ideals, not at the level of engineering designs. At the level of wishful grand dreams, not carefully crafted grand realities. They also do not seem to realize why that is a problem.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
The inner workings of batteries and rocket engines are well understood, governed by known physics recorded in careful textbooks. AIs, on the other hand, are grown, and no one understands their inner workings. There are fewer equations to constrain one’s thinking… and so, many opportunities to think about high-minded ideals like truth-seeking
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
When it comes to AI, the challenge humanity is facing is not surmountable with anything like humanity’s current level of knowledge and skill. It isn’t close. Attempting to solve a problem like that, with the lives of everyone on Earth at stake, would be an insane and stupid gamble that NOBODY SHOULD BE ALLOWED TO TRY.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Betting that humanity can solve this problem with their current level of understanding seems like betting that alchemists from the year 1100 could build a working nuclear reactor. One that worked in the depths of space. On the first try.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
The preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own, no matter how it was trained.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Problems like this are why we say that if anyone builds it, everyone dies. If all the complications were visible early, and had easy solutions, then we’d be saying that if any fool builds it, everyone dies, and that would be a different situation. But when some of the problems stay out of sight? When some complications inevitably go unforeseen?
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
But one thing that is predictable is that AI companies won’t get what they trained for. They’ll get AIs that want weird and surprising stuff instead.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
It’s much easier to grow artificial intelligence that steers somewhere than it is to grow AIs that steer exactly where you want.
Eliezer Yudkowsky • If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All