How to navigate the AI apocalypse as a sane person
A superintelligent system could have disastrous effects even if it had a neutral goal and lacked self-awareness. “We cannot blithely assume,” Bostrom wrote, “that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others…”
Meghan O'Gieblyn • God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning
At one extreme, you might fear that AI will take your job. Certainly, you’ve read as much in the press, and it does seem to be getting better and better…
Artificial Intelligence: A Guide for Thinking Humans
amazon.com
Esther Dyson • Don’t Fuss About Training AIs. Train Our Kids
newyorker.com • Why Computers Won’t Make Themselves Smarter
The fear is that if human beings presented an obstacle to achieving one of those goals—reverse global warming, for example—a superintelligent agent could easily, even accidentally, wipe us off the face of the earth. For a computer program whose intellectual imagination so dwarfed our own, this wouldn’t require anything as crude as gun-toting robots…