
The Alignment Problem

The Present Future: AI's Impact Long Before Superintelligence
Ethan Mollick from One Useful Thing

The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.
Marc Andreessen • Why AI Will Save the World
We have an AI that acts very much like a person, but in ways that aren’t quite human. Something that can seem sentient but isn’t (as far as we can tell). We have invented a kind of alien mind. But how do we ensure the alien is friendly? That is the alignment problem.