AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself. There's a sense that
AI is creating problems faster than it can solve them.
Well, and it raises a question I've thought about a lot as someone who follows the politics of regulation pretty closely. My sense is always that human beings are just really bad at regulating against problems that we haven't experienced in some big, profound way.