focus less on AI as something separate from humans, and more on tools that enhance human cognition rather than replacing it… If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option.
By contrast, the definition I proposed—AGI is achieved when it makes economic sense to keep your agent running continuously—is a binary, irreversible, and immovable threshold: once we are running our agents 24/7, we've hit it, and there's no going back. (After all, we can't uninvent it.)
I like this definition because in order to meet it we will...