AI that can begin to act as an AI scientist and self-improve could help us solve the hard alignment problems that we don’t know how to solve. The alignment problem is: how do we build AGI that acts in the best interest of humanity? How do we make sure humanity gets to determine the future of humanity? It is also interesting to note how this...
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed...
The two most important problems (at least as I am currently thinking about them) are:
Finding a rigorous scientific framework for how different agent skills, personalities, and instructions combine to be most effective for different problems (think of this as social management science for AI agents).
Figuring out how you formally validate and verify...
Integrating AI into our workflows has created a "meta-optimization problem." When everyone suddenly gets 10x more powerful, the hard part isn’t doing things; it’s deciding what’s worth doing in the first place. My friend at a high-profile AI startup told me recently that their biggest challenge isn’t training better models, but figuring out which problem...