Aaron Levie (twitter.com):
The problem we have in AI right now is not that it’s getting too powerful, it’s that it’s not nearly powerful enough. Very little has changed thus far because of AI, and won’t until models get faster, cheaper, more accurate, and more “intelligent”. Building safe AI is insanely important, but any goal that is half in and half out of driving progress overall seems to make little sense.
How many more model releases do we need for folks to realize we are not getting to magical superintelligence with what we got?
How many times do you have to see a model benchmaxxing to realize Humanity's Last Exam is a freaking idiotic name and that answering questions on it doesn't tell us shit about the true…
Daniel Jeffries (x.com):
My summary thoughts on why I think AI take-off will be relatively slow: https://t.co/OfxDW1wmwC
tylercowen (x.com)