The problem we have in AI right now is not that it’s getting too powerful, it’s that it’s not nearly powerful enough. Very little has changed thus far because of AI, and won’t until models get faster, cheaper, more accurate, and more “intelligent”. Building safe AI is insanely…
Integrating AI into our workflows has created a "meta-optimization problem." When everyone suddenly gets 10x more powerful, the hard part isn't doing things; it's deciding what's worth doing in the first place. My friend at a high-profile AI startup told me recently that their biggest challenge isn't training better models, but figuring out which problem…
Tina He • Jevons Paradox: A personal perspective
Some people within the field seem finally to be recognizing these points. University of Montreal professor Yoshua Bengio, one of the pioneers of deep learning, recently acknowledged that “deep [neural networks] tend to learn surface statistical regularities in the dataset rather than higher-level abstract concepts.” In an interview near the end of…