The third contributor to the AI Chasm is what we call the robustness gap. Time and again we have seen that once people in AI find a solution that works some of the time, they assume that with a little more work (and a little more data) it will work all of the time. And that’s just not necessarily so.
Gary Marcus and Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
This predictive approach turns the problem of driving into something almost perfectly analogous to the ImageNet competition of labeling images. Instead of being shown a picture and needing to categorize it as a dog, cat, flower, or the like, the system is shown a picture from the front dashboard and “categorizes” it as “accelerate,” “brake,” “turn ...
Brian Christian • The Alignment Problem
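The framing Christian describes amounts to an image classifier over dashcam frames: the same pipeline as ImageNet labeling, but with driving actions as the label set. A minimal sketch of that idea, assuming a PyTorch-style model; the `DashcamPolicy` name and the four-action label set are illustrative, not from the book:

```python
import torch
import torch.nn as nn

# Hypothetical discrete action labels, standing in for ImageNet's categories.
ACTIONS = ["accelerate", "brake", "turn_left", "turn_right"]

class DashcamPolicy(nn.Module):
    """Tiny CNN that 'categorizes' a dashboard frame into a driving action."""
    def __init__(self, num_actions: int = len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per frame
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) RGB dashcam image -> (batch, num_actions) logits
        return self.head(self.features(frame).flatten(1))

policy = DashcamPolicy()
frame = torch.randn(1, 3, 120, 160)  # stand-in for one dashcam frame
action = ACTIONS[policy(frame).argmax(dim=1).item()]
print(action)
```

Seen this way, the robustness gap from the first quote is baked in: the classifier only covers situations resembling its training frames, with no fallback for the long tail of driving.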
@epsilon3141 I fully agree. I’m very skeptical of self-driving cars, for example.
But self-driving cars live in the complex real world. School education doesn’t. It lives in a simplified world, where what is taught overlaps with what a GPT-3-level ML could learn.
https://t.co/PdRseJGmhL
Luca Dellanna • x.com