Rebooting AI: Building Artificial Intelligence We Can Trust
…possible that DeepMind’s AlphaGo could follow a similar path. Go and chess are games of “perfect information”—both players can see the entire board at any moment. In most real-world contexts, nobody knows anything with complete certainty; our data is often noisy and incomplete. Even in the simplest cases, there’s plenty of uncertainty; when we decide…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
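To make that contrast concrete, here is a minimal sketch, assuming an invented toy board, sensor noise level, and occlusion probability (none of which come from the book): in a perfect-information game, the true state is the agent's input, while in the real world the agent only gets a noisy, partial observation of it.

```python
import random

# Perfect information: every relevant fact is visible to both players.
# (Toy 3x3 board, invented purely for illustration.)
game_state = {
    "board": [["R", "N", "B"], ["P", "P", "P"], [".", ".", "."]],
    "to_move": "white",
}
# An agent can evaluate its moves against the true state directly.

# Real-world perception: noisy and incomplete. (The noise level and occlusion
# probability below are made-up numbers, not anything from the book.)
true_distance_m = 12.0
observed_distance_m = true_distance_m + random.gauss(0, 1.5)  # measurement error
pedestrian_visible = random.random() > 0.2                    # sometimes occluded

print(f"Believed distance: {observed_distance_m:.1f} m (truth: {true_distance_m} m); "
      f"pedestrian seen: {pedestrian_visible}")
```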
Some people within the field seem finally to be recognizing these points. University of Montreal professor Yoshua Bengio, one of the pioneers of deep learning, recently acknowledged that “deep [neural networks] tend to learn surface statistical regularities in the dataset rather than higher-level abstract concepts.” In an interview near the end of…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
It’s not that it is impossible in principle to build a physical device that can drive in the snow or manage to be ethical; it’s that we can’t get there with big data alone.
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
Classical AI techniques, of the sort that were common long before deep learning became popular, are much better at compositionality, and are a useful tool for building cognitive models, but thus far they haven’t been nearly as…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
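As a rough sketch of what compositionality means in the classical, symbolic setting (the Proposition structure and paraphrase helper below are invented for illustration): a small set of parts plus a rule for combining them covers sentences never seen before, and rearranging the same parts changes the meaning in a predictable way.

```python
from collections import namedtuple

# A compositional, symbolic representation: the meaning of the whole is determined
# by its parts and by how they are combined.
Proposition = namedtuple("Proposition", ["relation", "subject", "object"])

p1 = Proposition("loves", "John", "Mary")
p2 = Proposition("loves", "Mary", "John")  # same parts, different structure, different meaning

def paraphrase(p):
    """Render the proposition from its structure rather than from a memorized string."""
    return f"{p.subject} {p.relation} {p.object}"

# The same machinery generalizes to combinations the system has never encountered.
p3 = Proposition("chases", "the dog", "the mail carrier")
print(paraphrase(p1), "|", paraphrase(p2), "|", paraphrase(p3))
```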
In the language of cognitive psychology, what you do when you read any text is to build up a cognitive model of the meaning of what the text is saying. This can be as simple as compiling what Daniel Kahneman and the late Anne Treisman called an object file—a record of an individual object and its properties—or as complex as a complete understanding…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
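In software terms, an object file can be pictured as a record keyed to one individual that accumulates properties as the text unfolds. The sketch below is only an illustration of that bookkeeping; the class name, fields, and two-sentence example are all invented, not drawn from Kahneman and Treisman.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectFile:
    """A record of an individual object and its properties, updated as a text unfolds."""
    identity: str
    properties: dict = field(default_factory=dict)

    def update(self, **new_facts):
        self.properties.update(new_facts)

# "A small brown dog ran into the kitchen. It was soaking wet."
dog = ObjectFile(identity="dog-1")
dog.update(size="small", color="brown", location="kitchen")
dog.update(wet=True)  # "It" resolves to the same object file, so the new property attaches here
print(dog)
```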
…captured in blood tests and admission records. But the problem is so far beyond what current AI can do that many doctors’ notes are never read in detail. AI tools for radiology are starting to be explored; they are able to look at images and to distinguish tumors from healthy tissue, but we have no way yet to automate another part of what a real radiologist…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
(Often, a single application, like the traffic routing algorithms used by Waze, will involve multiple techniques…
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
The second challenge we call the illusory progress gap: mistaking progress in AI on easy problems for progress on hard problems. That is what happened with IBM’s overpromising about Watson, when progress on Jeopardy! was taken to be a bigger step toward understanding language than it really was.
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
The first we call the gullibility gap, which starts with the fact that we humans did not evolve to distinguish between humans and machines—which leaves us easily fooled. We attribute intelligence to computers because we have evolved and lived among human beings who themselves base their actions on abstractions like ideas, beliefs, and desires.
Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
Language is almost never like that. Virtually every sentence that we encounter requires that we make inferences about how a broad range of background knowledge interrelates with what we read. Deep learning lacks a direct way of representing that knowledge, let alone performing…
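Here is a toy sketch of what representing background knowledge and performing inference over it can look like, assuming a hand-written fact base and a single hard-coded rule (the two-sentence story, the facts, and the explain_soaked helper are all invented for illustration, not the authors' proposal):

```python
# Reading "Sam left his umbrella at home. He got soaked." quietly requires bridging
# facts that the text never states. Everything below is a made-up illustration.
background_knowledge = {
    ("rain", "makes_wet", "people_outside"),
    ("umbrella", "prevents", "getting_wet"),
}

story_facts = {
    ("sam", "lacks", "umbrella"),
    ("sam", "is", "soaked"),
}

def explain_soaked(story, background):
    """Chain explicit story facts with background facts into a plausible explanation."""
    if ("sam", "is", "soaked") in story and ("sam", "lacks", "umbrella") in story:
        if ("umbrella", "prevents", "getting_wet") in background:
            return "It was probably raining, and Sam had nothing to keep the rain off."
    return "No explanation found."

print(explain_soaked(story_facts, background_knowledge))
```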