Deep Learning Is Hitting a Wall
To think that we can simply abandon symbol-manipulation is to suspend disbelief.
Gary Marcus • Deep Learning Is Hitting a Wall
Deep-learning systems are outstanding at interpolating between specific examples they have seen before, but frequently stumble when confronted with novelty.
Classical computer science, of the sort practiced by Turing and von Neumann and everyone after, manipulates symbols in a fashion that we think of as algebraic, and that’s what’s really at stake.
Such signs should be alarming to the autonomous-driving industry, which has largely banked on scaling, rather than on developing more sophisticated reasoning. If scaling doesn’t get us to safe autonomous driving, tens of billions of dollars of investment in scaling could turn out to be for naught.
What does “manipulating symbols” really mean? Ultimately, it means two things: having sets of symbols (essentially just patterns that stand for things) to represent information, and processing (manipulating) those symbols in a specific way, using something like algebra (or logic, or computer programs) to operate over those symbols.
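The two ingredients named above, symbols that stand for things and algebraic rules that operate over them, can be sketched in a few lines of Python. This is an illustrative toy (not from the article): symbols are just strings and tuples, and a single rule written over variables applies uniformly to any bindings, including ones never seen before.

```python
# Toy symbol manipulation: expressions are nested tuples of symbols,
# and rules written over variable symbols apply to arbitrary bindings.

def substitute(expr, env):
    """Replace variable symbols in a nested expression with bound values."""
    if isinstance(expr, str):
        return env.get(expr, expr)  # unbound symbols pass through unchanged
    if isinstance(expr, tuple):
        return tuple(substitute(e, env) for e in expr)
    return expr  # literals (e.g. ints) are left as-is

def evaluate(expr):
    """Evaluate a tiny algebraic expression of the form (op, a, b)."""
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    a, b = evaluate(a), evaluate(b)
    return a + b if op == "+" else a * b

# The rule is defined once, over variables, so it generalizes to ANY
# bindings -- there is no notion of "training distribution" here.
rule = ("+", "x", "y")
print(evaluate(substitute(rule, {"x": 2, "y": 3})))       # 5
print(evaluate(substitute(rule, {"x": 10**6, "y": 1})))   # 1000001
```

The point of the sketch is the contrast Marcus draws: the rule holds exactly for novel inputs because it is stated over variables, whereas a system that only interpolates between seen examples has no such guarantee.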
Manipulating symbols has been essential to computer science since the beginning, at least since the pioneer papers of Alan Turing and John von Neumann, and is still the fundamental staple of virtually all software engineering—yet is treated as a dirty word in deep learning. To think that we can simply abandon symbol-manipulation is to suspend disbelief.
Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.
There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension.