Muddles about Models
I have long been a subscriber to the idea that any intelligence needs to be embodied and situated. The very first problems a baby has to solve are, How do I move my body? How do I move around in the world? Those tasks are entirely missing from existing large language models. And, as many people have noted, the phrase “large language model” is a misnomer...
Alison Gopnik • Developing AI Like Raising Kids
Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.
Gary Marcus • Deep Learning Is Hitting a Wall
Sam Lessin on LLMs
These models are not “intelligences.” People mistake them for entities with volition, even sentience. This is because of the anthropomorphic fallacy: people tend to think of other things as humans if you give them half an excuse. But it is also because of a linguistic mistake: we call them AI, “artificial intelligence.”
Language models are not being...
Max Anton Brewer • The Mirror of Language
For that matter, should we be using models at all?