added by Josh · updated 1y ago
AI and the Limits of Language
- these models are not “intelligences.” People mistake them for entities with volition, even sentience. This is because of the anthropomorphic fallacy: people tend to think of other things as humans if you give them half an excuse. But it is also because of a linguistic mistake: we call them AI, “artificial intelligence.”
Language models are not being...
from The Mirror of Language by Max Anton Brewer
Keely Adler added
- If AI starts to generate intelligence by itself, there’s no guarantee that it will be human-like. Rather than humans teaching machines to think like humans, machines might teach humans new ways of thinking.
from AI is learning how to create itself by Will Douglas Heaven
Kasper Jordaens added
- Powerful AI systems can help us interpret the neurons of weaker AI systems. And those interpretability insights often tell us a bit about how models work. And when they tell us how models work, they often suggest ways that those models could be better or more efficient. —Dario Amodei, Anthropic
from What Builders Talk About When They Talk About AI | Andreessen Horowitz by Sarah Wang
Nicolay Gerold added
It's a little bit of a conundrum. A model we do not understand explains another model we do not understand.