New Anthropic research: Tracing the thoughts of a large language model.
We built a "microscope" to inspect what happens inside AI models and used it to understand Claude's (often complex and surprising) internal mechanisms. https://t.co/PboGlLFnHG
We knew very little about how LLMs actually work...until now.
@AnthropicAI just dropped the most insane research paper, detailing some of the ways AI "thinks."
And it's completely different from what we thought.
Here are their wild findings: 🧵 https://t.co/S8sar0Rn0M
Why is ~no one in the field of AI talking about Anthropic's On the Biology of a Large Language Model?
For the first time, we get a pretty good glimpse of how LLMs reason through complex problems internally, but no one seems to be curious enough to care. https://t.co/CQRkYpfvGS
This is a beautiful paper by Anthropic!
My intuition: neural networks are voting networks.
Imagine millions of entities voting: "In my incoming information, I detect this feature to be present with this strength".
The votes aggregate up a pyramid of layers, adding up to the output.
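That voting intuition can be sketched in a few lines. This is a minimal illustration of the metaphor, not Anthropic's actual method: each unit casts a weighted "vote" for the features it detects, and the next layer aggregates those votes. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_vote(inputs, weights, bias):
    # Each output unit sums the weighted votes of its inputs
    # ("I detect this feature with this strength")...
    votes = weights @ inputs + bias
    # ...and a ReLU keeps only the positive detections.
    return np.maximum(votes, 0.0)

# A small "pyramid": 8 input features -> 4 intermediate units -> 1 output.
x = rng.normal(size=8)                      # incoming information
w1, b1 = rng.normal(size=(4, 8)), np.zeros(4)
w2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

h = layer_vote(x, w1, b1)                   # first round of votes
out = layer_vote(h, w2, b2)                 # aggregated output
print(out.shape)
```

Under this reading, a plain feed-forward layer *is* the vote aggregation: the weight matrix encodes how much each voter's signal counts toward each downstream feature.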