
LLMs are much worse than humans at learning from experience

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
machinelearning.apple.com
the science problem. We’ve all spent the last year talking about ‘error rates’ or ‘hallucinations’ (indeed, I also wrote about them here). The breakthrough of LLMs is to create a statistical model that can be built by machine at huge scale, instead of a deterministic model that (today) must be built by hand and doesn’t scale. This is why they work,...
Benedict Evans • Unbundling AI
LeCun points to four essential characteristics of human intelligence that current AI systems, including LLMs, can’t replicate: reasoning, planning, persistent memory, and understanding the physical world. He stresses that LLMs’ reliance on textual data severely limits their understanding of reality: “We’re easily fooled into thinking they are intel...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
