You know how some people seem to have a magic touch with LLMs? They get incredible, nuanced results while everyone else gets generic junk.
The common wisdom is that this is a technical skill. A list of secret hacks, keywords, and formulas you have to learn.
But a new paper suggests this...
🧵 1/8 The Illusion of Thinking: Are reasoning models like o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet really "thinking"? 🤔 Or are they just throwing more compute at pattern matching?
The new Large Reasoning Models (LRMs) show promising gains on math and coding benchmarks, but we found their fundamental limitations...