
Reasoning skills of large language models are often overestimated

What can LLMs never do?
strangeloopcanon.com

LLMs don’t reason. But that’s OK, neither do we. What we imagine to be “reasoning” is an abstraction that has never been realized in any strict sense in any stochastic natural phenomenon, such as the human brain. https://t.co/oYG65VJ4QL

Is Chain-of-Thought Reasoning of LLMs a Mirage?
... Our results reveal that CoT reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions. This work offers a deeper understanding of why and when CoT reasoning fails, emphasizing the ongoing challenge of achieving genuine and...