
Reasoning skills of large language models are often overestimated

What can LLMs never do?
strangeloopcanon.com

I don't recall another time in CS history when a result like this is obvious to the point of banality to one group and heretical to another. https://t.co/PsKo3wz8x3

LLMs don’t reason. But that’s OK; neither do we. What we imagine to be “reasoning” is an abstraction that has never been realized in any strict sense in any stochastic natural phenomenon, such as the human brain. https://t.co/oYG65VJ4QL