
Why large language models struggle with long contexts

LLMs are extremely good at a type of knowledge called tacit knowledge (knowledge about something as it pertains to something else) but are extremely poor at pretty much any other type as far as I can see. It just so happens that tacit knowledge is also the type of knowledge that drives natural language, which makes them look super smart, …
Column: These Apple researchers just showed that AI bots can't think, and possibly never will
Apple's AI researchers gave these AI systems a simple arithmetic problem that schoolkids can solve. The bots flunked.
Anthropic can now track the bizarre inner workings of a large language model
Will Douglas Heaven · technologyreview.com
Perhaps the biggest lesson of #websim is that we are vastly underestimating informational entities (LL*s) by expecting them to answer in words @repligate
CMDR MrIndigo · x.com