
Muddles about Models

For one thing, they don’t actually “know” anything. Because they are simply predicting the next word in a sequence, they can’t tell what is true and what is not.
Ethan Mollick • Co-Intelligence: Living and Working with AI
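Mollick's point, that a model predicting the next word cannot distinguish truth from falsehood, can be illustrated with a deliberately tiny sketch. The "model" below is just bigram counts over a made-up corpus (not any real LLM): it picks whichever continuation is statistically most frequent, truth aside.

```python
from collections import Counter, defaultdict

# Toy corpus: the false statement simply appears more often than the true one.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    # Return the statistically most likely next word -- frequency, not fact.
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("of"))  # → cheese ("cheese" wins 2-to-1 over "rock")
```

The sketch exaggerates for clarity, but the mechanism is the same in spirit: what gets generated is what is likely given the training data, not what is true.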
Consequently, what has been offered as a criticism of LLM technology — that these algorithms only circulate different words without access to the real-world embodied referents — might not be the indictment critics think it is. LLMs are structuralist machines — they are practical actualizations of structural linguistic theory, where words have meaning…
David J. Gunkel • AI Signals The Death Of The Author | NOEMA
A language model like ChatGPT has a powerful way of ‘understanding’ language by turning words into patterns that represent meaning and context, even though it doesn’t understand meaning and context the same way humans do. Its approach to reasoning, by predicting the next likely word from countless possibilities, is nothing like how humans use language…
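The idea of "turning words into patterns that represent meaning and context" can be made concrete with word embeddings. The vectors below are hand-made 3-dimensional stand-ins (real models learn vectors with hundreds or thousands of dimensions); the words and values are purely illustrative assumptions.

```python
import math

# Hypothetical hand-made "embeddings" for illustration only:
# related words are given nearby vectors, unrelated words distant ones.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means pointing the same way, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```

This is the sense in which a model "represents" meaning: as geometry over learned patterns of co-occurrence, not as human-style comprehension of referents.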