So far, there’s no evidence that large language models possess world models, even though some researchers and engineers believe they might naturally emerge over time. And it is this absence of grounded, rules-based modeling, Marcus recently argued, that explains why L.L.M.s often “hallucinate” in strange and unexpected ways: “What L.L.M.s do is to ...