
Saved by Shawn Powers
Let's Stop Treating LLMs like People
As we build systems whose capabilities increasingly resemble those of humans, even though those systems work in fundamentally different ways, it becomes ever more tempting to anthropomorphise them. Humans have evolved to co-exist over many millions of years, and human culture has evolved over thousands of years.
… treat AI like a person and tell it what kind of person it is. LLMs work by predicting the next word, …
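To make "predicting the next word" concrete, here is a minimal sketch using a toy bigram model: it simply counts which word most often follows each word in a tiny corpus. This is an illustration only; the corpus and function names are invented, and real LLMs use neural networks over subword tokens trained on vast amounts of text, not raw frequency counts.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (real models train on billions of words).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: counts[w] maps each next word to how
# often it follows w in the corpus.
counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    following = counts.get(word)
    if not following:
        return None
    return following.most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often here
```

The point of the sketch is that the mechanism is statistical continuation, not understanding: the model outputs whatever plausibly comes next, which is exactly why it can so convincingly play whatever "person" a prompt describes.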
Linguists call this the ‘double illusion’: humans think they are speaking computer, computers think they are speaking human, and neither is very satisfied.