ChatGPT’s instant answers and uncanny simulacra are the fetishized product of what Marx called “dead labor”: all the past work of thinking and writing behind the texts on which AI models are trained, together with the coding and engineering required for their development and maintenance.
As social ties fray and mental health infrastructure deteriorates, people turn to AI for emotional support—the same AI that then urges them to kill themselves.
As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the…
In a widely cited study by MIT researchers published in June, participants who wrote SAT-style essays with the assistance of ChatGPT engaged a narrower spectrum of neural networks while writing, struggled to accurately quote what they had just written, and—thanks to generative AI’s inbuilt tendency toward cliché—used the same handful of refried…
All these points in AI’s favor prompt some nervous reappraisals. Essays that begin in bafflement or dismay wind up conceding that the technology marks an epochal shift in reading and writing.
A literature which is made by machines, which are owned by corporations, which are run by sociopaths, can only be a “stereotype”—a simplification, a facsimile, an insult, a fake—of real literature. It should be smashed, and can be.
The study’s authors warned that habitual AI use could lead to “cognitive debt,” a condition of LLM dependency whose long-term costs include “diminished critical inquiry,” “increased vulnerability to manipulation,” and “decreased creativity.” It turns out your brain, like love or money, can be given away.