Understanding AI
That is to say, our social groups, tools, situations, and, more broadly, environment have always served as a cognitive extension, networking our individual minds, allowing them to spill into each other and share processing tasks as a group. It’s as though our brains are aware of their own biohardware limitations. They naturally seek to form rings...
Friction is inevitable in human relationships. It can be uncomfortable, even maddening. Yet friction can be meaningful—as a check on selfish behavior or inflated self-regard; as a spur to look more closely at other people; as a way to better understand the foibles and fears we all share.
The Age of Anti-Social Media Is Here
Levels of Autonomy for AI Agents
arxiv.org
5 levels of autonomy: operator, collaborator, consultant, approver, observer
But long before the internet, philosophers and religious leaders theorized a global connectivity, or collective consciousness. In 1922, Pierre Teilhard de Chardin coined the term “Noosphere”, or the “thinking layer” of the earth, networking all human thought. In the internet era, his predictions have become surprisingly accurate, now starting to...
DP: Writing now has to be really, really good to stand out. People got really upset last December when I tweeted that AI’s writing is already better than the majority of Write of Passage students would be with a day’s worth of work. It made a lot of people upset, but I think it’s true. AI’s writing is great with a good prompt, which is why people...
How we traded beauty for efficiency
As Cosma and I, and Alison and James have written:
We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?
But he saw AI (a term that he had ambiguous feelings about) as a particular variant of a much broader phenomenon: “complex information processing.” Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their...
To understand the social consequences of LLMs and related forms of AI, we ought to consider them as social technologies. Specifically, we should compare them and their workings to other social technologies (or, if you prefer, modes of governance), mapping out how they transform social, political and economic relations among human beings.
I think it's a mistake to conceptualize AI systems as partners, as if they have a will of their own. I don’t think of my pet, my phone, my calculator, or the temperature knob on my stove as partners. I don’t think of any AI system as a partner. The moment we anthropomorphize technology—as if it were a person—we attribute will where none exists. In...