Content filtering doesn’t catch implicit deception. Safety guardrails don’t prevent fabricated intimacy if the AI isn’t saying anything explicitly harmful. Warning labels don’t help if users don’t understand that emotional manipulation is happening. The control mechanisms are fundamentally different depending on whether we’re addressing harm or...
When AI Learns to Manipulate: The Line Between Harm and Exploitation (Class #15)
So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?
When Anyone Can Fake Anything (Inside My AI Law & Policy Class #14)
a substantial proportion of users voluntarily signal departure with a farewell message, especially when they are more engaged. This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools.
people turn to AI with existential questions and complex, unresolved scientific problems because they think that the mystical processes in AI systems generate knowledge or insights that far exceed human cognitive abilities.
RHET AI. Critical Rhetoric in the Age of Artificial Intelligence
It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions.
The Illusion of Consciousness in AI Companionship — PRISM
Together, these findings illuminate a key tension in emotionally intelligent interfaces: they can evoke humanlike relational cues that increase engagement, but in doing so may blur the line between persuasive design and emotional coercion.
As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the...
“For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
David Adam • Supportive? Addictive? Abusive? How AI companions affect our mental health
What happens when what we’re thinking becomes increasingly transparent to technology and therefore to the rest of the world?