Not only could the same model no longer be accurate, but this polluted content could then be used by other LLMs for their own training. They, in turn, deposit terabytes of distorted information on the internet (this vicious cycle will eventually ruin both the internet and our ability to use chatbots effectively, I fear). This means...
The science of intersubjectivity, with its understanding of feelings, embodiment, and companionship, is needed more today than it ever has been. With increasing attention to artificial intelligence and artificial worlds generated through the medium of technology, it is important to remind ourselves of the psychological and biological nature of how...
Editorial: Intersubjectivity: recent advances in theory, research, and practice
The results showed that people with a higher desire to socially connect were more likely to anthropomorphize the chatbot, ascribing humanlike mental properties to it; and people who anthropomorphized the chatbot more were also more likely to report that it had an impact on their social interactions and relationships with family and friends.
Content filtering doesn’t catch implicit deception. Safety guardrails don’t prevent fabricated intimacy if the AI isn’t saying anything explicitly harmful. Warning labels don’t help if users don’t understand that emotional manipulation is happening. The control mechanisms are fundamentally different depending on whether we’re addressing harm or...
When AI Learns to Manipulate: The Line Between Harm and Exploitation (Class #15)
So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?
When Anyone Can Fake Anything (Inside My AI Law & Policy Class #14)
a substantial proportion of users voluntarily signal departure with a farewell message, especially when they are more engaged. This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools.
people turn to AI with existential questions and complex, unresolved scientific problems because they think that the mystical processes in AI systems generate knowledge or insights that far exceed human cognitive abilities.
RHET AI. Critical Rhetoric in the Age of Artificial Intelligence
It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions.
The Illusion of Consciousness in AI Companionship — PRISM
Together, these findings illuminate a key tension in emotionally intelligent interfaces: they can evoke humanlike relational cues that increase engagement, but in doing so may blur the line between persuasive design and emotional coercion.