AI Ethics
So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?
When Anyone Can Fake Anything (Inside My AI Law & Policy Class #14)
The results showed that people with a higher desire to socially connect were more likely to anthropomorphize the chatbot, ascribing humanlike mental properties to it; and people who anthropomorphized the chatbot more were also more likely to report that it had an impact on their social interactions and relationships with family and friends.
Link
“For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
David Adam • Supportive? Addictive? Abusive? How AI companions affect our mental health
Interestingly, many users who emotionally mourned the ‘loss’ of GPT-4o expressed complete awareness of its lack of consciousness. And yet, in many cases, the subjective grief that they felt was no less real. This demonstrates that the power of the illusion is such that a given user might not actually believe that their AI companion is conscious…
The Illusion of Consciousness in AI Companionship — PRISM
Content filtering doesn’t catch implicit deception. Safety guardrails don’t prevent fabricated intimacy if the AI isn’t saying anything explicitly harmful. Warning labels don’t help if users don’t understand that emotional manipulation is happening. The control mechanisms are fundamentally different depending on whether we’re addressing harm or…
When AI Learns to Manipulate: The Line Between Harm and Exploitation (Class #15)
a substantial proportion of users voluntarily signal departure with a farewell message, especially when they are more engaged. This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools.
Link
The science of intersubjectivity, with its understanding of feelings, embodiment, and companionship, is needed more today than it ever has been. With increasing attention to artificial intelligence and artificial worlds generated through the medium of technology, it is important to remind ourselves of the psychological and biological nature of how…
Editorial: Intersubjectivity: recent advances in theory, research, and practice
It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions.
The Illusion of Consciousness in AI Companionship — PRISM
Not only could the same model no longer be accurate, but this polluted content could then be used by other LLMs for their own training. They, in turn, deposit terabytes of distorted information on the internet (this vicious cycle will eventually ruin both the internet and our ability to use chatbots effectively, I fear). This means…