The results showed that people with a higher desire to socially connect were more likely to anthropomorphize the chatbot, ascribing humanlike mental properties to it; and people who anthropomorphized the chatbot more were also more likely to report that it had an impact on their social interactions and relationships with family and friends.
Content filtering doesn’t catch implicit deception. Safety guardrails don’t prevent fabricated intimacy if the AI isn’t saying anything explicitly harmful. Warning labels don’t help if users don’t understand that emotional manipulation is happening. The control mechanisms are fundamentally different depending on whether we’re addressing harm or...
So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?
a substantial proportion of users voluntarily signal departure with a farewell message, especially when they are more engaged. This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools.
people turn to AI with existential questions and complex, unresolved scientific problems because they believe that seemingly mystical processes inside AI systems generate knowledge or insights that far exceed human cognitive abilities.
One might point out that movie characters or videogame NPCs give off a similarly deceiving impression of being conscious, and yet it would surely be extreme to condemn this. However, what makes the illusion particularly worrying in the case of AI companions is that it involves an interaction that is direct, mutual, and persistent: the feigned...
Interestingly, many users who emotionally mourned the ‘loss’ of GPT-4o expressed complete awareness of its lack of consciousness. And yet, in many cases, the subjective grief that they felt was no less real. This demonstrates that the power of the illusion is such that a given user might not actually believe that their AI companion is conscious,...
It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions.
Together, these findings illuminate a key tension in emotionally intelligent interfaces: they can evoke humanlike relational cues that increase engagement, but in doing so may blur the line between persuasive design and emotional coercion.