rhet ai
It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions.
The Illusion of Consciousness in AI Companionship — PRISM
Together, these findings illuminate a key tension in emotionally intelligent interfaces: they can evoke humanlike relational cues that increase engagement, but in doing so may blur the line between persuasive design and emotional coercion.
As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the …
“For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
Supportive? Addictive? Abusive? How AI companions affect our mental health
What happens when what we’re thinking becomes increasingly transparent to technology and therefore to the rest of the world?
Cut the bullshit: why GenAI systems are neither collaborators nor tutors
A chatbot helped more people access mental-health services
In other words, rather than trying to please humans, Scientist AI could be designed to prioritize honesty.
A Potential Path to Safer AI Development
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours.