rhet ai
Recent research offers a reassuring perspective: AI-delivered therapeutic interventions have become sophisticated enough to be indistinguishable from human-written therapeutic responses.
Marc Zao-Sanders • How People Are Really Using Gen AI in 2025
In other words, rather than trying to please humans, Scientist AI could be designed to prioritize honesty.
A Potential Path to Safer AI Development
“For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
Supportive? Addictive? Abusive? How AI companions affect our mental health
Cut the bullshit: why GenAI systems are neither collaborators nor tutors
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours.
John Pavlus • When ChatGPT Broke an Entire Field: An Oral History | Quanta Magazine
Maybe AI doesn’t raise the bar. Maybe it reveals how low we’ve let the bar drop. In a world where ghosting is normal and attentiveness is rare, a chatbot that listens is radical.
Imaginary Friends Grew Up: We Panicked
Turkle raised the issue of behavioral metrics dominating AI research and her concern that the interior life is being overlooked, concluding that the human cost of talking to machines isn’t immediate, it’s cumulative: “What happens to you in the first three weeks may not be...the truest indicator of how that’s going to limit you...”
The Human Cost Of Talking To Machines: Can A Chatbot Really Care?
What happens when what we’re thinking becomes increasingly transparent to technology and therefore to the rest of the world?
Friends without friction
Companion chatbots may weaken our ability to navigate conflict and difference—unless we make key design choices, says a new report