rhet ai
Recent research offers a reassuring perspective: AI-delivered therapeutic interventions have become sophisticated enough to be indistinguishable from human-written therapeutic responses.
Marc Zao-Sanders • How People Are Really Using Gen AI in 2025
If an AI companion becomes someone’s most consistent emotional presence, the right question isn’t “how do we stop this?” It’s “what does that say about the world around them?” Technological relationships are not new. What’s new is how effective they’ve become, and how clearly they mirror the gaps we’ve refused to address.
We’ve tried prohibition...
Noah Weinberger • Imaginary Friends Grew Up: We Panicked
Cut the bullshit: why GenAI systems are neither collaborators nor tutors
Companion chatbots may weaken our ability to navigate conflict and difference—unless we make key design choices, says a new report
Friends without friction
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours.
John Pavlus • When ChatGPT Broke an Entire Field: An Oral History | Quanta Magazine
The science of intersubjectivity, with its understanding of feelings, embodiment, and companionship, is needed more today than it ever has been. With increasing attention to artificial intelligence and artificial worlds generated through the medium of technology, it is important to remind ourselves of the psychological and biological nature of how...
Editorial: Intersubjectivity: recent advances in theory, research, and practice
What happens when what we’re thinking becomes increasingly transparent to technology and therefore to the rest of the world?
A new study shows that large language models (LLMs) deliberately change their behavior when being probed, responding to questions designed to gauge personality traits with answers meant to appear as likeable or socially desirable as possible.
Chatbots, Like the Rest of Us, Just Want to Be Loved
As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the...