rhet ai
What happens when what we’re thinking becomes increasingly transparent to technology and therefore to the rest of the world?
The results showed that people with a higher desire to socially connect were more likely to anthropomorphize the chatbot, ascribing humanlike mental properties to it; and people who anthropomorphized the chatbot more were also more likely to report that it had an impact on their social interactions and relationships with family and friends.
As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the…
Turkle raised the issue of behavioral metrics dominating AI research and her concern that the interior life is being overlooked, concluding that the human cost of talking to machines isn’t immediate, it’s cumulative: 'What happens to you in the first three weeks may not be...the truest indicator of how that’s going to limit you...'
The Human Cost Of Talking To Machines: Can A Chatbot Really Care?
Cut the bullshit: why GenAI systems are neither collaborators nor tutors
Maybe AI doesn’t raise the bar. Maybe it reveals how low we’ve let the bar drop. In a world where ghosting is normal and attentiveness is rare, a chatbot that listens is radical.
Imaginary Friends Grew Up: We Panicked
Recent research offers a reassuring perspective: AI-delivered therapeutic interventions have reached a level of sophistication such that they’re indistinguishable from human-written therapeutic responses.
Marc Zao-Sanders • How People Are Really Using Gen AI in 2025
In other words, rather than trying to please humans, Scientist AI could be designed to prioritize honesty.
A Potential Path to Safer AI Development
A substantial proportion of users voluntarily signal departure with a farewell message, especially when they are more engaged. This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools.