On AI Anthropomorphism
The issue is NOT whether humans can relate to a deceptive social machine — of course they can. The issue is "Do we recognize that humans and machines are different categories?" or "Will we respect human dignity by designing effective machines that enhance human self-efficacy and responsibility?" The 2M+ apps in Apple's App Store are mostly based on...
Ben Shneiderman talks about this in a way that suggests AI (and the way it’s rolling out) is going to be a “failure.” He means a “failure” in the sense that people will not like and use it. But my fear is people WILL like and use it… and the outcome will be a “failure” because using it will be harmful to humanity.
The long history of failed anthropomorphic systems goes back even before Microsoft BOB and Clippy, but it has continued to produce billion-dollar failures. The most recent serious deadly design mistake was Elon Musk's insistence that since human drivers used only eyes, his Teslas would use only video. By preventing the use of radar or LIDAR, over...
Lewis Mumford's Technics and Civilization (1934) offers a clear analysis in the chapter on "The Obstacle of Animism." Mumford describes how initial designs based on human or animal models are an obstacle that needs to be overcome in developing new technologies: "the most ineffective kind of machine is the realistic mechanical imitation of a man[/woman] or...
Phillips et al. (2016) used human-animal relationships to think about human-AI relationships. I think that, as with animals, there are degrees of sociality, or degrees of social presence, that may apply to computational things.
But animals are living beings. And they aren’t manipulative or persuasive.
Anthropomorphism is the act of projecting human qualities or behavior onto non-human entities, as when people attribute human-like characteristics or emotions to animals, objects, or natural phenomena.