This is particularly important because there are already indications that many people who often interact with chatbots attribute consciousness to these systems. At the same time, the consensus among experts is that current AI systems are not conscious.
While LLMs are designed to emulate human-like responses, this does not mean that the analogy extends to the underlying cognition giving rise to those responses.
Stunning piece of work, well worth watching from beginning to end.
Turkle raised the issue of behavioral metrics dominating AI research and her concern that the interior life was being overlooked, concluding that the human cost of talking to machines isn’t immediate, it’s cumulative: 'What happens to you in the first three weeks may not be...the truest indicator of how that’s going to limit you,...
The community remains puzzled about whether these models genuinely generalize to unseen tasks or merely appear to succeed by memorizing the training data. This paper makes important strides in addressing that question. It constructs a suite of carefully designed counterfactual evaluations, providing fresh insights into the capabilities of...
The more I use language models, the more monstrous they seem to me. I don’t mean that in a particularly negative sense. Frankenstein’s monster is sad, but also amazing. Godzilla is a monster, and Godzilla rules.