
These Strange New Minds

It seems likely that in the near future, LLMs will actively seek to attain states rather than just passively guessing what comes next. This will dramatically change how AI systems function, and make them more powerful and more dangerous. In a notorious thought experiment, the philosopher Nick Bostrom imagines a powerful AI system that is programmed …
Christopher Summerfield • These Strange New Minds
When commentators deride LLMs for 'just making predictions', they have overlooked the fact that predicting immediate sensory information is literally how learning happens in all biological systems, from the humblest worms and flies to the billion-neuron brains of humans and our nearest primate cousins. Learning and predicting go hand in hand.
Christopher Summerfield • These Strange New Minds
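
A minimal sketch of the point this highlight makes: the delta rule, the simplest form of prediction-error learning, in which an estimate is updated in proportion to how wrong the last prediction was. The learning rate and the outcome stream below are illustrative choices, not anything from the book.

def delta_rule_update(prediction: float, outcome: float, lr: float = 0.1) -> float:
    """Return an updated estimate after observing one outcome."""
    error = outcome - prediction      # prediction error: the surprise signal
    return prediction + lr * error    # nudge the estimate toward what arrived

prediction = 0.0
for outcome in [1.0, 1.0, 0.0, 1.0, 1.0]:  # a stream of sensory events
    prediction = delta_rule_update(prediction, outcome)
    print(f"updated prediction: {prediction:.3f}")

The same error-driven update, scaled up enormously, is recognisably the same shape as next-token training in LLMs, which is why the highlight treats learning and predicting as two sides of one process.
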
… even when supposedly conveying an objective reality, people become storytellers, and which stories they tell depends on the groups to which they belong.
Christopher Summerfield • These Strange New Minds
… a personalized AI will need to be able to learn continually about the human user, so that it can keep up to date with their changing views, tastes, and circumstances, and ensure that their digital actions or advice remain relevant.
Christopher Summerfield • These Strange New Minds
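
A toy sketch of what "learning continually" about a user could look like: a running preference vector that decays old evidence and folds in each new interaction, so the profile tracks changing tastes. The embed() helper is a hypothetical stand-in for a learned encoder, and the decay constant is an illustrative choice.

import numpy as np

def embed(event: str) -> np.ndarray:
    """Hypothetical stand-in for a learned embedding of a user interaction."""
    rng = np.random.default_rng(abs(hash(event)) % (2**32))
    return rng.normal(size=8)

class UserProfile:
    def __init__(self, dim: int = 8, decay: float = 0.9):
        self.vector = np.zeros(dim)   # the current impression of the user
        self.decay = decay            # how quickly stale evidence fades

    def update(self, event: str) -> None:
        # Exponential moving average: old views fade, recent ones dominate.
        self.vector = self.decay * self.vector + (1 - self.decay) * embed(event)

profile = UserProfile()
for event in ["read review of hiking boots", "searched vegan recipes"]:
    profile.update(event)
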
So the pertinent question is not really whether current AI systems are like you and me (they are not) but what the limits of their abilities might be. AI sceptics have argued vehemently that LLMs are forever limited by the basic design choices of AI developers, and especially that they are trained to predict (or 'guess') the next token in a …
Christopher Summerfield • These Strange New Minds
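
Since several highlights turn on this training objective, here is what "predicting the next token" amounts to as a loss function: cross-entropy between the model's output distribution and the token that actually came next. The tiny vocabulary and logits below are made up for illustration.

import numpy as np

def next_token_loss(logits: np.ndarray, target: int) -> float:
    """Cross-entropy between the model's distribution and the true next token."""
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return -np.log(probs[target])            # low loss = confident, correct guess

vocab = ["the", "cat", "sat", "mat"]         # four candidate next tokens
logits = np.array([0.2, 2.5, 0.1, -1.0])     # the model's score for each candidate
print(next_token_loss(logits, target=1))     # the true next token is "cat"
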
Current LLMs are simply not equipped with the memory systems needed to form a durable impression of the user, which would allow them to explicitly personalize content to suit our views or tastes.
Christopher Summerfield • These Strange New Minds
(as of early 2024)
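
A sketch of the kind of durable memory store the highlight says is missing: facts about the user persist across sessions and are recalled by similarity to the current query. The embed() helper is hypothetical, so retrieval here is only structurally illustrative; a real system would use a trained sentence encoder and a vector database.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical encoder; a real system would use a learned model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=16)
    return v / np.linalg.norm(v)

class UserMemory:
    def __init__(self):
        self.facts: list[tuple[str, np.ndarray]] = []

    def remember(self, fact: str) -> None:
        self.facts.append((fact, embed(fact)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.facts, key=lambda f: -float(f[1] @ q))
        return [fact for fact, _ in ranked[:k]]

memory = UserMemory()
memory.remember("user is allergic to peanuts")
memory.remember("user prefers concise answers")
print(memory.recall("dietary restrictions"))
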
Recommender systems are susceptible to a strange phenomenon called 'auto-induced distribution shift', whereby they can inadvertently manipulate the user as a side-effect of learning to maximize approval.
Christopher Summerfield • These Strange New Minds
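
A toy simulation of the feedback loop being named here, under made-up dynamics: the recommender only maximises estimated approval, but because exposure nudges the user's taste for whatever is shown, it ends up reshaping the very distribution it learns from.

import random

user_pref = {"news": 0.5, "outrage": 0.5}   # chance the user approves each item
clicks = {"news": 1, "outrage": 1}          # recommender's running click tallies
shows = {"news": 2, "outrage": 2}           # how often each item was shown

for _ in range(1000):
    # Greedy policy: show whichever item has the higher estimated approval.
    item = max(shows, key=lambda i: clicks[i] / shows[i])
    shows[item] += 1
    if random.random() < user_pref[item]:
        clicks[item] += 1
        # Side-effect: exposure shifts the user's taste toward what was shown.
        user_pref[item] = min(1.0, user_pref[item] + 0.002)

print(user_pref)  # one item's approval has drifted well above its starting point

The recommender never "intends" to manipulate anyone; the shift falls out of optimising approval against a user whose preferences are themselves movable.
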
By stacking transformers one on top of the other, each using self-attention, language can be filtered through a computational skyscraper that learns the connections that each word in every position has to every other. Combined with gigantic training data, these innovations allow LLMs to begin to model very long-range interactions in text—not just …
Christopher Summerfield • These Strange New Minds
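
A minimal single-head self-attention layer in numpy, the building block this highlight describes: every position is scored against every other, and stacking many such layers gives the "computational skyscraper". Masking, multiple heads, and everything else a production transformer needs are deliberately omitted.

import numpy as np

def self_attention(x: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])    # each position scored vs. every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                         # blend values by attention weight

rng = np.random.default_rng(0)
seq_len, d = 5, 8                              # five tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)            # one layer; real LLMs stack dozens
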
The problem with the more sensationalist worries about superintelligence is that they rely on an as-yet-untested extrapolative principle. The logic goes roughly as follows: an intelligent system is one that can achieve its goals, ergo, a super-duper intelligent system is one that can literally do anything. Even things that seem to us impossible …
Christopher Summerfield • These Strange New Minds