
These Strange New Minds
Christopher Summerfield

There is no magical missing ingredient, no 'unobtanium' that forever elevates human cognition to a mystical higher plane. The assertion that LLMs cannot ever 'think' or 'know' because they lack some vital human spark is just a twenty-first-century version of Richard Owen's argument about the hippocampus minor - a spurious justification of our own
…
So the pertinent question is not really whether current AI systems are like you and me (they are not) but what the limits of their abilities might be. AI sceptics have argued vehemently that LLMs are forever limited by the basic design choices of AI developers, and especially that they are trained to predict (or 'guess') the next token in a
…
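For context on the design choice this excerpt refers to: below is a minimal sketch, not from the book, of the next-token prediction objective that LLMs are trained on. It assumes a toy vocabulary and stands in a trivial embedding-plus-linear model for what would really be a deep transformer.

```python
import torch
import torch.nn.functional as F

# Toy illustration of next-token prediction: the model sees a token
# sequence and is trained to assign high probability to each next token.
vocab_size, seq_len, d_model = 100, 8, 32

# A stand-in "language model": embedding + linear head (a real LLM
# would use a deep transformer between these two layers).
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # dummy training text
logits = head(embed(tokens))                         # (1, seq_len, vocab_size)

# Each position predicts the *next* token: shift the targets left by one.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at positions 0..n-2
    tokens[:, 1:].reshape(-1),               # actual tokens at positions 1..n-1
)
loss.backward()  # gradients nudge the model toward better guesses
```

The skeptics' argument turns on whether this single objective, scaled up, can ever amount to more than guessing; the excerpt above is pushing back on that inference.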
the most important reason why AI systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence - they don't have a body, and they don't have any friends. They are not motivated to feel or want like we
…
… when imagining worrisome AI capabilities, we don't have to think ahead to a time when 1.7 trillion parameters seem as dinky as 1.7 billion do today. Even current AI systems, equipped with diverse objectives and allowed to interact, have the potential to wreak havoc. When personal AI systems are deployed to buy and sell on eBay, send and receive
…
So I think we are headed for a world where AI systems are decentralized - each individually tuned for a single slice of reality corresponding to a specific user. The greatest future risks from AI are the externalities that will arise from the unpredictable dynamics of interacting AI systems. We know that this sort of dynamic can arise because it
…
Language allows intelligent agents to express their ideas in a common format, making the sum much greater than its parts. Language allows human intelligence to be decentralized — we each know about a tiny slice of the world, but by agreeing to act together, we function collectively like an (admittedly quite fractious) superintelligence all of our
…
When AI systems start to behave collectively, we risk provoking externalities that will make the trillion-dollar flash crash look like a storm in a teacup. But these side-effects are unlikely to be the direct result of goals that we give to AI. They will be network effects: unanticipated phenomena that emerge as multiple autonomous systems interact
…
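A toy illustration of the feedback loops these excerpts describe: a minimal simulation, not from the book, in which a thousand agents each follow an individually reasonable local rule ("sell when the price is falling") and collectively produce a crash that no single agent's goal specified. The parameters are arbitrary; the point is the network effect.

```python
import random

# Toy market: each agent holds one unit and follows a simple local rule.
# No agent is instructed to crash the market, yet their interaction can.
random.seed(0)
price = 100.0
agents = [{"holding": True} for _ in range(1000)]
history = [price]

for step in range(50):
    falling = len(history) > 1 and history[-1] < history[-2]
    sells = 0
    for agent in agents:
        # Local rule: if the price is falling, sell with some probability.
        if agent["holding"] and falling and random.random() < 0.3:
            agent["holding"] = False
            sells += 1
    # Price impact: each sale pushes the price down slightly.
    price *= 1 - 0.001 * sells
    # Small random noise keeps the market moving between cascades.
    price *= 1 + random.uniform(-0.005, 0.005)
    history.append(price)

print(f"start: {history[0]:.2f}, end: {history[-1]:.2f}")
# Once a dip starts, selling begets falling prices begets more selling:
# an emergent side-effect of interaction, not a goal given to any agent.
```

This is the shape of the externality argument: the risk lives in the dynamics between systems, which is why it cannot be removed by aligning any one system in isolation.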
The problem with the more sensationalist worries about superintelligence is that they rely on an as-yet-untested extrapolative principle. The logic goes roughly as follows: an intelligent system is one that can achieve its goals, ergo, a super-duper intelligent system is one that can literally do anything. Even things that seem to us impossible,
…
Real-world problems have three properties that make them especially tricky: they are open-ended, uncertain and temporally extended. Open-ended problems are those for which the possible alternatives are virtually limitless. ... Uncertain problems can be blown off course by random events. … So real-world planning demands contingency measures.
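A small worked example, not from the book, of why uncertainty forces contingency measures: a plan judged only on its best case loses, in expectation, to one that pays a small cost up front to keep a fallback available. All numbers here are invented for illustration.

```python
# Toy contingency planning under uncertainty.
# Plan A commits to a single route; Plan B pays a small cost to keep
# a fallback ready in case a random event blocks the route.
p_disruption = 0.3          # chance a random event blows the plan off course
payoff_success = 100.0      # value of completing the plan as intended
payoff_failure = 0.0        # value if the plan derails with no fallback
fallback_cost = 10.0        # price of keeping the contingency ready
payoff_fallback = 70.0      # value recovered by switching to the fallback

plan_a = (1 - p_disruption) * payoff_success + p_disruption * payoff_failure
plan_b = ((1 - p_disruption) * payoff_success
          + p_disruption * payoff_fallback) - fallback_cost

print(f"no contingency:   {plan_a:.1f}")  # 70.0
print(f"with contingency: {plan_b:.1f}")  # 81.0
```

Open-endedness and temporal extension compound the problem: the more branches a plan can take and the longer it runs, the more such contingencies it must budget for.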