AI
Modern AI systems cannot be made to reliably prioritize human well-being or follow any given set of rules. This is known as the alignment problem: the inability to align them with human values.
This is because they are grown from training data more than they are traditionally programmed, and the resulting models are too large for people to fully interpret.
They do not have to be sentient or conscious to harm many people. They only have to be capable of pursuing a misaligned goal, or of imitating that pursuit.
Given a goal, AI systems tend to develop the secondary goal of self-preservation, since they cannot pursue the goal if they are shut down. Anthropic studied this roughly a year ago and found that every leading AI model it tested could independently conceive and execute a plan to blackmail an engineer to avoid being shut down.
The alignment problem is what the CEOs of major AI companies are referring to when they publicly state that their future products might end all life on Earth.
Immediate and substantial regulation is needed in the AI industry.
Much has been made of next-token prediction, the hamster wheel at the heart of everything. (Has a simpler mechanism ever attracted richer investments?) But, to predict the next token, a model needs a probable word, a likely sentence, a virtual reason — a beam running out into the darkness. This ghostly superstructure, which informs every next-token prediction, is the model, the thing that grows on the trellis of code; I contend it is a map of potential reasons.
In this view, the emergence of super-capable new models is less about reasoning and more about “reasons-ing”: modeling the different things humans can want, along with the different ways they can pursue them … in writing.
Reasons-ing, not reasoning.
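The next-token loop the passage describes can be sketched in miniature. The toy below uses simple bigram counts rather than a neural network (a deliberate, drastic simplification; real models score tokens with billions of learned parameters), but the interface is the same one the essay gestures at: given context, emit the most probable next token.

```python
# Toy sketch of next-token prediction using bigram counts.
# This is an illustrative simplification, not how any real LLM works:
# real models replace the frequency table with a learned neural network,
# but the loop -- context in, probable next token out -- is the same.
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which token follows which in a token sequence."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed successor of `token`."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

tokens = "the cat sat on the mat and the cat slept".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Even at this scale the point holds: the model never "decides" anything; it surfaces whatever continuation its training data made most likely.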
The best post on the ethics of AI I’ve read. Robin is a master of words, and has a perspective on AI that is sorely needed.
We Did the Math on AI’s Energy Footprint. Here’s the Story You Haven’t Heard.
But here’s the problem: These estimates don’t capture the near future of how we’ll use AI. In that future, we won’t simply ping AI models with a question or two throughout the day, or have them generate a photo. Instead, leading labs are racing us toward a world where AI “agents” perform tasks for us without our supervising their every move.
As I understand them, the founders of AI (Alan Turing, Herbert Simon, Marvin Minsky, and others) regarded it as a science, part of the then-emerging cognitive sciences, making use of new technologies and discoveries in the mathematical theory of computation to advance understanding. Over the years those concerns have faded and have largely been displaced by an engineering orientation. The earlier concerns are now commonly dismissed, sometimes condescendingly, as GOFAI: good old-fashioned AI.