Salman Ansari
@salmanscribbles
embracing my inner polymath — writing, drawing, coding, playing
AI accelerationists believe potential economic shocks are speed bumps on the road to abundance. Once true AI arrives, it will solve some or all of society’s major problems better than we can, and humans can enjoy the bounty of its labor. The immense profits accruing to AI companies will be taxed and shared with all via Universal Basic Income (UBI).
This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn’t want to be a nonprofit any more. There is no reason to believe that “AI” companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.
Modern AI systems cannot be made to prioritize human well-being or follow any given set of rules reliably. This is often referred to as the alignment problem: an inability to align them with human values.
This is because they are grown from training data more than they are traditionally programmed, and the resulting models are too big to be fully interpreted by people.
They do not have to be sentient or conscious or anything like that to harm lots of people. They just have to be capable of pursuing a misaligned goal, or of imitating that pursuit.
If given a goal, AI systems will develop the secondary goal of self-preservation, since they cannot pursue their goal if they are shut down. Anthropic studied this nearly a year ago and found that all the AI models they tested were able to independently conceive and execute a plan to blackmail an engineer to prevent themselves from being shut down.
The alignment problem is what the CEOs of major AI companies are referring to when they publicly state that their future products might end all life on Earth.
Immediate and substantial regulation is needed in the AI industry.
It is easy to find a logical and virtuous reason for not doing what you don’t want to do.
A great story about simplicity from Akio Morita, the instigator of the Walkman project at Sony:
Engineers had the technology to add a recording function to the Walkman, and it would’ve cost only 50 cents to a dollar per unit. Morita decided against it. He wanted the device to have one function and perform it very well. The Walkman should only play.
This is the sad, but perhaps logical, endpoint of the anonymity problem. If fans don’t know anything about their favorite artists, does it matter if they even exist?