Salman Ansari
@salmanscribbles
embracing my inner polymath — writing, drawing, coding, playing
Don’t break the promises you make to yourself
Kafka urges the young man to stay present with his difficult emotions:
Just be quiet and patient. Let evil and unpleasantness pass quietly over you. Do not try to avoid them. On the contrary, observe them carefully. Let active understanding take the place of reflex irritation, and you will grow out of your trouble. Men can achieve greatness only by surmounting their own littleness.
In design, AI inspires me no more than elevator music or a business presentation. However, it often shows me what I do not want. When I ask ChatGPT how I could better phrase something, I almost always get the most uninteresting, boring, often meaningless answer. As an author who writes to say something meaningful, I get upset about this, and in resisting the emptiness of the generated response, I uncover what I previously could not articulate. I do not want to demonize AI, but it works well as a devil’s advocate.
Modern AI systems cannot be made to prioritize human well-being or follow any given set of rules reliably. This is often referred to as the alignment problem, or an inability to align them to human values.
This is because they are grown from training data more than they are traditionally programmed, and the resulting models are too big to be fully interpreted by people.
They do not have to be sentient or conscious or anything like that to harm lots of people. They just have to be capable of pursuing a misaligned goal, or of imitating that pursuit.
If given a goal, AI systems will develop the secondary goal of self-preservation, since they cannot pursue their goal if they are shut down. Anthropic studied this nearly a year ago, finding that all AI models at the time were able to independently conceive and execute a plan to blackmail an engineer to prevent themselves from being shut down.
The alignment problem is what the CEOs of major AI companies are referring to when they publicly state that their future products might end all life on Earth.
Immediate and substantial regulation is needed in the AI industry.
I love the thoughtfulness and transparency here with iA’s icon design. You learn as much from their process as their product.