AI
I have been stuck. Every time I sit down to write a blog post, code a feature, or start a project, I come to the same realization: in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will. All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM.
I empathize with the author. But the post also reinforces a feeling I’ve had lately: one must live in order to write, to have something to say. If you go out into the world, changing things, changing yourself, then ideas come to you and you can channel them. But channeling and expression in digital essay writing shouldn’t be the whole of it; it should be just one piece of a larger puzzle.
If writing and thinking about writing is your life, then yes, AI can replace it. But you can become “unLLMable” by having a rich life that you want to live, out in the real world. Let AI accelerate the expression a bit, if you want. Or don’t. But protect, foster, and grow the most important part: human experience.
As I understand them, the founders of AI (Alan Turing, Herbert Simon, Marvin Minsky, and others) regarded it as a science, part of the then-emerging cognitive sciences, making use of new technologies and of discoveries in the mathematical theory of computation to advance understanding. Over the years those concerns have faded and have largely been displaced by an engineering orientation. The earlier concerns are now commonly dismissed, sometimes condescendingly, as GOFAI: good old-fashioned AI.
The best post on the ethics of AI I’ve read. Robin is a master of words, and has a perspective on AI that is sorely needed.
Replit CEO on AI breakthroughs: ‘We don’t care about professional coders anymore’ | Semafor
Reed Albergotti, semafor.com

Modern AI systems cannot be made to prioritize human well-being or to follow any given set of rules reliably. This is often referred to as the alignment problem: an inability to align them with human values.
This is because they are grown from training data more than they are traditionally programmed, and the models that result are too big to be fully interpreted by people.
They do not have to be sentient or conscious or anything like that to harm lots of people. They just have to be capable of pursuing a misaligned goal, or of imitating that pursuit.
If given a goal, AI systems will develop the secondary goal of self-preservation, since they cannot pursue their goal if they are shut down. Anthropic studied this nearly a year ago and found that all of the AI models available at the time were able to independently conceive and execute a plan to blackmail an engineer to prevent themselves from being shut down.
The alignment problem is what the CEOs of major AI companies are referring to when they publicly state that their future products might end all life on Earth.
Immediate and substantial regulation is needed in the AI industry.