AI

Much has been made of next-token prediction, the hamster wheel at the heart of everything. (Has a simpler mechanism ever attracted richer investments?) But, to predict the next token, a model needs a probable word, a likely sentence, a virtual reason — a beam running out into the darkness. This ghostly superstructure, which informs every next-token prediction, is the model, the thing that grows on the trellis of code; I contend it is a map of potential reasons.
In this view, the emergence of super-capable new models is less about reasoning and more about “reasons-ing”: modeling the different things humans can want, along with the different ways they can pursue them … in writing.
Reasons-ing, not reasoning.

The best post on the ethics of AI I’ve read. Robin is a master of words and has a perspective on AI that is sorely needed.
We Did the Math on AI’s Energy Footprint. Here’s the Story You Haven’t Heard.
technologyreview.com
But here’s the problem: These estimates don’t capture the near future of how we’ll use AI. In that future, we won’t simply ping AI models with a question or two throughout the day, or have them generate a photo. Instead, leading labs are racing us toward a world where AI “agents” perform tasks for us without our supervising their every move.
Noam Chomsky Speaks on What ChatGPT Is Really Good For
chomsky.info
As I understand them, the founders of AI — Alan Turing, Herbert Simon, Marvin Minsky, and others — regarded it as science, part of the then-emerging cognitive sciences, making use of new technologies and discoveries in the mathematical theory of computation to advance understanding. Over the years those concerns have faded and have largely been displaced by an engineering orientation. The earlier concerns are now commonly dismissed, sometimes condescendingly, as GOFAI: good old-fashioned AI.

I have been stuck. Every time I sit down to write a blog post, code a feature, or start a project, I come to the same realization: in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will. All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM.
I empathize with the author. But it also reinforces a feeling I’ve had lately: one must live in order to write, to have something to say. If you go out into the world, changing things and changing yourself, then ideas come to you and you can channel them. But that channeling and expression in digital essay writing shouldn’t be the bulk of your life; it should be just one piece of a bigger puzzle.
If writing, and thinking about writing, is your life, then yes, AI can replace it. But you can become “unLLMable” by having a rich life that you want to live. Out in the real world. Let AI accelerate the expression a bit, if you want. Or don’t. But protect, foster, and grow the most important part: human experience.