Salman Ansari
@salmanscribbles
embracing my inner polymath — writing, drawing, coding, playing
Kafka urges the young man to stay present with his difficult emotions:
Just be quiet and patient. Let evil and unpleasantness pass quietly over you. Do not try to avoid them. On the contrary, observe them carefully. Let active understanding take the place of reflex irritation, and you will grow out of your trouble. Men can achieve greatness only by surmounting their own littleness.
“I almost never wept for him, I just stopped looking at the sky the way I used to.” —Kamel Daoud, The Meursault Investigation
He said that it’s a very good idea that after you write a little bit, stop and then copy it. Because while you’re copying it, you’re thinking about it, and it’s giving you other ideas. And that’s the way I work. And it’s marvelous, just wonderful, the relationship between working and copying.
Much has been made of next-token prediction, the hamster wheel at the heart of everything. (Has a simpler mechanism ever attracted richer investments?) But, to predict the next token, a model needs a probable word, a likely sentence, a virtual reason — a beam running out into the darkness. This ghostly superstructure, which informs every next-token prediction, is the model, the thing that grows on the trellis of code; I contend it is a map of potential reasons.
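The next-token mechanism can be sketched with a toy bigram model — a hypothetical, deliberately simplified example: real models learn a neural function over vast corpora, not a count table, but the core move is the same, a probability distribution over what comes next.

```python
# Toy next-token prediction (hypothetical sketch, not any real model):
# a bigram table mapping each word to counts of its followers,
# normalized into probabilities for sampling the next token.
from collections import defaultdict
import random

corpus = "the model predicts the next token and the next token follows".split()

# Count which word follows which, across the corpus.
followers = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_token_distribution(prev):
    """Return {token: probability} for what follows `prev`."""
    counts = followers[prev]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def predict_next(prev, rng=random):
    """Sample one next token from the distribution."""
    dist = next_token_distribution(prev)
    tokens, probs = zip(*dist.items())
    return rng.choices(tokens, weights=probs, k=1)[0]

print(next_token_distribution("the"))
# In this tiny corpus, "the" is followed by "next" twice and "model" once.
```

The count table is the trivial case of the “ghostly superstructure” above: even here, prediction only works because something stands behind it, holding a model of what tends to follow what.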
In this view, the emergence of super-capable new models is less about reasoning and more about “reasons-ing”: modeling the different things humans can want, along with the different ways they can pursue them … in writing.
Reasons-ing, not reasoning.