Salman Ansari
@salmanscribbles
embracing my inner polymath — writing, drawing, coding, playing
Nathan Schneider—whose Governable Spaces I am always recommending—said something that’s been ringing in my ears ever since:
I think of tech as a wildfire—it burns really quickly. And we get a lot of wildfires out here, and there's the front of it, where the blaze is, and then once it's burnt over, that's when cool things start growing up. They grow much slower, and they find their way through … the burned trees, and new life happens. I kind of hope we're entering that phase of social media, that we're done with the fast burn. And maybe it had to happen.
What’s there after the fire passes over if not that goddamn mushroom at the end of a world?
Our mother, The Earth, the green planet has suffered from her children’s violent and ignorant ways of consuming. We have destroyed our Mother Earth like a type of bacterium or virus destroying the human body, because Mother Earth is also a body.
The best post on the ethics of AI I’ve read. Robin is a master of words, and has a perspective on AI that is sorely needed.
Spiritual awakening is frequently described as a journey to the top of a mountain. We leave our attachments and our worldliness behind and slowly make our way to the top. At the peak we have transcended all pain. The only problem with this metaphor is that we leave all the others behind—our drunken brother, our schizophrenic sister, our tormented …
Much has been made of next-token prediction, the hamster wheel at the heart of everything. (Has a simpler mechanism ever attracted richer investments?) But, to predict the next token, a model needs a probable word, a likely sentence, a virtual reason — a beam running out into the darkness. This ghostly superstructure, which informs every next-token prediction, is the model, the thing that grows on the trellis of code; I contend it is a map of potential reasons.
In this view, the emergence of super-capable new models is less about reasoning and more about “reasons-ing”: modeling the different things humans can want, along with the different ways they can pursue them … in writing.
Reasons-ing, not reasoning.