The Impact of AI
It’s the Real Thing! | Leo Kim
thebaffler.com

In general, LLM-media will likely fill in the holes left by media that is 1) expensive to produce, 2) time-sensitive to produce, and 3) too niche to serve a certain market. In other words, LLM-media will come to serve a long tail of content. Expansive fan-fiction universes.
https://sceneswithsimon.com/p/the-hu
I use my iPhone nearly every hour that I am conscious, every day, but the new announcements left me craving a dumbphone, since those machines, at least, cannot attempt to think for me.
— Apple is bringing AI to your personal life, like it or not
I feel like Eno is exploring this question of what creativity is and what that process looks like for different people. The way OpenAI talks about what a large language model generates is, by definition, incurious about creativity. It’s like, what if we just spit stuff out and you have no idea where it came from?
The making of Eno, the first generative feature film
AI image generation is essentially a truncated exercise in taste; a product of knowing which inputs and keywords to feed the image-mashup machine, and the eye to identify which outputs contain any semblance of artistry. All that is to say: AI itself can’t generate good taste for you.
Elizabeth Goodspeed on the Importance of Taste – And How to Acquire It
The main point of concern here is not that an LLM such as GPT-4 is reductionist in its representation of reality per se; it is that various AI models, each reductionist in their own way, are converging. An echo chamber of echo chambers of echo chambers.
🏡 I Don’t Resonate With You
There’s a difference between an AI agent and an AI copilot. Language matters. The latter implies human augmentation rather than human obfuscation. I imagine copilots and augmentation will be more palatable to people. (This will be especially true as we adjust to the reality of software doing work for us.)
Rex Woodbury • The "Egg Theory" of AI Agents
Something that’s been on my mind is flipping the relationship between the human and the language model during a creative process. We often want to ask questions of language models, and we expect them to brainstorm ideas or give us answers, but I wonder if another fruitful pattern is having models ask questions of us.