Dave
@davekim
“visible, but not fully legible”
Matt Webb describes four areas of AI user experience: generative tools, live tools, copilots, and agents.
Looking at this landscape, I see a different UX challenge in each:
With generative tools, it’s about reliability and connecting to existing workflows. Live tools are about having the right high-level “brushes,” being able to explore latent space, and finding the balance between steering and helpful hallucination.
With copilots, it’s about integrating the AI into apps that already work, acknowledging the different phases of work. Also helping the user make use of all the functionality… which might mean clear names for things in menus, or it might mean ways for the AI to be proactive.
Agents are about interacting with long-running processes: directing them, having visibility over them, correcting them, and trusting them.
Chat has an affordances problem. As Simon Willison says, tools like ChatGPT reward power users.
The affordances problem is more general, of course. I liked Willison’s analogy here:
It’s like Excel: getting started with it is easy enough, but truly understanding its strengths and weaknesses and how to most effectively apply it takes years of accumulated experience.
Which is not necessarily the worst thing in the world! But just as there are startups that are essentially an Excel sheet with a good UI plus a bunch of integration and workflow, and that’s how the value gets unlocked, precisely because of the Excel affordances problem, we may see a proliferation of AI products that perform very similar functions, only in different contexts.