Saved by sari
Natural language is the lazy user interface
The idea: learn to prompt chatbots very well, and you get far better outputs.
Right now, intricate prompting is helpful for some tasks. But over time, we think it’s an overrated skill. Here’s why:
1. As AI models improve, they require less “engineered” prompts. DALL-E 3 is a great example of this (you get top-tier images with < 10-word prompts).
In this c…
Shortwave [Gmail alternative]
Nicolay Gerold added
How does text-based prompting supplant or augment GUIs?
Some of the language models’ ability to interpret natural language (ChatGPT) is unwieldy, so there has been a lot of chatter about how text-based prompts could completely replace graphical user interfaces. My sense is that this won’t happen overnight, as GUIs give way to higher fidelity or correctness…
Aashay Sanghvi • 4 questions on AI
sari added
- “Text interfaces are much easier for machines to use. And as more work is done autonomously by the workflows we teach machines to perform, text interfaces are starting to matter just as much as human ones.”
Linus Lee • Text as an interface | thesephist.com
Brian Sholis added
A very common trope is to treat LLMs as if they were intelligent agents going out in the world and doing things. That’s just a category mistake. A much better way of thinking about them is as a technology that allows humans to access information from many other humans and use that information to make decisions. We have been doing this for as long a…
Steven Johnson • Revenge of the Humanities
Brian Wiesner added
The problem in this domain is sycophancy.
When you're first learning something, you don't actually know what to look for. You have too many unknown-unknowns, you don't have any wisdom, you have no abstractive connections to make.
The reason LLMs are hazardous during this period is that they're always going to let you take the lead on everything. Th…
Nicolay Gerold added
First of all, I'd say you have a bigger problem where your company is trying to find nails with a hammer. That is where your sentiment comes from, and could be an obstacle for both you and the company. It's the same deal when I see people keep on talking about RAG, and nowadays "modular RAG", when really, you could treat everything as a software co…
r/MachineLearning - Reddit
Nicolay Gerold added
Google gives you Ten Blue Links, which communicates “it might be one of these - see what you think” (and turns us all into Mechanical Turks, giving Google feedback by picking the best answer). But a chat bot gives you three paragraphs of text with apparent certainty as The Answer, and footnotes, a click-through disclaimer and a ‘be careful!’ boiler…