Saved by Isabelle Levent
Do Prompt-Based Models Really Understand the Meaning of their Prompts?
The community remains puzzled about whether these models genuinely generalize to unseen tasks or merely seem to succeed by memorizing the training data. This paper makes important strides in addressing this question: it constructs a suite of carefully designed counterfactual evaluations, providing fresh insights into the capabilities of state-of-the-art…
Zhaofeng Wu • Reasoning skills of large language models are often overestimated
How does text-based prompting supplant or augment GUIs?
Some of the language models’ ability to infer from natural language (e.g., ChatGPT) is unwieldy, so there has been a lot of buzz about whether text-based prompts could completely replace graphical user interfaces. My sense is that this won’t happen overnight, as GUIs give way only where text offers higher fidelity or correctness…
Aashay Sanghvi • 4 questions on AI
sari added
The idea is: learn to prompt chatbots very well = get way better outputs.
Right now, intricate prompting is helpful for some tasks. But over time, we think it’s an overrated skill. Here’s why:
1. As AI models improve, they require less “engineered” prompts. DALL-E 3 is a great example of this (you get top-tier images with < 10-word prompts).
In this c…
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Nicolay Gerold added
"Role prompting"... telling the model to assume a role has never been a good way to elicit capabilities/style/etc.
For instance, if you ask one of the Claude models to simulate Bing Sydney, assuming you can get it to consent, the simulation will probably be very inaccurate. But if you use a prompt that tricks them into predicting it indirectly (https://t.co/wJEAlPgfz6)…
Nathan Storey added
Prompts can include examples of similar problems and their solutions. Zero-shot prompting involves no examples, while few-shot prompting includes a small number of examples of relevant problem and solution pairs.
Ben Auffarth • Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
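The zero-shot versus few-shot distinction above can be sketched as plain prompt construction. This is a minimal illustration; the sentiment task, example pairs, and template are assumptions for demonstration, not taken from the book excerpt.

```python
# Sketch: zero-shot vs. few-shot prompt construction.
# Task, examples, and formatting are illustrative assumptions.

def zero_shot_prompt(problem: str) -> str:
    """Zero-shot: the task description alone, with no worked examples."""
    return (
        "Classify the sentiment as positive or negative.\n"
        f"Text: {problem}\nSentiment:"
    )

def few_shot_prompt(problem: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend a small number of problem/solution pairs."""
    shots = "\n".join(f"Text: {t}\nSentiment: {s}" for t, s in examples)
    return (
        "Classify the sentiment as positive or negative.\n"
        f"{shots}\nText: {problem}\nSentiment:"
    )

examples = [("I loved it", "positive"), ("Waste of money", "negative")]
print(few_shot_prompt("Pretty good overall", examples))
```

Either string would then be sent to the model as-is; the only difference is whether relevant problem/solution pairs precede the new input.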
LLMs struggle when handling tasks that require extensive knowledge. This limitation highlights the need to supplement LLMs with non-parametric knowledge. The paper Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts analyzes the effects of different types of non-parametric knowledge, such as textual…
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Nicolay Gerold added
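One common way to supply non-parametric knowledge, as in the paper above, is to serialize knowledge-graph triples into the prompt. A minimal sketch, assuming a simple (subject, relation, object) format; the triples and template here are illustrative, not the paper's actual setup:

```python
# Sketch: injecting knowledge-graph triples into a QA prompt.
# The triples, entity names, and template are illustrative assumptions.

def kg_augmented_prompt(question: str,
                        triples: list[tuple[str, str, str]]) -> str:
    """Serialize (subject, relation, object) triples as context lines."""
    facts = "\n".join(f"- {s} {r} {o}." for s, r, o in triples)
    return f"Known facts:\n{facts}\nQuestion: {question}\nAnswer:"

triples = [("Aldabra giant tortoise", "is endemic to", "the Aldabra Atoll")]
print(kg_augmented_prompt("Where is the Aldabra giant tortoise found?", triples))
```

The point is that long-tail facts the model may not have memorized are placed directly in context, so answering reduces to reading rather than recall.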
A new paper says LLMs, on the other hand, seem to treat these as two distinct things?
This is the Reversal Curse. “A is B” and “B is A” are treated as distinct facts.
Owain Evans: Does a language model trained on “A is B” generalize to “B is A”? E.g., when trained only on “George Washington was the first US president”, can models automatically answer “…
Zvi Mowshowitz • AI #31: It Can Do What Now?
Nicolay Gerold added