How Far Are Large Language Models from Agents with Theory-of-Mind?
A study examines whether large language models can go beyond inferring mental states to actually using that understanding when making decisions and taking actions in social scenarios.
"Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models"
This paper investigates how Violation of Expectation (VoE) combined with metacognitive prompting can reduce Theory-of-Mind prediction errors in Large Language Models (LLMs) during human-AI interaction.
🧠🚨New memory paper🚨🧠
A new paper from @Plastic_Labs combines concepts from developmental psychology with LLMs to simulate more complex memory:
"Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models"
🧵
Metacognitive Prompting
The overall goal is to create AI systems that can deeply understand and align with individual human users. To achieve this, the researchers propose a learning technique, Violation of Expectation (VoE), inspired by cognitive-science theories of how human brains learn by comparing predictions against what actually happens.
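To make the mechanism concrete, here is a minimal sketch of one way a VoE loop with metacognitive prompting could look in code. The prompts, function names, and memory format are illustrative assumptions, not the paper's implementation; only the predict-compare-store pattern comes from the paper. The OpenAI chat API is used as one possible backend.

```python
# Minimal sketch of a Violation-of-Expectation (VoE) loop with
# metacognitive prompting. Prompts, function names, and memory format
# are assumptions for illustration, not the paper's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat(messages):
    """One chat-completion call; any chat LLM backend would work here."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

def predict_user_reply(history):
    # Metacognitive step: have the model reason about the user's
    # mental state and predict their next message.
    return chat(history + [{
        "role": "user",
        "content": ("Reflect on what this user believes, knows, and wants. "
                    "Then predict, verbatim, their next message."),
    }])

def extract_violation(prediction, actual):
    # Compare the prediction with what the user actually said; the gap
    # (the violated expectation) is mined for new facts about the user.
    return chat([{
        "role": "user",
        "content": (f"Predicted user message: {prediction}\n"
                    f"Actual user message: {actual}\n"
                    "What does the difference reveal about the user? "
                    "List new facts, or reply 'none' if the prediction held."),
    }])

user_facts = []  # long-term memory about this user
history = []     # running conversation

def on_user_message(message):
    # 1. Predict before reading the new message.
    prediction = predict_user_reply(history)
    # 2. Store only the surprising deltas, not the whole transcript.
    facts = extract_violation(prediction, message)
    if facts.strip().lower() != "none":
        user_facts.append(facts)
    history.append({"role": "user", "content": message})
    # 3. Condition the reply on the accumulated user model.
    reply = chat([{"role": "system",
                   "content": "Known facts about this user: "
                              + "; ".join(user_facts)}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Note the design choice in step 2: memory grows only when the model is surprised, so the stored user model stays compact instead of accumulating the entire transcript.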