Sublime
An inspiration engine for ideas
Chain of Thought Reasoning without Prompting
https://t.co/75h2QQzT9M (NeurIPS 2024)
Chain of thought (CoT) reasoning ≠ CoT prompting. While the term "chain of thought" was popularized by prompting, it now primarily refers to the generation of step-by-step reasoning – the original meaning of the phrase "chain of thought." CoT prompting is simply one way to elicit reasoning. However, the most powerful approach is to train models to reason intrinsically across various tasks, rather than relying on task-specific prompts.
The pioneering work in training models to reason in natural language was done by DeepMind in 2017 [1]. As their paper puts it, the model should "... derive the final answer through a series of small steps ..." when solving math word problems. In 2021 [2], a team at OpenAI built upon this work by creating GSM8K, a large dataset of math word problems and their corresponding natural language solutions, and using it to fine-tune GPT-3.
Our latest research (actually done nearly 1 year ago), "Chain of Thought Reasoning without Prompting," is poised to inspire significant advancements in training LLMs to reason more effectively. Our paper showed impressive performance of our proposed "CoT decoding" method, even with pre-trained LLMs. However, the key takeaway from our work is that pre-trained LLMs already possess an inherent capacity for reasoning. To unlock their full potential, we simply need to bootstrap this ability through carefully designed fine-tuning processes.
[1] https://t.co/lt5QHHqAk5
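A minimal sketch of the CoT-decoding idea described above, under one reading of the paper: branch on the top-k candidates for the first generated token, greedily decode each branch, and prefer branches whose tokens are predicted with a large top-1 vs. top-2 probability margin. The model name ("gpt2"), k, and scoring the margin over all generated tokens rather than only the answer span are illustrative assumptions, not the paper's exact recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def cot_decode(question: str, k: int = 5, max_new_tokens: int = 64):
    enc = tok(question, return_tensors="pt")
    with torch.no_grad():
        next_logits = model(**enc).logits[0, -1]          # logits for the first new token
    first_candidates = torch.topk(next_logits, k).indices  # top-k alternative first tokens

    scored_paths = []
    for tok_id in first_candidates:
        ids = torch.cat([enc["input_ids"], tok_id.view(1, 1)], dim=-1)
        with torch.no_grad():
            out = model.generate(
                ids,
                max_new_tokens=max_new_tokens,
                do_sample=False,                           # greedy continuation of each branch
                output_scores=True,
                return_dict_in_generate=True,
                pad_token_id=tok.eos_token_id,
            )
        # confidence proxy: mean (p_top1 - p_top2) over the generated tokens
        margins = []
        for step_logits in out.scores:
            probs = torch.softmax(step_logits[0], dim=-1)
            top2 = torch.topk(probs, 2).values
            margins.append((top2[0] - top2[1]).item())
        text = tok.decode(out.sequences[0], skip_special_tokens=True)
        scored_paths.append((sum(margins) / len(margins), text))

    return max(scored_paths)  # (confidence, decoded path) with the highest margin

print(cot_decode("Q: I have 3 apples and my dad has 2 more apples than me. How many apples do we have together?\nA:"))
```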
Denny Zhou • x.com
Still use ⛓️Chain-of-Thought (CoT) for all your prompting? May be underutilizing LLM capabilities🤠
Introducing 🌲Tree-of-Thought (ToT), a framework to unleash complex & general problem solving with LLMs, through a deliberate ‘System 2’ tree search.
https://t.co/V6hjbUNjbt
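A minimal sketch of the deliberate tree search ToT describes, assuming a breadth-limited BFS over partial "thought" paths; propose_thoughts and score_thought are hypothetical stand-ins for the LLM calls the framework would make, not APIs from the paper's code.

```python
from typing import Callable, List, Tuple

def tree_of_thoughts(
    problem: str,
    propose_thoughts: Callable[[str, List[str]], List[str]],  # (problem, partial path) -> candidate next thoughts
    score_thought: Callable[[str, List[str]], float],         # (problem, partial path) -> value estimate
    depth: int = 3,
    breadth: int = 5,
) -> List[str]:
    frontier: List[Tuple[float, List[str]]] = [(0.0, [])]     # (score, path of thoughts so far)
    for _ in range(depth):
        candidates = []
        for _, path in frontier:
            for thought in propose_thoughts(problem, path):
                new_path = path + [thought]
                candidates.append((score_thought(problem, new_path), new_path))
        # prune: keep only the `breadth` highest-valued partial solutions
        frontier = sorted(candidates, key=lambda c: c[0], reverse=True)[:breadth]
    return frontier[0][1] if frontier else []
```

In the paper, both the thought generator and the state evaluator are themselves LLM prompts, and the search strategy (BFS vs. DFS, beam width) is chosen per task.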

Iteration of Thought
Proposes the Iteration of Thought (IoT) framework to enhance LLM responses and reasoning capabilities via adaptive reasoning paths.
It leverages an inner dialogue agent, acting as a guide, to dynamically adjust reasoning paths, which allows adaptive cross-path exploration.
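A minimal sketch of that loop, under one reading of the IoT setup: an inner dialogue agent inspects the current answer and produces refined guidance, the main LLM answers again, and the loop stops when the inner agent is satisfied or a budget runs out. llm_answer and inner_dialogue are hypothetical wrappers around LLM calls, not functions from the paper.

```python
from typing import Callable, Optional

def iteration_of_thought(
    query: str,
    llm_answer: Callable[[str, str], str],                # (query, guidance) -> answer
    inner_dialogue: Callable[[str, str], Optional[str]],  # (query, answer) -> refined guidance, or None if satisfied
    max_iterations: int = 5,
) -> str:
    answer = llm_answer(query, "")                        # first pass with no extra guidance
    for _ in range(max_iterations):
        guidance = inner_dialogue(query, answer)          # inner agent adjusts the reasoning path
        if guidance is None:                              # inner agent is satisfied -> stop
            return answer
        answer = llm_answer(query, guidance)              # main LLM refines its answer
    return answer
```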

Why is Chain-of-Thought Prompting so powerful in Large Language Models?
Our new work theoretically reveals this mystery from a computer-science perspective, showing that CoT is crucial for math/reasoning tasks and can even let models perform dynamic programming!
https://t.co/DAUnXj692M
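A toy illustration of that dynamic-programming claim: a DP problem (here, longest increasing subsequence, an illustrative choice not taken from the paper) is solved by emitting each intermediate DP state as one reasoning step – exactly the kind of step-by-step trace a chain of thought externalizes.

```python
def lis_with_cot_trace(nums):
    dp = []      # dp[i] = length of the best increasing subsequence ending at nums[i]
    steps = []   # the emitted "chain of thought"
    for i, x in enumerate(nums):
        best = 1 + max([dp[j] for j in range(i) if nums[j] < x], default=0)
        dp.append(best)
        steps.append(f"Step {i + 1}: best increasing subsequence ending at {x} has length {best}.")
    steps.append(f"Answer: {max(dp)}")
    return "\n".join(steps)

print(lis_with_cot_trace([3, 1, 4, 1, 5, 9, 2, 6]))
```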
Right now, GPT-4 can do a few hundred tokens of chain-of-thought prompting.
Dwarkesh Patel • The Scaling Era: An Oral History of AI, 2019–2025

Some takeaways from the Tree of Thoughts (ToT) paper:
- Introduces ToT, which looks like the next iteration of CoT (Chain of Thought), working much like a tree search algorithm.
- If you take away backtracking & pruning, the success rate on the game-solving tasks drops significantly.
AI Agent Platform / Coordination Layers (with no token yet) ↓
Coordination Layers — Agentic-focused Infra
@TheoriqAI — Modular and Composable AI Agent Base Layer
@TalusNetwork — Next-Gen L1 for AI Agents
@sentient_agi — AI innovations toward a community-built open AGI
0xJeff • x.com