AI's $100bn question: The scaling ceiling
- Some tasks are easier to improve via system design. While LLMs appear to follow remarkable scaling laws that predictably yield better results with more compute, in many applications, scaling offers lower returns-vs-cost than building a compound system. For example, suppose that the current best LLM can solve coding contest problems 30% of the time…
from The Shift From Models to Compound AI Systems by Matei Zaharia, Omar Khattab, Lingjiao Chen, et al.
Nicolay Gerold added
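A rough illustration of the compound-system argument in that excerpt (my sketch, not from the article): if a single LLM sample solves a problem with probability p = 0.3, and we assume independent samples plus a reliable test harness to verify candidates, then drawing n samples and keeping any that pass yields roughly 1 − (1 − p)^n. The functions `generate_solution` and `solve_with_sampling` below are hypothetical stand-ins, not the authors' system:

```python
import random

# Sketch of a compound "best-of-n" system: sample the model several
# times and accept any candidate that passes the problem's tests.
# Assumes independent samples and a perfect verifier -- both
# simplifications for illustration.

P_SINGLE = 0.30   # assumed per-sample solve rate of the base LLM
N_SAMPLES = 10    # samples drawn per problem

def generate_solution() -> bool:
    """Stand-in for one LLM call; returns whether the candidate passes the tests."""
    return random.random() < P_SINGLE

def solve_with_sampling(n: int) -> bool:
    """Compound system: succeed if any of n sampled candidates verifies."""
    return any(generate_solution() for _ in range(n))

# Expected success under the independence assumption: 1 - (1 - p)^n
print(f"analytic:  {1 - (1 - P_SINGLE) ** N_SAMPLES:.2f}")  # ~0.97

trials = 10_000
hits = sum(solve_with_sampling(N_SAMPLES) for _ in range(trials))
print(f"simulated: {hits / trials:.2f}")
```

Under these assumptions, ten inference-time samples lift the solve rate from 30% to roughly 97%, at a cost of ~10× inference rather than the far larger training-compute increase that scaling alone would require for a comparable gain.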
- The chief challenge, as Lee sees it, is how to make AI smarter without just throwing more data and computing power at it. His hope rests on the iterative tweaking of algorithms that improve the performance of AI at a "geometric pace."
from Will AI Bring Plentitude or Further Imperil the Planet? | NOEMA by noemamag.com
Leo Guinan added
This is the goal of subminds.
- Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.
from Deep Learning Is Hitting a Wall by Gary Marcus
Prashanth Narayan added