
Elon Musk agrees that we've exhausted AI training data • TechCrunch

With smaller amounts of data, deep learning often performs poorly.
Gary Marcus & Ernest Davis • Rebooting AI: Building Artificial Intelligence We Can Trust
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in "Data Doesn't Do What You Think," and they are rapidly becoming the standard way to do training outside Google.
semianalysis.com • Google "We Have No Moat, and Neither Does OpenAI"
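
For context, a reader's aside rather than part of the memo: "data scaling laws" usually refers to power-law fits of model loss against parameter count and dataset size. A common example is the Chinchilla-style fit below, where $N$ is the parameter count, $D$ the number of training tokens, and $E$, $A$, $B$, $\alpha$, $\beta$ are fitted constants (the symbols are illustrative, not drawn from the memo):

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

The "flexibility" claimed above is that carefully curated data can yield lower loss than the raw token count $D$ in such a fit would predict.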
Now the talk is of “brain-scale” models with many trillions of parameters.