🧠 AI’s $100bn question: The scaling ceiling
My take from GPT-4.5 is that humanity has designed an AGI architecture - it is just prohibitively expensive. This model is not great, because training a $1 billion transformer only gives us a 12.5% improvement over a $100 million one, in a paradigm where, apparently, utility scales logarithmically with training...
Taelin · x.com
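One way to make the 12.5% figure concrete: if utility grows roughly with the log of training spend, each tenfold jump in cost adds a fixed absolute increment, so its relative value shrinks as the baseline grows. A minimal sketch, assuming utility is proportional to log10 of dollars spent against a $1 reference point (my assumption, chosen because it reproduces the 12.5%, not something stated in the post):

```python
import math

def utility(compute_usd: float, reference_usd: float = 1.0) -> float:
    """Toy log-utility: log10(compute / reference).

    The $1 reference is an illustrative assumption; it is the choice
    that makes the $100M -> $1B step come out to 12.5%.
    """
    return math.log10(compute_usd / reference_usd)

u_100m = utility(100e6)          # 8.0 utility "units"
u_1b = utility(1e9)              # 9.0 utility "units"
gain = (u_1b - u_100m) / u_100m  # one extra unit on top of eight
print(f"{gain:.1%}")             # 12.5%
```

Under a log law the absolute step from $100M to $1B is the same as every other tenfold step; it only looks small because the baseline is already eight steps up.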
"...what scaling laws say is if you wanna double the performance of an algorithm, you need to train it on ten more compute and data. So the reason this turbo charges... See more
Mostly Borrowed Ideas · x.com
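Taking the quoted rule of thumb literally (2x performance per 10x compute and data), the implied compute multiplier for a target performance multiple is 10^log2(target). A quick sketch under the assumption that the rule holds uniformly across scales, which the discussion does not claim:

```python
import math

def compute_multiplier(perf_multiple: float) -> float:
    """Compute/data multiplier implied by the quoted rule of thumb:
    every 10x in compute and data buys a 2x in performance.
    Assumes the rule holds uniformly at all scales (my assumption)."""
    return 10 ** math.log2(perf_multiple)

for target in (2, 4, 8):
    print(f"{target}x performance -> ~{compute_multiplier(target):,.0f}x compute and data")
# 2x -> ~10x, 4x -> ~100x, 8x -> ~1,000x
```

Same curve as the item above, read the other way around: each further 2x in capability costs another order of magnitude in compute.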