Liang Wenfeng interview: DeepSeek LLM

Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure …
LLMs as Index Funds
The incentive to find breakthrough science that provides a performance pathway other than scaling is increasing. A GPT-6 class model will cost $10 billion and three years to improve on GPT-5 by some uncertain degree. That’s a ton of time and a lot of cash for an uncertain payout: in other words, a substantial prize for anyone who can figure out pro…
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
I’m often asked what problem I’d solve if I were to start another company. I probably won’t do a startup any time soon (because startups are hard), but here are some of the problems I find interesting. If you’re solving any of them, I’d love to chat.
1. Data synthesis: AI has become really good both at generating and annotating data. The challenge n…