AI - Ongoing Model Improvements
One way of thinking about this is Daniel Kahneman’s simple model of thinking: System 1 and System 2. System 1 thinking is fast and intuitive. Current AI models’ pattern recognition and next-token prediction are good examples of this. System 2 thinking is slow and analytical, akin to genuine reasoning and understanding. It is System 2 thinking where...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
AI isn’t ready to be an entrepreneur, but its ideation-prototype-interview cycle does in a couple of seconds what takes my students months to do. AI isn’t ready to build educational games without errors, but it is able to instantly make an interactive simulation that explains a difficult concept, even if some nuance is missing.
Ethan Mollick • Confronting Impossible Futures
Even though the underlying model is no different than the usual GPT-4o, the addition of voice has a lot of implications. A voice-powered tutor works very differently than one that communicates via typing, for example. It can also speak many other languages, providing new approaches to cross-cultural communication. And I have no doubt people will...
Ethan Mollick • On speaking to AI
Voice will take it to a new level and might make its use much more widespread.
So, alternative pathways to building Type-2 reasoning-capable AI systems, likely using neurosymbolic approaches, have become much more attractive. People like Gary Marcus have argued for neurosymbolic approaches for decades. Such approaches combine the pattern recognition of neural nets, like LLMs, with symbolic reasoning’s logic and rules. Vinod...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
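To make the neurosymbolic idea concrete, here is a toy sketch (not from Azhar's article; the function names, relations, and rules are made up for illustration). A stand-in "neural" step turns raw text into structured facts, the way an LLM's pattern recognition might, and a symbolic step applies explicit if-then rules to derive new conclusions:

```python
# Illustrative sketch of a neurosymbolic split: a "neural" extraction step
# proposes facts, a symbolic rule engine derives new ones. Everything here
# (regex extractor, relation names, rules) is a hypothetical placeholder.

import re

def neural_extract(text: str) -> set:
    """Stand-in for an LLM / pattern-recognition step:
    turn raw text into (subject, relation, object) facts."""
    facts = set()
    for subj, obj in re.findall(r"(\w+) is a (\w+)", text):
        facts.add((subj, "is_a", obj))
    return facts

def symbolic_infer(facts: set, rules: list) -> set:
    """Symbolic step: apply if-then rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for (subj, rel, obj) in list(derived):
                if rel == "is_a" and obj == premise:
                    new_fact = (subj, "is_a", conclusion)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

if __name__ == "__main__":
    text = "Socrates is a human. Fido is a dog."
    rules = [("human", "mortal"), ("dog", "animal")]  # if X is_a human, then X is_a mortal, etc.
    print(symbolic_infer(neural_extract(text), rules))
```

In a real system the extraction step would be an actual LLM call and the rule base far richer; the point is only the division of labor between pattern recognition and explicit logic.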
The incentive to find breakthrough science that provides a performance pathway other than scaling is increasing. A GPT-6 class model will cost $10 billion and three years to improve on GPT-5 by some uncertain degree. That’s a ton of time and a lot of cash for an uncertain payout: in other words, a substantial prize for anyone who can figure out...
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
Mickey Schafer
Jul 4
This past semester, a student of mine despaired over analyzing a 20+ question survey with 92 responses. She uploaded the spreadsheet to NotebookLM, a tool we'd used in class, which not only cheerfully assured her it would do the task, but also returned basic R values with short statements about strong relationships. She was...
Ethan Mollick • Gradually, then Suddenly: Upon the Threshold
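For readers curious what that kind of analysis looks like under the hood, here is a small pandas sketch: pairwise Pearson correlations (r values) across survey questions, with strong relationships flagged. The column names, the generated data, and the 0.5 cutoff are illustrative assumptions, not the student's actual survey:

```python
# Hypothetical sketch of the analysis described in the comment: pairwise
# Pearson r values across numeric survey questions, with a simple flag for
# "strong" relationships. Data and threshold are illustrative only.

import pandas as pd
import numpy as np

# Stand-in for the uploaded spreadsheet: 92 respondents, Likert-style answers (1-5).
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.integers(1, 6, size=(92, 4)),
    columns=["q1_satisfaction", "q2_ease_of_use", "q3_would_recommend", "q4_price_fairness"],
)

corr = df.corr(method="pearson")  # r value for every pair of questions

# Report pairs whose |r| exceeds a conventional (and arguable) 0.5 cutoff.
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        r = corr.loc[a, b]
        if abs(r) > 0.5:
            print(f"Strong relationship: {a} vs {b} (r = {r:.2f})")
```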
GPT-4 and 4-Turbo have always been available for free in Microsoft Copilot. 4o is now free in ChatGPT. Claude 3.5 Sonnet is now free in Claude. So many people who have never subscribed to a premium plan have still been using the best models all along.
Ethan Mollick • Gradually, then Suddenly: Upon the Threshold
It doesn’t help that the two most impressive implementations of AI for real work - Claude’s artifacts and ChatGPT’s Code Interpreter - are often hidden and opaque.
Ethan Mollick • Confronting Impossible Futures
“He (Claude Shannon) had this great intuition that information is maximized when you’re most surprised about learning about something.” ~ Tara Javidi
“Whenever we are surprised by something, even if we admit that we made a mistake, we say, ‘Oh I’ll never make that mistake again.’ But, in fact, what you should learn when you make a mistake because you...
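Shannon's intuition about surprise has a standard formal statement (added here for reference, not part of the quoted remarks): the information content, or surprisal, of an outcome grows as its probability shrinks, and the expected surprisal (entropy) is largest when every outcome is equally hard to predict.

```latex
% Surprisal: rare (surprising) outcomes carry more information.
I(x) = -\log_2 p(x)

% Expected surprisal is Shannon entropy; over n outcomes it is maximized
% by the uniform distribution, i.e. when you are maximally uncertain.
H(X) = -\sum_{x} p(x)\,\log_2 p(x) \le \log_2 n
```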