
Will scaling reasoning models like o1, o3 and R1 unlock superhuman reasoning?
I asked Gwern + former OpenAI/DeepMind researchers.
Warning: long post.
As we scale up training and inference compute for reasoning models, will they show:
A) Strong...

something obviously true to me that nobody believes:
90% of frontier ai research is already on arxiv, x, or company blog posts.
q* is just STaR
search is just GoT/MCTS
continuous learning is clever graph retrieval
+1 oom efficiency gains in...


I see almost zero exploration of one of the AI use cases I'm most excited about: using AI to help humans coordinate.
The coordination costs of human groups notoriously scale quadratically: the number of potential pair-wise interactions is roughly the square of the number of people in a group....
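The quadratic claim is just the pairwise count: a group of n people has n(n-1)/2 distinct pairs, which grows roughly as n²/2. A minimal sketch of how fast that blows up:

```python
def potential_pairs(n: int) -> int:
    """Number of distinct pair-wise interactions in a group of n people."""
    return n * (n - 1) // 2

# Doubling the group roughly quadruples the coordination surface.
for n in (10, 100, 1000):
    print(n, potential_pairs(n))  # 10 -> 45, 100 -> 4950, 1000 -> 499500
```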

My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it’s not about AIs discovering new results on their own, but rather how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were...

Most do not know that computers were invented initially to show that mathematics is inconsistent and incomplete. It was a profoundly blackpilling moment for lots of mathematicians.
The story starts with David Hilbert, a German mathematician, who was insanely jazzed about the future of mathematics. In 1900, at the...