
Will scaling reasoning models like o1, o3 and R1 unlock superhuman reasoning?
I asked Gwern + former OpenAI/DeepMind researchers.
Warning: long post.
As we scale up training and inference compute for reasoning models, will they show:
A) Strong…

something obviously true to me that nobody believes:
90% of frontier ai research is already on arxiv, x, or company blog posts.
q* is just STaR
search is just GoT/MCTS
continuous learning is clever graph retrieval
+1 oom efficiency gains in…


I see almost zero exploration of one of the AI use cases I'm most excited about: using AI to help humans coordinate.
The coordination costs of human groups notoriously scale quadratically: the number of potential pair-wise interactions is roughly the square of the number of people in a group.…
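As a quick illustration of that quadratic growth (a minimal sketch, not from the original post): the exact count of potential pair-wise links in a group of n people is n choose 2, which grows on the order of n².

```python
from math import comb

def pairwise_interactions(n: int) -> int:
    # Potential pair-wise links among n people: n choose 2 = n*(n-1)/2
    return comb(n, 2)

# Doubling the group roughly quadruples the coordination surface.
for n in (10, 20, 100, 200):
    print(n, pairwise_interactions(n))
```

So a 10-person team has 45 potential pairings, while a 200-person org has 19,900, which is why coordination, not headcount, is usually the binding constraint.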

Observing some people close to me with chronic health conditions, it's striking how useful Reddit frequently ends up being. I think a core reason is because trials aren’t run for a lot of things, and Reddit provides a kind of emergent intelligence that sits between that which any single physician can marshal and the full rigor of clinical…

My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it’s not about AIs discovering new results on their own, but rather how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were…