Sublime
An inspiration engine for ideas
Rohan Bafna
@rbafna
What do DeepSeek R1 & v3 mean for LLM data?
Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit ton of human-generated data—in fact, the DeepSeek models are setting records for the disclosed amount of post-training data for open-source models:
- 600,000 reasoning data...
Robert Lang
@langr

I still think of this paper from @DanHendrycks often, and still appreciate the frame it uses. Some of the points it makes that I return to periodically: https://t.co/SFeg2Pknvi