Sublime
An inspiration engine for ideas
What do DeepSeek R1 & V3 mean for LLM data?
Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit ton of human-generated data. In fact, the DeepSeek models are setting records for the disclosed amount of post-training data among open-source models:
- 600,000 reasoning data...
