Alex Grama
@agrama7

What does DeepSeek R1 & V3 mean for LLM data?
Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit ton of human-generated data—in fact, the DeepSeek models are setting records for the disclosed amount of post-training data for open-source models:
- 600,000 reasoning data...