What does DeepSeek R1 & v3 mean for LLM data?
Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit ton of human-generated data—in fact, the DeepSeek models are setting records for the disclosed amount of post-training data for open-source models:
- 600,000 reasoning samples...