🎥 New talk: "How Might We Learn?"
A (proto-?)vision talk of sorts—a first attempt at a broader picture of the future of learning I want to create, particularly given developments in AI.
Thanks to @HaijunXia and @ProfHollan for hosting me! 🙇♂️
(YT link in thread)
navigating the latent space of colors 🌈✨
do LLMs “see” colors differently from humans?
we perceive colors through wavelengths while LLMs rely on semantic relationships between words. to explore this, i mapped two color spaces: https://t.co/JSHCFjYu61
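one way to sketch the contrast between the two spaces: ground colors in RGB coordinates (a stand-in for wavelength-based perception) and compare that with a word-embedding space. the vectors below are toy values i made up for illustration, not real LLM embeddings — in practice you'd pull them from an embedding model.

```python
import math

# Perceptual space: colors as RGB points (wavelength-like grounding).
rgb = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "blue":   (0, 0, 255),
}

def rgb_dist(a, b):
    # Euclidean distance in RGB space.
    return math.dist(rgb[a], rgb[b])

# Hypothetical "semantic" space: toy vectors standing in for LLM
# word embeddings (real ones would come from an embedding model).
sem = {
    "red":    [0.9, 0.1, 0.3],
    "orange": [0.8, 0.3, 0.2],
    "blue":   [0.1, 0.9, 0.4],
}

def cosine(a, b):
    # Cosine similarity between two semantic vectors.
    u, v = sem[a], sem[b]
    num = sum(x * y for x, y in zip(u, v))
    den = math.hypot(*u) * math.hypot(*v)
    return num / den

# In perceptual space, red sits much closer to orange than to blue;
# the question is whether a semantic space preserves that ordering.
print(rgb_dist("red", "orange") < rgb_dist("red", "blue"))  # → True
print(cosine("red", "orange") > cosine("red", "blue"))      # → True
```

with real embeddings the interesting cases are where the two orderings disagree — e.g. colors that are far apart in wavelength but culturally linked in text.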
Interesting experiment from Google that creates an NPR-like discussion about any academic paper.
It definitely suggests some cool possibilities for science communication. And the voices, pauses, and breaths really scream public radio. Listen to at least the first 30 seconds. https://t.co/r4ScqenF1d
DeepSeek-V2 is a big deal, and not only because of its significant improvements to both key components of the Transformer: the attention layer and the FFN layer.
It has also completely disrupted the Chinese LLM market, forcing competitors to drop their prices to 1% of the original.
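the attention-side change is Multi-head Latent Attention (MLA), which caches a single low-rank latent per token per layer instead of full per-head keys and values. a back-of-the-envelope sketch with toy dimensions (assumed for illustration, not DeepSeek-V2's real config) shows why that shrinks the KV cache:

```python
# Toy transformer dimensions (assumed, not DeepSeek-V2's actual config).
n_layers, n_heads, d_head, seq_len = 32, 32, 128, 4096
bytes_per_val = 2  # fp16

# Standard multi-head attention: cache K and V for every head, every layer.
standard_kv = 2 * n_layers * n_heads * d_head * seq_len * bytes_per_val

# MLA-style cache: one compressed latent vector per token per layer
# (d_latent is an assumed compression width for this sketch).
d_latent = 512
mla_kv = n_layers * d_latent * seq_len * bytes_per_val

print(standard_kv // mla_kv)  # → 16, i.e. a 16x smaller cache in this toy setup
```

smaller KV cache means longer contexts and bigger batches per GPU, which is a large part of how the aggressive pricing becomes economically possible.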
⬇️