Sublime
An inspiration engine for ideas
Tony Cheng
@tonyc100
Mengyao Han
@mengyao
Edmond Lau
@nstlgiaxpress
Tobe Phillips
@tmp2130
In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup.
mit-han-lab • GitHub - mit-han-lab/streaming-llm: Efficient Streaming Language Models with Attention Sinks
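For context on where that speedup comes from: StreamingLLM keeps a handful of initial "attention sink" tokens plus a rolling window of recent tokens in the KV cache, rather than recomputing attention over the full window for every new token. Below is a minimal Python sketch of that cache policy under those assumptions; the names (SinkKVCache, num_sinks, window) are illustrative and not the mit-han-lab API.

```python
from collections import deque

class SinkKVCache:
    """Sketch of an attention-sink cache policy: keep the first few tokens
    permanently plus a bounded window of recent tokens, never recompute."""

    def __init__(self, num_sinks=4, window=1020):
        self.num_sinks = num_sinks
        self.sink_kv = []                      # permanent entries for the initial "sink" tokens
        self.recent_kv = deque(maxlen=window)  # rolling window; oldest entry is evicted automatically

    def append(self, kv_entry):
        # The first `num_sinks` tokens become permanent sinks;
        # every later token just rolls through the bounded window.
        if len(self.sink_kv) < self.num_sinks:
            self.sink_kv.append(kv_entry)
        else:
            self.recent_kv.append(kv_entry)

    def context(self):
        # Attention always runs over sinks + recent window, so per-token
        # cost stays constant no matter how long the stream gets.
        return self.sink_kv + list(self.recent_kv)
```

Keeping the cache bounded this way is what avoids the sliding-window recomputation the README benchmarks against.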
谢鸿基
@mr.xhj
serving an unmet need
Kai Elmer Sotto • Get Together: How to build a community with your people
Professional
Alex Magee • 3 cards
Shuya Gong
@shuyagong-eec5