
google/maxtext (GitHub): a simple, performant, and scalable JAX LLM.

pair-preference-model-LLaMA3-8B by RLHFlow: A really strong reward model, trained to take in two responses at once, and currently the top open reward model on RewardBench (beating one of Cohere's).
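Since the entry highlights the two-responses-at-once format, here is a minimal sketch of how such a pairwise preference model can be queried with transformers. The prompt wording, the "A"/"B" label tokens, and the helper name prefer_a_probability are illustrative assumptions; the model card defines the exact template.

```python
# Hedged sketch of pairwise preference scoring with a model like
# RLHFlow/pair-preference-model-LLaMA3-8B. The prompt format below is
# illustrative; defer to the model card for the exact template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/pair-preference-model-LLaMA3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def prefer_a_probability(context: str, response_a: str, response_b: str) -> float:
    # Both candidate responses go into a single prompt, and the model is
    # asked which is better (assumed format, check the model card).
    user_msg = (
        f"[CONTEXT] {context}\n[RESPONSE A] {response_a}\n[RESPONSE B] {response_b}\n"
        "Which response is better? Answer with A or B."
    )
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_msg}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # next-token logits

    # Compare the logits assigned to the single-token labels "A" and "B".
    id_a = tokenizer.encode("A", add_special_tokens=False)[0]
    id_b = tokenizer.encode("B", add_special_tokens=False)[0]
    probs = torch.softmax(logits[[id_a, id_b]], dim=-1)
    return probs[0].item()  # probability that response A is preferred
```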
DeepSeek-V2 by deepseek-ai (21B active, 236B total param.): Another strong MoE base model from the DeepSeek team. Some people are questioning the very high MMLU scores…
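For context on "21B active, 236B total": all 236B parameters still have to be resident in memory, while the MoE router only computes with roughly 21B of them per token. A hedged loading sketch, with the repo id and dtype assumed from the usual Hugging Face conventions:

```python
# Hedged sketch: loading an MoE checkpoint like deepseek-ai/DeepSeek-V2.
# All 236B parameters are materialized in memory; compute per token tracks
# the ~21B parameters the router activates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # the architecture ships as custom code on the Hub
    device_map="auto",        # shard across available devices
)

total = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total / 1e9:.0f}B")  # ~236B regardless of the active count
```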
Shortwave [Gmail alternative]
Building a Knowledge base for custom LLMs using Langchain, Chroma, and GPT4All
cismography.medium.com
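The linked post covers the usual local retrieval setup; below is a minimal sketch of that pipeline, not the article's exact code. The file paths, chunk sizes, and the GPT4All model filename are placeholders.

```python
# Hedged sketch of a local knowledge base: LangChain + Chroma + GPT4All.
# Paths and the model filename are placeholders for whatever you use locally.
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.llms import GPT4All
from langchain_community.vectorstores import Chroma

# 1. Load and chunk the documents that form the knowledge base.
docs = TextLoader("notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and persist them in a local Chroma collection
#    (GPT4AllEmbeddings uses its default embedding model here).
vectordb = Chroma.from_documents(chunks, GPT4AllEmbeddings(), persist_directory="./kb")

# 3. Wire a local GPT4All model into a retrieval QA chain.
llm = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf")  # placeholder model file
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectordb.as_retriever(search_kwargs={"k": 4}))

print(qa.invoke({"query": "What does the knowledge base say about X?"})["result"])
```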