GitHub - turboderp/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs

turboderp (github.com)

GitHub - ghimiresunil/LLM-PowerHouse-A-Curated-Guide-for-Large-Language-Models-with-Custom-Training-and-Inferencing: LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.

GitHub - arthur-ai/bench: A tool for evaluating LLMs

GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally

Shortwave [Gmail alternative]

Things we learned about LLMs in 2024

Simon Willison (simonwillison.net)

GitHub - Mozilla-Ocho/llamafile: Distribute and run LLMs with a single file.