GitHub - turboderp/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs

turboderp (github.com)
勃勃OC (x.com)
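
For context, a minimal sketch of what local inference with exllamav2 looks like, following the basic-generator pattern from the repo's examples. The model directory is a placeholder, and class or method names may differ between library versions:

```python
# Sketch of simple generation with exllamav2 (after the repo's
# basic-generator example; API details may vary by version).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/exl2-model"  # placeholder: an EXL2-quantized model
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Local inference is", settings, 128))
```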

GitHub - google/maxtext: A simple, performant and scalable Jax LLM!

GitHub - unslothai/unsloth: 5X faster, 50% less memory LLM finetuning
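
To illustrate where unsloth's speed and memory savings come from, here is a minimal sketch of its loading-and-LoRA pattern. The checkpoint name and hyperparameters are illustrative assumptions; see the repo for currently supported models and options:

```python
# Sketch: load a 4-bit model with unsloth and attach LoRA adapters,
# so only a small set of weights is trained during finetuning.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # assumed example checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading is the main source of memory savings
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                                  # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```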

Edition 22: A Framework to Securely Use LLMs in Companies - Part 2: Managing Risk

Sandesh Mysore Anand (boringappsec.substack.com)

How to install an LLM on macOS: trying Ollama (ZDNET Japan)
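
The article covers installing Ollama on macOS; once the server is running, it can also be scripted. A minimal sketch using only the Python standard library, assuming Ollama is serving on its default port (11434) and that a model has been pulled (the "llama3" name here is an assumption):

```python
# Sketch: send a non-streaming generation request to a local Ollama server.
import json
import urllib.request

def generate(prompt: str, model: str = "llama3") -> str:
    """Call Ollama's /api/generate endpoint and return the response text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why run an LLM locally on macOS?"))
```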