10 Ways To Run LLMs Locally And Which One Works Best For You
Ollama
Get up and running with large language models locally.
macOS
Download
Windows
Coming soon!
Linux & WSL2
curl https://ollama.ai/install.sh | sh
Manual install instructions
Docker
The official Ollama Docker image ollama/ollama is available on Docker Hub.
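A minimal sketch of how the image might be run, assuming the image's default data path /root/.ollama and port 11434:
# start the server in the background, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# open an interactive Llama 2 chat inside the running container
docker exec -it ollama ollama run llama2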
Quickstart
To run and chat with Llama 2:
ollama run llama2
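Beyond the interactive CLI, the local server also exposes an HTTP API. A hedged sketch of a one-off request, assuming the default port 11434 and the /api/generate route:
# send a single prompt to the locally running server
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'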
Model library
Ollama supports a list of models…
from GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally by jmorganca
- We generally lean towards picking more advanced commercial LLMs to quickly validate our ideas and obtain early feedback from users. Although they may be expensive, the general idea is that if problems can't be adequately solved with state-of-the-art foundational models like GPT-4, then more often than not, those problems may not be addressable using…
from Developing Rapidly with Generative AI