r/LocalLLaMA - Reddit
General-purpose models
1.1B: TinyDolphin 2.8 1.1B. Takes about 700MB RAM; tested on my Pi 4 with 2GB of RAM. Hallucinates a lot, but works for basic conversation.

2.7B: Dolphin 2.6 Phi-2. Takes a bit over 2GB RAM; tested on my 3GB 32-bit phone via llama.cpp on Termux.

7B: Nous Hermes Mistral 7B DPO. Takes about 4-5GB RAM depending on context ...
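As a ballpark sanity check on those RAM figures, a quantized model's footprint is roughly its weights (bits-per-weight times parameter count) plus a KV cache that grows with context length. The sketch below assumes a Q4_K_M-style quant (~4.5 bits/weight) and TinyLlama-class dimensions (22 layers, d_model 2048) for the 1.1B entry; the exact constants are assumptions, not measurements.

```python
def est_ram_gb(params_b, ctx=2048, n_layers=22, d_model=2048, bits=4.5):
    """Rough RAM estimate (GB) for a quantized GGUF model: weights + KV cache.

    Assumptions: ~4.5 bits/weight (Q4_K_M-ish average) and an fp16 KV cache
    (2 bytes each for K and V per layer per token). Runtime overhead ignored.
    """
    weights = params_b * 1e9 * bits / 8       # quantized weight bytes
    kv = 2 * n_layers * ctx * d_model * 2     # fp16 K and V cache bytes
    return (weights + kv) / 1e9

# A 1.1B model at a modest 512-token context:
print(round(est_ram_gb(1.1, ctx=512), 2))  # → 0.71
```

At 512 tokens of context this lands near the ~700MB reported for TinyDolphin, and the same formula shows why the 7B figure varies with context: the KV cache term keeps growing while the weight term stays fixed.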