GitHub - unslothai/unsloth: 5X faster 50% less memory LLM finetuning
slowllama
Fine-tune Llama2 and CodeLlama models, including 70B/35B, on Apple M1/M2 devices (for example, a MacBook Air or Mac Mini) or consumer NVIDIA GPUs.
slowllama does not use any quantization. Instead, it offloads parts of the model to SSD or main memory on both the forward and backward passes. In contrast with training large models from scratch...
okuvshynov • GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization
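The offloading idea described above can be sketched in a toy, framework-free way: park every layer's weights in slow storage and bring only one layer at a time into fast "device" memory during the forward pass. This is an illustrative sketch only, not slowllama's actual code, which does this with real transformer blocks in PyTorch and also offloads during the backward pass.

```python
# Toy sketch of the layer-offloading idea (not slowllama's implementation):
# weights live in slow storage; only one layer is resident at a time.

class OffloadedModel:
    def __init__(self, layer_weights):
        # Simulates weights parked on SSD / main memory.
        self.storage = {i: w for i, w in enumerate(layer_weights)}

    def load_layer(self, i):
        # Stand-in for copying one layer's weights onto the accelerator.
        return self.storage[i]

    def forward(self, x):
        # Peak fast-memory use is one layer's weights, not the whole model.
        activations = [x]
        for i in range(len(self.storage)):
            w = self.load_layer(i)   # fetch from slow storage
            x = w * x                # toy "layer": scalar multiply
            activations.append(x)    # cache activations for the backward pass
        return x, activations

model = OffloadedModel([2.0, 3.0, 0.5])
y, acts = model.forward(4.0)
print(y)  # 4 * 2 * 3 * 0.5 = 12.0
```

The trade-off is the one the blurb implies: far less fast memory is needed, at the cost of repeatedly shuttling weights over the SSD/RAM bus, which is why this approach targets fine-tuning rather than low-latency inference.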
Fine-tune 100+ open source LLMs with zero code
Meet LLaMA-Factory! 🦙🔥
a full-stack open source UI for training LLMs and VLMs (LLaMA, Mistral, Qwen, Yi, DeepSeek, Gemma) -- no boilerplate!
why it matters:
↳ 100+ models supported...
Charly Wargnier • x.com
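LLaMA-Factory's zero-code workflow is driven by a YAML config handed to its CLI. A rough sketch of what such a config can look like, with key names following LLaMA-Factory's published examples and every value a placeholder assumption, not a tested recipe:

```yaml
# Illustrative LoRA SFT config for LLaMA-Factory (values are placeholders).
model_name_or_path: meta-llama/Llama-2-7b-hf
stage: sft                   # supervised fine-tuning
do_train: true
finetuning_type: lora        # parameter-efficient fine-tuning
dataset: alpaca_en_demo      # one of the bundled demo datasets
template: llama2
output_dir: saves/llama2-7b-lora
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

A config like this is launched with something like `llamafactory-cli train config.yaml`, or the same options can be set interactively in the web UI.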