GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization

GitHub - mistralai/mistral-finetune

GitHub - turboderp/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs

GitHub - ai-hero/llm-research-fine-tuning