GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization
Experimenting with local LLMs on macOS
blog.6nok.org

Oops, haven't tweeted too much recently; I'm mostly watching with interest the open-source LLM ecosystem experiencing early signs of a Cambrian explosion. Roughly speaking, the story as of now:
1. Pretraining LLM base models remains very expensive. Think: supercomputer + months.
2. But finetuning LLMs is...
Andrej Karpathy (x.com)

I can't believe I've just fine-tuned a 33B-parameter LLM on Google Colab in a few hours.
Insane announcement for any of you using open-source LLMs on normal GPUs!
A new paper has been released, QLoRA, which is nothing short of game-changing for the ability to train and fine-tune LLMs on...