- GitHub - promptslab/LLMtuner: Tune LLM in few lines of code
- slowllama
Fine-tune Llama2 and CodeLlama models, including 70B/35B, on Apple M1/M2 devices (for example, MacBook Air or Mac Mini) or consumer NVIDIA GPUs.
slowllama does not use any quantization. Instead, it offloads parts of the model to SSD or main memory on both the forward and backward passes. In contrast with training large models from scratch (unattainable…)
from GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization by okuvshynov
Nicolay Gerold added
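The offloading idea described above can be sketched in a few lines of plain Python. This is an illustration of the technique, not slowllama's actual code: only one layer's weights sit in "fast memory" at a time, with the rest parked in a slow store standing in for SSD or host RAM. All names here are hypothetical.

```python
# Sketch of per-layer offloading (illustrative, not slowllama's API):
# load each layer from a slow store just before use, evict it after.

def make_layer(scale):
    """A stand-in 'layer': multiplies its input by a stored weight."""
    return {"weight": scale}

# Slow store: in slowllama this would be SSD or main memory holding shards.
slow_store = {i: make_layer(s) for i, s in enumerate([2.0, 3.0, 0.5])}

def forward(x):
    """Run layers sequentially, holding at most one layer in fast memory."""
    activations = [x]          # saved per-layer inputs, needed for backward
    for i in range(len(slow_store)):
        layer = slow_store[i]  # "load" the layer (SSD -> RAM -> GPU)
        x = x * layer["weight"]
        activations.append(x)
        del layer              # "evict" before loading the next one
    return x, activations

y, acts = forward(4.0)
print(y)  # 4.0 * 2.0 * 3.0 * 0.5 = 12.0
```

The backward pass works the same way in reverse: reload each layer, use the saved activation, compute its gradient, and evict again — trading wall-clock time for memory instead of trading precision via quantization.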
promptfoo is a tool for testing and evaluating LLM output quality.
With promptfoo, you can:
Systematically test prompts & models against predefined test cases
Evaluate quality and catch regressions by comparing LLM outputs side-by-side
Speed up evaluations with caching and concurrency
Score outputs automatically by defining test cases
Use as a…
from Testing framework for LLM Part
Nicolay Gerold added
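The promptfoo workflow above — predefined test cases, side-by-side comparison, caching, and concurrency — can be sketched in plain Python. This is not promptfoo's actual API; the model functions and scoring rule are hypothetical stand-ins for configured LLM providers and assertions.

```python
# Minimal eval-harness sketch (illustrative, not promptfoo's API).
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Stand-ins for real LLM calls; promptfoo would call configured providers.
@lru_cache(maxsize=None)          # caching, so repeated evals skip re-calls
def model_a(prompt):
    return f"Answer: {prompt.upper()}"

@lru_cache(maxsize=None)
def model_b(prompt):
    return f"reply -> {prompt}"

# Predefined test cases: a prompt plus a string the output must contain.
cases = [("paris", "PARIS"), ("2+2", "2+2")]

def run_case(case):
    """Score both models on one case for a side-by-side comparison."""
    prompt, expected = case
    return {"prompt": prompt,
            "model_a_pass": expected in model_a(prompt),
            "model_b_pass": expected in model_b(prompt)}

# Concurrency: evaluate all cases in parallel to speed up the run.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_case, cases))

for r in results:
    print(r)
```

Comparing the per-case pass flags across models is how regressions get caught: a case that one model passes and another fails shows up immediately in the side-by-side view.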
- Fine-Tuning for LLM Research by AI Hero
This repo contains the code that will be run inside the container. Alternatively, this code can also be run natively. The container is built and pushed to the repo using GitHub Actions (see below). You can launch the fine-tuning job using the examples in the https://github.com/ai-hero/llm-research-examples pr…
from GitHub - ai-hero/llm-research-fine-tuning
Nicolay Gerold added