peft
https://github.com/huggingface/peft
Help out
- Issues
- DOC: Section on weight tying with LoRA
- ENH Support models with low precision float dtypes
- Refactor layer initialization: PR 2960 continued
- TP support for Finetuning using LoRA and other PEFT techniques
- trainable_token_indices of LoraConfig not working when using more than 1 NVIDIA GPU
- Create Baseline Evaluation Based on DriveLM data off-the-shelf VLMs
- fix: layers_to_transform now correctly matches layer index on MoE models
- [TinyLoRA] TinyLoRA implementation
- LoRA + activation checkpointing breaks model compilation with torch.compile when dropout != 0
- Combining adapters linearly with negative weights is broken