lit-llama
https://github.com/lightning-ai/lit-llama
Help out
- Issues
- converting Adapter to huggingface format
- No response after training an epoch
- reset_cache() Decreases the Generation Quality of Consecutive Inferences
- This codebase has so many errors it is completely useless and unusable
- Convert unsharded model to huggingface format
- combine adapter weights with the base model
- Merge generator.py and generate/full.py
- Restore flash attention support
- Question about FlashAttention and KV-cache
- Why wasn't matrix multiplication used in the implementation of LoRA?
- Docs: not yet supported