lorax
https://github.com/predibase/lorax
Python
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
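That one-line description is the core design: a single copy of the base model stays resident on the GPU, and lightweight LoRA adapters are loaded and hot-swapped per request. A minimal client-side sketch of that, assuming a LoRAX server already running on localhost:8080 and the `lorax-client` Python package; the adapter IDs below are hypothetical stand-ins for fine-tunes of the server's base model:

```python
# pip install lorax-client
from lorax import Client

# One server, one base model; the adapter is selected per request.
client = Client("http://127.0.0.1:8080")

# Hypothetical adapter IDs -- each would be a LoRA fine-tune of the base model.
for adapter_id in ["acme/customer-support-lora", "acme/sql-assistant-lora"]:
    response = client.generate(
        "Summarize this support ticket: ...",
        adapter_id=adapter_id,  # the server loads/swaps this adapter on the fly
        max_new_tokens=64,
    )
    print(adapter_id, "->", response.generated_text)
```

Because the adapter is a request parameter rather than server state, many fine-tunes can share one deployment, which is what lets a single server scale to thousands of them.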
Help out
Issues:
- Using Source = Local for Base Model (see the local-source sketch after this list)
- decapoda-research/llama-13b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
- Support custom tokenizer when loading a local model
- How does this differ from S-LoRA?
- Sample command with mistral-7b failed
- Error while running the pre-built container using Podman
- Does lorax currently support GPT-2 fine-tuned adapters?
- Is there any plan to support dynamic LoRA for Qwen/ChatGLM models?
- Project Roadmap
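Several of the issues above (Source = Local for the base model, custom tokenizers for local models) circle the same workflow: reading weights from disk instead of the Hugging Face Hub. A hedged sketch of the adapter side of that, assuming the `lorax-client` package and that the generate parameters accept an `adapter_source` of "local" for a path visible inside the server container; the mount path is illustrative:

```python
from lorax import Client

client = Client("http://127.0.0.1:8080")

response = client.generate(
    "Translate to French: Hello, world.",
    adapter_id="/data/adapters/my-local-lora",  # hypothetical path mounted into the server
    adapter_source="local",  # read adapter weights from server-local disk, not the Hub
    max_new_tokens=32,
)
print(response.generated_text)
```

The same division of labor applies to the base model: the server is launched pointing at whatever weights it should hold resident, and clients only ever name adapters per request.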