text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
Help out
Issues
- Dynamically serve LoRA modules
- text-generation-inference:latest-trtllm is missing dependencies to run models
- Entire system crashes when warming up the model
- Random text generation from Qwen2-VL-7B-Instruct with TGI 3
- Docs for LoRA Availability and support for Qwen models
- Update Dockerfile to use devel image for compatibility
- Cohere2 aka Cohere2ForCausalLM
- TGI hangs when running two extremely long prompts at once
- Server gets stuck at the model warm-up phase for codestral-22b on 4xH100
- Model warmup fails after adding Triton indexing kernels