text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
Python not yet supported · 1 Subscriber
Help out
- Issues
- feat: improve handling of message content chunks
- Do I also need to apply an inference template?
- UserWarning: You are using a Backend <class 'text_generation_server.utils.dist.FakeGroup'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0
- feat: move allocation logic to rust
- Failing to start a TGI pod with 2 or more GPUs. Sharding fails.
- TGI crashes without any diagnostic output (even at debug/trace level) when a complex JSON schema is provided as grammar
- Cannot launch; error: exllamav2_kernels not installed
- Enable testing TGI on XPU
- Out of Memory Errors When Running text-generation-benchmark Despite Compliant Batch Token Limit
- Process hangs in local run
- Docs
- Python not yet supported