text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
Issues
- test(config): add comprehensive tests for router config utilities
- KV-cache / long-context: smallest canonical repro boundary + metric (7-day receipts eval)
- docs: add AWS (EC2/SageMaker) deployment + benchmarking guide
- Update links in the Inferentia docs
- Tokenizer loading fails for mistralai/Ministral-8B-Instruct-2410 using TGI on GCP Vertex AI
- Fix flashinfer plan call to use positional arguments for #3165
- RuntimeError on CUDA capture with FP8 when deploying Llama-4-Maverick on TGI 3.2.3 using H100 GPUs
- Zero config not working for VLMs
- Conflicting short argument -p
- Strange output when using structured output with Gemma 3 12B IT
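Several of the issues above (tokenizer loading on Vertex AI, FP8 CUDA capture, structured output with Gemma 3) are reproduced against a running TGI server. For context, here is a minimal sketch of querying such a server with the `huggingface_hub` client; it assumes a TGI instance was already launched elsewhere (for example via the official Docker image) and that it listens at http://127.0.0.1:8080. Both the address and the prompt are illustrative assumptions, not details taken from any issue report.

```python
# Minimal sketch: query a locally running TGI server.
# Assumes a server is already serving a model, e.g. started via the
# official Docker image, and listening on http://127.0.0.1:8080
# (the address and port are assumptions).
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")

# Plain text generation against the server's /generate route.
text = client.text_generation(
    "What is text-generation-inference?",
    max_new_tokens=64,
)
print(text)
```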