text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
- Issues
- InternLM2.5 support
- Disable logging of "grammar" parameter
- [BUG] Running an FP8-quantized model fails on NVIDIA L4 (repack_fp8_for_marlin)
- RuntimeError: "weight lm_head.weight does not exist" when loading Qwen2-0.5B-Instruct
- AttributeError: 'Idefics2ForConditionalGeneration' object has no attribute 'model'
- Build the Intel CPU-optimized image automatically
- Recent issues building text-generation-server with torch+cu118
- TGI does not support DeepSeekCoderV2-gptq
- The "/health" is so slow when generating extra-long text。
- The newer HF Mamba model is not supported
- Docs (Python not yet supported)