vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
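For context, a minimal sketch of offline inference with vLLM's Python API (`LLM` and `SamplingParams`); the model name, prompts, and sampling settings below are illustrative assumptions, not taken from this page.

```python
# Minimal offline-inference sketch using vLLM's Python API.
# Model name and sampling settings are illustrative assumptions.
from vllm import LLM, SamplingParams

# Load a small model; vLLM manages KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```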
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Help out
- Issues
  - [Bugfix] Fix FlashMLA sparse accuracy with topk_length and zero-init padding
  - [Bug]: Does vllm support deploying glm-5 on A800 or A100, or are there any plans to support it?
  - [Bugfix] Fix FP8 online quantization premature trigger with TP sharded weights
  - [Bugfix] Fix off-by-one in multimodal prefix cache hash boundary check
  - [Performance]: W4A16+eagle3 not better than fp8+eagle3 with Qwen2.5-14B
  - [Bugfix] Respect scale_attn_weights config flag in GPTBigCode
  - [Bug]: KeyError: 'language_model.model.layers.20.linear_attn'
  - cumem allocator: double-free and stale error codes during sleep/wake cycles
  - [Bug]: Frequent Tool Call Parsing Failures with DeepSeek-V3.2
  - [Bug]: Garbled output Qwen3.5-122B-A10B VLLM 0.17.0
- Docs
  - Python not yet supported