vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
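For context, a minimal sketch of vLLM's offline inference API (the model name below is only an example):

```python
from vllm import LLM, SamplingParams

# Example checkpoint; any Hugging Face causal LM works here.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```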
Help out
- Issues
- Online Dynamic FP8 Quantization (--quantization="fp8") is slower than BF16/FP16 on RTX 5090 (see the sketch after this list)
- [Bug]: Return Token Ids option not returning generated token IDs for GPT-OSS-120b
- [Bug]: vLLM v0.11.0 container in a k8s pod fails to load the GLM-4.6-FP8 model during CUDA graph capture, but v0.10.2 is OK
- [Bug]: NCCL hangs
- [Usage]: Does vLLM support max_pixels in the prompt for Qwen3-VL reasoning?
- [Feature]: WhisperX support
- [Bug]: Loading safetensors in Docker is too slow
- [Feature]: Enable draft-model-based speculative decoding for CPUs
- [Bug]: Sporadic out.is_contiguous assertion failures with Kimi-K2-Thinking
- [Feature]: Use ZMQ to implement AFDConnect
- Docs
- Python not yet supported
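On the first issue in the list: online dynamic FP8 quantization is enabled with a single argument. A minimal sketch, assuming a BF16 checkpoint (the model name is illustrative):

```python
from vllm import LLM

# Online dynamic FP8 quantization: weights are cast to FP8 at load time,
# saving memory; as the issue above notes, it is not guaranteed to be
# faster than BF16/FP16 on every GPU.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", quantization="fp8")

# Baseline for comparison: the same checkpoint in BF16.
# llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", dtype="bfloat16")
```

The server equivalent is `vllm serve <model> --quantization fp8`.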