vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
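For context, a minimal offline-inference sketch with vLLM's Python API (the model name and sampling settings below are illustrative placeholders, assuming a recent vllm release):

```python
# Minimal offline-inference sketch using vLLM's Python API.
# Model name and sampling values are illustrative placeholders.
from vllm import LLM, SamplingParams

# Load a model; vLLM manages KV-cache memory for high-throughput serving.
llm = LLM(model="facebook/opt-125m")

# Sampling configuration for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```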
Help out
- Issues
- [Bug]: `--max-num-seqs` limits total running sequences instead of scheduled sequences, causing severe underutilization in PP
- [Bug]: NVFP4A16 spurious warning that GPU doesn't support Fp4
- [Feature]: GLM4MOE GGUF support
- [Feature]: Support nvidia/omnivinci
- [Bug]: Poor logging around assertion error when using PPLX all-to-all backend with microbatching (MoE)
- [Bug]: Qwen2.5 ViT Incorrect QKV Split When projection_size != hidden_size in Tensor Parallelism
- [Bug]: p2pNccl 3P1D: D-node NCCL receives data and triggers a crash
- [Bug]: quantized medgemma-27b-text-it producing garbage outputs
- [Bug]: `KeyError: 'layers.47.mlp.experts.w2_weight'` loading a NVFP4 + BF16 mixed-precision `llm-compressor` model
- [Usage]: How to preserve the special tokens of qwen3-vl in vllm
- Docs
- Python not yet supported