vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
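To illustrate what the project does, here is a minimal sketch of vLLM's offline-inference Python API; the model name and prompts are illustrative placeholders, not taken from this page:

```python
# Minimal offline-inference sketch using vLLM's Python API.
# "facebook/opt-125m" and the prompt are placeholder examples.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Load the model and run batched generation over the prompts.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```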
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [EPLB] Fix balancedness metric computation and add verbose reporting
- [ROCm][Quantization][2/N] Refactor quark_moe w4a8 w/ oracle
- Revert "[Quantization] Add FlashInfer CuteDSL batched experts backend for NVFP4 MoE" (#38251)
- Added the xpu_grouped_topk feature to support the grouped_topk functi…
- [Bug]: vLLM attempts to download Hugging Face cache file during inference despite local model path (Gemma 4)
- [Bug]: Vllm + Gemma 4 + claude code: tool calling problems
- [Bug]: NVML_SUCCESS == r INTERNAL ASSERT FAILED and OOM
- [Bug]: Deepseek v3.2 RuntimeError: Worker failed with error "Assertion error"
- [Bug]: Gemma4 vision encoder crashes with ValueError: Expected hidden_size to be 5376, but found: 72
- [Bug]: Gemma 4 MoE (26B-A4B-it) crashes at startup — AssertionError: top_k is None in MoEMixin.recursive_replace
- Docs
- Python not yet supported