vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Added the xpu_grouped_topk feature to support the grouped_topk functi…
- [Bug]: vLLM attempts to download Hugging Face cache file during inference despite local model path (Gemma 4)
- [Bug]: vLLM + Gemma 4 + Claude Code: tool calling problems
- [Bug]: NVML_SUCCESS == r INTERNAL ASSERT FAILED and OOM
- [Bug]: Deepseek v3.2 RuntimeError: Worker failed with error "Assertion error"
- [Bug]: Gemma4 vision encoder crashes with ValueError: Expected hidden_size to be 5376, but found: 72
- [Bug]: Gemma 4 MoE (26B-A4B-it) crashes at startup — AssertionError: top_k is None in MoEMixin.recursive_replace
- [Bug]: Duplicate parameter name in convert_vertical_slash_indexes op schema — kv_seqlens registered as q_seqlens
- [Bug]: gemma4_utils._parse_tool_arguments truncates string values containing internal quotes
- [Bug]: Gemma 4 31B Structured Outputs weird behaviour / character output - might be a quick solve