vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
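For orientation, a minimal offline-inference sketch using vLLM's Python API; the model name is just an example placeholder, any Hugging Face-compatible model id works:

```python
# Minimal vLLM offline-inference sketch (example model, small for a quick smoke test).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() takes a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```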
- Issues
- [Bug]: Gemma4 on vLLM + PI coding agent: Validation failed for tool "edit": - path: must have required property 'path'
- Fix gemma4 _parse_tool_arguments truncating quoted strings
- [RFC]: Entropy-Gated Online KV Block Expiration During Active Decode
- [Bug]: Sleep-Mode throws an error on DGX-Spark
- fix(reasoning): prevent streaming end-token desync in base and other parsers
- [Bug]: Gemma 4 FP8 dynamic quantization = gibberish output
- [Feature]: Speculative Prefill — Draft-Assisted Sparse Prefill for TTFT Reduction
- [Docs] document cache salting for prefix cache timing side-channel mitigation
- [Bug]: Certain Ranks Take a Long Time to Load Weights
- vLLM 0.19 may lose tool calls for Qwen/Qwen3.5-35B-A3B-FP8 when XML tool_call is emitted inside <think>