vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Python not yet supported · 24 subscribers
- Issues
- [Bug]: Duplicate parameter name in convert_vertical_slash_indexes op schema — kv_seqlens registered as q_seqlens
- [Bug]: gemma4_utils._parse_tool_arguments truncates string values containing internal quotes
- [Bug]: Gemma 4 31B Structured Outputs weird behaviour / character output - might be a quick solve
- [Bug]: Gemma4 on vLLM + PI coding agent: Validation failed for tool "edit": - path: must have required property 'path'
- Fix gemma4 _parse_tool_arguments truncating quoted strings
- [RFC]: Entropy-Gated Online KV Block Expiration During Active Decode
- [Bug]: Sleep-Mode throws an error on DGX-Spark
- fix(reasoning): prevent streaming end-token desync in base and other parsers
- [Bug]: Gemma 4 FP8 dynamic quantization = gibberish output
- [Feature]: Speculative Prefill — Draft-Assisted Sparse Prefill for TTFT Reduction