vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
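The engine can be used for offline batched inference directly from Python, as well as served over HTTP. Below is a minimal offline-inference sketch, assuming vLLM is installed (`pip install vllm`) and using a small Hugging Face model purely for illustration:

```python
# Minimal offline-inference sketch with vLLM's Python API.
from vllm import LLM, SamplingParams

# Load a model; "facebook/opt-125m" is an arbitrary small example model.
llm = LLM(model="facebook/opt-125m")

# Sampling settings applied to every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call.
outputs = llm.generate(
    ["Hello, my name is", "The capital of France is"],
    sampling_params,
)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For online serving, the same engine can be exposed through an OpenAI-compatible HTTP server with the `vllm serve` command.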
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix][Core] Fix negative prompt token counter increments with external KV cache accounting
- [Bugfix][Core] Fix stuck chunked pipeline parallelism with async scheduling
- [Bugfix] Fix `vllm bench serve` to count multimodal tokens in "total input tokens"
- [compile] Invoke split FX graph by codegen.
- [Bugfix] Fix AWQ models batch invariance issues
- [Perf] DSV3.2 Indexer Fused Weights Projection
- [Bugfix] Support [TOOL_CALLS] single-token format in Jamba tool parser
- [ZenCPU] Changes with respect to docker build and relevant cpu tests
- [Bugfix] Fix intra-step KV block corruption from stale prefix cache hits
- [Bugfix] Enable MTP for the official Qwen3.5 NVFP4 checkpoint
- Docs
- Python not yet supported