vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
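A minimal sketch of what using the engine looks like via vLLM's offline Python API, assuming a small Hugging Face model such as facebook/opt-125m is available; exact defaults and parameters can vary by vLLM version.

```python
from vllm import LLM, SamplingParams

# Standard sampling knobs for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small causal LM; any model architecture supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# Batched generation: vLLM schedules all prompts together for high throughput.
prompts = [
    "The capital of France is",
    "An LLM serving engine should",
]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```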
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Help out
- Issues
- [Bug]: Qwen3.5-9B (BF16/AWQ) Illegal Memory Access in vLLM v0.17.0 (WSL2/RTX3090 Ti)
- [Bugfix][Frontend] Do not persist load_inplace on stored LoRA requests
- [Bugfix] Fix non-streaming/streaming inconsistency for Qwen3 reasoning when enable_thinking is not set
- fix(s3): paginate list_objects_v2 to return all objects
- logging: opt-in per-rank log files (Ray-friendly) (#23761)
- [Bugfix] Fix GLM4 tool parser double serialization issue
- fix: handle escaped <\\think> tags in reasoning parser (closes #36207)
- feat(openai): add per-request timing metrics and completion_tokens_de…
- [Bug]: Inconsistent PP layer indexing in EAGLE model code
- [Mamba] Flashinfer selective_state_update
- Docs
- Python not yet supported