vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
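For context, a minimal offline-inference sketch using vLLM's Python API (the model ID here is just a small example; any Hugging Face model supported by vLLM works):

```python
# Minimal vLLM offline inference sketch.
from vllm import LLM, SamplingParams

# Load a small model for a quick test (example model ID).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```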
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
- [Doc] Add note to docker.md on --model arg #32292
- [Usage]: Inconsistent chunk size in streaming mode, possibly related to RequestOutput.add aggregation logic
- [RFC]: vLLM IR: A Functional Intermediate Representation for vLLM
- [Bugfix][Tool Parser] Fix Hermes parser losing closing braces in tool calls
- [Performance]: Standby power saving settings
- [Feature]: Support GPU UUID in `CUDA_VISIBLE_DEVICES`
- Fix UnboundLocalError in unquantized MoE backend selection
- Support sequence parallelism for Qwen3Next
- [Bug]: Multiple tool_calls parsed correctly by hermes_tool_parser, but fail in serving_chat.py with JSONDecodeError
- [Feature]: batch invariance for A100
- Docs
- Python not yet supported