vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
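For context, a minimal sketch of vLLM's offline inference API as described in the project docs (the model name here is illustrative; any Hugging Face model identifier works the same way):

```python
from vllm import LLM, SamplingParams

# Load a model; the choice of model is an illustrative assumption.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters control decoding; values here are arbitrary examples.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```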
Help out
Issues
- [RL] Support weight update with multi ipc handles + zmq
- [Bugfix] Ensure DP worker has VllmConfig set
- [Bug][RAY]: V1 engine hang with multi-requests on 2 nodes
- [Bug]: VLLM Sleep on NVIDIA H100 leading to model producing slow invalid results
- [Perf] Early return in KVCacheManager.allocate_slots
- fix: Add validation for tool requests that the tool is available
- [Bug]: `assert request.num_output_placeholders >= 0` can fail in async scheduling
- [Bug][ROCm]: `vision_embeddings` in transformers inaccurate without math SDP
- [Bug]: PTXAS error: gpu-name sm_103a not defined when running Qwen3-235B-A22B-Instruct-2507 with vllm-openai:v0.12.0
- add Qwen3OmniMoeAudioEncoder and support torch compile