vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
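To make the description concrete, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling values are arbitrary examples, not project defaults:

```python
from vllm import LLM, SamplingParams

# Load a model into the engine; vLLM handles batching and paged
# KV-cache memory management internally. Example model only.
llm = LLM(model="facebook/opt-125m")

sampling = SamplingParams(temperature=0.8, max_tokens=64)

# generate() batches prompts through the engine's continuous batching.
outputs = llm.generate(["The capital of France is"], sampling)
for out in outputs:
    print(out.outputs[0].text)
```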
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Help out
- Issues
- [RFC]: KV cache layout combining all layers per block
- [Bug]: `repetition_penalty` leads to engine failure when using vllm serve... (see the sketch after this list)
- [Kernel] Improve 2D Triton Attention Kernel
- [Bugfix] Ensure DP worker has VllmConfig set
- [Bug][RAY]: V1 engine hangs with multiple requests on 2 nodes
- [Bug]: `assert request.num_output_placeholders >= 0` can fail in async scheduling
- [Bug][ROCm]: `vision_embeddings` in transformers inaccurate without math SDP
- [Bug]: PTXAS error: gpu-name sm_103a not defined when running Qwen3-235B-A22B-Instruct-2507 with vllm-openai:v0.12.0
- [Bug]: Qwen3-32B with MTP fails to run
- Add Triton ops `fused_qkvzba_split_reshape_cat` for qwen3_next
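One of the reports above concerns `repetition_penalty` under `vllm serve`. As a rough sketch of how that parameter reaches the server through the OpenAI-compatible API, the endpoint, model name, and penalty value below are illustrative assumptions, not details taken from the issue:

```python
from openai import OpenAI

# Assumes a local server started with: vllm serve <model>
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# repetition_penalty is a vLLM-specific sampling parameter; the OpenAI
# client forwards it to the server via extra_body.
resp = client.completions.create(
    model="facebook/opt-125m",  # example model name
    prompt="Hello, my name is",
    max_tokens=32,
    extra_body={"repetition_penalty": 1.2},
)
print(resp.choices[0].text)
```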
- Docs
- Python not yet supported