vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
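The engine's core use case is batch inference through its Python API. Below is a minimal sketch following vLLM's documented quickstart (`LLM`, `SamplingParams`, `generate`); the model name and prompts are placeholder examples.

```python
# Minimal offline-inference sketch using vLLM's quickstart API.
# Model name and prompts are example placeholders.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model once; vLLM manages KV-cache memory behind this call.
llm = LLM(model="facebook/opt-125m")

# Batch generation: vLLM schedules the requests for high throughput.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```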
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Help out
- Issues
  - [Bugfix] utf-8 decoding errors in benchmark endpoint client
  - [Bug]: Batch invariance breaks with torch.compile and/or CUDA graphs on SM<90
  - [ROCm] Fix AWQ env var scope, shuffle KV cache flag, sparse_attn_indexer dedup
  - [Spec Decode] Add attention_backend override option for draft model
  - [ROCm][Perf] Expose AITER MoE sorting dispatch policy via env var
  - fix: make ColQwen3 work with transformers >=5.0.0
  - [EPLB] Remove unused is_profile and rank_mapping params from transfer layer
  - [ROCm] Fix UnboundLocalError for prefix_scheduler_metadata in cascade attention
  - [Bug]: Gemma 4 31B INT4 on 2×24GB GPUs (TP=2): GPU KV cache size is 25,200 tokens at max_model_len=131072, gpu_memory_utilization=0.96, BF16 KV
  - Revert "[MoE Refactor][Test] FusedMoE layer test" (#24675)
- Docs
  - Python not yet supported