vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [ROCm][CI] Fix fused RMS norm FP8 quant test on MI250 (gfx90a)
- [Transformers/Bugfix] Fix Gemma4 MoE top_k lookup + duplicate kv_seqlens in op schema (see the routing sketch after this list)
- [Bugfix] Fix gemma4_utils._parse_tool_arguments truncating strings with internal quotes
- Fix RMSNorm hidden_size validation crash for weightless norms (see the RMSNorm sketch after this list)
- [ROCm] Optimize all-reduce performance.
- [vLLM IR] Cache the fx_replacement to avoid re-tracing the same impl
- [Bugfix] Improve DCP/PCP error messages with actionable backend guidance
- [Draft][Experimental][CUDA][VLM] Scaffold AttentionPack-style KV compression path
- [Kernel] Implement CUDA kernel for ReLUSquaredActivation (relu^2) (see the reference sketch after this list)
- [Bugfix][MoE] Fix hardcoded SharedExperts output buffer size for DBO ubatches
- Docs: Python not yet supported
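The Gemma4 MoE item above concerns a top_k lookup in expert routing. The actual bug isn't visible from the title, but the mechanism it lives in is standard: softmax the router logits, select the top-k experts per token, and renormalize their gate weights. A generic sketch in plain PyTorch, not vLLM's fused MoE path; `topk_routing` is a name chosen here for illustration:

```python
import torch


def topk_routing(router_logits: torch.Tensor, top_k: int):
    """Pick the top_k experts per token and renormalize their gate weights.

    router_logits: [num_tokens, num_experts]
    Returns (weights, expert_ids), both shaped [num_tokens, top_k].
    """
    probs = torch.softmax(router_logits, dim=-1)
    weights, expert_ids = torch.topk(probs, top_k, dim=-1)
    # Renormalize so each token's selected gate weights sum to 1.
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return weights, expert_ids


logits = torch.randn(4, 8)  # 4 tokens, 8 experts
w, ids = topk_routing(logits, top_k=2)
assert w.shape == ids.shape == (4, 2)
```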
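Two other items touch RMSNorm: the fused FP8-quant test on MI250 and the hidden_size validation crash for weightless norms. As a reference point, here is a minimal, unfused RMSNorm implementing the standard formula y = x / sqrt(mean(x²) + eps) · weight; it is a sketch, not vLLM's kernel, and the `elementwise_affine` flag is an assumption used here to model the weightless case (`weight is None`), which any shape validation must tolerate:

```python
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Minimal RMS normalization: y = x / rms(x) * weight (if weight exists)."""

    def __init__(self, hidden_size: int, eps: float = 1e-6,
                 elementwise_affine: bool = True):
        super().__init__()
        self.hidden_size = hidden_size
        self.eps = eps
        # A weightless norm has no learnable scale at all.
        self.weight = (nn.Parameter(torch.ones(hidden_size))
                       if elementwise_affine else None)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Validation must not assume self.weight exists.
        if x.shape[-1] != self.hidden_size:
            raise ValueError(
                f"expected last dim {self.hidden_size}, got {x.shape[-1]}")
        # Compute the variance term in fp32 for numerical stability.
        var = x.float().pow(2).mean(dim=-1, keepdim=True)
        y = (x.float() * torch.rsqrt(var + self.eps)).to(x.dtype)
        return y if self.weight is None else y * self.weight


norm = RMSNorm(hidden_size=8, elementwise_affine=False)  # weightless
out = norm(torch.randn(2, 8))
```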
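Finally, the ReLUSquaredActivation item asks for a dedicated CUDA kernel, but the function itself is just f(x) = max(x, 0)². A plain PyTorch reference, i.e. the behavior any custom kernel would have to reproduce:

```python
import torch
import torch.nn.functional as F


def relu_squared(x: torch.Tensor) -> torch.Tensor:
    """ReLU-squared: f(x) = max(x, 0) ** 2, applied elementwise."""
    return torch.square(F.relu(x))


# Sanity check against the definition.
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
assert torch.equal(relu_squared(x), torch.tensor([0.0, 0.0, 0.0, 0.25, 4.0]))
```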