vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
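For context, vLLM exposes a simple Python API for offline batched inference; the sketch below shows typical usage (the model name is an illustrative placeholder, not taken from this page).

```python
# Minimal sketch of offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

# Load a model and configure sampling (model name is an arbitrary example).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call;
# vLLM schedules and batches them for high throughput.
prompts = ["The capital of France is", "vLLM is"]
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```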
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [ROCm] DeepSeek-V4-Flash: rocm_dequantize_blocked_k_cache materializes entire KV cache pool causing OOM during decode
- [ROCm/MI325X] DeepSeek-V4-Flash: Triton fp8_mqa_logits kernel requires 96KB shared memory, MI325X limit is 64KB
- [ROCm][Bugfix] Add +256 col guard to preshuffle logits buffer (DSv3.2)
- [Bug]: FlashInfer GDN JIT Compilation Causes Multi-Worker Deadlock
- [WIP][Model Runner V2] Flash rejection sampling
- [Bugfix][Quark] Fix W8A8 INT8 garbage outputs on Step-3.5-Flash (and other 3-key fused-MoE Quark exports)
- [CPU] Fix spec decode kernel signatures for synthetic mode compatibility
- [ROCm] Clean up a bit the AITER FA backend
- [Bug]: VLLM:EngineCore thread consumes 100% CPU idle on ROCm/Radeon AI Pro 9700
- [ROCm] Enable persistent mla for sparse mla backend
- Docs
- Python not yet supported