vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
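For context on what the engine does, here is a minimal sketch of offline inference with vLLM's Python API, based on its standard quickstart; the model name and sampling values are illustrative, not part of this listing.

```python
# Minimal offline-inference sketch using vLLM's LLM / SamplingParams API.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Loads the model and allocates the paged KV cache; model name is an example.
llm = LLM(model="facebook/opt-125m")

# Runs batched generation and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For online serving, the same engine is typically exposed through `vllm serve <model>`, which provides an OpenAI-compatible HTTP API.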
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- fix(gdn_attn): prevent CUDA illegal memory access with >4 speculative tokens
- Fix priority preemption regression test in scheduler
- Add FlashInfer fused RoPE + paged KV cache append integration in vLLM #24678
- [Kernel] Porting the TRTLLM minimax_allreduce_rms kernels
- [Core][Feature] Observation Plugin for Intercepting & Routing on Activations
- [WIP][BugFix] Fix PP OOM for Qwen3Next/Qwen3_5 by guarding embed_tokens and lm_head
- [Bug]: EAGLE3 speculative decoding + multimodal crash under high concurrency
- [Feature]: W6A16 Support
- [Bug]: GDN attention backend crashes with mixed decode/spec_decode batch when serving Qwen3.5 family models with MTP
- [Bugfix][Core] Fix gdn kernel mixed batch spec decode crash
- Docs
- Python not yet supported