vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
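As a quick illustration of what the engine does, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings below are placeholders chosen for illustration, not recommendations from this page.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm`).
# "facebook/opt-125m" is a small placeholder model; substitute any
# model available to you.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() takes a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["What is speculative decoding?"], params)
for output in outputs:
    print(output.outputs[0].text)
```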
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
18 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Spec Decode, BugFix] Propagate norm_before_fc from Eagle3 speculator
  - [EPLB] Mask padding in EPLB load recording
  - [WIP] Remove kv cache dtype enum from csrc
  - [Perf] Remove redundant device copies for CPU-only pooling token IDs, 48.9% E2E throughput improvement
  - [Bug]: qwen3 235B model with latest vllm generates only 1 token
  - [Draft][MRV2] Experimental `build_attn_metadata` refactor
  - Qwen3.5 0325 mtp
  - [feature] Implement reasoning_effort
  - MiniMaxM2ReasoningParser broken for M2.5: extract_reasoning_streaming assumes no <think> start tag
  - Test Failure: test_run_eagle_dp[FLASH_ATTN] produces non-deterministic outputs with EAGLE speculative decoding
- Docs
  - Python not yet supported