vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Docs triage for Python is not yet supported.
18 Subscribers
Add a CodeTriage badge to vllm
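For reference, a CodeTriage badge is normally added near the top of the repo's README as a markdown image link. A minimal sketch, assuming CodeTriage's usual badges/users.svg URL pattern for this repo:

```markdown
<!-- Hypothetical badge snippet; the badges/users.svg path is assumed from CodeTriage's standard badge scheme -->
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```

The badge renders the project's current subscriber count and links readers straight to this triage page.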
Help out
- Issues
  - [Bugfix] Map reasoning_effort="none" to enable_thinking=False for Qwen3 chat templates
  - [Kernel] Add indexer_concat_quant_fp8 kernel for DeepSeek V3.2
  - [Bug]: tool_choice="required" + speculative decoding with lukealonso/Qwen3.5-397B-A17B-NVFP4 leads to failed tool calls.
  - feat(health): add --health-port for out-of-band health check process
  - [Bugfix] Fix shared-object aliasing in n>1 streaming with tool calls
  - [Feature] Universal speculative decoding for heterogeneous vocabularies (TLI)
  - [Feature] Add auto-detection for reasoning_config when only reasoning_parser is set
  - [Nixl][PD] Lease renewal TTL KV blocks on P
  - [Bugfix] Fix k_norm weight sharding in MiniMaxM2Attention when total_num_kv_heads < tp_size
  - [Bugfix] Fix V2 model runner crash on hybrid attention models (Qwen3.5)
- Docs
  - Python not yet supported