vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
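For quick orientation, here is a minimal offline-inference sketch using vLLM's standard Python API (`LLM` and `SamplingParams`); the model name is purely illustrative, substitute any supported Hugging Face model:

```python
from vllm import LLM, SamplingParams

# Load a model and generate completions for a batch of prompts.
# "facebook/opt-125m" is illustrative; any supported HF model works.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```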
- Issues
- [Bugfix] Fix crash when using PP in multi-node (Issue #37001)
- [Bugfix] Add Qwen3.5 MoE support to benchmark_moe.py
- [Bugfix] Fix harmony parser crash on terminal tokens after end-of-message
- [Bug]: qwen 3.5 memory requirement of int4 model is higher than fp8
- [Misc] Add unit tests for min_p Triton sampling kernel (min_p filtering is sketched after this list)
- Add Mistral Guidance
- [V1, V2] Add temperature for prompt logprobs
- [Profiling] Add optional stage-level NVTX annotations for Nsight Systems
- [UX] Improve DCP error messages with actionable guidance
- [Distributed] Add OfflineState bloom-filter cooperative caching KV connector
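For context on the min_p issue above: min_p sampling keeps only tokens whose probability is at least `min_p` times the probability of the most likely token. The sketch below is a plain-PyTorch reference for that filtering rule, not vLLM's Triton kernel:

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float) -> torch.Tensor:
    """Mask tokens whose probability is below min_p * p(top token)."""
    probs = torch.softmax(logits, dim=-1)
    top_prob = probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < min_p * top_prob, float("-inf"))

# Example: sample the next token from filtered logits for one sequence.
logits = torch.randn(1, 32_000)             # (batch, vocab); vocab size illustrative
filtered = min_p_filter(logits, min_p=0.05)
probs = torch.softmax(filtered, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```

A reference implementation like this is what unit tests for a fused kernel typically compare against: run both on the same logits and assert the surviving token sets match.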