vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
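For context, the engine can be driven from Python in a few lines. A minimal offline-inference sketch using the vLLM Python API; the model name and sampling values are illustrative placeholders, not taken from this page:

```python
# Minimal sketch of offline generation with vLLM.
# "facebook/opt-125m" is just an example model; any supported causal LM works.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["A high-throughput inference engine is"], sampling)
for out in outputs:
    # Each result holds the prompt plus one or more generated completions.
    print(out.outputs[0].text)
```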
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
21 Subscribers
Help out
- Issues
- [Bug]: Olmo-3 vLLM Online API generates gibberish
- [V1][Core] Rename engine_core to engine_core_client for clarity
- [Bugfix] Fix SP compilation shape mismatch errors for multimodal models and prompt embeds
- [Performance][torch.compile]: Inductor partition performance issues
- [Feature]: Add INT8 Support for KV Cache Quantization (Currently FP8-Only)
- Add batched and grouped EPLB communication
- [perf][MLA] Fuse RoPE/FP8 quantization/Q write using mla_rope_quantize_fp8
- [Refactor] Make Int8ScaledMMLinearLayerConfig to use QuantKey
- [RFC]: [P/D] Prefill compute optimizations with bi-directional KV cache transfers between P and D nodes
- [Feature]: Qwen3-Next dual-stream execution in_proj_qkvz in_proj_ba
- Docs
- Python not yet supported