vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
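For context, here is a minimal sketch of vLLM's offline inference API, following its documented quickstart pattern; the model name and sampling values below are illustrative placeholders:

```python
from vllm import LLM, SamplingParams

# Load a model; vLLM handles batching and paged KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")  # illustrative model choice

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```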
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
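A badge is added by pasting a snippet into the repo README; the line below assumes CodeTriage's standard badge URL pattern:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```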
Help out
- Issues
- [Feature] Add FlashInfer cuDNN backend for ViT attention
- [Installation]: Mac M1 installation fails because of bitsandbytes
- [Bug]: FP8 speed regression in version 0.16.0rc2.dev87+g0b20469c6 (latest nightly)
- [Performance]: qknorm+rope fusion slower than unfused on H100
- [Bug]: Nemotron 3 (all quants) take a LONG time to load
- [Bug]: Loading Qwen3-Coder-Next with the model's bundled qwen3coder_tool_parser_vllm.py fails with No module named 'vllm.entrypoints.openai.protocol'
- [Bug]: Instruction-following capability is deteriorating: output incorrectly introduces a parameter defined in the function call
- [RFC]: Disaggregated Frontend — Separating Online Serving from Engine
- [Feature]: logprobs for gpt-oss harmony
- Add support for LoRA with NVFP4 MoE models