vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
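vLLM's Python API centers on an LLM object that loads the model and manages the paged KV cache, plus SamplingParams for decoding settings. A minimal offline-inference sketch; the model name, prompt, and sampling values are illustrative placeholders:

```python
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Loads the model weights and allocates the paged KV cache on the available GPU(s).
llm = LLM(model="facebook/opt-125m")

# Runs all prompts through the continuous-batching scheduler in one call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```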
Issues
- [Bug]: infinite empty scheduling when loading KV asynchronously from an external KV cache with KVConnector
- [Bug]: gpt-oss poor performance
- [Bug]: After setting VLLM_USE_V1=1, NixlConnector performance is worse!
- [Feature]: Process-level PD Disaggregation within Single Instance
- [Feature]: Allow picking input, output lengths and prefix overlaps from a distribution for PrefixRandom dataset
- [Feature]: Allow vllm bench serve in non-streaming mode with /completions API (see the request sketch after this list)
- [Feature]: INT8 Support in Blackwell Arch
- [Bug]: Deploying the model unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF with the Docker images vllm/vllm-openai:v0.10.2 and vllm/vllm-openai:v0.11.0 fails.
- [Usage]: Does the @app.post("/generate") API support qwen2_vl or not?
- [Usage]: failed to infer device type on GCP COS even though the NVIDIA Container Toolkit is installed
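Several items above concern the OpenAI-compatible server (the /completions API, vllm bench serve, the vllm/vllm-openai Docker images). A minimal non-streaming /completions request, assuming a server is already running on localhost:8000; the host, port, and model name are placeholders:

```python
import requests

# Assumes a vLLM OpenAI-compatible server was started separately, e.g.:
#   vllm serve facebook/opt-125m
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "facebook/opt-125m",
        "prompt": "The capital of France is",
        "max_tokens": 32,
        # With streaming disabled, the full completion arrives in a single JSON body.
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```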