vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
- Issues
- [Feature]: Allow picking input/output lengths and prefix overlaps from a distribution for the PrefixRandom dataset (see the first sketch after this list)
- [Feature]: Allow vllm bench serve in non-streaming mode with the /completions API (see the second sketch after this list)
- [Feature]: INT8 support on the Blackwell architecture
- [Bug]: Deploying the model unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF with the Docker images vllm/vllm-openai:v0.10.2 and vllm/vllm-openai:v0.11.0 fails
- [Usage]: Does the @app.post("/generate") API support qwen2_vl?
- [Usage]: failed to infer device type on GCP COS despite the NVIDIA Container Toolkit being installed
- args.hf_split is overridden even when set, causing some datasets not to be supported
- [Bug]: Speculative Decoding Issue with VLLM_ENABLE_V1_MULTIPROCESSING=0
- [Feature]: Does this model support fine-tuning?
- [Bug]: When the spec_tokens count is greater than 1, the cuda_graph_sizes adaptation causes decoding to fall back to eager mode
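The PrefixRandom feature request above asks for per-request lengths and prefix overlaps drawn from distributions rather than fixed values. A minimal sketch of the idea, assuming numpy; the sample_request helper, its parameters, and the distribution choices are illustrative assumptions, not vLLM's API:

```python
# Hypothetical sketch of the PrefixRandom feature request: draw input
# length, output length, and prefix-overlap ratio per benchmark request
# from configurable distributions instead of fixed values.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_request(mean_input=512, mean_output=128,
                   overlap_alpha=2.0, overlap_beta=5.0):
    """Sample (input_len, output_len, prefix_overlap) for one request."""
    # Log-normal token counts keep lengths positive and right-skewed.
    input_len = max(1, int(rng.lognormal(mean=np.log(mean_input), sigma=0.5)))
    output_len = max(1, int(rng.lognormal(mean=np.log(mean_output), sigma=0.5)))
    # Beta-distributed overlap ratio in [0, 1]: the fraction of the prompt
    # shared with a common prefix, which exercises prefix caching.
    prefix_overlap = float(rng.beta(overlap_alpha, overlap_beta))
    return input_len, output_len, prefix_overlap

for input_len, output_len, overlap in (sample_request() for _ in range(5)):
    print(f"input={input_len:4d} tokens, output={output_len:4d} tokens, "
          f"overlap={overlap:.2f}")
```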
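The non-streaming benchmark request above targets vLLM's OpenAI-compatible /v1/completions endpoint. A minimal sketch of such a request, assuming a server already running on localhost:8000; the model name and prompt are placeholder assumptions:

```python
# A non-streaming /v1/completions request against a vLLM
# OpenAI-compatible server. Host, port, model, and prompt are
# assumptions; adjust them to your deployment.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct",  # hypothetical model name
        "prompt": "Explain paged attention in one sentence.",
        "max_tokens": 64,
        "stream": False,  # non-streaming: the full completion arrives in one response
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```

With "stream": False the server returns a single JSON body instead of server-sent events, which is the mode the feature request asks vllm bench serve to support.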