vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
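For readers new to the project, here is a minimal offline-inference sketch using vLLM's Python `LLM` API (the same class referenced in the `hf_token` issue below); the model name and sampling values are illustrative placeholders, not taken from this page.

```python
from vllm import LLM, SamplingParams

# Load a model for offline batch inference; the model name is an illustrative placeholder.
llm = LLM(model="facebook/opt-125m")

# Example sampling settings (arbitrary demonstration values).
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts and print them.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```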
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Help out
- Issues
- [Bug] `hf_token` argument to `LLM` in Python SDK ignored in `vllm.transformer_utils.config`
- [Feature][Benchmarks] Allow trying a different prompt when the first test prompt fails, instead of failing immediately
- [Bug]: Running DeepSeek V3.2 fails; is RTX PRO 6000 * 8 not supported?
- [Bug]: [H200] Qwen3-Next-80B-A3B-Instruct-FP8 TP1 DP4 EP4 CUDA illegal memory error
- [Feature]: Speculating with a draft model
- [MLA] Support DCP + FP8
- KV-cache / long-context boundary request (minimal repro + metric) — 7-day receipts eval
- [Feature][DSR1 NVFP4 Model Bash]: FlashInfer Quantize Op
- [Bug]: Recompilation in Llama model
- [RFC]: Support function calling using `structural_tag`.
- Docs
- Python not yet supported