vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
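For orientation, here is a minimal sketch of offline inference with vLLM's Python API; the model ID and sampling values are illustrative placeholders, not settings taken from this page.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The model ID and sampling values are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model ID
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    # Each RequestOutput holds one or more generated completions.
    print(out.outputs[0].text)
```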
- Issues
- [Bug]: @create_new_process_for_each_test("spawn") succeeds unconditionally and does not work correctly; all usages need to be revisited
- [Bug]: Qwen3-32B crashes when using EAGLE3
- [Bug]: num_cpu_blocks metric is None in cache_config_info
- [Bug]: Some compilation tests cannot run in the same process due to "Cannot re-initialize CUDA in forked subprocess"
- [Bug]: OpenAI completion error: 500 Unable to allocate 31.6 GiB for an array with shape (65158, 65158) and data type int64 (see the size check after this list)
- [Bug]: vLLM 0.11.1 produces wrong generations with a 120B OSS model on 2x A6000
- [Bug]: vLLM 0.11.1 undefined symbol `cutlass_moe_mm_sm100` on RTX 5080 (SM 12.0) with CUDA 13.0
- [Feature]: Add option to tolerate self-signed certificates to vllm bench serve
- [Performance]: Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B?
- [Bug]: --long-prefill-token-threshold and --max-num-batched-tokens conflict, leading to OOM (see the sketch after this list)
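The 31.6 GiB figure in the allocation-failure issue above is consistent with a dense square int64 array; a quick size check (plain arithmetic, no vLLM involved):

```python
# Size check for the 500-error issue above: a (65158, 65158) int64 array.
n = 65158
size_bytes = n * n * 8     # int64 = 8 bytes per element
print(size_bytes / 2**30)  # ~31.63 GiB, matching the reported 31.6 GiB
```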
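And a sketch of the flag pairing named in the last issue, assuming the two CLI flags map to like-named Python engine arguments; the model and token budgets are assumptions for illustration, and whether this combination actually OOMs depends on hardware and model:

```python
# Sketch of the settings from the last issue, expressed as engine args.
# Model name and token budgets are assumptions for illustration only.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    max_num_batched_tokens=2048,        # per-step batched-token budget
    long_prefill_token_threshold=4096,  # prompts above this count as "long" prefills
)
```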