vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
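For orientation, a minimal offline-inference sketch using vLLM's documented Python API is shown below; the model name, prompt, and sampling settings are illustrative placeholders, not something taken from this page.

```python
# Minimal vLLM offline inference sketch (model name and settings are example values).
from vllm import LLM, SamplingParams

prompts = ["Explain what PagedAttention does in one sentence."]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small example model; any Hugging Face causal LM id supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# generate() returns one RequestOutput per prompt, each carrying its sampled completions.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```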
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Help out
- Issues
  - [draft] OptionalCUDAGuard --> DeviceGuard
  - [Bug]: `pplx-kernels` fails to load in vLLM container
  - TP > 1 with Ray Serve: Use Multiprocessing Executor (Not Ray Executor)
  - [Bug]: vLLM v0.12.0: CUDA Illegal Memory Access During CUDA Graph Capture on Multi-Node GH200 (TP=4, PP=2)
  - [Installation]: 'podman run vllm-cpu-release-repo:v0.12.0' fails on aarch64
  - [Usage]: Help Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)
  - [Bug]: GLM-4.6-AWQ model outputs garbled text on vllm/vllm-openai:v0.10.2-x86_64
  - [Bug]: How to make vLLM support multi stream torch compile and each stream capture cuda graph.
  - [Bug]: vllm run-batch exhausts system memory on VM with big batch job
  - [Bug]: NIXL PD disaggregate with host_buffer has accuracy issue - Prefill scheduled num_block mismatch at update_state_after_alloc and request_finished
- Docs
  - Python not yet supported