vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
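For context, here is a minimal sketch of offline inference with vLLM's Python API. The model name is only an illustrative placeholder; any Hugging Face model supported by vLLM can be substituted.

```python
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Load the model; vLLM manages KV-cache memory internally (PagedAttention).
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in a single batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```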
Issues
- [Quantization][Deprecation] Remove Petit NVFP4
- Classify new requests as prefills regardless of query length
- [P/D] Mooncake Connector support setting device
- [Bug] AssertionError loading Unsloth-optimized Qwen3-VL-2B-4bit with bitsandbytes in vLLM 0.14.0
- [Bug]: llama4-fp8 tp=2 ep=2 doesn't work on b200
- [Chore]: simplify cuda device count with `torch.cuda.device_count`
- [Bugfix][Hardware][AMD] Fix ROCM_AITER_FA speculative decoding support
- [Fix] Update CUTLASS_REVISION to v4.3.5
- [Bugfix] Fix 'remove_instance_endpoint' method logic in disagg_proxy_demo
- [Bug]: tensorize_vllm_model double gpu