vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
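For orientation, a minimal offline-inference sketch using vLLM's documented Python API; the model name and sampling values are arbitrary examples, not anything prescribed by this page:

```python
from vllm import LLM, SamplingParams

# Load a model for offline batched inference (model name is an example).
llm = LLM(model="facebook/opt-125m")

# Example sampling settings; tune for your workload.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call;
# vLLM schedules the whole batch for high throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```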
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Use a large enough aiohttp read_bufsize to avoid ContentLengthError (see the sketch after this list)
- [Performance]: Is VLLM good for production deployment for processing large data in batches?
- [Bug]: Olmo-3 does not call tools even with auto tool choice enabled
- [Bug]: ZeroDivisionError: float floor division by zero
- [Usage]: otel fastapi instrumentation doesn't work
- [ROCm] [CI] [Release] Update the docker image annotation
- Using the Triton kernel_unified_attention_3d operation with speculative decoding workloads
- [P/D] Mooncake Connector support setting device
- [Bug] AssertionError loading Unsloth-optimized Qwen3-VL-2B-4bit with bitsandbytes in vLLM 0.14.0
- [Bug]: llama4-fp8 tp=2 ep=2 doesn't work on b200
- Docs
- Python not yet supported
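As context for the aiohttp read_bufsize bugfix listed above, a minimal sketch of raising the client read buffer, assuming only aiohttp's documented ClientSession(read_bufsize=...) parameter; the buffer size and URL are arbitrary examples, not the values chosen in the actual fix:

```python
import asyncio

import aiohttp

async def fetch(url: str) -> bytes:
    # Per the bugfix title above, a read buffer that is too small can
    # surface as ContentLengthError; aiohttp's default is 64 KiB. The
    # 10 MiB value here is an arbitrary example, not vLLM's setting.
    async with aiohttp.ClientSession(read_bufsize=10 * 1024 * 1024) as session:
        async with session.get(url) as resp:
            return await resp.read()

if __name__ == "__main__":
    asyncio.run(fetch("http://localhost:8000/v1/models"))
```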