vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
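For a sense of what the engine does, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling values below are placeholder choices for illustration, not anything prescribed by this listing.

    from vllm import LLM, SamplingParams

    # Model name is illustrative; any supported Hugging Face causal LM works here.
    llm = LLM(model="facebook/opt-125m")

    # Arbitrary example sampling settings.
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # generate() batches prompts through the engine and returns one
    # RequestOutput per prompt.
    outputs = llm.generate(["The capital of France is"], params)
    print(outputs[0].outputs[0].text)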
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Issues
- [Bug]: ImportError: libcudart.so.12: cannot open shared object file: No such file or directory
- [Feature] Support weight-shape-unaligned block-scale fp8 models
- [Perf] Slight improvement of ITL with multiple GPUs
- [Feature] Add command-line argument support to basic.py example
- [Frontend][Tracing] Add support for tracing aborted requests
- [Bug]: vLLM engine crash under burst load despite expected request queuing (72 concurrent API calls)
- Support ROCm aiter specific fusion of per_tensor RMSNorm+QuantFP8
- [Misc] Add Device Config to Prometheus Logger
- Adding LoRA support for qwen omni model
- Workspace Reuse for MOE-LoRA Intermediate Buffers