vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
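For context, a minimal sketch of vLLM's offline inference API (the model name, prompt, and sampling settings here are illustrative, not taken from this page):

```python
from vllm import LLM, SamplingParams

# Load a small model and generate with nucleus sampling.
llm = LLM(model="facebook/opt-125m")  # any HF-compatible model id
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```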
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: V1 engine prefix caching causes non-deterministic outputs during greedy decoding (T=0); see the sketch after this list
- [Bug]: Gemma4 tool-call-parser produces <pad> tokens under concurrent requests
- [Usage]: Run:ai S3 streamer crashes when loading a model from S3-compatible object storage
- [New Model]: CUDA 13 wheels for Blackwell GPUs, Linux-aarch64 and Linux-amd64, please
- Remove `raw_inputs` from transformers backend
- Add tokens-per-expert threshold for DeepGemm vs Triton MoE dispatch
- Fix: Propagate child process startup errors to the frontend
- [Core] Disable HMA for eagle/MTP with sliding window models
- [ROCm] Fix AssertionError in ActivationQuantFusionPass when torch.compile is used on ROCm
- [Usage]: RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
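For the greedy-decoding determinism report above, a hypothetical repro sketch (the model name and prompt are placeholders; `enable_prefix_caching` is assumed to be enabled, as the issue title suggests):

```python
from vllm import LLM, SamplingParams

# Enable prefix caching and decode greedily (temperature 0).
llm = LLM(model="facebook/opt-125m", enable_prefix_caching=True)
greedy = SamplingParams(temperature=0.0, max_tokens=64)

prompt = "Summarize the benefits of paged attention in one sentence."
first = llm.generate([prompt], greedy)[0].outputs[0].text
second = llm.generate([prompt], greedy)[0].outputs[0].text

# Greedy decoding of an identical prompt should yield identical text;
# a mismatch here would reproduce the non-determinism described in the issue.
print(first == second)
```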
- Docs
- Python not yet supported