vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
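For context, here is a minimal offline-inference sketch using vLLM's Python API. The model name and sampling settings are arbitrary examples chosen for illustration, not anything taken from this page:

```python
from vllm import LLM, SamplingParams

# Load a model for offline (non-server) inference.
# "facebook/opt-125m" is just a small example model.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```

The same engine can also be launched as an OpenAI-compatible server, which is the serving side referenced in the description above.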
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
  - [Bug] Missing Vocabulary Validation for MTP and Eagle Speculative Methods leads to potential OOB Access
  - [Installation/Runtime]: Linux ROCm 7 / RuntimeError: No HIP GPUs are available
  - [perf] silu early return for 0s
  - Fix early return for sliding window
  - feat(whisper): add decoder prefix and custom task tokens for transcription API
  - feat(metrics): add configurable Prometheus histogram buckets via CLI flags
  - [Feature]: Reasoning output for offline inference
  - [Bug]: GLM-5 FP8 on H200 CUDA OOM in sparse_attn_indexer at High Concurrency
  - [Bug]: EngineCore exits immediately after startup when vLLM CPU is launched from multiprocessing.Process on macOS
  - [Draft] Support model Qwen3_5/Qwen3_5_moe on NPU platform
- Docs
  - Python not yet supported