vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
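A minimal sketch of vLLM's offline-inference Python API, to show what the engine does; the model name and sampling values below are illustrative placeholders, not project defaults:

```python
# Minimal vLLM offline-inference sketch.
# The model name and sampling values are placeholders chosen for illustration.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# LLM() loads the model weights and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# generate() runs batched inference over all prompts and returns one
# RequestOutput per prompt.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```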
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: Crash on Transcription (the size of tensor a must match the size of tensor b), with reproduction steps
- [ROCm] Fix UnboundLocalError for prefix_scheduler_metadata in Triton attention
- [Bug] Embedding/pooling models crash on B200 (SM 10.0) — encoder attention hardcodes FA2 which lacks SM100 support
- FP8 MoE ep_scatter Triton illegal-address on H200 in GLM-5-FP8 prefill path
- [Bug]: Inconsistent tool-calling behavior between Chat Completions and Responses API when tool parsing params are not set
- [Bug]: Nemotron 3 Super has corrupted output on 0.19.0; no issues on 0.18.1
- [Bug]: CUDA illegal memory access when using extract_hidden_states with multiple generate() calls
- Fix FullAttentionSpec.max_memory_usage_bytes() to respect sliding_window
- Fix Cohere ASR failing to load when librosa is not installed
- fix: allow HMA with KV events when explicitly enabled
- Docs (Python not yet supported)