vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
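The badge is an ordinary Markdown image link placed in the project README. The snippet below is a sketch assuming CodeTriage's usual badge URL pattern (https://www.codetriage.com/<owner>/<repo>/badges/users.svg); confirm the exact URL on the badge page before adding it.

```markdown
<!-- Hypothetical badge snippet for the vllm README, assuming the standard CodeTriage badge URL pattern -->
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```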
Help out
- Issues
- FP8 MoE ep_scatter Triton illegal-address on H200 in GLM-5-FP8 prefill path
- [Bug]: Inconsistent tool-calling behavior between Chat Completions and Responses API when tool parsing params is not set
- [Bug]: Nemotron 3 super has corrupted output on 0.19.0, no issues on 0.18.1
- [Bug]: CUDA illegal memory access when using extract_hidden_states with multiple generate() calls
- Fix FullAttentionSpec.max_memory_usage_bytes() to respect sliding_window
- Fix Cohere ASR failing to load when librosa is not installed
- fix: allow HMA with KV events when explicitly enabled
- Fix engine_id collision + MoRIIO robustness for multi-node disagg DP
- [RFC]: Handle GDN prefill kernel JIT compilation failures - seeking community input
- Docs
- Python not yet supported