vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
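For a sense of what the engine does, here is a minimal offline-inference sketch using vLLM's public Python API (`LLM` and `SamplingParams`); the model id is illustrative and any supported Hugging Face model id works:

```python
# Minimal vLLM offline inference sketch.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model chosen for a quick smoke test
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```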
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history by documenting them.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
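The badge is a README embed; the snippet below is a sketch of what it typically looks like, with the URL pattern assumed from CodeTriage's standard badge format:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```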
Help out
- Issues
- [Bug]: Qwen3 + DeepGEMM + dummy-load: Cannot access data pointer of Tensor that doesn't have storage
- [Bug]: [V1 Engine] Segfault / NCCL init failure when running 4 GPUs across NUMA nodes (v0.17.0)
- [Bugfix] Fix speculative sampler warmup OOM when using EAGLE
- [Performance]: Does SamplingParams support setting enable_thinking? (see the sketch after this list)
- [Bugfix] Fix MLA KV cache blocks not zeroed on reuse, causing CUDA crashes under concurrent load
- [Bug]: Running inference with vLLM raises the following error: KeyError: residual
- [Bug] vLLM 0.17.1: `zai-org/GLM-OCR` has `mtp_graph < no_mtp_graph` despite high acceptance
- [Bug]: Mistral-Small-4-119B-2603 fails on 8x RTX 3090 (SM 8.6) with vLLM v0.17.1: no valid MLA attention backend
- [EPLB][Refactor] Replace boolean state flags with EPLBPhase enum
- vLLM-deployed Qwen3.5 with Reasoning Parser Shows Empty reasoningContent in Spring AI OpenAI Model
- Docs
- Python not yet supported
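On the SamplingParams / enable_thinking question above: a minimal sketch, assuming a recent vLLM where enable_thinking is a chat-template kwarg (as used by Qwen3-style templates) rather than a SamplingParams field, and where LLM.chat accepts chat_template_kwargs; the model id is illustrative:

```python
# Sketch: enable_thinking is passed through the chat template, not SamplingParams.
# Assumes a recent vLLM release where LLM.chat accepts chat_template_kwargs.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-8B")  # illustrative model id
params = SamplingParams(temperature=0.7, max_tokens=256)

messages = [{"role": "user", "content": "What is 2 + 2?"}]
outputs = llm.chat(
    messages,
    params,
    chat_template_kwargs={"enable_thinking": False},  # toggles the model's reasoning mode
)
print(outputs[0].outputs[0].text)
```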