vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
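vLLM can be used as a Python library for offline batch inference or run as an OpenAI-compatible server. Below is a minimal sketch of the offline path using vLLM's `LLM` and `SamplingParams` API; the model name, prompts, and sampling settings are illustrative placeholders, not taken from this page.

```python
# Minimal sketch of offline batch inference with vLLM (illustrative values).
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "A high-throughput LLM serving engine works by",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a model; vLLM manages KV-cache memory and batches requests for throughput.
llm = LLM(model="facebook/opt-125m")  # example model, swap in your own

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```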
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Resolve PD scheduler stall via lateral preemption
- [Bugfix] Fix FlashInfer AllReduce benchmark initialization and workspace size
- [Bug]: Enhance KV cache load error handling with detailed error codes / information
- [bug/perf] V4-Pro hangs ~60 min in post-shard-load weight materialization without --safetensors-load-strategy prefetch on EXT4
- [Feature][FP8] Opt-in `ParallelLMHead` quantization in legacy `Fp8Config` (parity with AWQ-Marlin / GPTQ-Marlin / cpu_wna16)
- [Doc]: Embed Agent Friendly Code Score Badge
- [Feature]: FP8 inference fails on Ampere GPUs (RTX A6000, SM 8.6) due to unsupported default fp8e4nv (E4M3FN) format
- [Bug] _align_hybrid_block_size produces TP-dependent block sizes, breaks HMA heterogeneous TP NIXL transfers
- [Bug]: KeyError: 'layers.0.mlp.experts.w13_bias' when running quantized model on vLLM
- [Bug]: vLLM only prints access logs, not performance statistics logs (v0.1.dev15830+g8d599d76a with deepseek-V4-flash)
- Docs
- Python not yet supported