vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
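For context, vLLM's main offline inference entry point is the `LLM` class. A minimal sketch is shown below; the model name and sampling settings are arbitrary example choices, not taken from this page:

```python
from vllm import LLM, SamplingParams

# Load a small model for illustration; any Hugging Face model ID works here.
llm = LLM(model="facebook/opt-125m")

# Arbitrary example sampling settings.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```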
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: p2pNccl 3P1D: D-node NCCL receives data and triggers a crash
- [Bug]: quantized medgemma-27b-text-it producing garbage outputs
- [Bug]: `KeyError: 'layers.47.mlp.experts.w2_weight'` loading an NVFP4 + BF16 mixed-precision `llm-compressor` model
- [Usage]: How to preserve the special tokens of qwen3-vl when using vLLM
- [Bug]: Decode ITL performance issue with DBO at batch size ~200
- [Usage]: DeepseekOCR on CPU missing implementation for fused_topk
- [Bug]: OpenTelemetry Error on V1
- [Feature]: Accelerate penalty calculation by sampler cache
- [Usage]: Is DP + PP a possible way to use vLLM?
- [Usage]: Model performance different from api
- Docs
- Python not yet supported