vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
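For orientation before triaging, here is a minimal sketch of vLLM's offline inference API; the model name and sampling parameters are placeholder assumptions, not a recommendation from this project page:

```python
from vllm import LLM, SamplingParams

# Load a model into the engine (model name is a placeholder example).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation; vLLM schedules requests for high throughput.
outputs = llm.generate(["Hello, my name is"], params)
for output in outputs:
    print(output.outputs[0].text)
```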
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: RCCL RDNA3 gfx1100 TP2 ROCm failure at startup
- [Feature]: Better base64 to torch tensor (Fixes #26781)
- [Bugfix] Fix streaming tool call type field defaulting to None instead of "function"
- [Bug]: CUDA error: an illegal memory access was encountered when deploy Qwen3.5-35B-A3B-FP8 on A100
- [torch.compile] refactor: change auto_functionalized return structure to use indexing instead of unpacking values
- [KVConnector] Skip `register_kv_caches` on profiling
- [Perf] Optimize mean pooling using chunks and index_add, 5.9% E2E throughput improvement
- AIFQA-399 BLK-001: [vLLM/XPU] Multi-GPU CCL/OFI transport hang — shm_broadcast blocks indefinitely (TP>1)
- [Spec Decode] Implement Mean Pool Connector to return mean pooled vector over prompt tokens in response
- Add nightly b200 test for spec decode eagle correctness
- Docs
- Python not yet supported