vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
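For orientation, a minimal offline-inference sketch using vLLM's LLM and SamplingParams API; the model name, prompt, and sampling values below are illustrative placeholders, not project defaults.

```python
# Minimal vLLM offline-inference sketch (illustrative values only).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Any Hugging Face-compatible model id works here; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and its generated completion(s).
    print(output.prompt, output.outputs[0].text)
```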
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really a pro, receive undocumented methods or classes instead and supercharge your commit history by documenting them.
Python not yet supported
18 Subscribers
Help out
- Issues
- [RFC]: [KV Connector]: Support KV push from Prefill to Decode node using Nixl Connector
- [CI] Add persistent cache mounts and fix test download paths
- sched/v1: use SRTF tiebreaker for preemption victim selection
- [ROCm][CI] Optimize ROCm Docker build: registry cache, DeepEP, and ci-bake script
- [BugFix] Handle pre-sharded TP MoE expert weights in Grok loader
- [Feature]: FA4 Attention Sinks
- [Bug]: delta_text and delta_token_ids get out of sync when stop sequences are used.
- [Feat][Executor] Introduce RayExecutorV2
- [Bug]: GPU failure during repeated model loading when using --enable-prefix-caching with KV transfer (LMCacheConnectorV1)
- [Usage]: 'LLMEngine' object has no attribute 'collective_rpc'
- Docs
- Python not yet supported