vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
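For context on the project itself, vLLM is driven from Python through its offline-inference API built around the `LLM` and `SamplingParams` classes. The snippet below is a minimal sketch of that typical entry point; the model name and prompts are only illustrative placeholders.

```python
from vllm import LLM, SamplingParams

# Load a model into the engine; the model name here is only a placeholder.
llm = LLM(model="facebook/opt-125m")

# Configure how completions are sampled.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call.
prompts = ["The capital of France is", "vLLM is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```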
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
14 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Kernel] adding native nccl4py support
- [BugFix] Fix minimax_m2 tool call parser for stream_interval > 1
- Add MiniMax model support to vLLM
- Drafter Supports Multiple KVCache Groups
- [KVConnector][LMCache] Enable Support for cross-layer Layout
- [Metrics] Add prefix cache state metrics for KV cache monitoring
- [Feature]: Support loading vision layers in VLM LoRA adapters
- Add key latencies to v1 RequestMetrics instance so it can be surfaced…
- [Performance]: Quantized Model Inference
- [Installation]: Hard to find right wheel files to build the release version
- Docs
- Python not yet supported