vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
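For orientation, here is a minimal sketch of vllm's offline inference API; the model name and sampling settings are illustrative, and the OpenAI-compatible server is a separate entry point not shown here.

```python
# Minimal offline-inference sketch with vllm; model choice is illustrative.
from vllm import LLM, SamplingParams

# Load a model; vllm manages KV-cache memory with PagedAttention for throughput.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters for generation (values are arbitrary examples).
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, params)

for out in outputs:
    # Each RequestOutput carries the prompt and one or more completions.
    print(out.prompt, out.outputs[0].text)
```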
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
24 Subscribers
Help out
- Issues
- [Bugfix] Support parallel tool calls in Responses API streaming
- Fix CLI help keyword state leaking between parse calls
- [Kernel] Add tuning script and config infrastructure for Mamba select…
- [Bugfix][Kernel] Fix int32 overflow in LoRA do_expand_kernel and do_shrink_kernel
- [Bugfix] Fix socket utilities for IPv6 dual-stack support
- Test mi300
- [Feature]: Support per-layer sliding window attention for Qwen3
- [Bug] flash_attn _get_sliding_window_configs asserts FlashAttentionImpl over all attention layers, breaks any non-FA backend
- [Tracking] NIXL >= 1.0.0 Support for NIXL KV Connector
- [Doc]:
- Docs
- Python not yet supported