vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
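For context, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings are illustrative, not part of this page.

```python
from vllm import LLM, SamplingParams

# Any Hugging Face causal LM supported by vLLM works here; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    # Each result carries the prompt plus one or more generated completions.
    print(output.outputs[0].text)
```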
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- WIP: b12x updates
- Fix static actorder handling for compressed-tensors WNA16 MoE
- Workaround to make examples/features/lora/multilora_offline.py run through
- Fix sharded_state load for FP8 models with aliased scale keys
- [security] Change VLLM_MEDIA_URL_ALLOW_REDIRECTS default to False
- [Misc] Replace mamba_type string literals with MambaAttentionBackendEnum
- [Core][CI/Build] Make outlines disk cache optional
- [Bugfix][KV Transfer] Reject NixlConnector + expandable_segments:True
- WIP: Add opt-in BF16 linear compile path
- [Kernel] Add H20-3e FP8 block-scaled GEMM tuned configs for DeepSeek-V4-Flash expert shapes
- Docs: Python not yet supported