vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
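
For context, a minimal offline-inference sketch using vLLM's Python API (`pip install vllm`; the model name and prompts are illustrative choices, not tied to this listing):

```python
from vllm import LLM, SamplingParams

# Load a small model for demonstration; any Hugging Face model ID works here.
llm = LLM(model="facebook/opt-125m")

# Standard sampling controls exposed by SamplingParams.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() takes a batch of prompts; vLLM schedules them together
# for high-throughput, memory-efficient inference.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```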
Issues
- [Usage]: SymmMemCommunicator: Device capability 10.3 not supported
- Enable GDC for regular Triton MoE by calling `mm_k` from Lora
- [Feature]: CompressedTensors: NVFP4A16 not supported for MoE models
- [RFC]: Per-instance EPLB metrics
- [Bugfix] Handle missing config.json in speculator probe for GGUF models
- [P/D] p2p_nccl: implement async KV loading for decode stage
- [Bug]: Prefix cache corruption with LoRA adapters that share a name but have different IDs
- [Feature]: Could logs be output in a given format?
- Add positional embedding and kv_cache fusion for llama and gpt-oss
- [Doc] Add warning regarding GPU profiling limitations on WSL2