vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [vLLM IR] Propagate IR op names to torch profiler annotations
- [BUG]: Detect presence of p2p enable gpus/driver, not just nvlink, to enable direct connection
- [RFC][vLLM IR] `rms_norm` weight passing inconsistency
- DSA module construction corrupts CUDA RNG state (Offset increment outside graph capture)
- [Bugfix] Fix FusedMoE weight_loader for MXFP4 and add strict dtype guard
- [LoRA] Fix PEFTHelper.vllm_max_position_embeddings default from False to None
- [Feature] Phase-aware KV cache quantization for reasoning models (58% distortion reduction measured)
- [Attention] Add Triton fallback for encoder attention on SM100+
- [vLLM IR] pre-commit script for import validation
- [vLLM IR] Minor improvements
- Docs
- Python not yet supported for docs triage