vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
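For context, a minimal offline-inference sketch using vLLM's Python API (the model name and sampling settings here are illustrative, not prescribed by this page; check the project docs for the current interface):

```python
from vllm import LLM, SamplingParams

# Load a model; vLLM batches requests and pages the KV cache internally.
llm = LLM(model="facebook/opt-125m")  # illustrative model choice

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is", "vLLM is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput carries the prompt and its generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```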
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Performance]: non-optimal performance of `linear` for medium batches
- fix(benchmarks): correct peak output token throughput calculation for speculative decoding
- [Bugfix] Fix Fabric/RDMA attribute queries poisoning global error_code in cumem allocator
- fix(phimoe): use config router_jitter_noise instead of hardcoded jitter_eps
- [Core] Proactively free KV cache blocks when aborting finished requests
- [Bug]: OLMoE missing clip_qkv implementation in vLLM
- [RFC]: Why doesn't block_hash always map to a single KVCacheBlock?
- [Bug]: vLLM hangs indefinitely with low `num_gpu_blocks_override`
- fix(benchmarks): align ShareGPT token count with legacy script
- [Bugfix] Fix SM121 (DGX Spark) exclusion from Marlin/CUTLASS FP8 paths
- Docs
- Python not yet supported