vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
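For context on what the project does, below is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings are illustrative only and are not prescribed by this page.

```python
from vllm import LLM, SamplingParams

# Load a small model purely for demonstration purposes.
llm = LLM(model="facebook/opt-125m")

# Sampling settings are example values, not recommendations.
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for output in outputs:
    print(output.outputs[0].text)  # generated completion text
```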
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
25 Subscribers
Add a CodeTriage badge to vllm
Help out
Issues
- [cpu] update branch for llguidance for s390x
- [Feat][KVConnector] Refresh CPU LRU cache for eager offloading
- [Tests][Transformers v5] Skip InternVL2 HF-runner tests incompatible with meta device init
- [Bug]: When chunked prefill is enabled and max-num-batched-tokens > max-model-length, the server fails to start up
- [Bugfix] Fix turboquant FP8 cast failure for BF16 models on Ampere GPUs
- [Feature]: Priority scheduling supports preemption of requests in the running queue by requests in the waiting queue
- [Bug]: cudaErrorIllegalAddress during PIECEWISE CUDA graph replay with MoE LoRA: stale buffer addresses in `moe_lora_align_block_size`
- [Bug]: LMCache MP fallback adapter rejects cache_salt/cache_salts kwargs after #39837
- [Bug][Tracking Issue]: NaNs in CUDA Graph padding regions corrupt activations in some per-token kernels
- [Feature] Support passing configuration to custom attention backends
Docs
Python not yet supported