tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Help out
- Issues
  - [Bug] Error occurred during tvm compilation
  - Bump form-data from 3.0.1 to 3.0.4 in /web
  - KVCache Sequence Padding
  - [bug/feature][relax.frontend.torch] from_exported_program rejects randn.default (blocks repro that stresses advanced-indexing write + tuple output)
  - [bug][relax.frontend.torch] FFI segfault in tvm::relax::Tuple::Tuple when importing torch.export graph with 4D advanced-indexing write (aten.index_put_) and tuple outputs
  - [feature][relax.frontend.torch] Missing coverage for STFT+RNN pipeline: 'rnn_tanh.input', 'real.default', 'imag.default', 'unfold.default', 'fft_fft.default' in from_exported_program
  - [Bug] `from __future__ import annotations` breaks type annotation containing local variable
  - [Bug] tvm.tir.schedule.schedule.ScheduleError
  - [Bug] [relax][torch] from_exported_program segfault with exported MHA using eq(0)/expand mask + in-place masked_fill_ (get_attr lifting warning from PyTorch)
  - [Bug] Segfault when applying Parallel during TIR schedule rewriting
- Docs
  - Python not yet supported