tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPU, GPU and specialized accelerators
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Help out
- Issues
- [Bug] InternalError: Squeeze dimension check too strict compared to PyTorch behavior
- [Feature Request] Support for sparse matrix multiplication and random number generation in PyTorch frontend
- [CI] Update cpplint script to support revision-based linting
- [Bug] [RISC-V RVV] Performance Issue: log operator slower on RVV
- [Bug] Customize Optimization Tutorial Error
- [Release] v0.23.0 release schedule
- [Bug] ONNX Round tie-breaking mismatch on 0.5: TVM lowers to llvm.round (ties-away-from-zero) so Round(sigmoid(0))=1, while ONNX spec requires Round(0.5)=0 (ties-to-even)
- [Bug] [FRONTEND][ONNX] Error converting operator ConvTranspose: InternalError: In Op(relax.add), the first input shape at dim 1 is T.int64(16) and the second input shape at dim 1 is T.int64(32), which are not broadcastable.
- [Bug] Segfault in `tvm.compile` (Relax→TIR, CUDA target) inside `tir::transform::InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` when compiling `torch.export` model returning `(tril, triu)` tuple
- [Bug] `relax.frontend.torch.from_exported_program` aborts on sparse CSR buffer (`layout_impl is only implemented for TensorImpl subclasses`)
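
For context on the Squeeze issue above: PyTorch's `Tensor.squeeze(dim)` silently leaves a dimension alone when its size is not 1, whereas the report says TVM raises an InternalError in the same case. A minimal sketch of the PyTorch side of that comparison (the shapes are illustrative, not taken from the report):

```python
import torch

x = torch.randn(2, 3)
# squeeze on a dim whose size is not 1 is a no-op in PyTorch, not an error
print(x.squeeze(0).shape)   # torch.Size([2, 3])

y = torch.randn(1, 3)
# only size-1 dims are actually removed
print(y.squeeze(0).shape)   # torch.Size([3])
```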
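
The Round tie-breaking report above comes down to two rounding modes: the ONNX spec defines Round as round-half-to-even, while llvm.round breaks ties away from zero. A minimal sketch of the two behaviors in plain Python/NumPy (the helper `round_half_away` is illustrative, not TVM code):

```python
import math
import numpy as np

# ONNX Round: round-half-to-even ("banker's rounding"), matching
# Python's built-in round() and numpy.round()
print(round(0.5), round(1.5), round(2.5))   # 0 2 2
print(np.round([0.5, 1.5, 2.5]))            # [0. 2. 2.]

# llvm.round / C round(): ties away from zero
def round_half_away(x):
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

print([round_half_away(v) for v in (0.5, 1.5, 2.5)])  # [1, 2, 3]
# hence Round(sigmoid(0)) == Round(0.5) gives 1 under ties-away-from-zero
# but 0 under the ONNX-specified ties-to-even
```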
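
The ConvTranspose report above fails a broadcast check: two shapes broadcast only where each dimension pair is equal or one of them is 1, so 16 vs 32 at dim 1 cannot be combined by relax.add. A minimal NumPy illustration of the rule (the 4-D shapes are assumed for illustration, not taken from the reported model):

```python
import numpy as np

a = np.zeros((1, 16, 8, 8))
b = np.zeros((1, 32, 8, 8))
try:
    a + b                       # 16 vs 32 at dim 1: neither equal nor 1
except ValueError as err:
    print(err)                  # operands could not be broadcast together ...

bias = np.zeros((1, 32, 1, 1))  # size-1 dims do broadcast against 8 and 8
print((b + bias).shape)         # (1, 32, 8, 8)
```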
- Docs
- Python not yet supported