tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
Python not yet supported
3 Subscribers
Help out
- Issues
- [Bug][FRONTEND][ONNX] Error converting operator Expand: TVMError: broadcast_to expects the input tensor shape is broadcastable to the target shape.
- [Bug] [FRONTEND][ONNX] Error converting operator Slice: TVMError: Check failed: (IsBaseOf(relax::TensorStructInfo(DataType::Void(), kUnknownNDim), GetStructInfo(data))) is false
- [Bug] [CUDA] not compilable with CUDA 11.4 due to missing symbols
- [Bug] [ONNX][FRONTEND] - Loop and NonMaxSuppression operators missing
- [Bug] TVM cannot build the model correctly: InternalError: Check failed: value <= support::kMaxFloat16
- [Bug] Constant folding cannot process onnx model correctly: InternalError: Check failed: pb->value != 0 (0 vs. 0) : Divide by zero
- [ansor] Is it reasonable to use the axis multiple times?
- [Bug] Inference - Phi-4 mini instruct
- [WebGPU] Support warp-level shuffle primitives with subgroup
- [Bug] InternalError: Check failed: (!expr->struct_info_.defined()) is false: To ensure idempotency, the expression passed to UpdateStructInfo must not have any prior StructInfo. However, expression # from tvm.script import tir as T @T.prim_func(private=True)
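The first issue above fails because a shape is not "broadcastable to the target shape". As a rough illustration of the NumPy-style broadcastability rule that error refers to (a simplified model for triage purposes, not TVM's actual implementation):

```python
def is_broadcastable(shape, target):
    """Return True if `shape` can be broadcast to `target`.

    Rule: align the shapes from the right; each dimension of `shape`
    must either equal the target dimension or be 1 (so it can stretch).
    Missing leading dimensions are treated as implicit 1s.
    """
    if len(shape) > len(target):
        return False
    for s, t in zip(reversed(shape), reversed(target)):
        if s != t and s != 1:
            return False
    return True

assert is_broadcastable((3, 1), (3, 4))      # size-1 dim stretches to 4
assert is_broadcastable((4,), (2, 3, 4))     # leading dims filled with implicit 1s
assert not is_broadcastable((3, 1), (4, 4))  # 3 vs 4: mismatch, neither is 1
```

An Expand node whose input/target shapes fail this check (like the last case) is the kind of model that triggers the `broadcast_to` TVMError reported above.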
- Docs
- Python not yet supported