[Linalg] Bring back onnx AveragePool padding asymmetric support #3455
base: main
Conversation
Based on Zach's comment below, we are converting this to a draft, leaving this issue unsolved, and prioritizing the real-model-related issue requests before Jun 30.
FYI, the inception_v1 model from https://github.com/onnx/models/tree/main/validated/vision/classification/inception_and_googlenet/inception_v1 (specifically this .onnx file) has AveragePool ops that could possibly benefit from this. (It might be some other detail of AveragePool, though - I'm not 100% sure.)
From a new test suite coming in iree-org/iree-test-suites#23, logs as of https://github.com/llvm/torch-mlir/tree/d6cf718f103a50e57d39ffb85a878bc8ba1ca16a:
INFO onnx_models.conftest:conftest.py:160 Launching compile command:
cd D:\dev\projects\iree-test-suites\onnx_models && iree-compile artifacts\vision\classification\inception-v1-12_version17.mlir --iree-hal-target-backends=llvm-cpu -o artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb
ERROR onnx_models.conftest:conftest.py:166 Compilation of 'D:\dev\projects\iree-test-suites\onnx_models\artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb' failed
ERROR onnx_models.conftest:conftest.py:167 iree-compile stdout:
ERROR onnx_models.conftest:conftest.py:168
ERROR onnx_models.conftest:conftest.py:169 iree-compile stderr:
ERROR onnx_models.conftest:conftest.py:170 artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
%257 = torch.operator "onnx.AveragePool"(%256) {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32>
^
artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: note: see current operation: %1537 = "torch.operator"(%1536) <{name = "onnx.AveragePool"}> {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32>
Logs with this PR patched:
INFO onnx_models.conftest:conftest.py:160 Launching compile command:
cd D:\dev\projects\iree-test-suites\onnx_models && iree-compile artifacts\vision\classification\inception-v1-12_version17.mlir --iree-hal-target-backends=llvm-cpu -o artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb
ERROR onnx_models.conftest:conftest.py:166 Compilation of 'D:\dev\projects\iree-test-suites\onnx_models\artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb' failed
ERROR onnx_models.conftest:conftest.py:167 iree-compile stdout:
ERROR onnx_models.conftest:conftest.py:168
ERROR onnx_models.conftest:conftest.py:169 iree-compile stderr:
ERROR onnx_models.conftest:conftest.py:170 artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: error: 'tensor.cast' op operand type 'tensor<1x1024x0x0xf32>' and result type 'tensor<1x1024x1x1xf32>' are cast incompatible
%257 = torch.operator "onnx.AveragePool"(%256) {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32>
^
artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: note: see current operation: %6488 = "tensor.cast"(%6487) : (tensor<1x1024x0x0xf32>) -> tensor<1x1024x1x1xf32>
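To see why the result type in the IR is [1,1024,1,1] and the lowered tensor<1x1024x0x0xf32> is the odd one out, here is a minimal sketch (not part of this PR, helper name hypothetical) of the ONNX AveragePool output-shape arithmetic for the failing op, assuming auto_pad is NOTSET and no dilation:

# Minimal sketch of ONNX AveragePool spatial output-size arithmetic.
# pads for a 2D pool are ordered [h_begin, w_begin, h_end, w_end] per the ONNX spec,
# so pads = [0, 0, 1, 1] means pad_begin = 0 and pad_end = 1 on each spatial dim.
def avgpool_out_dim(in_dim, kernel, stride, pad_begin, pad_end, ceil_mode=False):
    numer = in_dim + pad_begin + pad_end - kernel
    if ceil_mode:
        return -(-numer // stride) + 1  # ceiling division
    return numer // stride + 1          # floor division

# Failing op: input [1,1024,6,6], kernel [7,7], strides [1,1], pads [0,0,1,1]
h = avgpool_out_dim(6, 7, 1, 0, 1)  # (6 + 0 + 1 - 7) // 1 + 1 = 1
w = avgpool_out_dim(6, 7, 1, 0, 1)  # likewise 1
print(h, w)  # 1 1

So the expected output shape is [1,1024,1,1], matching the op's declared result type; the tensor<1x1024x0x0xf32> value produced by the patched lowering is what makes the final tensor.cast incompatible.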
Follow up of #3235 by @zjgarvey
ae6f5e8