
[Linalg] Bring back onnx AveragePool padding asymmetric support #3455

Draft: wants to merge 1 commit into main
Conversation

AmosLewis (Collaborator) commented Jun 13, 2024

Follow up of #3235 by @zjgarvey

ae6f5e8

@AmosLewis AmosLewis requested a review from zjgarvey June 13, 2024 03:46
@AmosLewis AmosLewis marked this pull request as draft June 14, 2024 01:18
@AmosLewis (Collaborator, Author)

Based on the following comment from Zach, we are converting this PR to a draft, leaving this issue unsolved for now, and prioritizing the real-model-related issue requests before Jun 30.

I'm double checking the math and it's taking a bit longer than expected. I think the suggestion I made of where to change the code is not completely correct, and this might take a bit more work than expected to remedy.

The way it is currently set up, with ih0 = oh*dh - lowpadH:

  1. left endpoint without pad = max(oh*dh - lowpadH, 0) (correct, since this will shift by the number of included pad elements on the left side of the kernel window used in the oh computation).

  2. As far as I'm aware, ih1 is always equal to ih0 + kH, since this would never be greater than Hin + highpadH. (Checked using both the onnx and pytorch definitions for Hout, and the fact that the largest oh is Hout - 1.)

  3. Therefore, right endpoint without pad = min(oh*dh - lowpadH + kH, Hin), which is equivalent to min(oh*dh + kH, Hin + lowpadH) - lowpadH, but should actually be min(oh*dh + kH, Hin + highpadH) - lowpadH in order to correctly exclude the high padding values.

An alternative which would work better:

Use ih0 = oh*dh and ih1 = oh*dh + kH so that ih0,ih1 represent the left and right positions, in the padded input tensor, of the kernel window used for computing at oh.

Then:

  1. Compute left endpoint without pad = max(ih0, lowPad)
  2. Compute right endpoint without pad = min(ih1, Hin + highPad)

Note: these endpoints literally correspond to the kernel window as it sits inside the padded input tensor.
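To make the padded-coordinate formulation above concrete, here is a minimal 1-D sketch (not the torch-mlir lowering itself) of average pooling with asymmetric padding and countIncludePad == false. The window edges ih0, ih1 are in padded coordinates as described; for the divisor, the sketch clamps the window to the span of real data inside the padded tensor, which runs from lowPad to lowPad + Hin. All names are illustrative.

```python
def avg_pool_1d(x, kH, dh, low_pad, high_pad):
    """1-D average pool, asymmetric zero padding, count_include_pad=False."""
    Hin = len(x)
    padded = [0.0] * low_pad + list(x) + [0.0] * high_pad
    Hout = (Hin + low_pad + high_pad - kH) // dh + 1
    out = []
    for oh in range(Hout):
        ih0 = oh * dh        # left edge of kernel window, padded coordinates
        ih1 = oh * dh + kH   # right edge (exclusive), padded coordinates
        # Non-pad portion of the window: real data occupies
        # [low_pad, low_pad + Hin) inside the padded tensor.
        lo = max(ih0, low_pad)
        hi = min(ih1, low_pad + Hin)
        count = hi - lo      # number of non-pad elements in this window
        # Pad elements are zero, so they contribute nothing to the sum.
        out.append(sum(padded[ih0:ih1]) / count)
    return out
```

For example, pooling x = [1, 2, 3, 4, 5] with kH = 3, stride 1, lowPad = 2, highPad = 1 produces windows whose divisors are 1, 2, 3, 3, 3, 2 rather than a flat 3, which is exactly the asymmetric-divisor behavior being discussed.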

I wouldn't concern yourself with the countIncludePad==true case in this linalg generic since this is handled properly in the original code.


I'm not sure if this is very high priority right now or not. I have an additional concern about the fact that the result shape of the onnx op in the asymmetric case could possibly be inconsistent with the shape inference result of the pytorch op. Before investing too much additional time, it might be a good idea to add a lit test for an exaggeratedly asymmetric op and see.

A project Member commented:

FYI, the inception_v1 model from https://github.com/onnx/models/tree/main/validated/vision/classification/inception_and_googlenet/inception_v1 (specifically this .onnx file) has AveragePool ops that could possibly benefit from this. (It might be some other detail of AveragePool, though; I'm not 100% sure.)

From a new test suite coming in iree-org/iree-test-suites#23, logs as of https://github.com/llvm/torch-mlir/tree/d6cf718f103a50e57d39ffb85a878bc8ba1ca16a:

INFO     onnx_models.conftest:conftest.py:160 Launching compile command:
  cd D:\dev\projects\iree-test-suites\onnx_models && iree-compile artifacts\vision\classification\inception-v1-12_version17.mlir --iree-hal-target-backends=llvm-cpu -o artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb
ERROR    onnx_models.conftest:conftest.py:166 Compilation of 'D:\dev\projects\iree-test-suites\onnx_models\artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb' failed
ERROR    onnx_models.conftest:conftest.py:167 iree-compile stdout:
ERROR    onnx_models.conftest:conftest.py:168 
ERROR    onnx_models.conftest:conftest.py:169 iree-compile stderr:
ERROR    onnx_models.conftest:conftest.py:170 artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
    %257 = torch.operator "onnx.AveragePool"(%256) {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32> 
           ^
artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: note: see current operation: %1537 = "torch.operator"(%1536) <{name = "onnx.AveragePool"}> {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32>

Logs with this PR patched:

INFO     onnx_models.conftest:conftest.py:160 Launching compile command:
  cd D:\dev\projects\iree-test-suites\onnx_models && iree-compile artifacts\vision\classification\inception-v1-12_version17.mlir --iree-hal-target-backends=llvm-cpu -o artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb
ERROR    onnx_models.conftest:conftest.py:166 Compilation of 'D:\dev\projects\iree-test-suites\onnx_models\artifacts\vision\classification\inception-v1-12_version17_cpu.vmfb' failed
ERROR    onnx_models.conftest:conftest.py:167 iree-compile stdout:
ERROR    onnx_models.conftest:conftest.py:168
ERROR    onnx_models.conftest:conftest.py:169 iree-compile stderr:
ERROR    onnx_models.conftest:conftest.py:170 artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: error: 'tensor.cast' op operand type 'tensor<1x1024x0x0xf32>' and result type 'tensor<1x1024x1x1xf32>' are cast incompatible
    %257 = torch.operator "onnx.AveragePool"(%256) {torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1024,6,6],f32>) -> !torch.vtensor<[1,1024,1,1],f32>
           ^
artifacts\vision\classification\inception-v1-12_version17.mlir:261:12: note: see current operation: %6488 = "tensor.cast"(%6487) : (tensor<1x1024x0x0xf32>) -> tensor<1x1024x1x1xf32>
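The shapes in these two failures can be sanity-checked against the output-size formula from the ONNX AveragePool spec (floor mode, dilation 1). For the failing op, Hin = 6, kernel 7, stride 1, and pads = [0, 0, 1, 1] (begin pads for H and W, then end pads for H and W). A small sketch, assuming the standard formula; the "end padding dropped" case is offered as a plausible reading of the tensor<1x1024x0x0xf32> in the second log, not a confirmed diagnosis:

```python
def onnx_pool_out_size(in_size, kernel, stride, pad_begin, pad_end):
    # ONNX AveragePool output spatial size, floor mode, dilation 1.
    return (in_size + pad_begin + pad_end - kernel) // stride + 1

# With both pads applied, the size matches the op's declared 1x1 output:
h_out = onnx_pool_out_size(6, 7, 1, 0, 1)           # 6 + 1 - 7 = 0, so size 1

# If the end padding were ignored, the spatial size collapses to zero,
# which would match the tensor<1x1024x0x0xf32> operand of the failing cast:
h_out_no_end_pad = onnx_pool_out_size(6, 7, 1, 0, 0)  # 6 - 7 = -1, so size 0
```

This is consistent with the second log: the lowering produced a 0x0 spatial result that cannot be cast to the 1x1 shape the ONNX op declares.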
