
Conversation

@StrycekSimon (Contributor) commented on Jul 28, 2025

Summary

Replaces the shared quantization parameter specs with fixed ones in the HardTanh operator.
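For readers outside the NXP backend, here is a minimal sketch of the distinction, using the generic spec classes from `torch.ao.quantization.quantizer`. The actual `NeutronQuantizer` code is not shown in this conversation, and the int8 parameters below are illustrative assumptions, not values taken from the PR:

```python
# A minimal sketch of the idea, not the NXP quantizer's actual code.
import torch
from torch.ao.quantization.quantizer import (
    FixedQParamsQuantizationSpec,
    SharedQuantizationSpec,
)

# Before: the HardTanh output shared the quantization parameters of its
# input edge, so its qparams were whatever the producer observed.
def shared_spec(hardtanh_node):
    input_edge = (hardtanh_node.args[0], hardtanh_node)
    return SharedQuantizationSpec(input_edge)

# After: the output range of hardtanh(min_val, max_val) is known
# statically, so fixed parameters can be used instead. For the <0, 1>
# range quantized to int8, scale = 1/255 and zero_point = -128 map
# [0, 1] exactly onto [-128, 127] (illustrative values).
def fixed_spec():
    return FixedQParamsQuantizationSpec(
        dtype=torch.int8,
        scale=1.0 / 255.0,
        zero_point=-128,
        quant_min=-128,
        quant_max=127,
        qscheme=torch.per_tensor_affine,
    )
```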

Test plan

Existing unit test files were updated to reflect this change.

cc @skywall


pytorch-bot bot commented Jul 28, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12893

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit bb7d8ce with merge base 4197fc1:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the `CLA Signed` label on Jul 28, 2025
@StrycekSimon (Contributor, Author)

I created issues for the two TODOs introduced in this PR:

@StrycekSimon force-pushed the upstream/main-nxp/EIEX-407-upstream-non-shared-quantization-for-hardtanh branch from 614da96 to effe5e9 on July 29, 2025 07:25
@StrycekSimon marked this pull request as ready for review on July 29, 2025 07:29
@StrycekSimon (Contributor, Author)

@pytorchbot label "release notes: nxp"

@pytorch-bot bot added the `release notes: nxp` label on Jul 29, 2025
@robert-kalmar force-pushed the upstream/main-nxp/EIEX-407-upstream-non-shared-quantization-for-hardtanh branch from c35e001 to e4445ff on August 1, 2025 10:07
@robert-kalmar (Collaborator)

This is correct for the case when the HardTanh is isolated, so I approve merging it (given the tests pass).

But we must resolve the Conv/Matmul + Activation quantization better: #13063.
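For context, a sketch of the pattern that issue is about, in plain PyTorch and illustrative only (not code from this PR): once the HardTanh output uses fixed parameters while the Conv output keeps observer-derived ones, a backend that fuses the pair must reconcile the two specs, e.g. by propagating the fixed spec onto the Conv output or inserting a requantize step.

```python
import torch

class ConvHardTanh(torch.nn.Module):
    """Conv -> HardTanh, the pattern issue #13063 concerns."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.act = torch.nn.Hardtanh(min_val=0.0, max_val=1.0)

    def forward(self, x):
        # Conv output qparams: data-dependent (from an observer).
        # HardTanh output qparams: now fixed (from the clamp range).
        # A fused conv+hardtanh kernel needs these two to agree.
        return self.act(self.conv(x))
```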

```python
@pytest.mark.parametrize(
    "activation_range", list(HardTanhConverter.supported_modes_map.keys())
)
@pytest.mark.parametrize("inplace", [True, False])
def test_custom_hardtanh_quant(
    mocker, input_shape: tuple[int], activation_range: tuple[int, int], inplace: bool
):
    # TODO: This test suffers from non-ideal testing random quantization, because we always use range <0,1>.
```
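For reference, the fixed parameters for the <0, 1> range mentioned in the TODO follow from the standard affine-quantization formulas. This is a sketch assuming a signed int8 target, not code from the PR:

```python
# Derive affine quantization parameters for a statically known output
# range [rmin, rmax]; here the <0, 1> range of hardtanh(0, 1), mapped
# onto int8 [-128, 127].
rmin, rmax = 0.0, 1.0
qmin, qmax = -128, 127

scale = (rmax - rmin) / (qmax - qmin)    # 1/255 ~= 0.003922
zero_point = round(qmin - rmin / scale)  # -128

# The range endpoints land exactly on the integer-grid endpoints.
assert round(rmin / scale) + zero_point == qmin  # 0 + (-128) = -128
assert round(rmax / scale) + zero_point == qmax  # 255 + (-128) = 127
```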
Collaborator (inline review comment):
NIT: #13063

@robert-kalmar force-pushed the upstream/main-nxp/EIEX-407-upstream-non-shared-quantization-for-hardtanh branch from e4445ff to bb7d8ce on August 4, 2025 11:41
@robert-kalmar merged commit ee936b0 into pytorch:main on Aug 6, 2025
101 of 103 checks passed
@robert-kalmar deleted the upstream/main-nxp/EIEX-407-upstream-non-shared-quantization-for-hardtanh branch on August 6, 2025 11:25
agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request Aug 26, 2025
…pytorch#12893)

### Summary
Replaces shared quantization parameter specs with standard `QuantizationSpec` in the HardTanh operator.

### Test plan
Unit test files were updated to reflect this change.

cc @skywall

Co-authored-by: Lukas Sztefek <[email protected]>