ggml-cpu: drop support for nnpa intrinsics #15821
Merged
Closes #15721
Supersedes #15739
This Pull Request drops support for the NNPA Vector Intrinsics, as the required maintenance cost does not justify the performance improvements for FP32 ↔ FP16 conversion.
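With the intrinsics removed, FP32 ↔ FP16 conversion on s390x goes through ggml's portable scalar path. The sketch below shows the usual bit-manipulation approach for that conversion (the helper names are illustrative, not the exact ggml symbols):

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

static inline float fp32_from_bits(uint32_t w) {
    float f;
    memcpy(&f, &w, sizeof(f));
    return f;
}

static inline uint32_t fp32_to_bits(float f) {
    uint32_t w;
    memcpy(&w, &f, sizeof(w));
    return w;
}

// FP16 -> FP32: rescale the exponent, handling denormals via a magic bias.
static inline float fp16_to_fp32(uint16_t h) {
    const uint32_t w     = (uint32_t) h << 16;
    const uint32_t sign  = w & UINT32_C(0x80000000);
    const uint32_t two_w = w + w;

    const uint32_t exp_offset = UINT32_C(0xE0) << 23;
    const float    exp_scale  = 0x1.0p-112f;
    const float normalized_value = fp32_from_bits((two_w >> 4) + exp_offset) * exp_scale;

    const uint32_t magic_mask = UINT32_C(126) << 23;
    const float    magic_bias = 0.5f;
    const float denormalized_value = fp32_from_bits((two_w >> 17) | magic_mask) - magic_bias;

    const uint32_t denormalized_cutoff = UINT32_C(1) << 27;
    const uint32_t result = sign | (two_w < denormalized_cutoff
        ? fp32_to_bits(denormalized_value)
        : fp32_to_bits(normalized_value));
    return fp32_from_bits(result);
}

// FP32 -> FP16: round-to-nearest-even via FP rescaling; NaN maps to 0x7E00.
static inline uint16_t fp32_to_fp16(float f) {
    const float scale_to_inf  = 0x1.0p+112f;
    const float scale_to_zero = 0x1.0p-110f;
    float base = (fabsf(f) * scale_to_inf) * scale_to_zero;

    const uint32_t w      = fp32_to_bits(f);
    const uint32_t shl1_w = w + w;
    const uint32_t sign   = w & UINT32_C(0x80000000);
    uint32_t bias = shl1_w & UINT32_C(0xFF000000);
    if (bias < UINT32_C(0x71000000)) {
        bias = UINT32_C(0x71000000);
    }

    base = fp32_from_bits((bias >> 1) + UINT32_C(0x07800000)) + base;
    const uint32_t bits          = fp32_to_bits(base);
    const uint32_t exp_bits      = (bits >> 13) & UINT32_C(0x00007C00);
    const uint32_t mantissa_bits = bits & UINT32_C(0x00000FFF);
    const uint32_t nonsign       = exp_bits + mantissa_bits;
    return (uint16_t) ((sign >> 16) | (shl1_w > UINT32_C(0xFF000000) ? UINT16_C(0x7E00) : nonsign));
}
```

Bringing the NNPA acceleration back would mean reintroducing vectorized equivalents of these two routines behind the NNPA compile-time guards this PR removes.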
Tested with both `-fa off` and `-fa on`, and ensured that inference is correct in both modes.

For future reference to IBMers who want to bring this acceleration back: Flash Attention (`-fa on`, turned on by default) somehow causes tensor data to be invalid, i.e., `-inf` and `nan`. Make sure to check that the data is clean before determining whether the conversion implementation is correct. See: ggml-cpu: fixes instability in NNPA Vector Intrinsics #15739 (comment) and `ggml_compute_forward_dup_f32`.
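Since upstream `-inf`/`nan` values can masquerade as a conversion bug, a small debug helper that scans a tensor's F32 buffer for non-finite values can help separate the two. The sketch below is hypothetical (not an existing ggml helper):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical debug helper: report NaN/inf entries in an F32 buffer so
// bad input data is not mistaken for a broken FP32 <-> FP16 conversion.
static int64_t count_nonfinite_f32(const char * name, const float * data, int64_t n) {
    int64_t bad = 0;
    for (int64_t i = 0; i < n; i++) {
        if (!isfinite(data[i])) {
            if (bad < 8) { // only print the first few offenders
                fprintf(stderr, "%s[%lld] = %f\n", name, (long long) i, (double) data[i]);
            }
            bad++;
        }
    }
    if (bad > 0) {
        fprintf(stderr, "%s: %lld of %lld values are non-finite\n",
                name, (long long) bad, (long long) n);
    }
    return bad;
}
```

Calling such a helper on the source tensor at the top of `ggml_compute_forward_dup_f32`, before the FP32 → FP16 store, would show whether the `-inf`/`nan` values originate upstream or in the conversion itself.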