sve optimization for HNSW::MinimaxHeap::pop_min() #4699
Conversation
1. SVE optimization for HNSW::MinimaxHeap::pop_min()
2. Add prefetch for ids.data() and dis.data() to reduce memory latency

Signed-off-by: Lizhen You <[email protected]>
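For context, here is a minimal scalar sketch of the logic that pop_min() performs and that the SVE version vectorizes with svwhilelt predicates: scan for the smallest distance among the still-valid slots, invalidate that slot, and return its id. The function name pop_min_scalar and its signature are illustrative, not the actual faiss code.

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Hypothetical scalar re-implementation of the pop_min scan.
// ids/dis mirror the arrays discussed in the diff; a slot with
// ids[i] == -1 is treated as already popped.
int pop_min_scalar(std::vector<int>& ids, std::vector<float>& dis, int k) {
    int imin = -1;
    float vmin = std::numeric_limits<float>::infinity();
    for (int i = 0; i < k; ++i) {
        // skip tombstoned slots, track the smallest distance seen
        if (ids[i] >= 0 && dis[i] < vmin) {
            vmin = dis[i];
            imin = i;
        }
    }
    if (imin == -1) {
        return -1; // heap is empty
    }
    int ret = ids[imin];
    ids[imin] = -1; // tombstone the popped slot
    return ret;
}
```

The SVE version replaces the scalar scan with predicated vector loads and a horizontal min-reduction, which is why the loop in the diff iterates in steps of the hardware lane count.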
Hi @LizYou! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
```cpp
while (i < k_size) {
    svbool_t pg_iter = svwhilelt_b32_u64(i, k_size);
    // ...
    const size_t prefetch_iterations = 2;
```
why 2? please add a comment
Thanks for the review! The "2" is the best-performing value I found during benchmarking. The idea is to prefetch the data a fixed number of iterations ahead (here, 2), so the data arrives in cache neither too early nor too late for the access. I will add a comment explaining this choice.
```cpp
const size_t prefetch_iterations = 2;
size_t prefetch_idx = i + prefetch_iterations * lanes;
if (prefetch_idx < k_size) {
```
is this if really needed?
This is to avoid out-of-bounds addresses. An index within [i, i + lanes) is safe inside the loop, but we are prefetching at i + 2 * lanes, which can run past the loop's upper bound; without the check we would waste CPU cycles prefetching past the end of the data. Let me know if you still think we should remove the check.
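As a scalar illustration of the guarded-prefetch pattern under discussion: the lane count is replaced by a fixed constant (standing in for the SVE vector width) and the SVE prefetch intrinsic by the GCC/Clang `__builtin_prefetch` builtin. The function name and shape are hypothetical, not the code in the diff.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: walk an array while prefetching two "iterations" ahead.
// The bounds check ensures we never form an address past the end
// of the data, so far-ahead prefetches near the tail are skipped.
float sum_with_prefetch(const std::vector<float>& dis) {
    const size_t lanes = 4;               // stand-in for the SVE lane count
    const size_t prefetch_iterations = 2; // empirically tuned distance
    float acc = 0.0f;
    for (size_t i = 0; i < dis.size(); ++i) {
        size_t prefetch_idx = i + prefetch_iterations * lanes;
        if (prefetch_idx < dis.size()) {  // the guard under discussion
            __builtin_prefetch(&dis[prefetch_idx]);
        }
        acc += dis[i];
    }
    return acc;
}
```

A prefetch of an out-of-range address is architecturally harmless (prefetches never fault), so the guard is an optimization rather than a correctness requirement: it avoids issuing prefetches that can never be useful.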
Overall, LGTM.
Signed-off-by: Lizhen You <[email protected]>
The unit tests for pop_min():

```
$ ./faiss_test --gtest_filter=HNSW.Test_popmin*
WARNING clustering 1000 points to 40 centroids: please provide at least 1560 training points
Running main() from /home/scratch.lyou_gpu/arm/workspaces/faiss-main/build/_deps/googletest-src/googletest/src/gtest_main.cc
Note: Google Test filter = HNSW.Test_popmin*
[==========] Running 3 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 3 tests from HNSW
[ RUN      ] HNSW.Test_popmin
[       OK ] HNSW.Test_popmin (0 ms)
[ RUN      ] HNSW.Test_popmin_identical_distances
[       OK ] HNSW.Test_popmin_identical_distances (0 ms)
[ RUN      ] HNSW.Test_popmin_infinite_distances
[       OK ] HNSW.Test_popmin_infinite_distances (0 ms)
[----------] 3 tests from HNSW (0 ms total)
[----------] Global test environment tear-down
[==========] 3 tests from 1 test suite ran. (0 ms total)
[  PASSED  ] 3 tests.
```
Performance Results:
- Benchmark: cuvs bench (https://github.com/rapidsai/cuvs/tree/main/cpp/bench/ann)
- Dataset: deep-96-image
- Thread counts: 1 and 8
- Test machine: NVIDIA Grace CPU

Summary (1 Thread): [chart attachment not captured]
Summary (8 Threads): [chart attachment not captured]