src: remove threadpool and use boost asio thread_pool #9768

Draft · 0xFFFC0000 wants to merge 1 commit into master from dev/0xfffc/asio-thread_pool
Conversation

@0xFFFC0000 (Collaborator)

No description provided.

@selsta (Collaborator) commented Feb 4, 2025

Do you know why the boost thread_pool wasn't used originally? Is there a benefit / downside to using the boost version or does this just simplify the codebase?

@0xFFFC0000 (Collaborator, Author) commented Feb 4, 2025

> Do you know why the boost thread_pool wasn't used originally?

Boost asio thread_pool support in monerod is relatively new. It was added to Boost in 1.66 (2017) [1], and we started supporting that version fairly recently (I believe a few weeks ago).

> Is there a benefit / downside to using the boost version or does this just simplify the codebase?

Boost thread_pool is extremely battle tested. Removing extra complexity from our codebase and using a high-performance, battle-tested library to handle our core threading is a big plus imho.

  1. https://www.boost.org/doc/libs/1_66_0/boost/asio/thread_pool.hpp
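For reference, this is roughly the usage pattern being proposed (a minimal, hypothetical sketch; the task body is a placeholder, not code from this PR):

#include <boost/asio/thread_pool.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main()
{
  // Fixed-size pool; worker threads start immediately.
  boost::asio::thread_pool pool(4);

  // post() enqueues work; any worker thread may pick it up.
  for (int i = 0; i < 8; ++i)
    boost::asio::post(pool, [i] { std::cout << "task " << i << "\n"; });

  // join() blocks until all posted work has run, then joins the workers.
  pool.join();
}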

@selsta (Collaborator) commented Feb 4, 2025

Pinging @hyc for feedback since he added the original thread_pool.

0xFFFC0000 force-pushed the dev/0xfffc/asio-thread_pool branch from 9144bf0 to a369d77 on February 4, 2025 15:24
@hyc (Collaborator) commented Feb 4, 2025

The threadpool code I wrote is pretty simple and low overhead. I would request some performance measurements to show that switching to boost offers any benefit before making any changes. What problems have you seen with the existing code? In what way is it not already battle tested after so many years of use?

@tobtoht (Collaborator) commented Feb 4, 2025

Fails to build on Debian 10 and Ubuntu 20.04. Would prefer not to replace known-good implementations with third-party libraries.

@@ -487,17 +494,17 @@ bool load_txt_records_from_dns(std::vector<std::string> &good_records, const std

// send all requests in parallel
std::deque<bool> avail(dns_urls.size(), false), valid(dns_urls.size(), false);
tools::threadpool& tpool = tools::threadpool::getInstanceForIO();
tools::threadpool::waiter waiter(tpool);
int threads = std::thread::hardware_concurrency();
@iamamyth commented Feb 4, 2025

This change (and similar ones elsewhere) doesn't just alter the backing implementation, but the semantics, because you end up with many thread pools (one per usage site), rather than just two (one for IO, and one for compute). The prior pattern makes more sense in the vast majority of cases (arguably there should be a pool for disk IO, in addition to the existing IO pool, which appears to be used for network IO).
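A hypothetical illustration of the objection (site_a / site_b are placeholder call sites, not code from the PR): if each site constructs its own pool, concurrent callers multiply the thread count instead of sharing the two machine-sized pools.

#include <boost/asio/thread_pool.hpp>
#include <boost/asio/post.hpp>
#include <thread>

// Each call site builds its own pool sized to the whole machine...
void site_a()
{
  boost::asio::thread_pool pool(std::thread::hardware_concurrency());
  boost::asio::post(pool, [] { /* network work */ });
  pool.join();
}

void site_b()
{
  boost::asio::thread_pool pool(std::thread::hardware_concurrency());
  boost::asio::post(pool, [] { /* compute work */ });
  pool.join();
}

int main()
{
  // ...so while both sites run, the process holds roughly
  // 2 * hardware_concurrency() threads, and so on for more sites.
  std::thread a(site_a), b(site_b);
  a.join();
  b.join();
}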

@0xFFFC0000 (Collaborator, Author) replied

Each thread_pool will be destroyed when it goes out of scope.

So your objection that "we will end up with multiple thread pools" is factually incorrect.

@iamamyth replied

> So your objection that "we will end up with multiple thread pools" is factually incorrect.

Yes, it is: call site one creates a pool and enqueues work. So does call site two, and call site three. Now you have three pools; and so on.

@iamamyth replied Feb 4, 2025

The lifetime of the pools doesn't matter; if anything, one could argue a shorter lifetime makes the problem worse. My objection is purely that this model in no way matches underlying resources, which is the whole reason for pooling in the first place.

@0xFFFC0000 (Collaborator, Author) replied

> Fails to build on Debian 10 and Ubuntu 20.04. Would prefer not to replace known-good implementations with third-party libraries.

Wrong!

No, it does not fail. It is because of a second update I did, which will be fixed in a few seconds.

You can see here the build is successful:

https://github.com/monero-project/monero/actions/runs/13138855698
https://github.com/0xFFFC0000/monero/actions/runs/13135782403

@0xFFFC0000 (Collaborator, Author) replied

> The threadpool code I wrote is pretty simple and low overhead. I would request some performance measurements to show that switching to boost offers any benefit before making any changes. What problems have you seen with the existing code? In what way is it not already battle tested after so many years of use?

Respectfully, I disagree. We need to revamp our parallelism mechanism, as it is broken. (Cuprate, an alpha project, is beating us.)

As for requesting numbers, that is a fair ask. I will try to update this with numbers.

@tobtoht (Collaborator) commented Feb 4, 2025

> You can see here the build is successful:

You're linking to the depends build, which uses Boost 1.84.0. See here for the Debian 10 build: https://github.com/monero-project/monero/actions/runs/13138855699/job/36660772422?pr=9768#step:9:424

Boost 1.66.0 thread_pool does not have the wait function: https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/thread_pool.html
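If both paths are to be kept, a version guard along these lines would keep old Boost working (a sketch; 107400 is an assumption about the first release documenting thread_pool::wait, so the guard should be checked against the release notes):

#include <boost/version.hpp>
#include <boost/asio/thread_pool.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main()
{
  boost::asio::thread_pool pool(2);
  boost::asio::post(pool, [] { std::cout << "work\n"; });

#if BOOST_VERSION >= 107400   // wait() is absent in 1.66 (see the link above)
  pool.wait();
#else
  pool.join();                // present since thread_pool landed in 1.66
#endif
}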

@iamamyth commented Feb 4, 2025

I would overall agree with hyc's sentiment: building successfully, and running faster, would make this proposal viable.

@0xFFFC0000 (Collaborator, Author) replied

> You can see here the build is successful:
>
> You're linking to the depends build, which uses Boost 1.84.0. See here for the Debian 10 build: https://github.com/monero-project/monero/actions/runs/13138855699/job/36660772422?pr=9768#step:9:424
>
> Boost 1.66.0 thread_pool does not have the wait function: https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/thread_pool.html

Wrong again. Please check the runs before claiming that.

You could just look at another action to get your answer; as you can see, it clearly builds without any issue:

https://github.com/monero-project/monero/actions/runs/13135976164

The wait method is an experiment to see what we can get away with. The default one is, and was, join.

@tobtoht (Collaborator) commented Feb 4, 2025

Well, you can't "get away" with using wait because it doesn't build. If you don't want developers to comment on your unfinished PR then put [WIP] in the title. Or run your experiment on a separate branch.

@0xFFFC0000 (Collaborator, Author) replied

> Well, you can't "get away" with using wait because it doesn't build. If you don't want developers to comment on your unfinished PR then put [WIP] in the title. Or run your experiment on a separate branch.

No, it is not about commenting. It is about insisting that it does not work while it works. I believe that if you had looked at the link the first time I sent it, there would have been no dispute.

0xFFFC0000 marked this pull request as draft on February 4, 2025 17:42
@0xFFFC0000 (Collaborator, Author) commented Feb 5, 2025

I think these numbers should be very interesting for anyone willing to take a look at this. Look at the starvation issue the Monero thread pool suffers from (the max/min ratio). Patch included; you can run this test with this command:

./tests/unit_tests/unit_tests  --gtest_filter=*utilization*   --gtest_repeat=1000 --gtest_brief=1
Monero Thread Pool Statistics:
  Minimum execution time: 213us
  Maximum execution time: 126709us
  Max / Min ratio: 594.878
  Mean execution time: 44353.8us
  Median execution time: 45507.5us
  Standard deviation: 9806.03us
  Total time: 34677ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 246us
  Maximum execution time: 45055us
  Max / Min ratio: 183.15
  Mean execution time: 8122.74us
  Median execution time: 8404us
  Standard deviation: 2228.16us
  Total time: 33902ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (68592 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 216us
  Maximum execution time: 112111us
  Max / Min ratio: 519.032
  Mean execution time: 43382.9us
  Median execution time: 45671us
  Standard deviation: 12251us
  Total time: 33923ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 240us
  Maximum execution time: 46576us
  Max / Min ratio: 194.067
  Mean execution time: 7527.61us
  Median execution time: 7471us
  Standard deviation: 2312.75us
  Total time: 31415ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (65354 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 216us
  Maximum execution time: 116961us
  Max / Min ratio: 541.486
  Mean execution time: 43990.9us
  Median execution time: 46298us
  Standard deviation: 12240us
  Total time: 34389ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 235us
  Maximum execution time: 28882us
  Max / Min ratio: 122.902
  Mean execution time: 8546.79us
  Median execution time: 9004us
  Standard deviation: 2245.3us
  Total time: 35663ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (70071 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 216us
  Maximum execution time: 137360us
  Max / Min ratio: 635.926
  Mean execution time: 43857.8us
  Median execution time: 46244us
  Standard deviation: 12227.8us
  Total time: 34323ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 240us
  Maximum execution time: 28266us
  Max / Min ratio: 117.775
  Mean execution time: 8260.85us
  Median execution time: 8675us
  Standard deviation: 2275.47us
  Total time: 34477ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (68815 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 220us
  Maximum execution time: 101413us
  Max / Min ratio: 460.968
  Mean execution time: 47193.7us
  Median execution time: 48816.5us
  Standard deviation: 10829.5us
  Total time: 36892ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 240us
  Maximum execution time: 34591us
  Max / Min ratio: 144.129
  Mean execution time: 8754.1us
  Median execution time: 9187us
  Standard deviation: 2144.79us
  Total time: 36527ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (73432 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 214us
  Maximum execution time: 110699us
  Max / Min ratio: 517.285
  Mean execution time: 43915us
  Median execution time: 46344us
  Standard deviation: 12865.7us
  Total time: 34339ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 240us
  Maximum execution time: 29571us
  Max / Min ratio: 123.213
  Mean execution time: 8645.97us
  Median execution time: 9054.5us
  Standard deviation: 2360.8us
  Total time: 36078ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (70432 ms total)
[  PASSED  ] 2 tests.

Monero Thread Pool Statistics:
  Minimum execution time: 220us
  Maximum execution time: 122603us
  Max / Min ratio: 557.286
  Mean execution time: 38606.3us
  Median execution time: 39056.5us
  Standard deviation: 13701.6us
  Total time: 30184ms
  Tasks completed: 100000
  # threads: 128

TBB Thread Pool Statistics:
  Minimum execution time: 240us
  Maximum execution time: 38602us
  Max / Min ratio: 160.842
  Mean execution time: 8132.39us
  Median execution time: 8348us
  Standard deviation: 2454.35us
  Total time: 33937ms
  Tasks completed: 100000
  Total threads: 128
[==========] 2 tests from 1 test suite ran. (64136 ms total)
[  PASSED  ] 2 tests.

Patch:

diff --git a/tests/unit_tests/CMakeLists.txt b/tests/unit_tests/CMakeLists.txt
index e329b7506..a6f87093b 100644
--- a/tests/unit_tests/CMakeLists.txt
+++ b/tests/unit_tests/CMakeLists.txt
@@ -26,6 +26,8 @@
 # STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
 # THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+find_package(TBB REQUIRED)
+
 set(unit_tests_sources
   account.cpp
   apply_permutation.cpp
@@ -131,6 +133,7 @@ target_link_libraries(unit_tests
     ${GTEST_LIBRARIES}
     ${CMAKE_THREAD_LIBS_INIT}
     ${EXTRA_LIBRARIES}
+    TBB::tbb
     PkgConfig::libzmq)
 set_property(TARGET unit_tests
   PROPERTY
diff --git a/tests/unit_tests/threadpool.cpp b/tests/unit_tests/threadpool.cpp
index d89f16167..698971ddd 100644
--- a/tests/unit_tests/threadpool.cpp
+++ b/tests/unit_tests/threadpool.cpp
@@ -28,9 +28,29 @@
 // THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 #include <atomic>
+#include <cstdint>
+#include <cstdlib>
+#include <ctime>
+#include <cmath>
+#include <chrono>
+#include <complex>
+#include <iostream>
+#include <thread>
+#include <vector>
+#include <algorithm>
+#include <numeric>
+#include <functional>
+#include <tuple>
+#include <boost/asio/thread_pool.hpp>
+#include <boost/asio/post.hpp>
 #include "gtest/gtest.h"
 #include "misc_language.h"
 #include "common/threadpool.h"
+
+#include <tbb/task.h>
+#include <tbb/task_arena.h>
+#include <tbb/task_group.h>
+#include <tbb/global_control.h>
 
 TEST(threadpool, wait_nothing)
 {
@@ -145,3 +165,190 @@ TEST(threadpool, leaf_reentrancy)
   waiter.wait();
   ASSERT_EQ(counter, 500000);
 }
+
+const size_t NUM_TASKS = 100000;
+const size_t FFT_SIZE = 4096 / 2;
+const size_t max_prime = 200000;
+const size_t NUM_THREADS = 128;
+
+
+auto is_prime = [](int n) {
+  if (n < 2) return false;
+  for (int i = 2; i <= std::sqrt(n); ++i)
+      if (n % i == 0) return false;
+  return true;
+};
+
+auto calculate_stats = [](const std::vector<uint64_t>& times) {
+    auto min_time = *std::min_element(times.begin(), times.end());
+    auto max_time = *std::max_element(times.begin(), times.end());
+    double mean = std::accumulate(times.begin(), times.end(), 0.0) / times.size();
+
+    // Sort a copy of the samples to find the median.
+    std::vector<uint64_t> sorted_times(times);
+    std::sort(sorted_times.begin(), sorted_times.end());
+
+    double median;
+    size_t N = sorted_times.size();
+    if (N % 2 == 0) {
+        // Even number of elements
+        median = (sorted_times[N / 2 - 1] + sorted_times[N / 2]) / 2.0;
+    } else {
+        // Odd number of elements
+        median = sorted_times[N / 2];
+    }
+
+    // Calculate standard deviation
+    double sq_sum = std::accumulate(times.begin(), times.end(), 0.0,
+        [mean](double acc, uint64_t val) {
+            double diff = val - mean;
+            return acc + diff * diff;
+        });
+    double stddev = std::sqrt(sq_sum / times.size());
+
+    return std::make_tuple(min_time, max_time, mean, median, stddev);
+};
+
+// FFT function
+void fft(std::vector<std::complex<double>>& a) {
+    const size_t N = a.size();
+    if (N <= 1) return;
+
+    std::vector<std::complex<double>> even(N / 2);
+    std::vector<std::complex<double>> odd(N / 2);
+    for (size_t i = 0; i < N / 2; ++i) {
+        even[i] = a[i * 2];
+        odd[i] = a[i * 2 + 1];
+    }
+
+    fft(even);
+    fft(odd);
+
+    for (size_t i = 0; i < N / 2; ++i) {
+        std::complex<double> t = std::polar(1.0, -2 * M_PI * i / N) * odd[i];
+        a[i] = even[i] + t;
+        a[i + N / 2] = even[i] - t;
+    }
+}
+
+
+TEST(threadpool, utilization)
+{
+  srand(time(NULL));
+
+  tools::threadpool& tpool = *tools::threadpool::getNewForUnitTests(NUM_THREADS);
+  tools::threadpool::waiter waiter(tpool);
+
+  std::atomic<size_t> tasks_completed{0};
+  uint64_t execution_times[NUM_TASKS];
+
+  auto run_func = [&](uint64_t id) {
+      auto task_start = std::chrono::steady_clock::now();
+      
+      // Initialize input for FFT
+      std::vector<std::complex<double>> data(FFT_SIZE);
+      for (size_t i = 0; i < FFT_SIZE; ++i) {
+          data[i] = std::complex<double>(rand() % 100, rand() % 100);
+      }
+      
+      // Perform FFT
+      fft(data);
+      
+      auto task_end = std::chrono::steady_clock::now();
+      auto duration = std::chrono::duration_cast<std::chrono::microseconds>(
+          task_end - task_start).count();
+          
+      execution_times[id] = duration;
+      tasks_completed++;
+  };
+
+  auto start = std::chrono::steady_clock::now();
+  
+  for (size_t i = 0; i < NUM_TASKS; ++i) {
+      tpool.submit(&waiter, std::bind(run_func, i), (i % 3));
+  }
+  
+  waiter.wait();
+  auto end = std::chrono::steady_clock::now();
+  
+  auto [min_time, max_time, mean, median, stddev] = calculate_stats(std::vector<uint64_t>(execution_times, execution_times + NUM_TASKS));
+  auto total_time = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
+  double utilization = (mean * tasks_completed * 100.0) / (tpool.get_max_concurrency() * total_time);
+  
+  std::cout << "\nMonero Thread Pool Statistics:\n"
+            << "  Minimum execution time: " << min_time << "us\n"
+            << "  Maximum execution time: " << max_time << "us\n" 
+            << "  Max / Min ratio: " << max_time / (double) min_time << "\n" 
+            << "  Mean execution time: " << mean << "us\n"
+            << "  Median execution time: " << median << "us\n"
+            << "  Standard deviation: " << stddev << "us\n"
+            << "  Total time: " << std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::microseconds(total_time)).count() << "ms\n"
+            << "  Tasks completed: " << tasks_completed << "\n"
+            << "  # threads: " << tpool.get_max_concurrency() << "\n";
+            
+  
+  ASSERT_EQ(tasks_completed, NUM_TASKS);
+}
+
+
+TEST(threadpool, butilization)
+{
+  srand(time(NULL));
+
+  // boost::asio::thread_pool pool(NUM_THREADS);
+  tbb::global_control global_limit(tbb::global_control::max_allowed_parallelism, NUM_THREADS);
+  tbb::task_arena arena;
+  tbb::task_group group;
+
+  std::atomic<size_t> tasks_completed{0};
+  uint64_t execution_times[NUM_TASKS];
+
+  auto run_func = [&](uint64_t id) {
+      auto task_start = std::chrono::steady_clock::now();
+      
+      // Initialize input for FFT
+      std::vector<std::complex<double>> data(FFT_SIZE);
+      for (size_t i = 0; i < FFT_SIZE; ++i) {
+          data[i] = std::complex<double>(rand() % 100, rand() % 100);
+      }
+      
+      // Perform FFT
+      fft(data);
+      
+      auto task_end = std::chrono::steady_clock::now();
+      auto duration = std::chrono::duration_cast<std::chrono::microseconds>(
+          task_end - task_start).count();
+          
+      execution_times[id] = duration;
+      tasks_completed++;
+  };
+
+  auto start = std::chrono::steady_clock::now();
+  
+  for (size_t i = 0; i < NUM_TASKS; ++i) {
+    group.run(std::bind(run_func, i));
+    // boost::asio::post(pool, run_func);
+  }
+  
+  group.wait();
+  // pool.join();
+  auto end = std::chrono::steady_clock::now();
+  
+  auto [min_time, max_time, mean, median, stddev] = calculate_stats(std::vector<uint64_t>(execution_times, execution_times + NUM_TASKS));
+  auto total_time = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
+  double utilization = (mean * tasks_completed * 100.0) / (NUM_THREADS * total_time);
+  
+  std::cout << "\nTBB Thread Pool Statistics:\n"
+            << "  Minimum execution time: " << min_time << "us\n"
+            << "  Maximum execution time: " << max_time << "us\n" 
+            << "  Max / Min ratio: " << max_time / (double) min_time << "\n" 
+            << "  Mean execution time: " << mean << "us\n"
+            << "  Median execution time: " << median << "us\n"
+            << "  Standard deviation: " << stddev << "us\n"
+            << "  Total time: " << std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::microseconds(total_time)).count() << "ms\n"
+            << "  Tasks completed: " << tasks_completed << "\n"
+            << "  Total threads: " << NUM_THREADS << "\n";
+            
+  
+  ASSERT_EQ(tasks_completed, NUM_TASKS);
+}

Boost provides a 1-3% improvement, although I would still say Boost is worth it due to the simplification it provides. But TBB [1] is the deal breaker.

  1. https://en.wikipedia.org/wiki/Threading_Building_Blocks
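The TBB pattern used in the benchmark above, stripped to its core (a sketch; the task body is a placeholder):

#include <tbb/global_control.h>
#include <tbb/task_group.h>
#include <atomic>
#include <iostream>

int main()
{
  // Cap TBB's worker count, as the benchmark does with NUM_THREADS.
  tbb::global_control limit(tbb::global_control::max_allowed_parallelism, 8);

  tbb::task_group group;
  std::atomic<int> done{0};

  for (int i = 0; i < 100; ++i)
    group.run([&] { ++done; });   // the work-stealing scheduler picks tasks up

  group.wait();                   // block until every task has run
  std::cout << done << " tasks completed\n";
}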

@iamamyth commented Feb 5, 2025

Work-stealing queues certainly have their merit. That said, I don't like this benchmark:

  1. Zero information on the experiment setup. What OS, CPU, memory, etc?
  2. Throughput seems the relevant metric for the daemon, so the per-task measurements muddy the data (the lower typical wait time for a work stealing queue is expected, but not necessarily that interesting).
  3. The variance statistics should be per run of N iterations. For example, I'd want to see standard deviation across 100 one minute runs, to understand the noise in the setup; I don't care about stddev within a run.
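Concretely, item 3 asks for one sample per run and the spread across runs, along these lines (a hypothetical sketch with placeholder values, not part of the PR):

#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
  // One throughput sample (tasks/second) per independent benchmark run;
  // the values here are placeholders.
  std::vector<double> runs = {2884.0, 2948.0, 2907.0};

  double mean = std::accumulate(runs.begin(), runs.end(), 0.0) / runs.size();
  double sq = 0.0;
  for (double t : runs)
    sq += (t - mean) * (t - mean);

  // Between-run stddev characterizes the noise of the setup itself.
  std::cout << "mean " << mean << " stddev " << std::sqrt(sq / runs.size()) << "\n";
}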

@0xFFFC0000 (Collaborator, Author) replied

> Work-stealing queues certainly have their merit. That said, I don't like this benchmark:
>
>   1. Zero information on the experiment setup. What OS, CPU, memory, etc?
>   2. Throughput seems the relevant metric for the daemon, so the per-task measurements muddy the data (the lower typical wait time for a work stealing queue is expected, but not necessarily that interesting).
>   3. The variance statistics should be per run of N iterations. For example, I'd want to see standard deviation across 100 one minute runs, to understand the noise in the setup; I don't care about stddev within a run.

Please investigate the benchmarks more deeply.

The purpose of that benchmark was not to prove TBB is faster (which it is); the purpose was to prove that our thread pool suffers from severe task starvation. So there is no need for more numbers or a std dev across one-minute runs. You haven't paid attention to what exactly those numbers prove.

@iamamyth commented Feb 6, 2025

The "numbers" are meaningless without the experimental setup, and, per my comment, which remains unaddressed, throughput is the relevant metric, not fairness, and the numbers you posted (insufficient as they are), do not even suggest a meaningful gap in throughput.

@0xFFFC0000 (Collaborator, Author) replied

The "numbers" are meaningless without the experimental setup,

No they are not. The purpose is not benchmarking. The purpose is showing / proving starvation.

> and, per my comment, which remains unaddressed, throughput is the relevant metric, not fairness,

Wrong; this shows you are not familiar with the internals of our thread pool. Our thread pool has a notion of a leaf job, which basically puts it at the front of the running queue. Basically, you can starve a task. Fairness is extremely important for us, since we want our tasks to finish in roughly relative order.
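For context, the leaf flag is the third argument of tools::threadpool::submit in common/threadpool.h; a minimal sketch of its use (the task bodies are placeholders, and the comments reflect my reading of the leaf semantics described above):

#include "common/threadpool.h"

void example()
{
  tools::threadpool &tpool = tools::threadpool::getInstance();
  tools::threadpool::waiter waiter(tpool);

  // Ordinary job: queued behind pending work.
  tpool.submit(&waiter, [] { /* regular task */ });

  // Leaf job: jumps ahead in the queue and may even run on a waiting
  // thread, so a burst of leaf jobs can starve ordinary ones.
  tpool.submit(&waiter, [] { /* leaf task */ }, true);

  waiter.wait();
}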

> and the numbers you posted (insufficient as they are) do not even suggest a meaningful gap in throughput.

Wrong. You haven't understood the numbers. Pay attention to the max-to-min ratio.

@iamamyth commented Feb 6, 2025

If you want to make the case that queue responsiveness for leaf jobs heavily impacts overall daemon performance, this benchmark doesn't do that job, nor any job, as it's missing a basic requisite of any scientific experiment: the experimental setup. Furthermore, if you want to measure latency, this benchmark plainly doesn't, and it imposes an asymmetric constraint (one third of tasks are leaf tasks, vs. no such requirement), plus an additional compute burden (an extra divide), on the existing queue implementation.

@0xFFFC0000 (Collaborator, Author) commented Feb 6, 2025

> as it's missing a basic requisite of any scientific experiment, the experimental setup

Again, wrong. Repeatedly stating that doesn't make it a fact. I explained it in detail in my previous comment. Its sole purpose is to prove the variance in running time of the same task. Please read our discussion from the beginning again to understand the objectives.
