
src: remove threadpool and use boost asio thread_pool #9768

Status: Draft — wants to merge 1 commit into base: master
1 change: 0 additions & 1 deletion src/common/CMakeLists.txt
@@ -42,7 +42,6 @@ set(common_sources
   perf_timer.cpp
   pruning.cpp
   spawn.cpp
-  threadpool.cpp
   updates.cpp
   aligned.c
   timings.cc
17 changes: 12 additions & 5 deletions src/common/dns_utils.cpp
@@ -35,12 +35,19 @@
 #include <set>
 #include <stdlib.h>
 #include "include_base_utils.h"
-#include "common/threadpool.h"
 #include "crypto/crypto.h"
-#include <boost/thread/mutex.hpp>
 #include <boost/algorithm/string/join.hpp>
 #include <boost/optional.hpp>
 #include <boost/utility/string_ref.hpp>
+#include <boost/bind/bind.hpp>
+#include <boost/asio/thread_pool.hpp>
+#include <boost/asio/post.hpp>
+#include <boost/thread/condition_variable.hpp>
+#include <boost/thread/mutex.hpp>
+#include <boost/thread/thread.hpp>
+#include <thread>

using namespace epee;

#undef MONERO_DEFAULT_LOG_CATEGORY
@@ -487,17 +494,17 @@ bool load_txt_records_from_dns(std::vector<std::string> &good_records, const std

 // send all requests in parallel
 std::deque<bool> avail(dns_urls.size(), false), valid(dns_urls.size(), false);
-tools::threadpool& tpool = tools::threadpool::getInstanceForIO();
-tools::threadpool::waiter waiter(tpool);
+int threads = std::thread::hardware_concurrency();
@iamamyth (Feb 4, 2025):

This change (and similar ones elsewhere) doesn't just alter the backing implementation, but the semantics, because you end up with many thread pools (one per usage site), rather than just two (one for IO, and one for compute). The prior pattern makes more sense in the vast majority of cases (arguably there should be a pool for disk IO, in addition to the existing IO pool, which appears to be used for network IO).

Collaborator Author:

Each thread_pool will be destroyed when it goes out of scope.

So your objection that "we will end up with multiple thread pools" is not factually correct.

> So your objection that "we will end up with multiple thread pools" is not factually correct.

Yes, it is: call site one creates a pool and enqueues work. So does call site two, and call site three. Now you have three pools; and so on.

@iamamyth (Feb 4, 2025):

The lifetime of the pools doesn't matter; if anything, one could argue a shorter lifetime makes the problem worse. My objection is purely that this model in no way matches underlying resources, which is the whole reason for pooling in the first place.

+boost::asio::thread_pool thread_pool(threads);
 for (size_t n = 0; n < dns_urls.size(); ++n)
 {
-  tpool.submit(&waiter,[n, dns_urls, &records, &avail, &valid](){
+  boost::asio::post(thread_pool, [n, dns_urls, &records, &avail, &valid](){
    const auto res = tools::DNSResolver::instance().get_txt_record(dns_urls[n], avail[n], valid[n]);
    for (const auto &s: res)
      records[n].insert(s);
  });
 }
-waiter.wait();
+thread_pool.wait();

size_t cur_index = first_index;
do
180 changes: 0 additions & 180 deletions src/common/threadpool.cpp

This file was deleted.

107 changes: 0 additions & 107 deletions src/common/threadpool.h

This file was deleted.
