Merged
9 changes: 7 additions & 2 deletions src/liblelantus/threadpool.h
@@ -79,7 +79,7 @@ class ParallelOpThreadPool {
 
     // Post a task to the thread pool and return a future to wait for its completion
     boost::future<Result> PostTask(std::function<Result()> task) {
-        boost::packaged_task<Result> packagedTask(std::move(task));
+        boost::packaged_task<Result> packagedTask(task);
         boost::future<Result> ret = packagedTask.get_future();
 
         boost::mutex::scoped_lock lock(task_queue_mutex);
@@ -91,7 +91,7 @@ class ParallelOpThreadPool {
         task_queue.emplace(std::move(packagedTask));
         task_queue_condition.notify_one();
 
-        return std::move(ret);
+        return ret;
     }
 
     int GetNumberOfThreads() const {
@@ -119,6 +119,11 @@ class ParallelOpThreadPool {
         boost::mutex::scoped_lock lock(task_queue_mutex);
         return shutdown;
     }
+
+    std::size_t GetPendingTaskCount() {
+        boost::mutex::scoped_lock lock(task_queue_mutex);
+        return task_queue.size();
+    }
 };
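Two things happen in this file: `return std::move(ret)` becomes `return ret` (the future still moves implicitly, but the explicit move no longer blocks copy elision), and the new GetPendingTaskCount() exposes the queue depth under the same lock as the other accessors. For orientation, a minimal usage sketch of the two APIs this diff touches; the pool size, include path, and the trivial task are illustrative, not from the PR:

// Sketch: post one task, applying the same half-capacity backpressure
// check the PR introduces in state.cpp. Assumes the ParallelOpThreadPool
// template shown above; the include path is a guess.
#include "liblelantus/threadpool.h"

ParallelOpThreadPool<bool> pool(4);  // 4 worker threads (illustrative)

bool TryVerifySomething() {
    // Only queue work while the backlog is below half the thread count,
    // mirroring the condition added in CheckSparkSpendTransaction.
    if (pool.GetPendingTaskCount() < (std::size_t)pool.GetNumberOfThreads() / 2) {
        boost::future<bool> f = pool.PostTask([] { return true; });
        return f.get();  // blocks until a worker runs the task
    }
    return false;        // backlog too deep; caller would check synchronously
}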


30 changes: 18 additions & 12 deletions src/spark/state.cpp
@@ -24,7 +24,7 @@ struct ProofCheckState {
 static std::map<uint256, ProofCheckState> gCheckedSparkSpendTransactions;
 static CCriticalSection cs_checkedSparkSpendTransactions;
 
-static ParallelOpThreadPool<bool> gCheckProofThreadPool(boost::thread::hardware_concurrency());
+static ParallelOpThreadPool<bool> gCheckProofThreadPool(std::max(boost::thread::hardware_concurrency(), 8u));
@coderabbitai (bot) commented on Dec 17, 2025:
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Read the file around line 27
echo "=== src/spark/state.cpp lines 20-35 ==="
sed -n '20,35p' src/spark/state.cpp

# Search for references to gCheckProofThreadPool
echo -e "\n=== All references to gCheckProofThreadPool ==="
rg -n "gCheckProofThreadPool" src/spark/state.cpp

# Search for line 808 mentioned in original comment (backpressure mechanism)
echo -e "\n=== Context around line 808 ==="
sed -n '800,820p' src/spark/state.cpp

# Look for any PR or design documentation
echo -e "\n=== Search for thread pool or cap mentions in comments ==="
rg -n -i "thread.*pool|cap.*thread|concurrency.*limit|hardware_concurrency" src/spark/state.cpp | head -20

Repository: firoorg/firo

Length of output: 3131


Use std::min instead of std::max to enforce the 8-thread cap.

The code uses std::max(boost::thread::hardware_concurrency(), 8u), which sets a minimum of 8 threads. On a 64-core system, this creates 64 threads, contradicting the PR goal of "constraining the ZK proof verification thread pool size" to an explicit cap of 8. The backpressure mechanism at line 808 only throttles task queueing, not thread creation. Change to std::min(boost::thread::hardware_concurrency(), 8u) to enforce the intended maximum.

🤖 Prompt for AI Agents
In src/spark/state.cpp around line 27, the thread-pool is constructed with
std::max(boost::thread::hardware_concurrency(), 8u) which sets a minimum of 8
threads instead of capping at 8; replace std::max with std::min so the
constructor uses std::min(boost::thread::hardware_concurrency(), 8u) to enforce
an 8-thread maximum and prevent creating more threads than the intended cap.
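For clarity, the capped constructor the bot is asking for would look like this (same declaration as in the diff, with std::min swapped in):

// Caps the pool at min(hardware_concurrency, 8) threads, whereas std::max
// floors it at 8. std::min requires <algorithm>.
// (One caveat not raised in the thread: boost::thread::hardware_concurrency()
// can return 0 when detection fails, which std::min would propagate; a real
// fix might also clamp the result to at least 1.)
static ParallelOpThreadPool<bool> gCheckProofThreadPool(
        std::min(boost::thread::hardware_concurrency(), 8u));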

@aleflm (Contributor) commented on Dec 19, 2025:
@psolstice I confirm Coderabbit's comment here. std::max is not correct.

@coderabbitai (bot) replied:
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!


static CSparkState sparkState;

@@ -804,17 +804,23 @@ bool CheckSparkSpendTransaction(
         }
         else if (!fStatefulSigmaCheck && !gCheckProofThreadPool.IsPoolShutdown()) {
             // not an urgent check, put the proof into the thread pool for verification
-            auto future = gCheckProofThreadPool.PostTask([spend, cover_sets]() {
-                try {
-                    return spark::SpendTransaction::verify(*spend, cover_sets);
-                } catch (const std::exception &) {
-                    return false;
-                }
-            });
-            auto &checkState = gCheckedSparkSpendTransactions[hashTx];
-            checkState.fChecked = false;
-            checkState.fResult = false;
-            checkState.checkInProgress = std::make_shared<boost::future<bool>>(std::move(future));
+            // don't post a request if there are too many tasks already
+            if (gCheckProofThreadPool.GetPendingTaskCount() < (std::size_t)gCheckProofThreadPool.GetNumberOfThreads()/2) {
A contributor asked:
Just asking, I am not 100% sure. Wouldn't it be better to have this:

if (gCheckProofThreadPool.GetPendingTaskCount() < ...) {
    gCheckProofThreadPool.PostTask(...);
}

as an atomic operation?

The PR author replied:

It's not that important right here, it's just an optimization; even if the number of tasks changes, it won't cause any real damage.
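For reference, if the check-and-post ever did need to be atomic, a hypothetical TryPostTask member (not part of this PR) could take the queue lock once for both the size check and the enqueue, built from the same pieces PostTask already uses:

// Hypothetical member of ParallelOpThreadPool (illustrative only): rejects
// the task under the same lock that guards the enqueue, so the pending
// count cannot change between the check and the post.
boost::future<Result> TryPostTask(std::function<Result()> task, std::size_t maxPending) {
    boost::packaged_task<Result> packagedTask(task);
    boost::future<Result> ret = packagedTask.get_future();

    boost::mutex::scoped_lock lock(task_queue_mutex);
    if (task_queue.size() >= maxPending)
        return boost::future<Result>();  // rejected: returned future is invalid
    task_queue.emplace(std::move(packagedTask));
    task_queue_condition.notify_one();
    return ret;
}

A caller would treat !future.valid() as "queue full" and skip caching the result, just as the non-atomic version does when the count check fails.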

+                auto future = gCheckProofThreadPool.PostTask([spend, cover_sets]() mutable {
+                    try {
+                        bool result = spark::SpendTransaction::verify(*spend, cover_sets);
+                        spend.reset();
+                        cover_sets.clear();
+                        return result;
+                    } catch (const std::exception &) {
+                        return false;
+                    }
+                });
+                auto &checkState = gCheckedSparkSpendTransactions[hashTx];
+                checkState.fChecked = false;
+                checkState.fResult = false;
+                checkState.checkInProgress = std::make_shared<boost::future<bool>>(std::move(future));
+            }
         }
     }
     while (fRecheckNeeded);
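Two details of the replacement block are worth noting. The lambda is now mutable and clears its by-value captures (spend.reset(), cover_sets.clear()) once verification returns, presumably so those copies are freed as soon as the proof is checked rather than when the queued task object is eventually destroyed. And given the intended 8-thread cap discussed above, the GetPendingTaskCount() < GetNumberOfThreads()/2 guard admits at most 4 queued tasks before further proofs stop being handed to the pool.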