Description
I am currently using the C API to load an XGBoost model compiled into an .so, and multiple threads will use the same predictor for prediction. I would like to know whether prediction is thread safe in this setup. In practice, the first prediction is extremely time-consuming, but subsequent predictions are dramatically faster.
Here is the code:
```cpp
TL2cgenPredictorHandle pred;
auto ret = TL2cgenPredictorLoad(new_model_path.c_str(), /*num_worker_thread=*/1, &pred);

std::vector<std::future<void>> futures;
Common::ThreadPool pool(21);  // Assuming ThreadPool is implemented and takes the number of threads as a parameter
for (size_t i = 0; i < ticker_universe.size(); ++i) {
  auto future = pool.enqueue([&ticker_universe, &pred, i]() {
    calculate(ticker_universe[i], pred);
  });
  futures.push_back(std::move(future));
}
for (auto &future : futures) {
  future.get();
}
```
The calculate(ticker_universe[i], pred) function creates its own DMatrix and then uses the same pred handle to run the prediction. Will this be thread safe?
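For reference, here is a simplified sketch of what calculate() does (not the exact production code): build_features() is a placeholder for my feature extraction, and the prediction call itself is elided; the point is that each call creates its own DMatrix while all threads share the same pred handle.

```cpp
#include <tl2cgen/c_api.h>

#include <limits>
#include <string>
#include <vector>

// Sketch: each worker thread builds its own DMatrix for one ticker and then
// predicts with the *shared* predictor handle. Feature extraction is simplified.
void calculate(const std::string& ticker, TL2cgenPredictorHandle pred) {
  std::vector<float> features = build_features(ticker);  // hypothetical helper
  const float missing = std::numeric_limits<float>::quiet_NaN();

  // The DMatrix is created per call (effectively thread-local); only `pred` is shared.
  TL2cgenDMatrixHandle dmat;
  int ret = TL2cgenDMatrixCreateFromMat(features.data(), "float32",
                                        /*num_row=*/1, /*num_col=*/features.size(),
                                        &missing, &dmat);
  if (ret != 0) {
    // handle error (e.g. log TL2cgenGetLastError()) and bail out
    return;
  }

  // ... run batch prediction here with the shared `pred` handle and consume the output ...

  TL2cgenDMatrixFree(dmat);
}
```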