base: gt-dev
Wqy dev #19
Conversation
Commits:
- ready next step
- add some information structures
- small fix
- small fix
Pull Request Overview
This PR refactors and extends several modules related to the adaptive filter cache management and model training in the system. Key changes include adjustments in the greedy algorithm behavior and logging in db/art/greedy_algo.cc, extensive API and thread pool updates in the filter cache client and related modules, and configuration improvements in the RocksDB/YCSB integration for better performance tuning and debugging.
Reviewed Changes
Copilot reviewed 89 out of 93 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| db/art/greedy_algo.cc | Changed logging behavior and modified loop termination when cache space is insufficient. |
| db/art/global_filter_cache_context.{h,cc} | Introduced global structures and mutexes for tracking filter cache state. |
| db/art/filter_cache_heap.h | Added new helper functions and assertions for heap debug; refactored static members to instance. |
| db/art/filter_cache_item.{h,cc} | Removed unused FilterCacheItem implementation. |
| db/art/filter_cache_entry.{h,cc} | Introduced FilterCacheEntry to manage cached filter blocks and prefetch operations. |
| db/art/filter_cache_client.{h,cc} | Revised thread pool usage and updated lambdas for model retraining and cache adjustments. |
| db/art/filter_cache.h | Updated FilterCache interface with API overrides and integration of new BlockBasedTable usage. |
| db/art/compactor.h | Added a new member (SegmentBuilderResult) in compaction job structure. |
| db/art/clf_model.{h,cc} | Updated ClfModel for dataset management and added logging for model training progress. |
| YCSB/rocksdb/rocksdb_db.cc | Added new RocksDB options, debug output of statistics, and modified shutdown delays. |
| YCSB/core/core_workload.cc | Adjusted default zero padding value. |
Files not reviewed (4)
- Makefile: Language not supported
- YCSB/Makefile: Language not supported
- YCSB/buildall.sh: Language not supported
- YCSB/rocksdb/rocksdb.properties: Language not supported
Comments suppressed due to low confidence (2)
db/art/greedy_algo.cc:47
- Switching from removing the segment helper (via `pop_back`) to breaking out of the loop when there is insufficient cache space changes the loop behavior. Please confirm that halting further segment processing is intended and that eligible segments aren't skipped.
  `if (current_cache_size + size_needed > cache_size) { break; }`
db/art/filter_cache_client.cc:202
- Using an assert followed by exit(1) in do_batch_delete_segments may lead to an abrupt termination of the process. Consider implementing a graceful error handling mechanism or removing these unsupported functions if they are not meant to be used in WaLSM+.
  `assert(false); exit(1);`
```cpp
    return;
  }
  std::cout << "Statistics: " << opt.statistics->ToString() << std::endl;
  sleep(5); // sleep 5 seconds to wait for final reports
```
Copilot AI, May 23, 2025
Using a fixed sleep duration during cleanup can unnecessarily delay shutdown and block the process. Consider replacing the sleep with a more robust synchronization mechanism or reducing its duration.
Suggested change:
```diff
- sleep(5); // sleep 5 seconds to wait for final reports
+ {
+   std::lock_guard<std::mutex> cv_lock(cv_mu_);
+   reports_ready_ = true;
+ }
+ cv_.notify_all();
+ {
+   std::unique_lock<std::mutex> cv_lock(cv_mu_);
+   cv_.wait(cv_lock, [this]() { return reports_ready_; });
+ }
```
No description provided.