{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":6934395,"defaultBranch":"main","name":"rocksdb","ownerLogin":"facebook","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-11-30T06:16:18.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/69631?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1726879026.0","currentOid":""},"activityList":{"items":[{"before":"a1a102ffcefab83e75b9d026d204395d00f2a582","after":"fbbb08770fc46117d1f056d423cd12cab6fdacdb","ref":"refs/heads/main","pushedAt":"2024-09-21T02:25:32.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Update HISTORY.md, version.h, and the format compatibility check script for the 9.7 release (#13027)\n\nSummary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13027\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D63158344\n\nPulled By: ltamasi\n\nfbshipit-source-id: e650a0024155d52c7aa2afd0f242b8363071279a","shortMessageHtmlLink":"Update HISTORY.md, version.h, and the format compatibility check scri…"}},{"before":null,"after":"a617c14d2ae6fd21a7323d3e0912b3877aeb7949","ref":"refs/heads/9.7.fb","pushedAt":"2024-09-21T00:37:06.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"ltamasi","name":null,"path":"/ltamasi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/47607618?s=80&v=4"},"commit":{"message":"Update HISTORY for the 9.7 release","shortMessageHtmlLink":"Update HISTORY for the 9.7 release"}},{"before":"5f4a8c3da41327e9844d8d14793d34a0ec3453b9","after":"a1a102ffcefab83e75b9d026d204395d00f2a582","ref":"refs/heads/main","pushedAt":"2024-09-20T23:25:08.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Steps toward deprecating implicit prefix seek, related fixes (#13026)\n\nSummary:\nWith some new use cases onboarding to prefix extractors/seek/filters, one of the risks is existing iterator code, e.g. for maintenance tasks, being unintentionally subject to prefix seek semantics. This is a longstanding known design flaw with prefix seek, and `prefix_same_as_start` and `auto_prefix_mode` were steps in the direction of making that obsolete. However, we can't just immediately set `total_order_seek` to true by default, because that would impact so much code instantly.\n\nHere we add a new DB option, `prefix_seek_opt_in_only` that basically allows users to transition to the future behavior when they are ready. When set to true, all iterators will be treated as if `total_order_seek=true` and then the only ways to get prefix seek semantics are with `prefix_same_as_start` or `auto_prefix_mode`.\n\nRelated fixes / changes:\n* Make sure that `prefix_same_as_start` and `auto_prefix_mode` are compatible with (or override) `total_order_seek` (depending on your interpretation).\n* Fix a bug in which a new iterator after dynamically changing the prefix extractor might mix different prefix semantics between memtable and SSTs. Both should use the latest extractor semantics, which means iterators ignoring memtable prefix filters with an old extractor. And that means passing the latest prefix extractor to new memtable iterators that might use prefix seek. (Without the fix, the test added for this fails in many ways.)\n\nSuggested follow-up:\n* Investigate a FIXME where a MergeIteratorBuilder is created in db_impl.cc. No unit test detects a change in value that should impact correctness.\n* Make memtable prefix bloom compatible with `auto_prefix_mode`, which might require involving the memtablereps because we don't know at iterator creation time (only seek time) whether an auto_prefix_mode seek will be a prefix seek.\n* Add `prefix_same_as_start` testing to db_stress\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13026\n\nTest Plan:\ntests updated, added. Add combination of `total_order_seek=true` and `auto_prefix_mode=true` to stress test. Ran `make blackbox_crash_test` for a long while.\n\nManually ran tests with `prefix_seek_opt_in_only=true` as default, looking for unexpected issues. I inspected most of the results and migrated many tests to be ready for such a change (but not all).\n\nReviewed By: ltamasi\n\nDifferential Revision: D63147378\n\nPulled By: pdillinger\n\nfbshipit-source-id: 1f4477b730683d43b4be7e933338583702d3c25e","shortMessageHtmlLink":"Steps toward deprecating implicit prefix seek, related fixes (#13026)"}},{"before":"6549b117148f01a5c35287d5de9754183dd3b781","after":"5f4a8c3da41327e9844d8d14793d34a0ec3453b9","ref":"refs/heads/main","pushedAt":"2024-09-20T21:25:15.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Load latest options from OPTIONS file in Remote host (#13025)\n\nSummary:\nWe've been serializing and deserializing DBOptions and CFOptions (and other CF into) as part of `CompactionServiceInput`. These are all readily available in the OPTIONS file and the remote worker can read the OPTIONS file to obtain the same information. This helps reducing the size of payload significantly.\n\nIn a very rare scenario if the OPTIONS file is purged due to options change by primary host at the same time while the remote host is loading the latest options, it may fail. In this case, we just retry once.\n\nThis also solves the problem where we had to open the default CF with the CFOption from another CF if the remote compaction is for a non-default column family. (TODO comment in /db_impl_secondary.cc)\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13025\n\nTest Plan:\nUnit Tests\n```\n./compaction_service_test\n```\n```\n./compaction_job_test\n```\n\nAlso tested with Meta's internal Offload Infra\n\nReviewed By: anand1976, cbi42\n\nDifferential Revision: D63100109\n\nPulled By: jaykorean\n\nfbshipit-source-id: b7162695e31e2c5a920daa7f432842163a5b156d","shortMessageHtmlLink":"Load latest options from OPTIONS file in Remote host (#13025)"}},{"before":"71e38dbe25253dc19e0ca0214ee8244d6731ac76","after":"6549b117148f01a5c35287d5de9754183dd3b781","ref":"refs/heads/main","pushedAt":"2024-09-20T19:25:06.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Make Cache a customizable class (#13024)\n\nSummary:\nThis PR allows a Cache object to be created using the object registry.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13024\n\nReviewed By: pdillinger\n\nDifferential Revision: D63043233\n\nPulled By: anand1976\n\nfbshipit-source-id: 5bc3f7c29b35ad62638ff8205451303e2cecea9d","shortMessageHtmlLink":"Make Cache a customizable class (#13024)"}},{"before":"54ace7f34083090dcefa51c0fd9381436ccb8fa0","after":"71e38dbe25253dc19e0ca0214ee8244d6731ac76","ref":"refs/heads/main","pushedAt":"2024-09-19T22:53:59.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Compact one file at a time for FIFO temperature change compactions (#13018)\n\nSummary:\nPer customer request, we should not merge multiple SST files together during temperature change compaction, since this can cause FIFO TTL compactions to be delayed. This PR changes the compaction picking logic to pick one file at a time.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13018\n\nTest Plan: * updated some existing unit tests to test this new behavior.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D62883292\n\nPulled By: cbi42\n\nfbshipit-source-id: 6a9fc8c296b5d9b17168ef6645f25153241c8b93","shortMessageHtmlLink":"Compact one file at a time for FIFO temperature change compactions (#…"}},{"before":"98c33cb8e39412ffdc4aa9b0ec97204dbd28acb9","after":"54ace7f34083090dcefa51c0fd9381436ccb8fa0","ref":"refs/heads/main","pushedAt":"2024-09-19T22:50:06.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Change the semantics of blob_garbage_collection_force_threshold to provide better control over space amp (#13022)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13022\n\nCurrently, `blob_garbage_collection_force_threshold` applies to the oldest batch of blob files, which is typically only a small subset of the blob files currently eligible for garbage collection. This can result in a form of head-of-line blocking: no GC-triggered compactions will be scheduled if the oldest batch does not currently exceed the threshold, even if a lot of higher-numbered blob files do. This can in turn lead to high space amplification that exceeds the soft bound implicit in the force threshold (e.g. 50% would suggest a space amp of <2 and 75% would imply a space amp of <4). The patch changes the semantics of this configuration threshold to apply to the entire set of blob files that are eligible for garbage collection based on `blob_garbage_collection_age_cutoff`. This provides more intuitive semantics for the option and can provide a better write amp/space amp trade-off. (Note that GC-triggered compactions still pick the same SST files as before, so triggered GC still targets the oldest the blob files.)\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D62977860\n\nfbshipit-source-id: a999f31fe9cdda313de513f0e7a6fc707424d4a3","shortMessageHtmlLink":"Change the semantics of blob_garbage_collection_force_threshold to pr…"}},{"before":"1238120fe65171df7a47540eef1357de6592755d","after":"98c33cb8e39412ffdc4aa9b0ec97204dbd28acb9","ref":"refs/heads/main","pushedAt":"2024-09-19T21:10:31.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Steps toward making IDENTITY file obsolete (#13019)\n\nSummary:\n* Set write_dbid_to_manifest=true by default\n* Add new option write_identity_file (default true) that allows us to opt-in to future behavior without identity file\n* Refactor related DB open code to minimize code duplication\n\n_Recommend hiding whitespace changes for review_\n\nIntended follow-up: add support to ldb for reading and even replacing the DB identity in the manifest. Could be a variant of `update_manifest` command or based on it.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13019\n\nTest Plan: unit tests and stress test updated for new functionality\n\nReviewed By: anand1976\n\nDifferential Revision: D62898229\n\nPulled By: pdillinger\n\nfbshipit-source-id: c08b25cf790610b034e51a9de0dc78b921abbcf0","shortMessageHtmlLink":"Steps toward making IDENTITY file obsolete (#13019)"}},{"before":"10984e8c26ae961144231b8c6f8dd3058e55a169","after":"1238120fe65171df7a47540eef1357de6592755d","ref":"refs/heads/main","pushedAt":"2024-09-19T00:51:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add an option to dump wal seqno gaps (#13014)\n\nSummary:\nAdd an option `--only_print_seqno_gaps` for wal dump to help with debugging. This option will check the continuity of sequence numbers in WAL logs, assuming `seq_per_batch` is false. `--walfile` option now also takes a directory, and it will check all WAL logs in the directory in chronological order.\n\nWhen a gap is found, we can further check if it's related to operations like external file ingestion.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13014\n\nTest Plan: Manually tested\n\nReviewed By: ltamasi\n\nDifferential Revision: D62989115\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: 22e3326344e7969ff9d5091d21fec2935770fbc7","shortMessageHtmlLink":"Add an option to dump wal seqno gaps (#13014)"}},{"before":"0611eb5b9df04f76b9d763b44763349b067b6c48","after":"10984e8c26ae961144231b8c6f8dd3058e55a169","ref":"refs/heads/main","pushedAt":"2024-09-18T22:30:00.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix and generalize framework for filtering range queries, etc. (#13005)\n\nSummary:\nThere was a subtle design/contract bug in the previous version of range filtering in experimental.h If someone implemented a key segments extractor with \"all or nothing\" fixed size segments, that could result in unsafe range filtering. For example, with two segments of width 3:\n```\nx = 0x|12 34 56|78 9A 00|\ny = 0x|12 34 56||78 9B\nz = 0x|12 34 56|78 9C 00|\n```\nSegment 1 of y (empty) is out of order with segment 1 of x and z.\n\nI have re-worked the contract to make it clear what does work, and implemented a standard extractor for fixed-size segments, CappedKeySegmentsExtractor. The safe approach for filtering is to consume as much as is available for a segment in the case of a short key.\n\nI have also added support for min-max filtering with reverse byte-wise comparator, which is probably the 2nd most common comparator for RocksDB users (because of MySQL). It might seem that a min-max filter doesn't care about forward or reverse ordering, but it does when trying to determine whether in input range from segment values v1 to v2, where it so happens that v2 is byte-wise less than v1, is an empty forward interval or a non-empty reverse interval. At least in the current setup, we don't have that context.\n\nA new unit test (with some refactoring) tests CappedKeySegmentsExtractor, reverse byte-wise comparator, and the corresponding min-max filter.\n\nI have also (contractually / mathematically) generalized the framework to comparators other than the byte-wise comparator, and made other generalizations to make the extractor limitations more explicitly connected to the particular filters and filtering used--at least in description.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13005\n\nTest Plan: added unit tests as described\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D62769784\n\nPulled By: pdillinger\n\nfbshipit-source-id: 0d41f0d0273586bdad55e4aa30381ebc861f7044","shortMessageHtmlLink":"Fix and generalize framework for filtering range queries, etc. (#13005)"}},{"before":"f411c8bc978c9b5dceaeffb677b1f69c9d42f312","after":"0611eb5b9df04f76b9d763b44763349b067b6c48","ref":"refs/heads/main","pushedAt":"2024-09-18T20:30:52.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix orphaned files in SstFileManager (#13015)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13015\n\n`Close()`ing a database now releases tracked files in `SstFileManager`. Previously this space would be leaked until the database was later reopened.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D62590773\n\nfbshipit-source-id: 5461bd253d974ac4967ad52fee92e2650f8a9a28","shortMessageHtmlLink":"Fix orphaned files in SstFileManager (#13015)"}},{"before":"13d5230e5da650cf93e6dccb389c82d316d355c6","after":"8d913b5b7548950c669e5cb48508934886e9a652","ref":"refs/heads/9.6.fb","pushedAt":"2024-09-18T01:59:00.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"anand1976","name":null,"path":"/anand1976","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/33647610?s=80&v=4"},"commit":{"message":"Fix a couple of missing cases of retry on corruption (#13007)\n\nSummary:\nFor SST checksum mismatch corruptions in the read path, RocksDB retries the read if the underlying file system supports verification and reconstruction of data (`FSSupportedOps::kVerifyAndReconstructRead`). There were a couple of places where the retry was missing - reading the SST footer and the properties block. This PR fixes the retry in those cases.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13007\n\nTest Plan: Add new unit tests\n\nReviewed By: jaykorean\n\nDifferential Revision: D62519186\n\nPulled By: anand1976\n\nfbshipit-source-id: 50aa38f18f2a53531a9fc8d4ccdf34fbf034ed59","shortMessageHtmlLink":"Fix a couple of missing cases of retry on corruption (#13007)"}},{"before":"f97e33454f97e88c803b7354166258f950472aa4","after":"f411c8bc978c9b5dceaeffb677b1f69c9d42f312","ref":"refs/heads/main","pushedAt":"2024-09-18T00:50:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Update folly Github hash (#13017)\n\nSummary:\nThe internal codebase is updated for the coro directory's graduation from experimental. Updating our build script for a newer version with this change too. Using this hash: https://github.com/facebook/folly/commit/03041f014b6e6ebb6119ffae8b7a37308f52e913\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13017\n\nReviewed By: nickbrekhus\n\nDifferential Revision: D62763932\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: 1b211707fbc7d974d6d6ceaf577e174424bb44ed","shortMessageHtmlLink":"Update folly Github hash (#13017)"}},{"before":"a16457625243e22b08cd1c87ce01153e4f554d77","after":"f97e33454f97e88c803b7354166258f950472aa4","ref":"refs/heads/main","pushedAt":"2024-09-17T21:13:55.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix a bug with auto recovery on WAL write error (#12995)\n\nSummary:\nA recent crash test failure shows that auto recovery from WAL write failure can cause CFs to be inconsistent. A unit test repro in P1569398553. The following is an example sequence of events:\n\n```\n0. manual_wal_flush is true. There are multiple CFs in a DB.\n1. Submit a write batch with updates to multiple CF\n2. A FlushWAL or a memtable swtich that will try to write the buffered WAL data. Fail this write so that buffered WAL data is dropped: https://github.com/facebook/rocksdb/blob/4b1d595306fae602b56d2aa5128b11b1162bfa81/file/writable_file_writer.cc#L624\nThe error needs to be retryable to start background auto recovery.\n3. One CF successfully flushes its memtable during auto recovery.\n4. Crash the process.\n5. Reopen the DB, one CF will have the update as a result of successful flush. Other CFs will miss all the updates in the write batch since WAL does not have them.\n```\n\nThis can happen if a users configures manual_wal_flush, uses more than one CF, and can hit retryable error for WAL writes. This PR is a short-term fix that upgrades WAL related errors to fatal and not trigger auto recovery.\n\nA long-term fix may be not drop buffered WAL data by checking how much data is actually written, or require atomically flushing all column families during error recovery from this kind of errors.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12995\n\nTest Plan:\nadded unit test to check error severity and if recovery is triggered. A crash test repro command that fails in a few runs before this PR:\n```\npython3 ./tools/db_crashtest.py blackbox --interval=60 --metadata_write_fault_one_in=1000 --column_families=10 --exclude_wal_from_write_fault_injection=0 --manual_wal_flush_one_in=1000 --WAL_size_limit_MB=10240 --WAL_ttl_seconds=0 --acquire_snapshot_one_in=10000 --adaptive_readahead=1 --adm_policy=1 --advise_random_on_open=1 --allow_data_in_errors=True --allow_fallocate=1 --async_io=0 --auto_readahead_size=0 --avoid_flush_during_recovery=1 --avoid_flush_during_shutdown=1 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --bgerror_resume_retry_interval=100 --block_align=1 --block_protection_bytes_per_key=0 --block_size=16384 --bloom_before_level=2147483647 --bottommost_compression_type=none --bottommost_file_compaction_delay=0 --bytes_per_sync=0 --cache_index_and_filter_blocks=1 --cache_index_and_filter_blocks_with_high_priority=1 --cache_size=33554432 --cache_type=auto_hyper_clock_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=1 --charge_table_reader=0 --check_multiget_consistency=0 --check_multiget_entity_consistency=0 --checkpoint_one_in=0 --checksum_type=kxxHash64 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_readahead_size=1048576 --compaction_ttl=0 --compress_format_version=1 --compressed_secondary_cache_size=8388608 --compression_checksum=0 --compression_max_dict_buffer_bytes=0 --compression_max_dict_bytes=0 --compression_parallel_threads=4 --compression_type=none --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --daily_offpeak_time_utc= --data_block_index_type=0 --db_write_buffer_size=0 --decouple_partitioned_filters=1 --default_temperature=kCold --default_write_temperature=kWarm --delete_obsolete_files_period_micros=30000000 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_file_deletions_one_in=1000000 --disable_manual_compaction_one_in=1000000 --disable_wal=0 --dump_malloc_stats=1 --enable_checksum_handoff=1 --enable_compaction_filter=0 --enable_custom_split_merge=0 --enable_do_not_compress_roles=0 --enable_index_compression=0 --enable_memtable_insert_with_hint_prefix_extractor=0 --enable_pipelined_write=1 --enable_sst_partitioner_factory=0 --enable_thread_tracking=1 --enable_write_thread_adaptive_yield=1 --error_recovery_with_no_fault_injection=1 --fail_if_options_file_error=1 --fifo_allow_compaction=1 --file_checksum_impl=big --fill_cache=1 --flush_one_in=1000000 --format_version=6 --get_all_column_family_metadata_one_in=1000000 --get_current_wal_file_one_in=0 --get_live_files_apis_one_in=10000 --get_properties_of_all_tables_one_in=1000000 --get_property_one_in=100000 --get_sorted_wal_files_one_in=0 --hard_pending_compaction_bytes_limit=274877906944 --index_block_restart_interval=4 --index_shortening=1 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=16384 --inplace_update_support=0 --iterpercent=10 --key_len_percent_dist=1,30,69 --key_may_exist_one_in=100000 --last_level_temperature=kWarm --level_compaction_dynamic_level_bytes=0 --lock_wal_one_in=10000 --log_file_time_to_roll=0 --log_readahead_size=0 --long_running_snapshots=0 --lowest_used_cache_tier=2 --manifest_preallocation_size=5120 --mark_for_compaction_one_file_in=10 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=100000 --max_key_len=3 --max_log_file_size=0 --max_manifest_file_size=1073741824 --max_sequential_skip_in_iterations=16 --max_total_wal_size=0 --max_write_batch_group_size_bytes=16777216 --max_write_buffer_number=10 --max_write_buffer_size_to_maintain=2097152 --memtable_insert_hint_per_batch=1 --memtable_max_range_deletions=0 --memtable_prefix_bloom_size_ratio=0.001 --memtable_protection_bytes_per_key=2 --memtable_whole_key_filtering=0 --memtablerep=skip_list --metadata_charge_policy=1 --metadata_read_fault_one_in=0 --min_write_buffer_number_to_merge=1 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=2 --open_files=100 --open_metadata_read_fault_one_in=0 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --optimize_filters_for_hits=0 --optimize_filters_for_memory=0 --optimize_multiget_for_io=0 --paranoid_file_checks=1 --paranoid_memory_checks=0 --partition_filters=0 --partition_pinning=2 --pause_background_one_in=10000 --periodic_compaction_seconds=0 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=0 --progress_reports=0 --promote_l0_one_in=0 --read_amp_bytes_per_bit=0 --read_fault_one_in=0 --readahead_size=524288 --readpercent=45 --recycle_log_file_num=0 --reopen=0 --report_bg_io_stats=0 --reset_stats_one_in=10000 --sample_for_compression=5 --secondary_cache_fault_one_in=0 --secondary_cache_uri= --set_options_one_in=10000 --skip_stats_update_on_db_open=1 --snapshot_hold_ops=100000 --soft_pending_compaction_bytes_limit=1048576 --sqfc_name=bar --sqfc_version=1 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --stats_dump_period_sec=600 --stats_history_buffer_size=1048576 --strict_bytes_per_sync=1 --subcompactions=2 --sync=0 --sync_fault_injection=1 --table_cache_numshardbits=6 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=3 --uncache_aggressiveness=8 --universal_max_read_amp=-1 --unpartitioned_pinning=2 --use_adaptive_mutex=1 --use_adaptive_mutex_lru=0 --use_attribute_group=1 --use_delta_encoding=0 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_get_entity=0 --use_merge=1 --use_multi_cf_iterator=1 --use_multi_get_entity=0 --use_multiget=0 --use_put_entity_one_in=1 --use_sqfc_for_range_queries=0 --use_timed_put_one_in=0 --use_write_buffer_manager=0 --user_timestamp_size=0 --value_size_mult=32 --verification_only=0 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_compression=1 --verify_db_one_in=100000 --verify_file_checksums_one_in=1000000 --verify_iterator_with_expected_state_one_in=5 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=0 --wal_compression=none --write_buffer_size=4194304 --write_dbid_to_manifest=0 --write_fault_one_in=50 --writepercent=35 --ops_per_thread=100000 --preserve_unverified_changes=1\n```\n\nReviewed By: hx235\n\nDifferential Revision: D62888510\n\nPulled By: cbi42\n\nfbshipit-source-id: 308bdbbb8d897cc8eba950155cd0e37cf7eb76fe","shortMessageHtmlLink":"Fix a bug with auto recovery on WAL write error (#12995)"}},{"before":"8648fbcba3ff019f8693c471e1f682a019be264c","after":"a16457625243e22b08cd1c87ce01153e4f554d77","ref":"refs/heads/main","pushedAt":"2024-09-17T20:25:09.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove last user of AutoHeaders.RECURSIVE_GLOB\n\nSummary: I came across this code while buckifying parts of folly and fizz in open source. This is pretty hacky code and cleaning it up doesn't seem that hard, so I did it.\n\nReviewed By: zertosh, pdillinger\n\nDifferential Revision: D62781766\n\nfbshipit-source-id: 43714bce992c53149d1e619063d803297362fb5d","shortMessageHtmlLink":"Remove last user of AutoHeaders.RECURSIVE_GLOB"}},{"before":"0e04ef1a96d75acdcf2cf1802692375d62aea64c","after":"8648fbcba3ff019f8693c471e1f682a019be264c","ref":"refs/heads/main","pushedAt":"2024-09-17T20:12:22.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add missing RemapFileSystem::ReopenWritableFile (#12941)\n\nSummary:\n`RemapFileSystem::ReopenWritableFile` is missing, add it.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12941\n\nReviewed By: pdillinger\n\nDifferential Revision: D61822540\n\nPulled By: cbi42\n\nfbshipit-source-id: dc228f7e8b842216f63de8b925cb663898455345","shortMessageHtmlLink":"Add missing RemapFileSystem::ReopenWritableFile (#12941)"}},{"before":"40adb2bab79375924f1ee6421c236068eda24d12","after":"0e04ef1a96d75acdcf2cf1802692375d62aea64c","ref":"refs/heads/main","pushedAt":"2024-09-14T16:51:54.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Deshim coro in fbcode/internal_repo_rocksdb\n\nSummary:\nThe following rules were deshimmed:\n```\n//folly/experimental/coro:accumulate -> //folly/coro:accumulate\n//folly/experimental/coro:async_generator -> //folly/coro:async_generator\n//folly/experimental/coro:async_pipe -> //folly/coro:async_pipe\n//folly/experimental/coro:async_scope -> //folly/coro:async_scope\n//folly/experimental/coro:async_stack -> //folly/coro:async_stack\n//folly/experimental/coro:baton -> //folly/coro:baton\n//folly/experimental/coro:blocking_wait -> //folly/coro:blocking_wait\n//folly/experimental/coro:collect -> //folly/coro:collect\n//folly/experimental/coro:concat -> //folly/coro:concat\n//folly/experimental/coro:coroutine -> //folly/coro:coroutine\n//folly/experimental/coro:current_executor -> //folly/coro:current_executor\n//folly/experimental/coro:detach_on_cancel -> //folly/coro:detach_on_cancel\n//folly/experimental/coro:detail_barrier -> //folly/coro:detail_barrier\n//folly/experimental/coro:detail_barrier_task -> //folly/coro:detail_barrier_task\n//folly/experimental/coro:detail_current_async_frame -> //folly/coro:detail_current_async_frame\n//folly/experimental/coro:detail_helpers -> //folly/coro:detail_helpers\n//folly/experimental/coro:detail_malloc -> //folly/coro:detail_malloc\n//folly/experimental/coro:detail_manual_lifetime -> //folly/coro:detail_manual_lifetime\n//folly/experimental/coro:detail_traits -> //folly/coro:detail_traits\n//folly/experimental/coro:filter -> //folly/coro:filter\n//folly/experimental/coro:future_util -> //folly/coro:future_util\n//folly/experimental/coro:generator -> //folly/coro:generator\n//folly/experimental/coro:gmock_helpers -> //folly/coro:gmock_helpers\n//folly/experimental/coro:gtest_helpers -> //folly/coro:gtest_helpers\n//folly/experimental/coro:inline_task -> //folly/coro:inline_task\n//folly/experimental/coro:invoke -> //folly/coro:invoke\n//folly/experimental/coro:merge -> //folly/coro:merge\n//folly/experimental/coro:mutex -> //folly/coro:mutex\n//folly/experimental/coro:promise -> //folly/coro:promise\n//folly/experimental/coro:result -> //folly/coro:result\n//folly/experimental/coro:retry -> //folly/coro:retry\n//folly/experimental/coro:rust_adaptors -> //folly/coro:rust_adaptors\n//folly/experimental/coro:scope_exit -> //folly/coro:scope_exit\n//folly/experimental/coro:shared_lock -> //folly/coro:shared_lock\n//folly/experimental/coro:shared_mutex -> //folly/coro:shared_mutex\n//folly/experimental/coro:sleep -> //folly/coro:sleep\n//folly/experimental/coro:small_unbounded_queue -> //folly/coro:small_unbounded_queue\n//folly/experimental/coro:task -> //folly/coro:task\n//folly/experimental/coro:timed_wait -> //folly/coro:timed_wait\n//folly/experimental/coro:timeout -> //folly/coro:timeout\n//folly/experimental/coro:traits -> //folly/coro:traits\n//folly/experimental/coro:transform -> //folly/coro:transform\n//folly/experimental/coro:unbounded_queue -> //folly/coro:unbounded_queue\n//folly/experimental/coro:via_if_async -> //folly/coro:via_if_async\n//folly/experimental/coro:with_async_stack -> //folly/coro:with_async_stack\n//folly/experimental/coro:with_cancellation -> //folly/coro:with_cancellation\n//folly/experimental/coro:bounded_queue -> //folly/coro:bounded_queue\n//folly/experimental/coro:shared_promise -> //folly/coro:shared_promise\n//folly/experimental/coro:cleanup -> //folly/coro:cleanup\n//folly/experimental/coro:auto_cleanup_fwd -> //folly/coro:auto_cleanup_fwd\n//folly/experimental/coro:auto_cleanup -> //folly/coro:auto_cleanup\n```\n\nThe following headers were deshimmed:\n```\nfolly/experimental/coro/Accumulate.h -> folly/coro/Accumulate.h\nfolly/experimental/coro/Accumulate-inl.h -> folly/coro/Accumulate-inl.h\nfolly/experimental/coro/AsyncGenerator.h -> folly/coro/AsyncGenerator.h\nfolly/experimental/coro/AsyncPipe.h -> folly/coro/AsyncPipe.h\nfolly/experimental/coro/AsyncScope.h -> folly/coro/AsyncScope.h\nfolly/experimental/coro/AsyncStack.h -> folly/coro/AsyncStack.h\nfolly/experimental/coro/Baton.h -> folly/coro/Baton.h\nfolly/experimental/coro/BlockingWait.h -> folly/coro/BlockingWait.h\nfolly/experimental/coro/Collect.h -> folly/coro/Collect.h\nfolly/experimental/coro/Collect-inl.h -> folly/coro/Collect-inl.h\nfolly/experimental/coro/Concat.h -> folly/coro/Concat.h\nfolly/experimental/coro/Concat-inl.h -> folly/coro/Concat-inl.h\nfolly/experimental/coro/Coroutine.h -> folly/coro/Coroutine.h\nfolly/experimental/coro/CurrentExecutor.h -> folly/coro/CurrentExecutor.h\nfolly/experimental/coro/DetachOnCancel.h -> folly/coro/DetachOnCancel.h\nfolly/experimental/coro/detail/Barrier.h -> folly/coro/detail/Barrier.h\nfolly/experimental/coro/detail/BarrierTask.h -> folly/coro/detail/BarrierTask.h\nfolly/experimental/coro/detail/CurrentAsyncFrame.h -> folly/coro/detail/CurrentAsyncFrame.h\nfolly/experimental/coro/detail/Helpers.h -> folly/coro/detail/Helpers.h\nfolly/experimental/coro/detail/Malloc.h -> folly/coro/detail/Malloc.h\nfolly/experimental/coro/detail/ManualLifetime.h -> folly/coro/detail/ManualLifetime.h\nfolly/experimental/coro/detail/Traits.h -> folly/coro/detail/Traits.h\nfolly/experimental/coro/Filter.h -> folly/coro/Filter.h\nfolly/experimental/coro/Filter-inl.h -> folly/coro/Filter-inl.h\nfolly/experimental/coro/FutureUtil.h -> folly/coro/FutureUtil.h\nfolly/experimental/coro/Generator.h -> folly/coro/Generator.h\nfolly/experimental/coro/GmockHelpers.h -> folly/coro/GmockHelpers.h\nfolly/experimental/coro/GtestHelpers.h -> folly/coro/GtestHelpers.h\nfolly/experimental/coro/detail/InlineTask.h -> folly/coro/detail/InlineTask.h\nfolly/experimental/coro/Invoke.h -> folly/coro/Invoke.h\nfolly/experimental/coro/Merge.h -> folly/coro/Merge.h\nfolly/experimental/coro/Merge-inl.h -> folly/coro/Merge-inl.h\nfolly/experimental/coro/Mutex.h -> folly/coro/Mutex.h\nfolly/experimental/coro/Promise.h -> folly/coro/Promise.h\nfolly/experimental/coro/Result.h -> folly/coro/Result.h\nfolly/experimental/coro/Retry.h -> folly/coro/Retry.h\nfolly/experimental/coro/RustAdaptors.h -> folly/coro/RustAdaptors.h\nfolly/experimental/coro/ScopeExit.h -> folly/coro/ScopeExit.h\nfolly/experimental/coro/SharedLock.h -> folly/coro/SharedLock.h\nfolly/experimental/coro/SharedMutex.h -> folly/coro/SharedMutex.h\nfolly/experimental/coro/Sleep.h -> folly/coro/Sleep.h\nfolly/experimental/coro/Sleep-inl.h -> folly/coro/Sleep-inl.h\nfolly/experimental/coro/SmallUnboundedQueue.h -> folly/coro/SmallUnboundedQueue.h\nfolly/experimental/coro/Task.h -> folly/coro/Task.h\nfolly/experimental/coro/TimedWait.h -> folly/coro/TimedWait.h\nfolly/experimental/coro/Timeout.h -> folly/coro/Timeout.h\nfolly/experimental/coro/Timeout-inl.h -> folly/coro/Timeout-inl.h\nfolly/experimental/coro/Traits.h -> folly/coro/Traits.h\nfolly/experimental/coro/Transform.h -> folly/coro/Transform.h\nfolly/experimental/coro/Transform-inl.h -> folly/coro/Transform-inl.h\nfolly/experimental/coro/UnboundedQueue.h -> folly/coro/UnboundedQueue.h\nfolly/experimental/coro/ViaIfAsync.h -> folly/coro/ViaIfAsync.h\nfolly/experimental/coro/WithAsyncStack.h -> folly/coro/WithAsyncStack.h\nfolly/experimental/coro/WithCancellation.h -> folly/coro/WithCancellation.h\nfolly/experimental/coro/BoundedQueue.h -> folly/coro/BoundedQueue.h\nfolly/experimental/coro/SharedPromise.h -> folly/coro/SharedPromise.h\nfolly/experimental/coro/Cleanup.h -> folly/coro/Cleanup.h\nfolly/experimental/coro/AutoCleanup-fwd.h -> folly/coro/AutoCleanup-fwd.h\nfolly/experimental/coro/AutoCleanup.h -> folly/coro/AutoCleanup.h\n```\n\nThis is a codemod. It was automatically generated and will be landed once it is approved and tests are passing in sandcastle.\nYou have been added as a reviewer by Sentinel or Butterfly.\n\nAutodiff project: dcoro\nAutodiff partition: fbcode.internal_repo_rocksdb\nAutodiff bookmark: ad.dcoro.fbcode.internal_repo_rocksdb\n\nReviewed By: dtolnay\n\nDifferential Revision: D62684411\n\nfbshipit-source-id: 8dbd31ab64fcdd99435d322035b9668e3200e0a3","shortMessageHtmlLink":"Deshim coro in fbcode/internal_repo_rocksdb"}},{"before":"cabd2d871846320ff30cf79c7814b0b46d689732","after":"40adb2bab79375924f1ee6421c236068eda24d12","ref":"refs/heads/main","pushedAt":"2024-09-13T21:43:07.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix wraparound in SstFileManager (#13010)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13010\n\nThe OnAddFile cur_compactions_reserved_size_ accounting causes wraparound when re-opening a database with an unowned SstFileManager and during recovery. It was introduced in #4164 which addresses out of space recovery with an unclear purpose. Compaction jobs do this accounting via EnoughRoomForCompaction/OnCompactionCompletion and to my understanding would never reuse a sst file name.\n\nReviewed By: anand1976\n\nDifferential Revision: D62535775\n\nfbshipit-source-id: a7c44d6e0a4b5ff74bc47abfe57c32ca6770243d","shortMessageHtmlLink":"Fix wraparound in SstFileManager (#13010)"}},{"before":"e490f2b051b53c019c7f318771cedffd5cf708b4","after":"cabd2d871846320ff30cf79c7814b0b46d689732","ref":"refs/heads/main","pushedAt":"2024-09-13T20:59:54.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix a couple of missing cases of retry on corruption (#13007)\n\nSummary:\nFor SST checksum mismatch corruptions in the read path, RocksDB retries the read if the underlying file system supports verification and reconstruction of data (`FSSupportedOps::kVerifyAndReconstructRead`). There were a couple of places where the retry was missing - reading the SST footer and the properties block. This PR fixes the retry in those cases.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13007\n\nTest Plan: Add new unit tests\n\nReviewed By: jaykorean\n\nDifferential Revision: D62519186\n\nPulled By: anand1976\n\nfbshipit-source-id: 50aa38f18f2a53531a9fc8d4ccdf34fbf034ed59","shortMessageHtmlLink":"Fix a couple of missing cases of retry on corruption (#13007)"}},{"before":"43bc71fef678eb5480d6ee010cc8875bd5618be1","after":"e490f2b051b53c019c7f318771cedffd5cf708b4","ref":"refs/heads/main","pushedAt":"2024-09-12T22:22:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix a bug in ReFitLevel() where `FileMetaData::being_compacted` is not cleared (#13009)\n\nSummary:\nin ReFitLevel(), we were not setting being_compacted to false after ReFitLevel() is done. This is not a issue if refit level is successful, since new FileMetaData is created for files at the target level. However, if there's an error during RefitLevel(), e.g., Manifest write failure, we should clear the being_compacted field for these files. Otherwise, these files will not be picked for compaction until db reopen.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13009\n\nTest Plan:\nexisting test.\n- stress test failure in T200339331 should not happen anymore.\n\nReviewed By: hx235\n\nDifferential Revision: D62597169\n\nPulled By: cbi42\n\nfbshipit-source-id: 0ba659806da6d6d4b42384fc95268b2d7bad720e","shortMessageHtmlLink":"Fix a bug in ReFitLevel() where FileMetaData::being_compacted is no…"}},{"before":"55ac0b729e5266a47d91e8ba9cd7ac9e5feea35b","after":"43bc71fef678eb5480d6ee010cc8875bd5618be1","ref":"refs/heads/main","pushedAt":"2024-09-10T20:25:53.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add an internal API MemTableList::GetEditForDroppingCurrentVersion (#13001)\n\nSummary:\nPrepare this internal API to be used by atomic data replacement. The main purpose of this API is to get a `VersionEdit` to mark the entire current `MemTableListVersion` as dropped. Flush needs the similar functionality when installing results, so that logic is refactored into a util function `GetDBRecoveryEditForObsoletingMemTables` to be shared by flush and this internal API.\n\nTo test this internal API, flush's result installation is redirected to use this API when it is flushing all the immutable MemTables in debug mode. It should achieve the exact same results, just with a duplicated `VersionEdit::log_number` field that doesn't upsets the recovery logic.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13001\n\nTest Plan: Existing tests\n\nReviewed By: pdillinger\n\nDifferential Revision: D62309591\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: e25914d9a2e281c25ab7ee31a66eaf6adfae4b88","shortMessageHtmlLink":"Add an internal API MemTableList::GetEditForDroppingCurrentVersion (#…"}},{"before":"f48b64460e90e679cb349aa820d73e66482e2f93","after":"55ac0b729e5266a47d91e8ba9cd7ac9e5feea35b","ref":"refs/heads/main","pushedAt":"2024-09-10T18:41:20.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Support building db_bench using buck (#13004)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13004\n\nThe patch extends the buckifier script so it generates a target for `db_bench` as well.\n\nReviewed By: cbi42\n\nDifferential Revision: D62407071\n\nfbshipit-source-id: 0cb98a324ce0598ad84a8675aa77b7d0f91bf40c","shortMessageHtmlLink":"Support building db_bench using buck (#13004)"}},{"before":"0c6e9c036a9b99f6f760ad23739f491c1eaa07f2","after":"f48b64460e90e679cb349aa820d73e66482e2f93","ref":"refs/heads/main","pushedAt":"2024-09-06T21:57:32.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Provide a way to invoke a callback for a Cache handle (#12987)\n\nSummary:\nAdd the `ApplyToHandle` method to the `Cache` interface to allow a caller to request the invocation of a callback on the given cache handle. The goal here is to allow a cache that manages multiple cache instances to use a callback on a handle to determine which instance it belongs to. For example, the callback can hash the key and use that to pick the correct target instance. This is useful to redirect methods like `Ref` and `Release`, which don't know the cache key.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12987\n\nReviewed By: pdillinger\n\nDifferential Revision: D62151907\n\nPulled By: anand1976\n\nfbshipit-source-id: e4ffbbb96eac9061d2ab0e7e1739eea5ebb1cd58","shortMessageHtmlLink":"Provide a way to invoke a callback for a Cache handle (#12987)"}},{"before":"a24574e80adcba35ec85d726273e939c7ca8fdb5","after":"0c6e9c036a9b99f6f760ad23739f491c1eaa07f2","ref":"refs/heads/main","pushedAt":"2024-09-06T21:11:27.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Make compaction always use the input version with extra ref protection (#12992)\n\nSummary:\n`Compaction` is already creating its own ref for the input Version: https://github.com/facebook/rocksdb/blob/4b1d595306fae602b56d2aa5128b11b1162bfa81/db/compaction/compaction.cc#L73\n\nAnd properly Unref it during destruction:\nhttps://github.com/facebook/rocksdb/blob/4b1d595306fae602b56d2aa5128b11b1162bfa81/db/compaction/compaction.cc#L450\n\nThis PR redirects compaction's access of `cfd->current()` to this input `Version`, to prepare for when a column family's data can be replaced all together, and `cfd->current()` is not safe to access for a compaction job. Because a new `Version` with just some other external files could be installed as `cfd->current()`. The compaction job's expectation of the current `Version` and the corresponding storage info to always have its input files will no longer be guaranteed.\n\nMy next follow up is to do a similar thing for flush, also to prepare it for when a column family's data can be replaced. I will make it create its own reference of the current `MemTableListVersion` and use it as input, all flush job's access of memtables will be wired to that input `MemTableListVersion`. Similarly this reference will be unreffed during a flush job's destruction.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12992\n\nTest Plan: Existing tests\n\nReviewed By: pdillinger\n\nDifferential Revision: D62212625\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: 9a781213469cf366857a128d50a702af683a046a","shortMessageHtmlLink":"Make compaction always use the input version with extra ref protection ("}},{"before":"0bea5a2cfe95dacc9ca015b13cd3f01eff885f07","after":"a24574e80adcba35ec85d726273e939c7ca8fdb5","ref":"refs/heads/main","pushedAt":"2024-09-06T20:11:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add documentation for background job's state transition (#12994)\n\nSummary:\nThe `SchedulePending*` API is a bit confusing since it doesn't immediately schedule the work and can be confused with the actual scheduling. So I have changed these to be `EnqueuePending*` and added some documentation for the corresponding state transitions of these background work.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12994\n\nTest Plan: existing tests\n\nReviewed By: cbi42\n\nDifferential Revision: D62252746\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: ee68be6ed33070cad9a5004b7b3e16f5bcb041bf","shortMessageHtmlLink":"Add documentation for background job's state transition (#12994)"}},{"before":"4577b672d548e88b58193adcaf05f12eb0fc1835","after":"0bea5a2cfe95dacc9ca015b13cd3f01eff885f07","ref":"refs/heads/main","pushedAt":"2024-09-06T18:08:56.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Disable WAL fault injection in some case (#13000)\n\nSummary:\nwhen manual_wal_flush is true and when there are more than 1 CF, WAL fault injection can cause CFs to be inconsistent. See more explanation and repro in T199157789. Disable the combination for now until we have a fix that allows auto recovery. This also helps to see if there's other cause of stress test failures.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/13000\n\nTest Plan:\nthe following command could repro db consistency failure in a few runs before this PR.\nFrom stress test output we can also see that exclude_wal_from_write_fault_injection and metadata_write_fault_one_in are sanitized to 0.\n```\npython3 ./tools/db_crashtest.py blackbox --interval=60 --metadata_write_fault_one_in=1000 --column_families=10 --exclude_wal_from_write_fault_injection=0 --manual_wal_flush_one_in=1000 --WAL_size_limit_MB=10240 --WAL_ttl_seconds=0 --acquire_snapshot_one_in=10000 --adaptive_readahead=1 --adm_policy=1 --advise_random_on_open=1 --allow_data_in_errors=True --allow_fallocate=1 --async_io=0 --auto_readahead_size=0 --avoid_flush_during_recovery=1 --avoid_flush_during_shutdown=1 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --bgerror_resume_retry_interval=100 --block_align=1 --block_protection_bytes_per_key=0 --block_size=16384 --bloom_before_level=2147483647 --bottommost_compression_type=none --bottommost_file_compaction_delay=0 --bytes_per_sync=0 --cache_index_and_filter_blocks=1 --cache_index_and_filter_blocks_with_high_priority=1 --cache_size=33554432 --cache_type=auto_hyper_clock_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=1 --charge_table_reader=0 --check_multiget_consistency=0 --check_multiget_entity_consistency=0 --checkpoint_one_in=0 --checksum_type=kxxHash64 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_readahead_size=1048576 --compaction_ttl=0 --compress_format_version=1 --compressed_secondary_cache_size=8388608 --compression_checksum=0 --compression_max_dict_buffer_bytes=0 --compression_max_dict_bytes=0 --compression_parallel_threads=4 --compression_type=none --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --daily_offpeak_time_utc= --data_block_index_type=0 --db_write_buffer_size=0 --decouple_partitioned_filters=1 --default_temperature=kCold --default_write_temperature=kWarm --delete_obsolete_files_period_micros=30000000 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_file_deletions_one_in=1000000 --disable_manual_compaction_one_in=1000000 --disable_wal=0 --dump_malloc_stats=1 --enable_checksum_handoff=1 --enable_compaction_filter=0 --enable_custom_split_merge=0 --enable_do_not_compress_roles=0 --enable_index_compression=0 --enable_memtable_insert_with_hint_prefix_extractor=0 --enable_pipelined_write=1 --enable_sst_partitioner_factory=0 --enable_thread_tracking=1 --enable_write_thread_adaptive_yield=1 --error_recovery_with_no_fault_injection=1 --fail_if_options_file_error=1 --fifo_allow_compaction=1 --file_checksum_impl=big --fill_cache=1 --flush_one_in=1000000 --format_version=6 --get_all_column_family_metadata_one_in=1000000 --get_current_wal_file_one_in=0 --get_live_files_apis_one_in=10000 --get_properties_of_all_tables_one_in=1000000 --get_property_one_in=100000 --get_sorted_wal_files_one_in=0 --hard_pending_compaction_bytes_limit=274877906944 --index_block_restart_interval=4 --index_shortening=1 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=16384 --inplace_update_support=0 --iterpercent=10 --key_len_percent_dist=1,30,69 --key_may_exist_one_in=100000 --last_level_temperature=kWarm --level_compaction_dynamic_level_bytes=0 --lock_wal_one_in=10000 --log_file_time_to_roll=0 --log_readahead_size=0 --long_running_snapshots=0 --lowest_used_cache_tier=2 --manifest_preallocation_size=5120 --mark_for_compaction_one_file_in=10 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=100000 --max_key_len=3 --max_log_file_size=0 --max_manifest_file_size=1073741824 --max_sequential_skip_in_iterations=16 --max_total_wal_size=0 --max_write_batch_group_size_bytes=16777216 --max_write_buffer_number=10 --max_write_buffer_size_to_maintain=2097152 --memtable_insert_hint_per_batch=1 --memtable_max_range_deletions=0 --memtable_prefix_bloom_size_ratio=0.001 --memtable_protection_bytes_per_key=2 --memtable_whole_key_filtering=0 --memtablerep=skip_list --metadata_charge_policy=1 --metadata_read_fault_one_in=0 --min_write_buffer_number_to_merge=1 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=2 --open_files=100 --open_metadata_read_fault_one_in=0 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --optimize_filters_for_hits=0 --optimize_filters_for_memory=0 --optimize_multiget_for_io=0 --paranoid_file_checks=1 --paranoid_memory_checks=0 --partition_filters=0 --partition_pinning=2 --pause_background_one_in=10000 --periodic_compaction_seconds=0 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=0 --progress_reports=0 --promote_l0_one_in=0 --read_amp_bytes_per_bit=0 --read_fault_one_in=0 --readahead_size=524288 --readpercent=45 --recycle_log_file_num=0 --reopen=0 --report_bg_io_stats=0 --reset_stats_one_in=10000 --sample_for_compression=5 --secondary_cache_fault_one_in=0 --secondary_cache_uri= --set_options_one_in=10000 --skip_stats_update_on_db_open=1 --snapshot_hold_ops=100000 --soft_pending_compaction_bytes_limit=1048576 --sqfc_name=bar --sqfc_version=1 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --stats_dump_period_sec=600 --stats_history_buffer_size=1048576 --strict_bytes_per_sync=1 --subcompactions=2 --sync=0 --sync_fault_injection=1 --table_cache_numshardbits=6 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=3 --uncache_aggressiveness=8 --universal_max_read_amp=-1 --unpartitioned_pinning=2 --use_adaptive_mutex=1 --use_adaptive_mutex_lru=0 --use_attribute_group=1 --use_delta_encoding=0 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_get_entity=0 --use_merge=1 --use_multi_cf_iterator=1 --use_multi_get_entity=0 --use_multiget=0 --use_put_entity_one_in=1 --use_sqfc_for_range_queries=0 --use_timed_put_one_in=0 --use_write_buffer_manager=0 --user_timestamp_size=0 --value_size_mult=32 --verification_only=0 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_compression=1 --verify_db_one_in=100000 --verify_file_checksums_one_in=1000000 --verify_iterator_with_expected_state_one_in=5 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=0 --wal_compression=none --write_buffer_size=4194304 --write_dbid_to_manifest=0 --write_fault_one_in=50 --writepercent=35 --ops_per_thread=100000 --preserve_unverified_changes=1\n```\n\nReviewed By: hx235\n\nDifferential Revision: D62303631\n\nPulled By: cbi42\n\nfbshipit-source-id: d9441188ee84d53e5e7916f3305e50843fe9fde2","shortMessageHtmlLink":"Disable WAL fault injection in some case (#13000)"}},{"before":"0d3aaf7c0f9c15cb52954e89c171ac880079a8c7","after":"4577b672d548e88b58193adcaf05f12eb0fc1835","ref":"refs/heads/main","pushedAt":"2024-09-06T17:17:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"More valgrind fixes (#12990)\n\nSummary:\n* https://github.com/facebook/rocksdb/issues/12936 was insufficient to fix the std::optional false positives. Making a fix validated in CI this time (see https://github.com/facebook/rocksdb/issues/12991)\n* valgrind grinds to a halt on startup on my dev machine apparently because it expects internet access. Disable its attempts to access the internet when git is using a proxy.\n* Move PORTABLE=1 from CI job to the Makefile. Without it, valgrind complains about illegal instructions (too new)\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12990\n\nTest Plan: manual, watch nightly valgrind job\n\nReviewed By: ltamasi\n\nDifferential Revision: D62203242\n\nPulled By: pdillinger\n\nfbshipit-source-id: a611b08da7dbd173b0709ed7feb0578729553a17","shortMessageHtmlLink":"More valgrind fixes (#12990)"}},{"before":"4b1d595306fae602b56d2aa5128b11b1162bfa81","after":"0d3aaf7c0f9c15cb52954e89c171ac880079a8c7","ref":"refs/heads/main","pushedAt":"2024-09-05T17:40:19.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Ensure SSTs compressed in tiered_secondary_cache_test (#12993)\n\nSummary:\nIt appears the arm testsuite is failing because it is building without snappy, which is causing the SST files not to be compressed, which somehow causes these tests to fail. Manually setting LZ4 which is already required.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12993\n\nTest Plan: reproduced and verified fix on ARM laptop\n\nReviewed By: anand1976\n\nDifferential Revision: D62216451\n\nPulled By: pdillinger\n\nfbshipit-source-id: 3f21fcd9be0edaa66c7eca0cb7d56b998171e263","shortMessageHtmlLink":"Ensure SSTs compressed in tiered_secondary_cache_test (#12993)"}},{"before":"064c0ad53d856e55807931f10f1df81fb5a53c7f","after":"4b1d595306fae602b56d2aa5128b11b1162bfa81","ref":"refs/heads/main","pushedAt":"2024-09-04T18:45:31.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix and clarify ignore_unknown_options (#12989)\n\nSummary:\n`ignore_unknown_options=true` had an undocumented behavior of having no effect (disallow unknown options) if reading from the same or older major.minor version. Presumably this was intended to catch unintentional addition of new options in a patch release, but there is no automated version compatibility testing between patch releases. So this was a bad choice without such testing support, because it just means users would hit the failure in case of adding features to a patch release.\n\nIn this diff we respect ignore_unknown_options when reading a file from any newer version, even patch versions, and document this behavior in the API.\n\nI don't think it's practical or necessary to test among patch releases in check_format_compatible.sh. This seems like an exceptional case of applying a *different semantics* to patch version updates than to minor/major versions.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12989\n\nTest Plan: unit test updated (and refactored)\n\nReviewed By: jaykorean\n\nDifferential Revision: D62168738\n\nPulled By: pdillinger\n\nfbshipit-source-id: fb3c3ef30f0bbad0d5ffcc4570fb9ef963e7daac","shortMessageHtmlLink":"Fix and clarify ignore_unknown_options (#12989)"}},{"before":"c989c51ed7260135777e8a13e1a2f8c85e520920","after":"064c0ad53d856e55807931f10f1df81fb5a53c7f","ref":"refs/heads/main","pushedAt":"2024-09-04T05:20:30.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix the check format compatible change (#12988)\n\nSummary:\n`check_format_compatible` script was broken due to extra comma added in 5b8f5cbcf4324584b914dcbc27bed7c007c82b3e\ne.g. https://github.com/facebook/rocksdb/actions/runs/10505042711/job/29101787220\n```\n...\n2024-08-23T11:44:15.0175202Z == Building 9.5.fb, debug\n2024-08-23T11:44:15.0190592Z fatal: ambiguous argument '_tmp_origin/9.5.fb,': unknown revision or path not in the working tree.\n...\n```\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12988\n\nTest Plan:\n```\ntools/check_format_compatible.sh\n```\n```\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.6.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.6.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.6.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.7.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.7.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.7.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.8.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.8.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.8.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.9.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.9.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.9.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.10.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.10.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.10.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 8.11.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/8.11.fb to /tmp/rocksdb_format_compatible_jewoongh/db/8.11.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.0.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.0.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.0.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.1.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.1.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.1.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.2.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.2.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.2.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.3.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.3.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.3.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.4.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.4.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.4.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.5.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.5.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.5.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n== Use HEAD (48339d2a65670211bc9c204364a2127ba9b2a460) to open DB generated using 9.6.fb...\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/9.6.fb to /tmp/rocksdb_format_compatible_jewoongh/db/9.6.fb/db_dump.txt\n== Dumping data from /tmp/rocksdb_format_compatible_jewoongh/db/current to /tmp/rocksdb_format_compatible_jewoongh/db/current/db_dump.txt\n==== Compatibility Test PASSED ====\n```\n\nReviewed By: pdillinger\n\nDifferential Revision: D62162454\n\nPulled By: jaykorean\n\nfbshipit-source-id: 562225c6cb27e0eb66f241a6f9424dc624d8c837","shortMessageHtmlLink":"Fix the check format compatible change (#12988)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yMVQwMjoyNTozMi4wMDAwMDBazwAAAAS8f4Xc","startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yMVQwMjoyNTozMi4wMDAwMDBazwAAAAS8f4Xc","endCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0wNFQwNToyMDozMC4wMDAwMDBazwAAAASsSiRu"}},"title":"Activity · facebook/rocksdb"}