Add Env::Remove{File,Dir}, which obsolete Env::Delete{File,Dir}.

The "DeleteFile" method name causes pain for Windows developers, because <windows.h> #defines a DeleteFile macro to DeleteFileW or DeleteFileA. Current code uses workarounds, like #undefining DeleteFile everywhere an Env is declared, implemented, or used.

This CL removes the need for workarounds by renaming Env::DeleteFile to Env::RemoveFile. For consistency, Env::DeleteDir is also renamed to Env::RemoveDir. A few internal methods are also renamed for consistency. Software that supports Windows is expected to migrate any Env implementations and usage to Remove{File,Dir}, and never use the name Env::Delete{File,Dir} in its code.

The renaming is done in a backwards-compatible way, at the risk of making it slightly more difficult to build a new correct Env implementation. The backwards compatibility is achieved using the following hacks:

1) Env::Remove{File,Dir} methods are added, with a default implementation that calls into Env::Delete{File,Dir}. This makes old Env implementations compatible with code that calls into the updated API.

2) The Env::Delete{File,Dir} methods are no longer pure virtuals. Instead, they gain a default implementation that calls into Env::Remove{File,Dir}. This makes updated Env implementations compatible with code that calls into the old API.

The cost of this approach is that it's possible to write an Env without overriding either Remove{File,Dir} or Delete{File,Dir}, without getting a compiler warning. However, attempting to run the test suite will immediately fail with an infinite call stack ending in {Remove,Delete}{File,Dir}, making developers aware of the problem.

PiperOrigin-RevId: 288710907
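A minimal sketch of the mutual-default hack described above (method names and signatures follow the public Env API; the real class has many more members):

class Env {
 public:
  // New name: the default implementation forwards to the old name, so a
  // legacy Env subclass that only overrides DeleteFile keeps working.
  virtual Status RemoveFile(const std::string& fname) {
    return DeleteFile(fname);
  }
  // Old name: no longer pure virtual; the default forwards to the new name,
  // so an updated subclass that only overrides RemoveFile also keeps working.
  virtual Status DeleteFile(const std::string& fname) {
    return RemoveFile(fname);
  }
};

A subclass overriding neither method still compiles, but the two defaults call each other forever; running the test suite then fails with the infinite call stack mentioned above.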
Fix snapshot compaction bug

Closes google/leveldb#320

During compaction it was possible that records from a block b1=(l1,u1) would be pushed down from level i to level i+1. If there is a block b2=(l2,u2) at level i with k1 = user_key(u1) = user_key(l2), then a subsequent search for k1 will yield the record l2, which has a smaller sequence number than u1, because the sort order for records sorts increasing by user key but decreasing by sequence number.

This change adds a call to a new function AddBoundaryInputs to SetupOtherInputs. AddBoundaryInputs searches for a block b2 matching the criteria above and adds it to the set of files to be compacted. Whenever AddBoundaryInputs is called, it is important that the compaction fileset in level i+1 (known as c->inputs_[1] in the code) be recomputed, so each call to AddBoundaryInputs is followed by a call to GetOverlappingInputs. SetupOtherInputs is called on both manual and automated compaction passes, for both level zero and levels greater than 0.

The original change posted in https://github.com/google/leveldb/pull/339 has been modified to also include changes made by Chris Mumford <cmumford@google.com> in https://github.com/cmumford/leveldb/commit/4b72cb14f8da2aab12451c24b8e205aff686e9dc:

1. Releasing snapshots during test cleanup to avoid memory leak warnings.
2. Refactored test to use testutil.h, to be in line with other issue tests and to create the test database in the correct temporary location.
3. Added copyright banner.

Otherwise, just minor formatting and limiting character width to 80 characters. Additionally, the change was rebased on top of current master, and changes previously made to the Makefile were ported to the CMakeLists.txt.

Testing Done: A test program (issue320_test) was constructed that performs mutations while snapshots are active. issue320_test fails without this bug fix after 64k writes; it passes with this bug fix. It was run with 200M writes and passed. Unit tests were written for the new function that was added to the code. Make test was run and seen to pass.

Signed-off-by: Richard Cole <richcole@amazon.com>
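A condensed sketch of the boundary-expansion loop the message describes, simplified from the upstream patch (not verbatim; the excerpt of version_set.cc below ends before the function appears):

// Expand *compaction_files until no file in level_files starts with the same
// user key as the current largest key: a boundary file b2 with
// user_key(l2) == user_key(u1) and l2 > u1 must be compacted together with b1.
void AddBoundaryInputs(const InternalKeyComparator& icmp,
                       const std::vector<FileMetaData*>& level_files,
                       std::vector<FileMetaData*>* compaction_files) {
  if (compaction_files->empty()) return;
  // Largest internal key currently covered by the compaction.
  InternalKey largest_key = (*compaction_files)[0]->largest;
  for (size_t i = 1; i < compaction_files->size(); i++) {
    if (icmp.Compare((*compaction_files)[i]->largest, largest_key) > 0) {
      largest_key = (*compaction_files)[i]->largest;
    }
  }
  bool continue_searching = true;
  while (continue_searching) {
    // Smallest file whose smallest key is > largest_key yet shares its user key.
    FileMetaData* boundary = nullptr;
    for (FileMetaData* f : level_files) {
      if (icmp.Compare(f->smallest, largest_key) > 0 &&
          icmp.user_comparator()->Compare(f->smallest.user_key(),
                                          largest_key.user_key()) == 0 &&
          (boundary == nullptr ||
           icmp.Compare(f->smallest, boundary->smallest) < 0)) {
        boundary = f;
      }
    }
    if (boundary != nullptr) {
      compaction_files->push_back(boundary);
      largest_key = boundary->largest;  // May expose a further boundary file.
    } else {
      continue_searching = false;
    }
  }
}

As the message notes, after each call the level-(i+1) inputs must be recomputed via GetOverlappingInputs, since the compaction's key range may have grown.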
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "db/version_set.h"

#include <algorithm>
#include <cstdio>
#include <iostream>  // Needed for the std::cout debug tracing added in this fork.

#include "db/filename.h"
#include "db/log_reader.h"
#include "db/log_writer.h"
#include "db/memtable.h"
#include "db/table_cache.h"
#include "leveldb/env.h"
#include "leveldb/table_builder.h"
#include "table/merger.h"
#include "table/two_level_iterator.h"
#include "util/coding.h"
#include "util/logging.h"

namespace leveldb {

static size_t TargetFileSize(const Options* options) {
  return options->max_file_size;
}

// Maximum bytes of overlaps in grandparent (i.e., level+2) before we
// stop building a single file in a level->level+1 compaction.
static int64_t MaxGrandParentOverlapBytes(const Options* options) {
  return 10 * TargetFileSize(options);
}

// Maximum number of bytes in all compacted files.  We avoid expanding
// the lower level file set of a compaction if it would make the
// total compaction cover more than this many bytes.
static int64_t ExpandedCompactionByteSizeLimit(const Options* options) {
  return 25 * TargetFileSize(options);
}

static double MaxBytesForLevel(const Options* options, int level) {
  // Note: the result for level zero is not really used since we set
  // the level-0 compaction threshold based on number of files.

  // Result for both level-0 and level-1
  double result = 10. * 1048576.0;
  while (level > 1) {
    result *= 10;
    level--;
  }
  return result;
}

static uint64_t MaxFileSizeForLevel(const Options* options, int level) {
  // We could vary per level to reduce number of files?
  return TargetFileSize(options);
}

static int64_t TotalFileSize(const std::vector<FileMetaData*>& files) {
  int64_t sum = 0;
  for (size_t i = 0; i < files.size(); i++) {
    sum += files[i]->file_size;
  }
  return sum;
}

Version::~Version() {
  assert(refs_ == 0);

  // Remove from linked list
  prev_->next_ = next_;
  next_->prev_ = prev_;

  // Drop references to files
  for (int level = 0; level < config::kNumLevels; level++) {
    for (size_t i = 0; i < files_[level].size(); i++) {
      FileMetaData* f = files_[level][i];
      assert(f->refs > 0);
      f->refs--;
      if (f->refs <= 0) {
        delete f;
      }
    }
  }
}
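// NOTE (fork-specific): FileMetaData::largest_deadtime and
// ParsedInternalKey::deadTime are additions of this fork (per-file expiry
// metadata, defined elsewhere). FindFile below extends upstream's plain
// binary search with a forward scan that skips files whose newest deadtime
// is already older than the deadTime carried in the lookup key.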
// When searching for a file, the key's lifetime (deadtime) must also be
// considered, not just the key range.
int FindFile(const InternalKeyComparator& icmp,
             const std::vector<FileMetaData*>& files, const Slice& key) {
  uint32_t left = 0;
  uint32_t right = files.size();
  ParsedInternalKey parsed;
  ParseInternalKey(key, &parsed);
  while (left < right) {
    uint32_t mid = (left + right) / 2;
    const FileMetaData* f = files[mid];
    if (icmp.InternalKeyComparator::Compare(f->largest.Encode(), key) < 0) {
      // Key at "mid.largest" is < "target".  Therefore all
      // files at or before "mid" are uninteresting.
      left = mid + 1;
    } else {
      // Key at "mid.largest" is >= "target".  Therefore all files
      // after "mid" are uninteresting.
      right = mid;
    }
  }
  while (right < files.size()) {
    printf("file ind %d num %lu largest deadtime %lu parsed deadtime %lu\n",
           right, files[right]->number, files[right]->largest_deadtime,
           parsed.deadTime);
    if (files[right]->largest_deadtime >= parsed.deadTime) {
      break;
    }
    // if (icmp.InternalKeyComparator::Compare(files[right]->largest.Encode(),
    //                                         key) > 0) {
    //   break;
    // }
    right++;
  }
  return right;
}
static bool AfterFile(const Comparator* ucmp, const Slice* user_key,
                      const FileMetaData* f) {
  // null user_key occurs before all keys and is therefore never after *f
  return (user_key != nullptr &&
          ucmp->Compare(*user_key, f->largest.user_key()) > 0);
}

static bool BeforeFile(const Comparator* ucmp, const Slice* user_key,
                       const FileMetaData* f) {
  // null user_key occurs after all keys and is therefore never before *f
  return (user_key != nullptr &&
          ucmp->Compare(*user_key, f->smallest.user_key()) < 0);
}

bool SomeFileOverlapsRange(const InternalKeyComparator& icmp,
                           bool disjoint_sorted_files,
                           const std::vector<FileMetaData*>& files,
                           const Slice* smallest_user_key,
                           const Slice* largest_user_key) {
  const Comparator* ucmp = icmp.user_comparator();
  if (!disjoint_sorted_files) {
    // Need to check against all files
    for (size_t i = 0; i < files.size(); i++) {
      const FileMetaData* f = files[i];
      if (AfterFile(ucmp, smallest_user_key, f) ||
          BeforeFile(ucmp, largest_user_key, f)) {
        // No overlap
      } else {
        return true;  // Overlap
      }
    }
    return false;
  }

  // Binary search over file list
  uint32_t index = 0;
  if (smallest_user_key != nullptr) {
    // Find the earliest possible internal key for smallest_user_key
    InternalKey small_key(*smallest_user_key, kMaxSequenceNumber,
                          kValueTypeForSeek);
    index = FindFile(icmp, files, small_key.Encode());
  }

  if (index >= files.size()) {
    // beginning of range is after all files, so no overlap.
    return false;
  }

  return !BeforeFile(ucmp, largest_user_key, files[index]);
}

// An internal iterator.  For a given version/level pair, yields
// information about the files in the level.  For a given entry, key()
// is the largest key that occurs in the file, and value() is a
// 16-byte value containing the file number and file size, both
// encoded using EncodeFixed64.
class Version::LevelFileNumIterator : public Iterator {
 public:
  LevelFileNumIterator(const InternalKeyComparator& icmp,
                       const std::vector<FileMetaData*>* flist)
      : icmp_(icmp), flist_(flist), index_(flist->size()) {  // Marks as invalid
  }
  bool Valid() const override { return index_ < flist_->size(); }
  void Seek(const Slice& target) override {
    index_ = FindFile(icmp_, *flist_, target);
  }
  void SeekToFirst() override { index_ = 0; }
  void SeekToLast() override {
    index_ = flist_->empty() ? 0 : flist_->size() - 1;
  }
  void Next() override {
    assert(Valid());
    index_++;
  }
  void Prev() override {
    assert(Valid());
    if (index_ == 0) {
      index_ = flist_->size();  // Marks as invalid
    } else {
      index_--;
    }
  }
  Slice key() const override {
    assert(Valid());
    return (*flist_)[index_]->largest.Encode();
  }
  Slice value() const override {
    assert(Valid());
    EncodeFixed64(value_buf_, (*flist_)[index_]->number);
    EncodeFixed64(value_buf_ + 8, (*flist_)[index_]->file_size);
    return Slice(value_buf_, sizeof(value_buf_));
  }
  Status status() const override { return Status::OK(); }

 private:
  const InternalKeyComparator icmp_;
  const std::vector<FileMetaData*>* const flist_;
  uint32_t index_;

  // Backing store for value().  Holds the file number and size.
  mutable char value_buf_[16];
};
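// Note: Seek() above delegates to this fork's deadtime-aware FindFile(), so
// a concatenating iterator built on this class may position past files whose
// contents are entirely expired relative to the deadTime in the seek target.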
static Iterator* GetFileIterator(void* arg, const ReadOptions& options,
                                 const Slice& file_value) {
  TableCache* cache = reinterpret_cast<TableCache*>(arg);
  if (file_value.size() != 16) {
    return NewErrorIterator(
        Status::Corruption("FileReader invoked with unexpected value"));
  } else {
    return cache->NewIterator(options, DecodeFixed64(file_value.data()),
                              DecodeFixed64(file_value.data() + 8));
  }
}

Iterator* Version::NewConcatenatingIterator(const ReadOptions& options,
                                            int level) const {
  return NewTwoLevelIterator(
      new LevelFileNumIterator(vset_->icmp_, &files_[level]), &GetFileIterator,
      vset_->table_cache_, options);
}

void Version::AddIterators(const ReadOptions& options,
                           std::vector<Iterator*>* iters) {
  // Merge all level zero files together since they may overlap
  for (size_t i = 0; i < files_[0].size(); i++) {
    iters->push_back(vset_->table_cache_->NewIterator(
        options, files_[0][i]->number, files_[0][i]->file_size));
  }

  // For levels > 0, we can use a concatenating iterator that sequentially
  // walks through the non-overlapping files in the level, opening them
  // lazily.
  for (int level = 1; level < config::kNumLevels; level++) {
    if (!files_[level].empty()) {
      iters->push_back(NewConcatenatingIterator(options, level));
    }
  }
}

// Callback from TableCache::Get()
namespace {
enum SaverState {
  kNotFound,
  kFound,
  kDeleted,
  kCorrupt,
};
struct Saver {
  SaverState state;
  const Comparator* ucmp;
  Slice user_key;
  std::string* value;
};
}  // namespace
static void SaveValue(void* arg, const Slice& ikey, const Slice& v) {
  Saver* s = reinterpret_cast<Saver*>(arg);
  ParsedInternalKey parsed_key;
  if (!ParseInternalKey(ikey, &parsed_key)) {
    // std::cout << "corrupt get" << std::endl;
    s->state = kCorrupt;
  } else {
    std::cout << "found & target " << parsed_key.user_key.ToString() << " "
              << s->user_key.ToString() << std::endl;
    if (s->ucmp->Compare(parsed_key.user_key, s->user_key) == 0) {
      s->state = (parsed_key.type == kTypeValue) ? kFound : kDeleted;
      if (s->state == kFound) {
        s->value->assign(v.data(), v.size());
      }
    }
  }
  std::cout << "state : " << s->state << std::endl;
}

static bool NewestFirst(FileMetaData* a, FileMetaData* b) {
  return a->number > b->number;
}
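// ForEachOverlapping below mirrors the FindFile() change: level-0 candidates
// must also pass an expiry check against the lookup key's deadTime, on top of
// the usual user-key range test. (Note the level-0 test uses a strict ">"
// where FindFile() uses ">=".)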
void Version::ForEachOverlapping(Slice user_key, Slice internal_key, void* arg,
                                 bool (*func)(void*, int, FileMetaData*)) {
  const Comparator* ucmp = vset_->icmp_.user_comparator();
  ParsedInternalKey parsed;
  ParseInternalKey(internal_key, &parsed);
  printf("parsed lookup deadtime : %lu\n", parsed.deadTime);
  // Search level-0 in order from newest to oldest.
  std::vector<FileMetaData*> tmp;
  tmp.reserve(files_[0].size());
  for (uint32_t i = 0; i < files_[0].size(); i++) {
    FileMetaData* f = files_[0][i];
    // Besides the key range, also consider the largest deadtime among the
    // key-value pairs contained in the file.
    if (ucmp->Compare(user_key, f->smallest.user_key()) >= 0 &&
        ucmp->Compare(user_key, f->largest.user_key()) <= 0 &&
        f->largest_deadtime > parsed.deadTime) {
      tmp.push_back(f);
    }
  }
  if (!tmp.empty()) {
    std::sort(tmp.begin(), tmp.end(), NewestFirst);
    for (uint32_t i = 0; i < tmp.size(); i++) {
      if (!(*func)(arg, 0, tmp[i])) {
        return;
      }
    }
  }

  // Search other levels.
  for (int level = 1; level < config::kNumLevels; level++) {
    std::cout << "----------search in level " << level << "--------------\n";
    size_t num_files = files_[level].size();
    if (num_files == 0) continue;

    // Binary search to find earliest index whose largest key >= internal_key.
    uint32_t index = FindFile(vset_->icmp_, files_[level], internal_key);
    if (index < num_files) {
      FileMetaData* f = files_[level][index];
      std::cout << "userkey fsmallest " << user_key.ToString() << " "
                << f->smallest.user_key().ToString() << std::endl;
      if (ucmp->Compare(user_key, f->smallest.user_key()) < 0) {
        // All of "f" is past any data for user_key
      } else {
        if (!(*func)(arg, level, f)) {
          return;
        }
      }
    }
  }
}
Status Version::Get(const ReadOptions& options, const LookupKey& k,
                    std::string* value, GetStats* stats) {
  stats->seek_file = nullptr;
  stats->seek_file_level = -1;

  struct State {
    Saver saver;
    GetStats* stats;
    const ReadOptions* options;
    Slice ikey;
    FileMetaData* last_file_read;
    int last_file_read_level;

    VersionSet* vset;
    Status s;
    bool found;

    static bool Match(void* arg, int level, FileMetaData* f) {
      State* state = reinterpret_cast<State*>(arg);

      if (state->stats->seek_file == nullptr &&
          state->last_file_read != nullptr) {
        // We have had more than one seek for this read.  Charge the 1st file.
        state->stats->seek_file = state->last_file_read;
        state->stats->seek_file_level = state->last_file_read_level;
      }

      state->last_file_read = f;
      state->last_file_read_level = level;

      state->s = state->vset->table_cache_->Get(*state->options, f->number,
                                                f->file_size, state->ikey,
                                                &state->saver, SaveValue);
      printf("file level %d num %lu\n", level, f->number);
      std::cout << "state->s ->saver.state: " << state->s.ok() << " "
                << state->saver.state << std::endl;
      if (!state->s.ok()) {
        state->found = true;
        return false;
      }
      switch (state->saver.state) {
        case kNotFound:
          return true;  // Keep searching in other files
        case kFound:
          state->found = true;
          return false;
        case kDeleted:
          return false;
        case kCorrupt:
          state->s =
              Status::Corruption("corrupted key for ", state->saver.user_key);
          state->found = true;
          return false;
      }

      // Not reached. Added to avoid false compilation warnings of
      // "control reaches end of non-void function".
      return false;
    }
  };

  State state;
  state.found = false;
  state.stats = stats;
  state.last_file_read = nullptr;
  state.last_file_read_level = -1;

  state.options = &options;
  state.ikey = k.internal_key();
  state.vset = vset_;

  state.saver.state = kNotFound;
  state.saver.ucmp = vset_->icmp_.user_comparator();
  state.saver.user_key = k.user_key();
  state.saver.value = value;

  ForEachOverlapping(state.saver.user_key, state.ikey, &state, &State::Match);

  return state.found ? state.s : Status::NotFound(Slice());
}
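// Get() follows upstream LevelDB apart from the added debug tracing; this
// fork's expiry filtering for lookups happens in ForEachOverlapping() above.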
bool Version::UpdateStats(const GetStats& stats) {
  FileMetaData* f = stats.seek_file;
  if (f != nullptr) {
    f->allowed_seeks--;
    if (f->allowed_seeks <= 0 && file_to_compact_ == nullptr) {
      file_to_compact_ = f;
      file_to_compact_level_ = stats.seek_file_level;
      return true;
    }
  }
  return false;
}

bool Version::RecordReadSample(Slice internal_key) {
  ParsedInternalKey ikey;
  if (!ParseInternalKey(internal_key, &ikey)) {
    return false;
  }

  struct State {
    GetStats stats;  // Holds first matching file
    int matches;

    static bool Match(void* arg, int level, FileMetaData* f) {
      State* state = reinterpret_cast<State*>(arg);
      state->matches++;
      if (state->matches == 1) {
        // Remember first match.
        state->stats.seek_file = f;
        state->stats.seek_file_level = level;
      }
      // We can stop iterating once we have a second match.
      return state->matches < 2;
    }
  };

  State state;
  state.matches = 0;
  ForEachOverlapping(ikey.user_key, internal_key, &state, &State::Match);

  // Must have at least two matches since we want to merge across
  // files. But what if we have a single file that contains many
  // overwrites and deletions?  Should we have another mechanism for
  // finding such files?
  if (state.matches >= 2) {
    // 1MB cost is about 1 seek (see comment in Builder::Apply).
    return UpdateStats(state.stats);
  }
  return false;
}
  425. void Version::Ref() { ++refs_; }
  426. void Version::Unref() {
  427. assert(this != &vset_->dummy_versions_);
  428. assert(refs_ >= 1);
  429. --refs_;
  430. if (refs_ == 0) {
  431. delete this;
  432. }
  433. }
  434. bool Version::OverlapInLevel(int level, const Slice* smallest_user_key,
  435. const Slice* largest_user_key) {
  436. return SomeFileOverlapsRange(vset_->icmp_, (level > 0), files_[level],
  437. smallest_user_key, largest_user_key);
  438. }
  439. int Version::PickLevelForMemTableOutput(const Slice& smallest_user_key,
  440. const Slice& largest_user_key) {
  441. int level = 0;
  442. if (!OverlapInLevel(0, &smallest_user_key, &largest_user_key)) {
  443. // Push to next level if there is no overlap in next level,
  444. // and the #bytes overlapping in the level after that are limited.
  445. InternalKey start(smallest_user_key, kMaxSequenceNumber, kValueTypeForSeek);
  446. InternalKey limit(largest_user_key, 0, static_cast<ValueType>(0));
  447. std::vector<FileMetaData*> overlaps;
  448. while (level < config::kMaxMemCompactLevel) {
  449. if (OverlapInLevel(level + 1, &smallest_user_key, &largest_user_key)) {
  450. break;
  451. }
  452. if (level + 2 < config::kNumLevels) {
  453. // Check that file does not overlap too many grandparent bytes.
  454. GetOverlappingInputs(level + 2, &start, &limit, &overlaps);
  455. const int64_t sum = TotalFileSize(overlaps);
  456. if (sum > MaxGrandParentOverlapBytes(vset_->options_)) {
  457. break;
  458. }
  459. }
  460. level++;
  461. }
  462. }
  463. return level;
  464. }
// Store in "*inputs" all files in "level" that overlap [begin,end]
void Version::GetOverlappingInputs(int level, const InternalKey* begin,
                                   const InternalKey* end,
                                   std::vector<FileMetaData*>* inputs) {
  assert(level >= 0);
  assert(level < config::kNumLevels);
  inputs->clear();
  Slice user_begin, user_end;
  if (begin != nullptr) {
    user_begin = begin->user_key();
  }
  if (end != nullptr) {
    user_end = end->user_key();
  }
  const Comparator* user_cmp = vset_->icmp_.user_comparator();
  for (size_t i = 0; i < files_[level].size();) {
    FileMetaData* f = files_[level][i++];
    const Slice file_start = f->smallest.user_key();
    const Slice file_limit = f->largest.user_key();
    if (begin != nullptr && user_cmp->Compare(file_limit, user_begin) < 0) {
      // "f" is completely before specified range; skip it
    } else if (end != nullptr && user_cmp->Compare(file_start, user_end) > 0) {
      // "f" is completely after specified range; skip it
    } else {
      inputs->push_back(f);
      if (level == 0) {
        // Level-0 files may overlap each other. So check if the newly
        // added file has expanded the range. If so, restart search.
        if (begin != nullptr && user_cmp->Compare(file_start, user_begin) < 0) {
          user_begin = file_start;
          inputs->clear();
          i = 0;
        } else if (end != nullptr &&
                   user_cmp->Compare(file_limit, user_end) > 0) {
          user_end = file_limit;
          inputs->clear();
          i = 0;
        }
      }
    }
  }
}
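
// Sketch of the level-0 restart rule with hypothetical files: searching
// ['d'..'e'] against level-0 files f1=['a'..'d'] and f2=['c'..'g'] first
// admits f1, whose start 'a' widens user_begin to 'a'; the scan restarts and
// now also admits f2, which widens user_end to 'g' and restarts again. The
// final input set transitively covers ['a'..'g'].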
std::string Version::DebugString() const {
  std::string r;
  for (int level = 0; level < config::kNumLevels; level++) {
    // E.g.,
    //   --- level 1 ---
    //   17:123['a' .. 'd']
    //   20:43['e' .. 'g']
    r.append("--- level ");
    AppendNumberTo(&r, level);
    r.append(" ---\n");
    const std::vector<FileMetaData*>& files = files_[level];
    for (size_t i = 0; i < files.size(); i++) {
      r.push_back(' ');
      AppendNumberTo(&r, files[i]->number);
      r.push_back(':');
      AppendNumberTo(&r, files[i]->file_size);
      r.append("[");
      r.append(files[i]->smallest.DebugString());
      r.append(" .. ");
      r.append(files[i]->largest.DebugString());
      r.append("]\n");
    }
  }
  return r;
}

// A helper class so we can efficiently apply a whole sequence
// of edits to a particular state without creating intermediate
// Versions that contain full copies of the intermediate state.
class VersionSet::Builder {
 private:
  // Helper to sort by v->files_[file_number].smallest
  struct BySmallestKey {
    const InternalKeyComparator* internal_comparator;

    bool operator()(FileMetaData* f1, FileMetaData* f2) const {
      int r = internal_comparator->Compare(f1->smallest, f2->smallest);
      if (r != 0) {
        return (r < 0);
      } else {
        // Break ties by file number
        return (f1->number < f2->number);
      }
    }
  };

  typedef std::set<FileMetaData*, BySmallestKey> FileSet;
  struct LevelState {
    std::set<uint64_t> deleted_files;
    FileSet* added_files;
  };

  VersionSet* vset_;
  Version* base_;
  LevelState levels_[config::kNumLevels];

 public:
  // Initialize a builder with the files from *base and other info from *vset
  Builder(VersionSet* vset, Version* base) : vset_(vset), base_(base) {
    base_->Ref();
    BySmallestKey cmp;
    cmp.internal_comparator = &vset_->icmp_;
    for (int level = 0; level < config::kNumLevels; level++) {
      levels_[level].added_files = new FileSet(cmp);
    }
  }

  ~Builder() {
    for (int level = 0; level < config::kNumLevels; level++) {
      const FileSet* added = levels_[level].added_files;
      std::vector<FileMetaData*> to_unref;
      to_unref.reserve(added->size());
      for (FileSet::const_iterator it = added->begin(); it != added->end();
           ++it) {
        to_unref.push_back(*it);
      }
      delete added;
      for (uint32_t i = 0; i < to_unref.size(); i++) {
        FileMetaData* f = to_unref[i];
        f->refs--;
        if (f->refs <= 0) {
          delete f;
        }
      }
    }
    base_->Unref();
  }

  // Apply all of the edits in *edit to the current state.
  void Apply(const VersionEdit* edit) {
    // Update compaction pointers
    for (size_t i = 0; i < edit->compact_pointers_.size(); i++) {
      const int level = edit->compact_pointers_[i].first;
      vset_->compact_pointer_[level] =
          edit->compact_pointers_[i].second.Encode().ToString();
    }

    // Delete files
    for (const auto& deleted_file_set_kvp : edit->deleted_files_) {
      const int level = deleted_file_set_kvp.first;
      const uint64_t number = deleted_file_set_kvp.second;
      levels_[level].deleted_files.insert(number);
    }

    // Add new files
    for (size_t i = 0; i < edit->new_files_.size(); i++) {
      const int level = edit->new_files_[i].first;
      FileMetaData* f = new FileMetaData(edit->new_files_[i].second);
      f->refs = 1;

      // We arrange to automatically compact this file after
      // a certain number of seeks. Let's assume:
      //   (1) One seek costs 10ms
      //   (2) Writing or reading 1MB costs 10ms (100MB/s)
      //   (3) A compaction of 1MB does 25MB of IO:
      //         1MB read from this level
      //         10-12MB read from next level (boundaries may be misaligned)
      //         10-12MB written to next level
      // This implies that 25 seeks cost the same as the compaction
      // of 1MB of data.  I.e., one seek costs approximately the
      // same as the compaction of 40KB of data.  We are a little
      // conservative and allow approximately one seek for every 16KB
      // of data before triggering a compaction.
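      // Worked example of the formula below (a sketch, not normative):
      // a 1MB file gets 1048576 / 16384 = 64 allowed seeks, which the
      // floor below raises to 100; a 10MB file gets 10485760 / 16384 = 640.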
      f->allowed_seeks = static_cast<int>((f->file_size / 16384U));
      if (f->allowed_seeks < 100) f->allowed_seeks = 100;

      levels_[level].deleted_files.erase(f->number);
      levels_[level].added_files->insert(f);
    }
  }

  // Save the current state in *v.
  void SaveTo(Version* v) {
    BySmallestKey cmp;
    cmp.internal_comparator = &vset_->icmp_;
    for (int level = 0; level < config::kNumLevels; level++) {
      // Merge the set of added files with the set of pre-existing files.
      // Drop any deleted files.  Store the result in *v.
      const std::vector<FileMetaData*>& base_files = base_->files_[level];
      std::vector<FileMetaData*>::const_iterator base_iter = base_files.begin();
      std::vector<FileMetaData*>::const_iterator base_end = base_files.end();
      const FileSet* added_files = levels_[level].added_files;
      v->files_[level].reserve(base_files.size() + added_files->size());
      for (const auto& added_file : *added_files) {
        // Add all smaller files listed in base_
        for (std::vector<FileMetaData*>::const_iterator bpos =
                 std::upper_bound(base_iter, base_end, added_file, cmp);
             base_iter != bpos; ++base_iter) {
          MaybeAddFile(v, level, *base_iter);
        }

        MaybeAddFile(v, level, added_file);
      }

      // Add remaining base files
      for (; base_iter != base_end; ++base_iter) {
        MaybeAddFile(v, level, *base_iter);
      }

#ifndef NDEBUG
      // Make sure there is no overlap in levels > 0
      if (level > 0) {
        for (uint32_t i = 1; i < v->files_[level].size(); i++) {
          const InternalKey& prev_end = v->files_[level][i - 1]->largest;
          const InternalKey& this_begin = v->files_[level][i]->smallest;
          if (vset_->icmp_.Compare(prev_end, this_begin) >= 0) {
            std::fprintf(stderr, "overlapping ranges in same level %s vs. %s\n",
                         prev_end.DebugString().c_str(),
                         this_begin.DebugString().c_str());
            std::abort();
          }
        }
      }
#endif
    }
  }

  void MaybeAddFile(Version* v, int level, FileMetaData* f) {
    if (levels_[level].deleted_files.count(f->number) > 0) {
      // File is deleted: do nothing
    } else {
      std::vector<FileMetaData*>* files = &v->files_[level];
      if (level > 0 && !files->empty()) {
        // Must not overlap
        assert(vset_->icmp_.Compare((*files)[files->size() - 1]->largest,
                                    f->smallest) < 0);
      }
      f->refs++;
      files->push_back(f);
    }
  }
};

VersionSet::VersionSet(const std::string& dbname, const Options* options,
                       TableCache* table_cache,
                       const InternalKeyComparator* cmp)
    : env_(options->env),
      dbname_(dbname),
      options_(options),
      table_cache_(table_cache),
      icmp_(*cmp),
      next_file_number_(2),
      manifest_file_number_(0),  // Filled by Recover()
      last_sequence_(0),
      log_number_(0),
      prev_log_number_(0),
      descriptor_file_(nullptr),
      descriptor_log_(nullptr),
      dummy_versions_(this),
      current_(nullptr) {
  AppendVersion(new Version(this));
}

VersionSet::~VersionSet() {
  current_->Unref();
  assert(dummy_versions_.next_ == &dummy_versions_);  // List must be empty
  delete descriptor_log_;
  delete descriptor_file_;
}

void VersionSet::AppendVersion(Version* v) {
  // Make "v" current
  assert(v->refs_ == 0);
  assert(v != current_);
  if (current_ != nullptr) {
    current_->Unref();
  }
  current_ = v;
  v->Ref();

  // Append to linked list
  v->prev_ = dummy_versions_.prev_;
  v->next_ = &dummy_versions_;
  v->prev_->next_ = v;
  v->next_->prev_ = v;
}

Status VersionSet::LogAndApply(VersionEdit* edit, port::Mutex* mu) {
  if (edit->has_log_number_) {
    assert(edit->log_number_ >= log_number_);
    assert(edit->log_number_ < next_file_number_);
  } else {
    edit->SetLogNumber(log_number_);
  }

  if (!edit->has_prev_log_number_) {
    edit->SetPrevLogNumber(prev_log_number_);
  }

  edit->SetNextFile(next_file_number_);
  edit->SetLastSequence(last_sequence_);

  Version* v = new Version(this);
  {
    Builder builder(this, current_);
    builder.Apply(edit);
    builder.SaveTo(v);
  }
  Finalize(v);

  // Initialize new descriptor log file if necessary by creating
  // a temporary file that contains a snapshot of the current version.
  std::string new_manifest_file;
  Status s;
  if (descriptor_log_ == nullptr) {
    // No reason to unlock *mu here since we only hit this path in the
    // first call to LogAndApply (when opening the database).
    assert(descriptor_file_ == nullptr);
    new_manifest_file = DescriptorFileName(dbname_, manifest_file_number_);
    s = env_->NewWritableFile(new_manifest_file, &descriptor_file_);
    if (s.ok()) {
      descriptor_log_ = new log::Writer(descriptor_file_);
      s = WriteSnapshot(descriptor_log_);
    }
  }

  // Unlock during expensive MANIFEST log write
  {
    mu->Unlock();

    // Write new record to MANIFEST log
    if (s.ok()) {
      std::string record;
      edit->EncodeTo(&record);
      s = descriptor_log_->AddRecord(record);
      if (s.ok()) {
        s = descriptor_file_->Sync();
      }
      if (!s.ok()) {
        Log(options_->info_log, "MANIFEST write: %s\n", s.ToString().c_str());
      }
    }

    // If we just created a new descriptor file, install it by writing a
    // new CURRENT file that points to it.
    if (s.ok() && !new_manifest_file.empty()) {
      s = SetCurrentFile(env_, dbname_, manifest_file_number_);
    }

    mu->Lock();
  }

  // Install the new version
  if (s.ok()) {
    AppendVersion(v);
    log_number_ = edit->log_number_;
    prev_log_number_ = edit->prev_log_number_;
  } else {
    delete v;
    if (!new_manifest_file.empty()) {
      delete descriptor_log_;
      delete descriptor_file_;
      descriptor_log_ = nullptr;
      descriptor_file_ = nullptr;
      env_->RemoveFile(new_manifest_file);
    }
  }

  return s;
}
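
// Minimal caller sketch (hypothetical; the names mutex_ and versions_ are
// assumptions, not part of this file). The mutex must be held on entry and
// is released internally around the expensive MANIFEST write:
//
//   VersionEdit edit;
//   edit.SetLogNumber(new_log_number);
//   Status s = versions_->LogAndApply(&edit, &mutex_);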
Status VersionSet::Recover(bool* save_manifest) {
  struct LogReporter : public log::Reader::Reporter {
    Status* status;
    void Corruption(size_t bytes, const Status& s) override {
      if (this->status->ok()) *this->status = s;
    }
  };

  // Read "CURRENT" file, which contains a pointer to the current manifest file
  std::string current;
  Status s = ReadFileToString(env_, CurrentFileName(dbname_), &current);
  if (!s.ok()) {
    return s;
  }
  if (current.empty() || current[current.size() - 1] != '\n') {
    return Status::Corruption("CURRENT file does not end with newline");
  }
  current.resize(current.size() - 1);

  std::string dscname = dbname_ + "/" + current;
  SequentialFile* file;
  s = env_->NewSequentialFile(dscname, &file);
  if (!s.ok()) {
    if (s.IsNotFound()) {
      return Status::Corruption("CURRENT points to a non-existent file",
                                s.ToString());
    }
    return s;
  }

  bool have_log_number = false;
  bool have_prev_log_number = false;
  bool have_next_file = false;
  bool have_last_sequence = false;
  uint64_t next_file = 0;
  uint64_t last_sequence = 0;
  uint64_t log_number = 0;
  uint64_t prev_log_number = 0;
  Builder builder(this, current_);
  int read_records = 0;

  {
    LogReporter reporter;
    reporter.status = &s;
    log::Reader reader(file, &reporter, true /*checksum*/,
                       0 /*initial_offset*/);
    Slice record;
    std::string scratch;
    while (reader.ReadRecord(&record, &scratch) && s.ok()) {
      ++read_records;
      VersionEdit edit;
      s = edit.DecodeFrom(record);
      if (s.ok()) {
        if (edit.has_comparator_ &&
            edit.comparator_ != icmp_.user_comparator()->Name()) {
          s = Status::InvalidArgument(
              edit.comparator_ + " does not match existing comparator ",
              icmp_.user_comparator()->Name());
        }
      }

      if (s.ok()) {
        builder.Apply(&edit);
      }

      if (edit.has_log_number_) {
        log_number = edit.log_number_;
        have_log_number = true;
      }

      if (edit.has_prev_log_number_) {
        prev_log_number = edit.prev_log_number_;
        have_prev_log_number = true;
      }

      if (edit.has_next_file_number_) {
        next_file = edit.next_file_number_;
        have_next_file = true;
      }

      if (edit.has_last_sequence_) {
        last_sequence = edit.last_sequence_;
        have_last_sequence = true;
      }
    }
  }
  delete file;
  file = nullptr;

  if (s.ok()) {
    if (!have_next_file) {
      s = Status::Corruption("no meta-nextfile entry in descriptor");
    } else if (!have_log_number) {
      s = Status::Corruption("no meta-lognumber entry in descriptor");
    } else if (!have_last_sequence) {
      s = Status::Corruption("no last-sequence-number entry in descriptor");
    }

    if (!have_prev_log_number) {
      prev_log_number = 0;
    }

    MarkFileNumberUsed(prev_log_number);
    MarkFileNumberUsed(log_number);
  }

  if (s.ok()) {
    Version* v = new Version(this);
    builder.SaveTo(v);
    // Install recovered version
    Finalize(v);
    AppendVersion(v);
    manifest_file_number_ = next_file;
    next_file_number_ = next_file + 1;
    last_sequence_ = last_sequence;
    log_number_ = log_number;
    prev_log_number_ = prev_log_number;

    // See if we can reuse the existing MANIFEST file.
    if (ReuseManifest(dscname, current)) {
      // No need to save new manifest
    } else {
      *save_manifest = true;
    }
  } else {
    std::string error = s.ToString();
    Log(options_->info_log, "Error recovering version set with %d records: %s",
        read_records, error.c_str());
  }

  return s;
}

bool VersionSet::ReuseManifest(const std::string& dscname,
                               const std::string& dscbase) {
  if (!options_->reuse_logs) {
    return false;
  }
  FileType manifest_type;
  uint64_t manifest_number;
  uint64_t manifest_size;
  if (!ParseFileName(dscbase, &manifest_number, &manifest_type) ||
      manifest_type != kDescriptorFile ||
      !env_->GetFileSize(dscname, &manifest_size).ok() ||
      // Make new compacted MANIFEST if old one is too big
      manifest_size >= TargetFileSize(options_)) {
    return false;
  }

  assert(descriptor_file_ == nullptr);
  assert(descriptor_log_ == nullptr);
  Status r = env_->NewAppendableFile(dscname, &descriptor_file_);
  if (!r.ok()) {
    Log(options_->info_log, "Reuse MANIFEST: %s\n", r.ToString().c_str());
    assert(descriptor_file_ == nullptr);
    return false;
  }

  Log(options_->info_log, "Reusing MANIFEST %s\n", dscname.c_str());
  descriptor_log_ = new log::Writer(descriptor_file_, manifest_size);
  manifest_file_number_ = manifest_number;
  return true;
}

void VersionSet::MarkFileNumberUsed(uint64_t number) {
  if (next_file_number_ <= number) {
    next_file_number_ = number + 1;
  }
}

void VersionSet::Finalize(Version* v) {
  // Precomputed best level for next compaction
  int best_level = -1;
  double best_score = -1;

  for (int level = 0; level < config::kNumLevels - 1; level++) {
    double score;
    if (level == 0) {
      // We treat level-0 specially by bounding the number of files
      // instead of number of bytes for two reasons:
      //
      // (1) With larger write-buffer sizes, it is nice not to do too
      // many level-0 compactions.
      //
      // (2) The files in level-0 are merged on every read and
      // therefore we wish to avoid too many files when the individual
      // file size is small (perhaps because of a small write-buffer
      // setting, or very high compression ratios, or lots of
      // overwrites/deletions).
      score = v->files_[level].size() /
              static_cast<double>(config::kL0_CompactionTrigger);
    } else {
      // Compute the ratio of current size to size limit.
      const uint64_t level_bytes = TotalFileSize(v->files_[level]);
      score =
          static_cast<double>(level_bytes) / MaxBytesForLevel(options_, level);
    }

    if (score > best_score) {
      best_level = level;
      best_score = score;
    }
  }

  v->compaction_level_ = best_level;
  v->compaction_score_ = best_score;
}
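
// Score arithmetic sketch (assuming the stock constants, e.g.
// kL0_CompactionTrigger == 4 and a 10MB size limit for level 1): six
// level-0 files score 6 / 4 = 1.5, and 15MB in level 1 also scores 1.5.
// The highest per-level score picks compaction_level_, and any score >= 1
// makes PickCompaction below choose a size-triggered compaction.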
Status VersionSet::WriteSnapshot(log::Writer* log) {
  // TODO: Break up into multiple records to reduce memory usage on recovery?

  // Save metadata
  VersionEdit edit;
  edit.SetComparatorName(icmp_.user_comparator()->Name());

  // Save compaction pointers
  for (int level = 0; level < config::kNumLevels; level++) {
    if (!compact_pointer_[level].empty()) {
      InternalKey key;
      key.DecodeFrom(compact_pointer_[level]);
      edit.SetCompactPointer(level, key);
    }
  }

  // Save files
  for (int level = 0; level < config::kNumLevels; level++) {
    const std::vector<FileMetaData*>& files = current_->files_[level];
    for (size_t i = 0; i < files.size(); i++) {
      const FileMetaData* f = files[i];
      edit.AddFile(level, f->number, f->file_size, f->smallest, f->largest,
                   f->smallest_deadtime, f->largest_deadtime);
    }
  }

  std::string record;
  edit.EncodeTo(&record);
  return log->AddRecord(record);
}

int VersionSet::NumLevelFiles(int level) const {
  assert(level >= 0);
  assert(level < config::kNumLevels);
  return current_->files_[level].size();
}

const char* VersionSet::LevelSummary(LevelSummaryStorage* scratch) const {
  // Update code if kNumLevels changes
  static_assert(config::kNumLevels == 7, "");
  std::snprintf(
      scratch->buffer, sizeof(scratch->buffer), "files[ %d %d %d %d %d %d %d ]",
      int(current_->files_[0].size()), int(current_->files_[1].size()),
      int(current_->files_[2].size()), int(current_->files_[3].size()),
      int(current_->files_[4].size()), int(current_->files_[5].size()),
      int(current_->files_[6].size()));
  return scratch->buffer;
}

uint64_t VersionSet::ApproximateOffsetOf(Version* v, const InternalKey& ikey) {
  uint64_t result = 0;
  for (int level = 0; level < config::kNumLevels; level++) {
    const std::vector<FileMetaData*>& files = v->files_[level];
    for (size_t i = 0; i < files.size(); i++) {
      if (icmp_.Compare(files[i]->largest, ikey) <= 0) {
        // Entire file is before "ikey", so just add the file size
        result += files[i]->file_size;
      } else if (icmp_.Compare(files[i]->smallest, ikey) > 0) {
        // Entire file is after "ikey", so ignore
        if (level > 0) {
          // Files other than level 0 are sorted by meta->smallest, so
          // no further files in this level will contain data for
          // "ikey".
          break;
        }
      } else {
        // "ikey" falls in the range for this table.  Add the
        // approximate offset of "ikey" within the table.
        Table* tableptr;
        Iterator* iter = table_cache_->NewIterator(
            ReadOptions(), files[i]->number, files[i]->file_size, &tableptr);
        if (tableptr != nullptr) {
          result += tableptr->ApproximateOffsetOf(ikey.Encode());
        }
        delete iter;
      }
    }
  }
  return result;
}
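
// Offset arithmetic sketch (hypothetical files): if level 1 holds
// f1=['a'..'c'] of 2MB and f2=['d'..'f'] of 3MB, then for ikey 'e' the
// result is f1's full 2MB plus the approximate offset of 'e' inside f2's
// table, i.e. somewhere between 2MB and 5MB.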
void VersionSet::AddLiveFiles(std::set<uint64_t>* live) {
  for (Version* v = dummy_versions_.next_; v != &dummy_versions_;
       v = v->next_) {
    for (int level = 0; level < config::kNumLevels; level++) {
      const std::vector<FileMetaData*>& files = v->files_[level];
      for (size_t i = 0; i < files.size(); i++) {
        live->insert(files[i]->number);
      }
    }
  }
}

int64_t VersionSet::NumLevelBytes(int level) const {
  assert(level >= 0);
  assert(level < config::kNumLevels);
  return TotalFileSize(current_->files_[level]);
}

int64_t VersionSet::MaxNextLevelOverlappingBytes() {
  int64_t result = 0;
  std::vector<FileMetaData*> overlaps;
  for (int level = 1; level < config::kNumLevels - 1; level++) {
    for (size_t i = 0; i < current_->files_[level].size(); i++) {
      const FileMetaData* f = current_->files_[level][i];
      current_->GetOverlappingInputs(level + 1, &f->smallest, &f->largest,
                                     &overlaps);
      const int64_t sum = TotalFileSize(overlaps);
      if (sum > result) {
        result = sum;
      }
    }
  }
  return result;
}

// Stores the minimal range that covers all entries in inputs in
// *smallest, *largest.
// REQUIRES: inputs is not empty
void VersionSet::GetRange(const std::vector<FileMetaData*>& inputs,
                          InternalKey* smallest, InternalKey* largest) {
  assert(!inputs.empty());
  smallest->Clear();
  largest->Clear();
  for (size_t i = 0; i < inputs.size(); i++) {
    FileMetaData* f = inputs[i];
    if (i == 0) {
      *smallest = f->smallest;
      *largest = f->largest;
    } else {
      if (icmp_.Compare(f->smallest, *smallest) < 0) {
        *smallest = f->smallest;
      }
      if (icmp_.Compare(f->largest, *largest) > 0) {
        *largest = f->largest;
      }
    }
  }
}

// Stores the minimal range that covers all entries in inputs1 and inputs2
// in *smallest, *largest.
// REQUIRES: inputs1 and inputs2 together contain at least one file
void VersionSet::GetRange2(const std::vector<FileMetaData*>& inputs1,
                           const std::vector<FileMetaData*>& inputs2,
                           InternalKey* smallest, InternalKey* largest) {
  std::vector<FileMetaData*> all = inputs1;
  all.insert(all.end(), inputs2.begin(), inputs2.end());
  GetRange(all, smallest, largest);
}

Iterator* VersionSet::MakeInputIterator(Compaction* c) {
  ReadOptions options;
  options.verify_checksums = options_->paranoid_checks;
  options.fill_cache = false;

  // Level-0 files have to be merged together.  For other levels,
  // we will make a concatenating iterator per level.
  // TODO(opt): use concatenating iterator for level-0 if there is no overlap
  const int space = (c->level() == 0 ? c->inputs_[0].size() + 1 : 2);
  Iterator** list = new Iterator*[space];
  int num = 0;
  for (int which = 0; which < 2; which++) {
    if (!c->inputs_[which].empty()) {
      if (c->level() + which == 0) {
        const std::vector<FileMetaData*>& files = c->inputs_[which];
        for (size_t i = 0; i < files.size(); i++) {
          list[num++] = table_cache_->NewIterator(options, files[i]->number,
                                                  files[i]->file_size);
        }
      } else {
        // Create concatenating iterator for the files from this level
        list[num++] = NewTwoLevelIterator(
            new Version::LevelFileNumIterator(icmp_, &c->inputs_[which]),
            &GetFileIterator, table_cache_, options);
      }
    }
  }
  assert(num <= space);
  Iterator* result = NewMergingIterator(&icmp_, list, num);
  delete[] list;
  return result;
}
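
// Iterator-count sketch: a level-0 compaction with four level-0 inputs
// allocates space for 4 + 1 iterators (one per overlapping level-0 file,
// plus one concatenating iterator for the level-1 inputs); for any higher
// level, two concatenating iterators suffice, one per input level.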
Compaction* VersionSet::PickCompaction() {
  Compaction* c;
  int level;

  // We prefer compactions triggered by too much data in a level over
  // the compactions triggered by seeks.
  const bool size_compaction = (current_->compaction_score_ >= 1);
  const bool seek_compaction = (current_->file_to_compact_ != nullptr);
  if (size_compaction) {
    level = current_->compaction_level_;
    assert(level >= 0);
    assert(level + 1 < config::kNumLevels);
    c = new Compaction(options_, level);

    // Pick the first file that comes after compact_pointer_[level]
    for (size_t i = 0; i < current_->files_[level].size(); i++) {
      FileMetaData* f = current_->files_[level][i];
      if (compact_pointer_[level].empty() ||
          icmp_.Compare(f->largest.Encode(), compact_pointer_[level]) > 0) {
        c->inputs_[0].push_back(f);
        break;
      }
    }
    if (c->inputs_[0].empty()) {
      // Wrap-around to the beginning of the key space
      c->inputs_[0].push_back(current_->files_[level][0]);
    }
  } else if (seek_compaction) {
    level = current_->file_to_compact_level_;
    c = new Compaction(options_, level);
    c->inputs_[0].push_back(current_->file_to_compact_);
  } else {
    return nullptr;
  }

  c->input_version_ = current_;
  c->input_version_->Ref();

  // Files in level 0 may overlap each other, so pick up all overlapping ones
  if (level == 0) {
    InternalKey smallest, largest;
    GetRange(c->inputs_[0], &smallest, &largest);
    // Note that the next call will discard the file we placed in
    // c->inputs_[0] earlier and replace it with an overlapping set
    // which will include the picked file.
    current_->GetOverlappingInputs(0, &smallest, &largest, &c->inputs_[0]);
    assert(!c->inputs_[0].empty());
  }

  SetupOtherInputs(c);

  return c;
}

// Finds the largest key in a vector of files. Returns true if files is not
// empty.
bool FindLargestKey(const InternalKeyComparator& icmp,
                    const std::vector<FileMetaData*>& files,
                    InternalKey* largest_key) {
  if (files.empty()) {
    return false;
  }
  *largest_key = files[0]->largest;
  for (size_t i = 1; i < files.size(); ++i) {
    FileMetaData* f = files[i];
    if (icmp.Compare(f->largest, *largest_key) > 0) {
      *largest_key = f->largest;
    }
  }
  return true;
}

// Finds minimum file b2=(l2, u2) in level file for which l2 > u1 and
// user_key(l2) = user_key(u1)
FileMetaData* FindSmallestBoundaryFile(
    const InternalKeyComparator& icmp,
    const std::vector<FileMetaData*>& level_files,
    const InternalKey& largest_key) {
  const Comparator* user_cmp = icmp.user_comparator();
  FileMetaData* smallest_boundary_file = nullptr;
  for (size_t i = 0; i < level_files.size(); ++i) {
    FileMetaData* f = level_files[i];
    if (icmp.Compare(f->smallest, largest_key) > 0 &&
        user_cmp->Compare(f->smallest.user_key(), largest_key.user_key()) ==
            0) {
      if (smallest_boundary_file == nullptr ||
          icmp.Compare(f->smallest, smallest_boundary_file->smallest) < 0) {
        smallest_boundary_file = f;
      }
    }
  }
  return smallest_boundary_file;
}

// Extracts the largest file b1 from |compaction_files| and then searches for a
// b2 in |level_files| for which user_key(u1) = user_key(l2). If it finds such a
// file b2 (known as a boundary file) it adds it to |compaction_files| and then
// searches again using this new upper bound.
//
// If there are two blocks, b1=(l1, u1) and b2=(l2, u2) and
// user_key(u1) = user_key(l2), and if we compact b1 but not b2 then a
// subsequent get operation will yield an incorrect result because it will
// return the record from b2 in level i rather than from b1 because it searches
// level by level for records matching the supplied user key.
//
// parameters:
//   in     level_files:      List of files to search for boundary files.
//   in/out compaction_files: List of files to extend by adding boundary files.
void AddBoundaryInputs(const InternalKeyComparator& icmp,
                       const std::vector<FileMetaData*>& level_files,
                       std::vector<FileMetaData*>* compaction_files) {
  InternalKey largest_key;

  // Quick return if compaction_files is empty.
  if (!FindLargestKey(icmp, *compaction_files, &largest_key)) {
    return;
  }

  bool continue_searching = true;
  while (continue_searching) {
    FileMetaData* smallest_boundary_file =
        FindSmallestBoundaryFile(icmp, level_files, largest_key);

    // If a boundary file was found advance largest_key, otherwise we're done.
    if (smallest_boundary_file != nullptr) {
      compaction_files->push_back(smallest_boundary_file);
      largest_key = smallest_boundary_file->largest;
    } else {
      continue_searching = false;
    }
  }
}
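
// Boundary-file sketch with hypothetical keys: suppose b1's largest key u1
// is ('foo', seq=100) and another file in the same level has smallest key
// l2 = ('foo', seq=90). The internal comparator orders equal user keys by
// decreasing sequence number, so l2 sorts after u1 and that file qualifies
// as a boundary file; pulling it into the compaction keeps a later lookup
// for 'foo' from stopping at the stale, lower-sequence record left behind.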
void VersionSet::SetupOtherInputs(Compaction* c) {
  const int level = c->level();
  InternalKey smallest, largest;

  AddBoundaryInputs(icmp_, current_->files_[level], &c->inputs_[0]);
  GetRange(c->inputs_[0], &smallest, &largest);

  current_->GetOverlappingInputs(level + 1, &smallest, &largest,
                                 &c->inputs_[1]);
  AddBoundaryInputs(icmp_, current_->files_[level + 1], &c->inputs_[1]);

  // Get entire range covered by compaction
  InternalKey all_start, all_limit;
  GetRange2(c->inputs_[0], c->inputs_[1], &all_start, &all_limit);

  // See if we can grow the number of inputs in "level" without
  // changing the number of "level+1" files we pick up.
  if (!c->inputs_[1].empty()) {
    std::vector<FileMetaData*> expanded0;
    current_->GetOverlappingInputs(level, &all_start, &all_limit, &expanded0);
    AddBoundaryInputs(icmp_, current_->files_[level], &expanded0);
    const int64_t inputs0_size = TotalFileSize(c->inputs_[0]);
    const int64_t inputs1_size = TotalFileSize(c->inputs_[1]);
    const int64_t expanded0_size = TotalFileSize(expanded0);
    if (expanded0.size() > c->inputs_[0].size() &&
        inputs1_size + expanded0_size <
            ExpandedCompactionByteSizeLimit(options_)) {
      InternalKey new_start, new_limit;
      GetRange(expanded0, &new_start, &new_limit);
      std::vector<FileMetaData*> expanded1;
      current_->GetOverlappingInputs(level + 1, &new_start, &new_limit,
                                     &expanded1);
      AddBoundaryInputs(icmp_, current_->files_[level + 1], &expanded1);
      if (expanded1.size() == c->inputs_[1].size()) {
        Log(options_->info_log,
            "Expanding@%d %d+%d (%ld+%ld bytes) to %d+%d (%ld+%ld bytes)\n",
            level, int(c->inputs_[0].size()), int(c->inputs_[1].size()),
            long(inputs0_size), long(inputs1_size), int(expanded0.size()),
            int(expanded1.size()), long(expanded0_size), long(inputs1_size));
        smallest = new_start;
        largest = new_limit;
        c->inputs_[0] = expanded0;
        c->inputs_[1] = expanded1;
        GetRange2(c->inputs_[0], c->inputs_[1], &all_start, &all_limit);
      }
    }
  }

  // Compute the set of grandparent files that overlap this compaction
  // (parent == level+1; grandparent == level+2)
  if (level + 2 < config::kNumLevels) {
    current_->GetOverlappingInputs(level + 2, &all_start, &all_limit,
                                   &c->grandparents_);
  }

  // Update the place where we will do the next compaction for this level.
  // We update this immediately instead of waiting for the VersionEdit
  // to be applied so that if the compaction fails, we will try a different
  // key range next time.
  compact_pointer_[level] = largest.Encode().ToString();
  c->edit_.SetCompactPointer(level, largest);
}

Compaction* VersionSet::CompactRange(int level, const InternalKey* begin,
                                     const InternalKey* end) {
  std::vector<FileMetaData*> inputs;
  current_->GetOverlappingInputs(level, begin, end, &inputs);
  if (inputs.empty()) {
    return nullptr;
  }

  // Avoid compacting too much in one shot in case the range is large.
  // But we cannot do this for level-0 since level-0 files can overlap
  // and we must not pick one file and drop another older file if the
  // two files overlap.
  if (level > 0) {
    const uint64_t limit = MaxFileSizeForLevel(options_, level);
    uint64_t total = 0;
    for (size_t i = 0; i < inputs.size(); i++) {
      uint64_t s = inputs[i]->file_size;
      total += s;
      if (total >= limit) {
        inputs.resize(i + 1);
        break;
      }
    }
  }

  Compaction* c = new Compaction(options_, level);
  c->input_version_ = current_;
  c->input_version_->Ref();
  c->inputs_[0] = inputs;
  SetupOtherInputs(c);
  return c;
}

Compaction::Compaction(const Options* options, int level)
    : level_(level),
      max_output_file_size_(MaxFileSizeForLevel(options, level)),
      input_version_(nullptr),
      grandparent_index_(0),
      seen_key_(false),
      overlapped_bytes_(0) {
  for (int i = 0; i < config::kNumLevels; i++) {
    level_ptrs_[i] = 0;
  }
}

Compaction::~Compaction() {
  if (input_version_ != nullptr) {
    input_version_->Unref();
  }
}

bool Compaction::IsTrivialMove() const {
  const VersionSet* vset = input_version_->vset_;
  // Avoid a move if there is lots of overlapping grandparent data.
  // Otherwise, the move could create a parent file that will require
  // a very expensive merge later on.
  return (num_input_files(0) == 1 && num_input_files(1) == 0 &&
          TotalFileSize(grandparents_) <=
              MaxGrandParentOverlapBytes(vset->options_));
}
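
// Trivial-move sketch: if a compaction picked exactly one input file at its
// level, found no overlapping files at level+1, and the file's grandparent
// overlap stays under MaxGrandParentOverlapBytes, the file can simply be
// reassigned to level+1 with a metadata-only VersionEdit instead of being
// rewritten; a caller can test IsTrivialMove() before committing to the
// full merge.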
void Compaction::AddInputDeletions(VersionEdit* edit) {
  for (int which = 0; which < 2; which++) {
    for (size_t i = 0; i < inputs_[which].size(); i++) {
      edit->RemoveFile(level_ + which, inputs_[which][i]->number);
    }
  }
}

bool Compaction::IsBaseLevelForKey(const Slice& user_key) {
  // Maybe use binary search to find right entry instead of linear search?
  const Comparator* user_cmp = input_version_->vset_->icmp_.user_comparator();
  for (int lvl = level_ + 2; lvl < config::kNumLevels; lvl++) {
    const std::vector<FileMetaData*>& files = input_version_->files_[lvl];
    while (level_ptrs_[lvl] < files.size()) {
      FileMetaData* f = files[level_ptrs_[lvl]];
      if (user_cmp->Compare(user_key, f->largest.user_key()) <= 0) {
        // We've advanced far enough
        if (user_cmp->Compare(user_key, f->smallest.user_key()) >= 0) {
          // Key falls in this file's range, so definitely not base level
          return false;
        }
        break;
      }
      level_ptrs_[lvl]++;
    }
  }
  return true;
}

bool Compaction::ShouldStopBefore(const Slice& internal_key) {
  const VersionSet* vset = input_version_->vset_;
  // Scan to find earliest grandparent file that contains key.
  const InternalKeyComparator* icmp = &vset->icmp_;
  while (grandparent_index_ < grandparents_.size() &&
         icmp->Compare(internal_key,
                       grandparents_[grandparent_index_]->largest.Encode()) >
             0) {
    if (seen_key_) {
      overlapped_bytes_ += grandparents_[grandparent_index_]->file_size;
    }
    grandparent_index_++;
  }
  seen_key_ = true;

  if (overlapped_bytes_ > MaxGrandParentOverlapBytes(vset->options_)) {
    // Too much overlap for current output; start new output
    overlapped_bytes_ = 0;
    return true;
  } else {
    return false;
  }
}

void Compaction::ReleaseInputs() {
  if (input_version_ != nullptr) {
    input_version_->Unref();
    input_version_ = nullptr;
  }
}

}  // namespace leveldb