Authors: 谢瑞阳 10225101483, 徐翔宇 10225101535

Add support for Zstd-based compression in LevelDB.

This change implements support for Zstd-based compression in LevelDB. Building on the Snappy compression (which has been supported since inception), this change adds Zstd as an alternate compression algorithm. We are implementing this to provide alternative options for users who might have different performance and efficiency requirements. For instance, the Zstandard website (https://facebook.github.io/zstd/) claims that the Zstd algorithm can achieve around 30% higher compression ratios than Snappy, with relatively small (~10%) slowdowns in de/compression speeds.

Benchmarking results:

$ blaze-bin/third_party/leveldb/db_bench
LevelDB:    version 1.23
Date:       Thu Feb 2 18:50:06 2023
CPU:        56 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPUCache:   35840 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
RawSize:    110.6 MB (estimated)
FileSize:   62.9 MB (estimated)
------------------------------------------------
fillseq      :       2.613 micros/op;   42.3 MB/s
fillsync     :    3924.432 micros/op;    0.0 MB/s (1000 ops)
fillrandom   :       3.609 micros/op;   30.7 MB/s
overwrite    :       4.508 micros/op;   24.5 MB/s
readrandom   :       6.136 micros/op; (864322 of 1000000 found)
readrandom   :       5.446 micros/op; (864083 of 1000000 found)
readseq      :       0.180 micros/op;  613.3 MB/s
readreverse  :       0.321 micros/op;  344.7 MB/s
compact      :  827043.000 micros/op;
readrandom   :       4.603 micros/op; (864105 of 1000000 found)
readseq      :       0.169 micros/op;  656.3 MB/s
readreverse  :       0.315 micros/op;  350.8 MB/s
fill100K     :     854.009 micros/op;  111.7 MB/s (1000 ops)
crc32c       :       1.227 micros/op; 3184.0 MB/s (4K per op)
snappycomp   :       3.610 micros/op; 1081.9 MB/s (output: 55.2%)
snappyuncomp :       0.691 micros/op; 5656.3 MB/s
zstdcomp     :      15.731 micros/op;  248.3 MB/s (output: 44.1%)
zstduncomp   :       4.218 micros/op;  926.2 MB/s

PiperOrigin-RevId: 509957778
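To illustrate the user-facing side of this change, here is a minimal sketch of opening a database with Zstd selected through Options. It assumes a LevelDB build with Zstd support compiled in; the path /tmp/testdb is just an example.

#include <cassert>

#include "leveldb/db.h"
#include "leveldb/options.h"

int main() {
  leveldb::Options options;
  options.create_if_missing = true;
  // Select Zstd instead of the default kSnappyCompression; blocks written
  // while this option is in effect are compressed with Zstd.
  options.compression = leveldb::kZstdCompression;

  leveldb::DB* db = nullptr;
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
  assert(status.ok());

  status = db->Put(leveldb::WriteOptions(), "key", "value");
  assert(status.ok());

  delete db;
  return 0;
}

Databases remain readable regardless of which compression option was used at write time, since each block records its own compression type (see the kZstdCompression case in ReadBlock below).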
Release 1.18

Changes are:
* Update version number to 1.18.
* Replace the basic fprintf call with a call to fwrite in order to work around the apparent compiler optimization/rewrite failure that we are seeing with the new toolchain/iOS SDKs provided with Xcode6 and iOS8.
* Fix ALL the header guards.
* Created a README.md with the LevelDB project description.
* A new CONTRIBUTING file.
* Don't implicitly convert uint64_t to size_t or int. Either preserve it as uint64_t, or explicitly cast. This fixes MSVC warnings about possible value truncation when compiling this code in Chromium.
* Added a DumpFile() library function that encapsulates the guts of the "leveldbutil dump" command. This will allow clients to dump data to their log files instead of stdout. It will also allow clients to supply their own environment.
* leveldb: Remove unused function 'ConsumeChar'.
* leveldbutil: Remove unused member variables from WriteBatchItemPrinter.
* OpenBSD, NetBSD and DragonflyBSD have _LITTLE_ENDIAN, so define PLATFORM_IS_LITTLE_ENDIAN like on FreeBSD. This fixes issues #143, #198, and #249.
* Switch from <cstdatomic> to <atomic>. The former never made it into the standard and doesn't exist in modern gcc versions at all. The latter contains everything that leveldb was using from the former. This problem was noticed when porting to Portable Native Client where no memory barrier is defined. The fact that <cstdatomic> is missing normally goes unnoticed since memory barriers are defined for most architectures.
* Make Hash() treat its input as unsigned. Before this change LevelDB files from platforms with different signedness of char were not compatible. This fixes issue #243.
* Verify checksums of index/meta/filter blocks when paranoid_checks is set.
* Invoke all tools for iOS with xcrun. (This was causing problems with the new XCode 5.1.1 image on pulse.)
* Include <sys/stat.h> only once, and fix the following linter warning: "Found C system header after C++ system header".
* When encountering a corrupted table file, return Status::Corruption instead of Status::InvalidArgument.
* Support cygwin as build platform; patch is from https://code.google.com/p/leveldb/issues/detail?id=188
* Fix typo; merge patch from https://code.google.com/p/leveldb/issues/detail?id=159
* Fix typos and comments, and address issues #166 and #241.
* Add missing db synchronize after "fillseq" in the benchmark.
* Removed unused variable 'value' in SeekRandom (issue #201).
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "table/format.h"

#include "leveldb/env.h"
#include "leveldb/options.h"
#include "port/port.h"
#include "table/block.h"
#include "util/coding.h"
#include "util/crc32c.h"

namespace leveldb {
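// A BlockHandle names a block by its byte offset and size within the file.
// Both fields are varint64-encoded, and a varint64 occupies at most 10 bytes,
// which is where BlockHandle::kMaxEncodedLength (20) comes from.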
void BlockHandle::EncodeTo(std::string* dst) const {
  // Sanity check that all fields have been set
  assert(offset_ != ~static_cast<uint64_t>(0));
  assert(size_ != ~static_cast<uint64_t>(0));
  PutVarint64(dst, offset_);
  PutVarint64(dst, size_);
}

Status BlockHandle::DecodeFrom(Slice* input) {
  if (GetVarint64(input, &offset_) && GetVarint64(input, &size_)) {
    return Status::OK();
  } else {
    return Status::Corruption("bad block handle");
  }
}
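// The Footer is a fixed-length trailer at the end of every sstable file:
// the metaindex and index handles, padded out to
// 2 * BlockHandle::kMaxEncodedLength (40) bytes, followed by the 8-byte
// magic number, for a total of kEncodedLength (48) bytes.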
void Footer::EncodeTo(std::string* dst) const {
  const size_t original_size = dst->size();
  metaindex_handle_.EncodeTo(dst);
  index_handle_.EncodeTo(dst);
  dst->resize(2 * BlockHandle::kMaxEncodedLength);  // Padding
  PutFixed32(dst, static_cast<uint32_t>(kTableMagicNumber & 0xffffffffu));
  PutFixed32(dst, static_cast<uint32_t>(kTableMagicNumber >> 32));
  assert(dst->size() == original_size + kEncodedLength);
  (void)original_size;  // Disable unused variable warning.
}

Status Footer::DecodeFrom(Slice* input) {
  if (input->size() < kEncodedLength) {
    return Status::Corruption("not an sstable (footer too short)");
  }

  const char* magic_ptr = input->data() + kEncodedLength - 8;
  const uint32_t magic_lo = DecodeFixed32(magic_ptr);
  const uint32_t magic_hi = DecodeFixed32(magic_ptr + 4);
  const uint64_t magic = ((static_cast<uint64_t>(magic_hi) << 32) |
                          (static_cast<uint64_t>(magic_lo)));
  if (magic != kTableMagicNumber) {
    return Status::Corruption("not an sstable (bad magic number)");
  }

  Status result = metaindex_handle_.DecodeFrom(input);
  if (result.ok()) {
    result = index_handle_.DecodeFrom(input);
  }
  if (result.ok()) {
    // We skip over any leftover data (just padding for now) in "input"
    const char* end = magic_ptr + 8;
    *input = Slice(end, input->data() + input->size() - end);
  }
  return result;
}
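// Each block in the file is followed by a kBlockTrailerSize (5-byte) trailer:
// a 1-byte compression type and a 4-byte masked CRC32C covering the block
// contents plus the type byte.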
Status ReadBlock(RandomAccessFile* file, const ReadOptions& options,
                 const BlockHandle& handle, BlockContents* result) {
  result->data = Slice();
  result->cachable = false;
  result->heap_allocated = false;

  // Read the block contents as well as the type/crc footer.
  // See table_builder.cc for the code that built this structure.
  size_t n = static_cast<size_t>(handle.size());
  char* buf = new char[n + kBlockTrailerSize];
  Slice contents;
  Status s = file->Read(handle.offset(), n + kBlockTrailerSize, &contents, buf);
  if (!s.ok()) {
    delete[] buf;
    return s;
  }
  if (contents.size() != n + kBlockTrailerSize) {
    delete[] buf;
    return Status::Corruption("truncated block read");
  }

  // Check the crc of the type and the block contents
  const char* data = contents.data();  // Pointer to where Read put the data
  if (options.verify_checksums) {
    const uint32_t crc = crc32c::Unmask(DecodeFixed32(data + n + 1));
    const uint32_t actual = crc32c::Value(data, n + 1);
    if (actual != crc) {
      delete[] buf;
      s = Status::Corruption("block checksum mismatch");
      return s;
    }
  }
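  // data[n] is the 1-byte compression type that TableBuilder appended after
  // the block contents.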
  switch (data[n]) {
    case kNoCompression:
      if (data != buf) {
        // File implementation gave us pointer to some other data.
        // Use it directly under the assumption that it will be live
        // while the file is open.
        delete[] buf;
        result->data = Slice(data, n);
        result->heap_allocated = false;
        result->cachable = false;  // Do not double-cache
      } else {
        result->data = Slice(buf, n);
        result->heap_allocated = true;
        result->cachable = true;
      }

      // Ok
      break;
    case kSnappyCompression: {
      size_t ulength = 0;
      if (!port::Snappy_GetUncompressedLength(data, n, &ulength)) {
        delete[] buf;
        return Status::Corruption("corrupted snappy compressed block length");
      }
      char* ubuf = new char[ulength];
      if (!port::Snappy_Uncompress(data, n, ubuf)) {
        delete[] buf;
        delete[] ubuf;
        return Status::Corruption("corrupted snappy compressed block contents");
      }
      delete[] buf;
      result->data = Slice(ubuf, ulength);
      result->heap_allocated = true;
      result->cachable = true;
      break;
    }
    case kZstdCompression: {
      size_t ulength = 0;
      if (!port::Zstd_GetUncompressedLength(data, n, &ulength)) {
        delete[] buf;
        return Status::Corruption("corrupted zstd compressed block length");
      }
      char* ubuf = new char[ulength];
      if (!port::Zstd_Uncompress(data, n, ubuf)) {
        delete[] buf;
        delete[] ubuf;
        return Status::Corruption("corrupted zstd compressed block contents");
      }
      delete[] buf;
      result->data = Slice(ubuf, ulength);
      result->heap_allocated = true;
      result->cachable = true;
      break;
    }
    default:
      delete[] buf;
      return Status::Corruption("bad block type");
  }

  return Status::OK();
}

}  // namespace leveldb
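As a usage note, the sketch below shows how a table reader might combine Footer::DecodeFrom and ReadBlock to load an sstable's index block, mirroring the flow in table.cc. The helper name ReadIndexBlock is hypothetical and error handling is abbreviated; it assumes LevelDB's internal headers (table/format.h) are available.

#include "leveldb/env.h"
#include "leveldb/status.h"
#include "table/format.h"

leveldb::Status ReadIndexBlock(leveldb::RandomAccessFile* file,
                               uint64_t file_size,
                               leveldb::BlockContents* index_contents) {
  if (file_size < leveldb::Footer::kEncodedLength) {
    return leveldb::Status::Corruption("file is too short to be an sstable");
  }

  // The footer occupies the last Footer::kEncodedLength (48) bytes of the file.
  char footer_space[leveldb::Footer::kEncodedLength];
  leveldb::Slice footer_input;
  leveldb::Status s =
      file->Read(file_size - leveldb::Footer::kEncodedLength,
                 leveldb::Footer::kEncodedLength, &footer_input, footer_space);
  if (!s.ok()) return s;

  leveldb::Footer footer;
  s = footer.DecodeFrom(&footer_input);
  if (!s.ok()) return s;

  // Fetch (and, if necessary, decompress) the index block named by the footer.
  leveldb::ReadOptions opt;
  opt.verify_checksums = true;
  return leveldb::ReadBlock(file, opt, footer.index_handle(), index_contents);
}

If the returned BlockContents has heap_allocated set, the caller owns the buffer and must delete[] it when done; in LevelDB proper, the Block class takes ownership.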