Authors: Xie Ruiyang 10225101483, Xu Xiangyu 10225101535


Add support for Zstd-based compression in LevelDB.

This change implements support for Zstd-based compression in LevelDB. Building up from the Snappy compression (which has been supported since inception), this change adds Zstd as an alternate compression algorithm. We are implementing this to provide alternative options for users who might have different performance and efficiency requirements. For instance, the Zstandard website (https://facebook.github.io/zstd/) claims that the Zstd algorithm can achieve around 30% higher compression ratios than Snappy, with relatively smaller (~10%) slowdowns in de/compression speeds.

Benchmarking results:

$ blaze-bin/third_party/leveldb/db_bench
LevelDB:    version 1.23
Date:       Thu Feb 2 18:50:06 2023
CPU:        56 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPUCache:   35840 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
RawSize:    110.6 MB (estimated)
FileSize:   62.9 MB (estimated)
------------------------------------------------
fillseq      :       2.613 micros/op;   42.3 MB/s
fillsync     :    3924.432 micros/op;    0.0 MB/s (1000 ops)
fillrandom   :       3.609 micros/op;   30.7 MB/s
overwrite    :       4.508 micros/op;   24.5 MB/s
readrandom   :       6.136 micros/op; (864322 of 1000000 found)
readrandom   :       5.446 micros/op; (864083 of 1000000 found)
readseq      :       0.180 micros/op;  613.3 MB/s
readreverse  :       0.321 micros/op;  344.7 MB/s
compact      :  827043.000 micros/op;
readrandom   :       4.603 micros/op; (864105 of 1000000 found)
readseq      :       0.169 micros/op;  656.3 MB/s
readreverse  :       0.315 micros/op;  350.8 MB/s
fill100K     :     854.009 micros/op;  111.7 MB/s (1000 ops)
crc32c       :       1.227 micros/op; 3184.0 MB/s (4K per op)
snappycomp   :       3.610 micros/op; 1081.9 MB/s (output: 55.2%)
snappyuncomp :       0.691 micros/op; 5656.3 MB/s
zstdcomp     :      15.731 micros/op;  248.3 MB/s (output: 44.1%)
zstduncomp   :       4.218 micros/op;  926.2 MB/s

PiperOrigin-RevId: 509957778
1 year ago
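From the application side, the new algorithm is selected through the usual leveldb::Options knobs. The sketch below is illustrative only: it assumes this revision exposes a kZstdCompression enumerator next to the existing kSnappyCompression, plus a zstd_compression_level field; check the options.h that ships with this change for the exact names.

// Sketch: opening a database with Zstd compression selected.
// kZstdCompression and zstd_compression_level are assumed to be the names
// introduced by this change; verify against the shipped leveldb/options.h.
#include <cassert>

#include "leveldb/db.h"
#include "leveldb/options.h"

int main() {
  leveldb::Options options;
  options.create_if_missing = true;
  options.compression = leveldb::kZstdCompression;  // instead of kSnappyCompression
  options.zstd_compression_level = 1;               // assumed default level

  leveldb::DB* db = nullptr;
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/zstd_demo_db", &db);
  assert(status.ok());

  // Values written from here on are compressed with Zstd when blocks are
  // flushed to table files; reads transparently decompress them.
  db->Put(leveldb::WriteOptions(), "key", "value");

  delete db;
  return 0;
}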
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
//
// This file contains the specification, but not the implementations,
// of the types/operations/etc. that should be defined by a platform
// specific port_<platform>.h file. Use this file as a reference for
// how to port this package to a new platform.

#ifndef STORAGE_LEVELDB_PORT_PORT_EXAMPLE_H_
#define STORAGE_LEVELDB_PORT_PORT_EXAMPLE_H_

#include "port/thread_annotations.h"

namespace leveldb {
namespace port {

// TODO(jorlow): Many of these belong more in the environment class rather than
// here. We should try moving them and see if it affects perf.

// ------------------ Threading -------------------

// A Mutex represents an exclusive lock.
class LOCKABLE Mutex {
 public:
  Mutex();
  ~Mutex();

  // Lock the mutex. Waits until other lockers have exited.
  // Will deadlock if the mutex is already locked by this thread.
  void Lock() EXCLUSIVE_LOCK_FUNCTION();

  // Unlock the mutex.
  // REQUIRES: This mutex was locked by this thread.
  void Unlock() UNLOCK_FUNCTION();

  // Optionally crash if this thread does not hold this mutex.
  // The implementation must be fast, especially if NDEBUG is
  // defined. The implementation is allowed to skip all checks.
  void AssertHeld() ASSERT_EXCLUSIVE_LOCK();
};

class CondVar {
 public:
  explicit CondVar(Mutex* mu);
  ~CondVar();

  // Atomically release *mu and block on this condition variable until
  // either a call to SignalAll(), or a call to Signal() that picks
  // this thread to wakeup.
  // REQUIRES: this thread holds *mu
  void Wait();

  // If there are some threads waiting, wake up at least one of them.
  void Signal();

  // Wake up all waiting threads.
  void SignalAll();
};

// ------------------ Compression -------------------

// Store the snappy compression of "input[0,input_length-1]" in *output.
// Returns false if snappy is not supported by this port.
bool Snappy_Compress(const char* input, size_t input_length,
                     std::string* output);

// If input[0,input_length-1] looks like a valid snappy compressed
// buffer, store the size of the uncompressed data in *result and
// return true. Else return false.
bool Snappy_GetUncompressedLength(const char* input, size_t length,
                                  size_t* result);

// Attempt to snappy uncompress input[0,input_length-1] into *output.
// Returns true if successful, false if the input is invalid snappy
// compressed data.
//
// REQUIRES: at least the first "n" bytes of output[] must be writable
// where "n" is the result of a successful call to
// Snappy_GetUncompressedLength.
bool Snappy_Uncompress(const char* input_data, size_t input_length,
                       char* output);

// Store the zstd compression of "input[0,input_length-1]" in *output.
// Returns false if zstd is not supported by this port.
bool Zstd_Compress(int level, const char* input, size_t input_length,
                   std::string* output);

// If input[0,input_length-1] looks like a valid zstd compressed
// buffer, store the size of the uncompressed data in *result and
// return true. Else return false.
bool Zstd_GetUncompressedLength(const char* input, size_t length,
                                size_t* result);

// Attempt to zstd uncompress input[0,input_length-1] into *output.
// Returns true if successful, false if the input is invalid zstd
// compressed data.
//
// REQUIRES: at least the first "n" bytes of output[] must be writable
// where "n" is the result of a successful call to
// Zstd_GetUncompressedLength.
bool Zstd_Uncompress(const char* input_data, size_t input_length, char* output);

// ------------------ Miscellaneous -------------------

// If heap profiling is not supported, returns false.
// Else repeatedly calls (*func)(arg, data, n) and then returns true.
// The concatenation of all "data[0,n-1]" fragments is the heap profile.
bool GetHeapProfile(void (*func)(void*, const char*, int), void* arg);

// Extend the CRC to include the first n bytes of buf.
//
// Returns zero if the CRC cannot be extended using acceleration, else returns
// the newly extended CRC value (which may also be zero).
uint32_t AcceleratedCRC32C(uint32_t crc, const char* buf, size_t size);

}  // namespace port
}  // namespace leveldb

#endif  // STORAGE_LEVELDB_PORT_PORT_EXAMPLE_H_
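For a platform with a standard C++11 toolchain, the threading specification above can be satisfied directly with the standard library. The following is a minimal sketch in the spirit of LevelDB's port_stdcxx.h, not the shipped implementation; the thread-safety annotation macros are omitted here for brevity.

// Sketch: one possible port of Mutex/CondVar on top of <mutex> and
// <condition_variable>. Illustrative only; the real port differs in details
// (annotations, debug assertions, friend declarations).
#include <condition_variable>
#include <mutex>

namespace leveldb {
namespace port {

class Mutex {
 public:
  Mutex() = default;
  ~Mutex() = default;

  void Lock() { mu_.lock(); }
  void Unlock() { mu_.unlock(); }
  void AssertHeld() {}  // Allowed to skip all checks, per the spec above.

 private:
  friend class CondVar;
  std::mutex mu_;
};

class CondVar {
 public:
  explicit CondVar(Mutex* mu) : mu_(mu) {}

  void Wait() {
    // Adopt the already-held lock, wait, then release ownership again so the
    // caller still holds *mu_ on return, as the specification requires.
    std::unique_lock<std::mutex> lock(mu_->mu_, std::adopt_lock);
    cv_.wait(lock);
    lock.release();
  }

  void Signal() { cv_.notify_one(); }
  void SignalAll() { cv_.notify_all(); }

 private:
  std::condition_variable cv_;
  Mutex* const mu_;
};

}  // namespace port
}  // namespace leveldb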
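The Zstd_* entry points declared above map naturally onto the reference libzstd C API. The sketch below assumes <zstd.h> is available at build time; an actual port would guard it behind a feature macro and fall back to returning false when zstd support is compiled out.

// Sketch: backing the Zstd_* declarations with the libzstd C API.
// Assumes <zstd.h> is present; not the shipped implementation.
#include <zstd.h>

#include <cstddef>
#include <string>

namespace leveldb {
namespace port {

bool Zstd_Compress(int level, const char* input, size_t input_length,
                   std::string* output) {
  // Reserve the worst-case compressed size, compress, then shrink to fit.
  size_t bound = ZSTD_compressBound(input_length);
  output->resize(bound);
  size_t n = ZSTD_compress(&(*output)[0], bound, input, input_length, level);
  if (ZSTD_isError(n)) return false;
  output->resize(n);
  return true;
}

bool Zstd_GetUncompressedLength(const char* input, size_t length,
                                size_t* result) {
  // The zstd frame header records the original content size when known.
  unsigned long long size = ZSTD_getFrameContentSize(input, length);
  if (size == ZSTD_CONTENTSIZE_UNKNOWN || size == ZSTD_CONTENTSIZE_ERROR) {
    return false;
  }
  *result = static_cast<size_t>(size);
  return true;
}

bool Zstd_Uncompress(const char* input_data, size_t input_length,
                     char* output) {
  size_t output_length;
  if (!Zstd_GetUncompressedLength(input_data, input_length, &output_length)) {
    return false;
  }
  size_t n = ZSTD_decompress(output, output_length, input_data, input_length);
  return !ZSTD_isError(n) && n == output_length;
}

}  // namespace port
}  // namespace leveldb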