Group members: 谢瑞阳, 徐翔宇

213 lines · 7.4 KiB

Release 1.18

Changes are:

* Update version number to 1.18.
* Replace the basic fprintf call with a call to fwrite in order to work around the apparent compiler optimization/rewrite failure that we are seeing with the new toolchain/iOS SDKs provided with Xcode 6 and iOS 8.
* Fix ALL the header guards.
* Created a README.md with the LevelDB project description.
* A new CONTRIBUTING file.
* Don't implicitly convert uint64_t to size_t or int. Either preserve it as uint64_t, or explicitly cast. This fixes MSVC warnings about possible value truncation when compiling this code in Chromium.
* Added a DumpFile() library function that encapsulates the guts of the "leveldbutil dump" command. This will allow clients to dump data to their log files instead of stdout. It will also allow clients to supply their own environment.
* leveldb: Remove unused function 'ConsumeChar'.
* leveldbutil: Remove unused member variables from WriteBatchItemPrinter.
* OpenBSD, NetBSD and DragonflyBSD have _LITTLE_ENDIAN, so define PLATFORM_IS_LITTLE_ENDIAN like on FreeBSD. This fixes issues #143, #198, and #249.
* Switch from <cstdatomic> to <atomic>. The former never made it into the standard and doesn't exist in modern gcc versions at all. The latter contains everything that leveldb was using from the former. This problem was noticed when porting to Portable Native Client, where no memory barrier is defined. The fact that <cstdatomic> is missing normally goes unnoticed since memory barriers are defined for most architectures.
* Make Hash() treat its input as unsigned. Before this change, LevelDB files from platforms with different signedness of char were not compatible. This fixes issue #243.
* Verify checksums of index/meta/filter blocks when paranoid_checks is set.
* Invoke all tools for iOS with xcrun. (This was causing problems with the new Xcode 5.1.1 image on pulse.)
* Include <sys/stat.h> only once, and fix the following linter warning: "Found C system header after C++ system header".
* When encountering a corrupted table file, return Status::Corruption instead of Status::InvalidArgument.
* Support Cygwin as a build platform; patch is from https://code.google.com/p/leveldb/issues/detail?id=188
* Fix typo; merge patch from https://code.google.com/p/leveldb/issues/detail?id=159
* Fix typos and comments, and address issues #166 and #241.
* Add missing db synchronize after "fillseq" in the benchmark.
* Removed unused variable 'value' in SeekRandom (issue #201).
10 years ago
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#ifndef STORAGE_LEVELDB_INCLUDE_OPTIONS_H_
#define STORAGE_LEVELDB_INCLUDE_OPTIONS_H_

#include <stddef.h>

namespace leveldb {

class Cache;
class Comparator;
class Env;
class FilterPolicy;
class Logger;
class Snapshot;

// DB contents are stored in a set of blocks, each of which holds a
// sequence of key,value pairs. Each block may be compressed before
// being stored in a file. The following enum describes which
// compression method (if any) is used to compress a block.
enum CompressionType {
  // NOTE: do not change the values of existing entries, as these are
  // part of the persistent format on disk.
  kNoCompression = 0x0,
  kSnappyCompression = 0x1
};

// Options to control the behavior of a database (passed to DB::Open)
struct Options {
  // -------------------
  // Parameters that affect behavior

  // Comparator used to define the order of keys in the table.
  // Default: a comparator that uses lexicographic byte-wise ordering
  //
  // REQUIRES: The client must ensure that the comparator supplied
  // here has the same name and orders keys *exactly* the same as the
  // comparator provided to previous open calls on the same DB.
  const Comparator* comparator;

  // If true, the database will be created if it is missing.
  // Default: false
  bool create_if_missing;

  // If true, an error is raised if the database already exists.
  // Default: false
  bool error_if_exists;

  // If true, the implementation will do aggressive checking of the
  // data it is processing and will stop early if it detects any
  // errors. This may have unforeseen ramifications: for example, a
  // corruption of one DB entry may cause a large number of entries to
  // become unreadable or for the entire DB to become unopenable.
  // Default: false
  bool paranoid_checks;

  // Use the specified object to interact with the environment,
  // e.g. to read/write files, schedule background work, etc.
  // Default: Env::Default()
  Env* env;

  // Any internal progress/error information generated by the db will
  // be written to info_log if it is non-NULL, or to a file stored
  // in the same directory as the DB contents if info_log is NULL.
  // Default: NULL
  Logger* info_log;

  // -------------------
  // Parameters that affect performance

  // Amount of data to build up in memory (backed by an unsorted log
  // on disk) before converting to a sorted on-disk file.
  //
  // Larger values increase performance, especially during bulk loads.
  // Up to two write buffers may be held in memory at the same time,
  // so you may wish to adjust this parameter to control memory usage.
  // Also, a larger write buffer will result in a longer recovery time
  // the next time the database is opened.
  //
  // Default: 4MB
  size_t write_buffer_size;

  // Number of open files that can be used by the DB. You may need to
  // increase this if your database has a large working set (budget
  // one open file per 2MB of working set).
  //
  // Default: 1000
  int max_open_files;

  // Control over blocks (user data is stored in a set of blocks, and
  // a block is the unit of reading from disk).

  // If non-NULL, use the specified cache for blocks.
  // If NULL, leveldb will automatically create and use an 8MB internal cache.
  // Default: NULL
  Cache* block_cache;

  // Approximate size of user data packed per block. Note that the
  // block size specified here corresponds to uncompressed data. The
  // actual size of the unit read from disk may be smaller if
  // compression is enabled. This parameter can be changed dynamically.
  //
  // Default: 4K
  size_t block_size;

  // Number of keys between restart points for delta encoding of keys.
  // This parameter can be changed dynamically. Most clients should
  // leave this parameter alone.
  //
  // Default: 16
  int block_restart_interval;

  // Leveldb will write up to this amount of bytes to a file before
  // switching to a new one.
  // Most clients should leave this parameter alone. However if your
  // filesystem is more efficient with larger files, you could
  // consider increasing the value. The downside will be longer
  // compactions and hence longer latency/performance hiccups.
  // Another reason to increase this parameter might be when you are
  // initially populating a large database.
  //
  // Default: 2MB
  size_t max_file_size;

  // Compress blocks using the specified compression algorithm. This
  // parameter can be changed dynamically.
  //
  // Default: kSnappyCompression, which gives lightweight but fast
  // compression.
  //
  // Typical speeds of kSnappyCompression on an Intel(R) Core(TM)2 2.4GHz:
  // ~200-500MB/s compression
  // ~400-800MB/s decompression
  // Note that these speeds are significantly faster than most
  // persistent storage speeds, and therefore it is typically never
  // worth switching to kNoCompression. Even if the input data is
  // incompressible, the kSnappyCompression implementation will
  // efficiently detect that and will switch to uncompressed mode.
  CompressionType compression;

  // EXPERIMENTAL: If true, append to existing MANIFEST and log files
  // when a database is opened. This can significantly speed up open.
  //
  // Default: currently false, but may become true later.
  bool reuse_logs;

  // If non-NULL, use the specified filter policy to reduce disk reads.
  // Many applications will benefit from passing the result of
  // NewBloomFilterPolicy() here.
  //
  // Default: NULL
  const FilterPolicy* filter_policy;

  // Create an Options object with default values for all fields.
  Options();
};

// Options that control read operations
struct ReadOptions {
  // If true, all data read from underlying storage will be
  // verified against corresponding checksums.
  // Default: false
  bool verify_checksums;

  // Should the data read for this iteration be cached in memory?
  // Callers may wish to set this field to false for bulk scans.
  // Default: true
  bool fill_cache;

  // If "snapshot" is non-NULL, read as of the supplied snapshot
  // (which must belong to the DB that is being read and which must
  // not have been released). If "snapshot" is NULL, use an implicit
  // snapshot of the state at the beginning of this read operation.
  // Default: NULL
  const Snapshot* snapshot;

  ReadOptions()
      : verify_checksums(false),
        fill_cache(true),
        snapshot(NULL) {
  }
};

// Options that control write operations
struct WriteOptions {
  // If true, the write will be flushed from the operating system
  // buffer cache (by calling WritableFile::Sync()) before the write
  // is considered complete. If this flag is true, writes will be
  // slower.
  //
  // If this flag is false, and the machine crashes, some recent
  // writes may be lost. Note that if it is just the process that
  // crashes (i.e., the machine does not reboot), no writes will be
  // lost even if sync==false.
  //
  // In other words, a DB write with sync==false has similar
  // crash semantics as the "write()" system call. A DB write
  // with sync==true has similar crash semantics to a "write()"
  // system call followed by "fsync()".
  //
  // Default: false
  bool sync;

  WriteOptions()
      : sync(false) {
  }
};

}  // namespace leveldb

#endif  // STORAGE_LEVELDB_INCLUDE_OPTIONS_H_