<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="doc.css" />
<title>Leveldb file layout and compactions</title>
</head>
<body>
<h1>Files</h1>
The implementation of leveldb is similar in spirit to the
representation of a single
<a href="http://labs.google.com/papers/bigtable.html">
Bigtable tablet (section 5.3)</a>.
However the organization of the files that make up the representation
is somewhat different and is explained below.
<p>
Each database is represented by a set of files stored in a directory.
There are several different types of files as documented below:
<p>
<h2>Log files</h2>
<p>
A log file (*.log) stores a sequence of recent updates. Each update
is appended to the current log file. When the log file reaches a
pre-determined size (approximately 1MB by default), it is converted
to a sorted table (see below) and a new log file is created for future
updates.
<p>
A copy of the current log file is kept in an in-memory structure (the
<code>memtable</code>). This copy is consulted on every read so that read
operations reflect all logged updates.
<p>
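<p>
The read path can be sketched as follows. This is an illustrative model in
Python, not the real leveldb code; the class and method names are invented
for this sketch:

```python
# Sketch: reads consult the in-memory memtable first, so they reflect
# every logged update even before the log is compacted into a table.

class DB:
    def __init__(self):
        self.memtable = {}   # mirrors the current log file
        self.sstables = []   # newest first; stand-ins for on-disk tables

    def put(self, key, value):
        # A real implementation appends the update to the log file
        # first, then applies the same update to the memtable.
        self.memtable[key] = value

    def get(self, key):
        if key in self.memtable:      # logged but not yet compacted
            return self.memtable[key]
        for table in self.sstables:   # fall back to sorted tables
            if key in table:
                return table[key]
        return None

db = DB()
db.put("k", "v1")
result = db.get("k")   # served from the memtable
```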
<h2>Sorted tables</h2>
<p>
A sorted table (*.sst) stores a sequence of entries sorted by key.
Each entry is either a value for the key, or a deletion marker for the
key. (Deletion markers are kept around to hide obsolete values
present in older sorted tables).
<p>
The set of sorted tables is organized into a sequence of levels. The
sorted table generated from a log file is placed in a special <code>young</code>
level (also called level-0). When the number of young files exceeds a
certain threshold (currently four), all of the young files are merged
together with all of the overlapping level-1 files to produce a
sequence of new level-1 files (we create a new level-1 file for every
2MB of data.)
<p>
Files in the young level may contain overlapping keys. However files
in other levels have distinct non-overlapping key ranges. Consider
level number L where L &gt;= 1. When the combined size of files in
level-L exceeds (10^L) MB (i.e., 10MB for level-1, 100MB for level-2,
...), one file in level-L, and all of the overlapping files in
level-(L+1) are merged to form a set of new files for level-(L+1).
These merges have the effect of gradually migrating new updates from
the young level to the largest level using only bulk reads and writes
(i.e., minimizing expensive seeks).
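<p>
The size threshold above can be written down directly. A minimal sketch
(the function name is invented for illustration):

```python
# Sketch of the size rule described above: level-L may hold roughly
# 10^L MB before a compaction into level-(L+1) is triggered.

def max_level_bytes(level):
    """Size limit for a level with level >= 1, in bytes (10^L MB)."""
    assert level >= 1
    return (10 ** level) * 1024 * 1024

limit1 = max_level_bytes(1)   # 10MB for level-1
limit2 = max_level_bytes(2)   # 100MB for level-2
```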
<h2>Large value files</h2>
<p>
Each large value (greater than 64KB by default) is placed in a large
value file (*.val) of its own. An entry is maintained in the log
and/or sorted tables that maps from the corresponding key to the
name of this large value file. The name of the large value file
is derived from a SHA1 hash of the value and its length so that
identical values share the same file.
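<p>
The deduplicating property of this naming scheme can be sketched as below.
The exact encoding of the hash and length into the file name is an
assumption made for illustration; only the "SHA1 of the value plus its
length" ingredient comes from the text:

```python
import hashlib

# Illustrative sketch: deriving a *.val file name from a SHA1 hash of
# the value and its length, so identical values map to the same file.

def large_value_file_name(value: bytes) -> str:
    digest = hashlib.sha1(value).hexdigest()
    return "%s-%d.val" % (digest, len(value))

# Identical values share a file name; different values do not.
a = large_value_file_name(b"x" * 100000)
b = large_value_file_name(b"x" * 100000)
c = large_value_file_name(b"y" * 100000)
```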
<p>
<h2>Manifest</h2>
<p>
A MANIFEST file lists the set of sorted tables that make up each
level, the corresponding key ranges, and other important metadata.
A new MANIFEST file (with a new number embedded in the file name)
is created whenever the database is reopened. The MANIFEST file is
formatted as a log, and changes made to the serving state (as files
are added or removed) are appended to this log.
<p>
<h2>Current</h2>
<p>
CURRENT is a simple text file that contains the name of the latest
MANIFEST file.
<p>
<h2>Info logs</h2>
<p>
Informational messages are printed to files named LOG and LOG.old.
<p>
<h2>Others</h2>
<p>
Other files used for miscellaneous purposes may also be present
(LOCK, *.dbtmp).
<h1>Level 0</h1>
When the log file grows above a certain size (1MB by default):
<ul>
<li>Write the contents of the current memtable to an sstable
<li>Replace the current memtable by a brand new empty memtable
<li>Switch to a new log file
<li>Delete the old log file and the old memtable
</ul>
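<p>
The four steps above can be sketched as a minimal model. The class and
field names are invented for this sketch and are not leveldb's real
interface:

```python
# Minimal model of log rollover: when the log passes its size limit,
# the memtable is written out as a level-0 sstable and both the log
# and memtable are replaced with fresh empty ones.

LOG_SIZE_LIMIT = 1 * 1024 * 1024  # 1MB by default

class DB:
    def __init__(self):
        self.memtable = {}
        self.log_size = 0
        self.level0 = []  # list of sstables, each modeled as a dict

    def put(self, key, value):
        self.log_size += len(key) + len(value)  # append to current log
        self.memtable[key] = value
        if self.log_size >= LOG_SIZE_LIMIT:
            # 1. Write the memtable contents out as a level-0 sstable.
            self.level0.append(dict(sorted(self.memtable.items())))
            # 2. Replace the memtable with a brand new empty one.
            self.memtable = {}
            # 3. Switch to a new (empty) log file.
            self.log_size = 0
            # 4. The old log file and memtable can now be deleted.

db = DB()
for i in range(1100):
    db.put("key%06d" % i, "x" * 1000)   # ~1.1MB total: one rollover
```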
Experimental measurements show that generating an sstable from a 1MB
log file takes ~12ms, which seems like an acceptable latency hiccup to
add infrequently to a log write.
<p>
The new sstable is added to the special level-0. level-0 contains
a set of files (up to 4 by default). However, unlike other levels,
these files do not cover disjoint ranges, but may overlap each other.
<h1>Compactions</h1>
<p>
When the size of level L exceeds its limit, we compact it in a
background thread. The compaction picks a file from level L and all
overlapping files from the next level L+1. Note that if a level-L
file overlaps only part of a level-(L+1) file, the entire file at
level-(L+1) is used as an input to the compaction and will be
discarded after the compaction. Aside: because level-0 is special
(files in it may overlap each other), we treat compactions from
level-0 to level-1 specially: a level-0 compaction may pick more than
one level-0 file in case some of these files overlap each other.
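<p>
Input selection can be sketched by modeling each file as a
(smallest key, largest key) range. Note how an overlapping level-(L+1)
file is taken whole even if only part of its range overlaps:

```python
# Sketch of compaction input selection: one file from level L plus
# every level-(L+1) file whose key range overlaps it, each taken whole.

def overlapping(target, files):
    """Return the files whose [smallest, largest] range intersects
    the target file's range."""
    lo, hi = target
    return [f for f in files if not (f[1] < lo or f[0] > hi)]

level_l_file = ("d", "g")
level_l1 = [("a", "c"), ("b", "e"), ("f", "k"), ("m", "z")]
inputs = overlapping(level_l_file, level_l1)
# ("b","e") and ("f","k") each overlap ("d","g") only partially,
# but both are included in full.
```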
<p>
A compaction merges the contents of the picked files to produce a
sequence of level-(L+1) files. We switch to producing a new
level-(L+1) file after the current output file has reached the target
file size (2MB). The old files are discarded and the new files are
added to the serving state.
<p>
Compactions for a particular level rotate through the key space. In
more detail, for each level L, we remember the ending key of the last
compaction at level L. The next compaction for level L will pick the
first file that starts after this key (wrapping around to the
beginning of the key space if there is no such file).
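<p>
The rotation rule can be sketched directly (again with files modeled
as key ranges; the function name is invented for illustration):

```python
# Sketch of compaction rotation: pick the first file whose start key
# comes after the end key of the previous compaction, wrapping around
# to the beginning of the key space if no such file exists.

def pick_next_file(files, last_end_key):
    # files: key-sorted list of (smallest_key, largest_key) ranges
    for f in files:
        if f[0] > last_end_key:
            return f
    return files[0]  # wrap around

files = [("a", "c"), ("h", "k"), ("p", "r")]
nxt = pick_next_file(files, "c")       # the file after "c"
wrapped = pick_next_file(files, "r")   # past the end: wraps around
```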
<p>
Compactions drop overwritten values. They also drop deletion markers
if there are no higher numbered levels that contain a file whose range
overlaps the current key.
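<p>
The drop condition for deletion markers can be sketched as a predicate
over per-level file ranges (the function name and data layout are
invented for illustration):

```python
# Sketch: a deletion marker for a key can be dropped only when no
# higher-numbered level has a file whose range covers the key, since
# otherwise an obsolete value below might become visible again.

def can_drop_deletion_marker(key, levels, current_level):
    """levels: list indexed by level number; each entry is a list of
    (smallest_key, largest_key) file ranges."""
    for level in range(current_level + 1, len(levels)):
        for lo, hi in levels[level]:
            if lo <= key <= hi:
                return False  # an older value may still exist below
    return True

levels = [[], [("a", "m")], [("c", "f")]]
kept = can_drop_deletion_marker("d", levels, current_level=1)
dropped = can_drop_deletion_marker("z", levels, current_level=1)
```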
<h2>Timing</h2>
Level-0 compactions will read up to four 1MB files from level-0, and
at worst all the level-1 files (10MB). I.e., we will read 14MB and
write 14MB.
<p>
Other than the special level-0 compactions, we will pick one 2MB file
from level L. In the worst case, this will overlap ~12 files from
level L+1 (10 because level-(L+1) is ten times the size of level-L,
and another two at the boundaries since the file ranges at level-L
will usually not be aligned with the file ranges at level-(L+1)). The
compaction will therefore read 26MB and write 26MB. Assuming a disk
IO rate of 100MB/s (a ballpark figure for modern drives), the worst
compaction cost will be approximately 0.5 seconds.
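<p>
The worst-case arithmetic above, spelled out. The 2MB file size, the ~12
overlapping files, and the 100MB/s rate all come from the text:

```python
# Worst-case cost of one non-level-0 compaction.

FILE_SIZE_MB = 2
overlap_files = 10 + 2       # 10x size ratio plus 2 boundary files
input_mb = FILE_SIZE_MB * (1 + overlap_files)  # 1 level-L file + inputs
output_mb = input_mb         # we write roughly what we read
seconds = (input_mb + output_mb) / 100.0       # at 100MB/s of disk IO
```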
<p>
If we throttle the background writing to something small, say 10% of
the full 100MB/s speed, a compaction may take up to 5 seconds. If the
user is writing at 10MB/s, we might build up lots of level-0 files
(~50 to hold the 5*10MB). This may significantly increase the cost of
reads due to the overhead of merging more files together on every
read.
<p>
Solution 1: To reduce this problem, we might want to increase the log
switching threshold when the number of level-0 files is large. Though
the downside is that the larger this threshold, the larger the delay
that we will add to write latency when a write triggers a log switch.
<p>
Solution 2: We might want to decrease the write rate artificially when the
number of level-0 files goes up.
<p>
Solution 3: We could work on reducing the cost of very wide merges.
Perhaps most of the level-0 files will have their blocks sitting
uncompressed in the cache and we will only need to worry about the
O(N) complexity in the merging iterator.
<h2>Number of files</h2>
Instead of always making 2MB files, we could make larger files for
larger levels to reduce the total file count, though at the expense of
more bursty compactions. Alternatively, we could shard the set of
files into multiple directories.
<p>
An experiment on an <code>ext3</code> filesystem on Feb 04, 2011 shows
the following timings to do 100K file opens in directories with
varying numbers of files:
<table class="datatable">
<tr><th>Files in directory</th><th>Microseconds to open a file</th></tr>
<tr><td>1000</td><td>9</td></tr>
<tr><td>10000</td><td>10</td></tr>
<tr><td>100000</td><td>16</td></tr>
</table>
So maybe even the sharding is not necessary on modern filesystems?
<h1>Recovery</h1>
<ul>
<li> Read CURRENT to find the name of the latest committed MANIFEST
<li> Read the named MANIFEST file
<li> Clean up stale files
<li> We could open all sstables here, but it is probably better to be lazy...
<li> Convert log chunk to a new level-0 sstable
<li> Start directing new writes to a new log file with recovered sequence#
</ul>
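<p>
The first two recovery steps can be sketched as below. The file names
and the flat string format of the MANIFEST here are illustrative
stand-ins, not the real on-disk encoding:

```python
# Sketch of the start of recovery: CURRENT names the latest committed
# MANIFEST, and the MANIFEST in turn describes the serving state.

def recover(db_dir_files):
    """db_dir_files: a dict mapping file name -> file contents,
    standing in for the database directory."""
    # 1. Read CURRENT to find the latest committed MANIFEST.
    manifest_name = db_dir_files["CURRENT"].strip()
    # 2. Read the named MANIFEST to learn which tables are live.
    live_tables = set(db_dir_files[manifest_name].split())
    # Later steps (cleaning stale files, converting the log chunk to a
    # level-0 sstable, reopening the log) would follow from here.
    return manifest_name, live_tables

dir_files = {"CURRENT": "MANIFEST-000002\n",
             "MANIFEST-000002": "000005.sst 000007.sst"}
name, tables = recover(dir_files)
```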
<h1>Garbage collection of files</h1>
<code>DeleteObsoleteFiles()</code> is called at the end of every
compaction and at the end of recovery. It finds the names of all
files in the database. It deletes all log files that are not the
current log file. It deletes all table files that are not referenced
from some level and are not the output of an active compaction. It
deletes all large value files that are not referenced from any live
table or log file.
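<p>
The deletion rules above amount to a predicate over file names. A sketch
(the function name and parameters are invented for illustration; only the
rules themselves come from the text):

```python
# Sketch of the obsolescence rules: keep the current log file, any
# table referenced from some level or being written by an active
# compaction, and any large value file referenced from live state.

def is_obsolete(name, current_log, live_tables, compaction_outputs,
                live_value_files):
    if name.endswith(".log"):
        return name != current_log
    if name.endswith(".sst"):
        return name not in live_tables and name not in compaction_outputs
    if name.endswith(".val"):
        return name not in live_value_files
    return False  # leave other files (CURRENT, LOCK, ...) alone

stale_log = is_obsolete("000003.log", "000009.log", set(), set(), set())
live_sst = is_obsolete("000004.sst", "000009.log", {"000004.sst"},
                       set(), set())
```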
</body>
</html>