diff --git a/doc/benchmark.html b/doc/benchmark.html
index b84f171..6a79bc7 100644
--- a/doc/benchmark.html
+++ b/doc/benchmark.html
@@ -176,34 +176,28 @@ parameters are varied. For the baseline:

A. Large Values

For this benchmark, we start with an empty database and write 100,000-byte values (~50% compressible). To keep the benchmark running time reasonable, we stop after writing 1,000 values.
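The benchmark's actual value generator is not shown here; as a rough illustration of what "~50% compressible" means, a value can be built by pairing incompressible random bytes with highly compressible filler (the function name and approach are illustrative, not LevelDB's db_bench code):

```python
import os
import zlib

def make_value(size: int, compressible_fraction: float = 0.5) -> bytes:
    """Build a size-byte value where roughly compressible_fraction of the
    bytes compress away. Illustrative stand-in for a benchmark value
    generator; not the generator used by the LevelDB benchmarks."""
    filler = int(size * compressible_fraction)   # zeros compress to almost nothing
    random_part = os.urandom(size - filler)      # essentially incompressible
    return random_part + b"\x00" * filler

value = make_value(100_000)
ratio = len(zlib.compress(value)) / len(value)
print(f"compressed/original = {ratio:.2f}")  # roughly 0.50
```

With half the bytes random and half zero-filled, zlib keeps the random half nearly verbatim and collapses the filler, so the compressed size lands near 50% of the original.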

Sequential Writes

LevelDB         1,060 ops/sec   (1.17x baseline)
Kyoto TreeDB    1,020 ops/sec   (2.57x baseline)
SQLite3         2,910 ops/sec   (93.3x baseline)

Random Writes

LevelDB           480 ops/sec   (2.52x baseline)
Kyoto TreeDB    1,100 ops/sec   (10.72x baseline)
SQLite3         2,200 ops/sec   (4,516x baseline)

LevelDB doesn't perform as well with large values of 100,000 bytes each. This is because LevelDB writes keys and values at least twice: first to the transaction log, and again (during a compaction) to a sorted file. With large values, LevelDB's per-operation efficiency is swamped by the cost of extra copies of large values.
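The write-twice behavior can be turned into a back-of-the-envelope bandwidth estimate. Assuming each value is copied at least twice (once to the log, once during compaction) and ignoring keys, metadata, and compression, the sequential-write rate from the table above implies on the order of 200 MB/s of raw write traffic:

```python
ops_per_sec = 1_060   # LevelDB sequential writes, large values (from the table)
value_size = 100_000  # bytes per value
copies = 2            # log write + compaction write (a lower bound)

logical_mb_s = ops_per_sec * value_size / 1e6   # data the application wrote
physical_mb_s = logical_mb_s * copies           # data the storage layer wrote
print(f"logical: {logical_mb_s:.0f} MB/s, physical: {physical_mb_s:.0f} MB/s")
# → logical: 106 MB/s, physical: 212 MB/s
```

At these sizes the benchmark is bounded by bulk copy bandwidth rather than by per-operation overhead, which is why LevelDB's usual advantage on small values disappears here.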