From e8dee348b69111c7bbdfb176eb3484a0e9f5cc73 Mon Sep 17 00:00:00 2001
From: "gabor@google.com"
Date: Wed, 27 Jul 2011 04:39:46 +0000
Subject: [PATCH] Minor edit in benchmark page. (Baseline comparison does not make sense for large values.)

git-svn-id: https://leveldb.googlecode.com/svn/trunk@43 62dab493-f737-651d-591e-8d6aee1b9529
---
 doc/benchmark.html | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/doc/benchmark.html b/doc/benchmark.html
index b84f171..6a79bc7 100644
--- a/doc/benchmark.html
+++ b/doc/benchmark.html
@@ -176,34 +176,28 @@ parameters are varied. For the baseline:

A. Large Values

For this benchmark, we start with an empty database, and write 100,000 byte values (~50% compressible). To keep the benchmark running time reasonable, we stop after writing 1000 values.

Sequential Writes

-LevelDB         1,060 ops/sec    (1.17x baseline)
-Kyoto TreeDB    1,020 ops/sec    (2.57x baseline)
-SQLite3         2,910 ops/sec    (93.3x baseline)
+LevelDB         1,060 ops/sec
+Kyoto TreeDB    1,020 ops/sec
+SQLite3         2,910 ops/sec

Random Writes

-LevelDB         480 ops/sec      (2.52x baseline)
-Kyoto TreeDB    1,100 ops/sec    (10.72x baseline)
-SQLite3         2,200 ops/sec    (4,516x baseline)
+LevelDB         480 ops/sec
+Kyoto TreeDB    1,100 ops/sec
+SQLite3         2,200 ops/sec

LevelDB doesn't perform as well with large values of 100,000 bytes each. This is because LevelDB writes keys and values at least twice: first to the transaction log, and a second time (during a compaction) to a sorted file. With large values, LevelDB's per-operation efficiency is swamped by the cost of these extra copies of large values.
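
To make the "writes keys and values at least twice" point concrete, a back-of-envelope calculation (assuming no compression and only a single compaction pass, which if anything understates the real write volume) looks like this:

\[
\underbrace{1000 \times 100{,}000\ \text{bytes}}_{\approx 100\ \text{MB of user data}}
\;\times\; 2
\;\approx\; 200\ \text{MB physically written}
\]

So the time spent per Put() is dominated by copying value bytes, not by LevelDB's per-key bookkeeping.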
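
For readers who want to reproduce the setup described at the top of this hunk (an empty database, 1000 sequential writes of 100,000-byte, ~50% compressible values), the following is a minimal sketch against the public LevelDB API. It is not the db_bench tool that produced the numbers above; the database path, key format, random seed, and value-generation scheme are illustrative assumptions.

// Minimal sketch of the "large values" setup (not the actual db_bench tool):
// an empty database, 1000 sequential Put()s of 100,000-byte values that are
// roughly 50% compressible.
#include <cstdio>
#include <random>
#include <string>

#include "leveldb/db.h"
#include "leveldb/options.h"

// Build a value that compresses to roughly half its size: the first half is a
// single repeated byte (highly compressible), the second half is pseudo-random.
std::string MakeCompressibleValue(size_t len, std::mt19937* rng) {
  std::string v(len / 2, 'x');
  std::uniform_int_distribution<int> byte(0, 255);
  while (v.size() < len) v.push_back(static_cast<char>(byte(*rng)));
  return v;
}

int main() {
  leveldb::Options options;
  options.create_if_missing = true;

  leveldb::DB* db = nullptr;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/large_value_bench", &db);
  if (!s.ok()) {
    std::fprintf(stderr, "open failed: %s\n", s.ToString().c_str());
    return 1;
  }

  std::mt19937 rng(301);          // arbitrary fixed seed
  const size_t kValueSize = 100000;
  const int kNumWrites = 1000;    // kept small so the run finishes quickly

  for (int i = 0; i < kNumWrites; i++) {
    char key[32];
    std::snprintf(key, sizeof(key), "%016d", i);  // sequential keys
    std::string value = MakeCompressibleValue(kValueSize, &rng);
    s = db->Put(leveldb::WriteOptions(), key, value);
    if (!s.ok()) {
      std::fprintf(stderr, "put failed: %s\n", s.ToString().c_str());
      return 1;
    }
  }

  delete db;
  return 0;
}

Timing the loop and dividing kNumWrites by the elapsed seconds gives an ops/sec figure comparable in spirit (though not in methodology) to the tables above.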