LevelDB benchmark

LevelDB benchmark

Stephan Wehner
There are some benchmarks at
http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html

I don't have anything to point to, but I thought sqlite3 does better
than stated there.

In particular, 26,900 sequential writes per second and 420 random writes
per second from section "1. Baseline Performance" look suspicious.

What do you say?

Stephan

Re: LevelDB benchmark

Simon Slavin-3

On 28 Jul 2011, at 2:22am, Stephan Wehner wrote:

> There are some benchmark's at
> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>
> I don't have anything to point to, but I thought sqlite3 does better
> than stated there.

I looked through their source code, trying to see whether they used transactions.  But I couldn't even find an INSERT command.

Simon.

Re: LevelDB benchmark

J Decker
In reply to this post by Stephan Wehner
On Wed, Jul 27, 2011 at 6:22 PM, Stephan Wehner <[hidden email]> wrote:
> There are some benchmark's at
> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>
> I don't have anything to point to, but I thought sqlite3 does better
> than stated there.
>
> In particular, 26,900 sequential writes per second and 420 random writes
> per second from section "1. Baseline Performance" look suspicious.
>

Wow, that's a bad mark for SQLite, though it's somewhat misleading. I do
know that if I use SQLite as a logging database and stream data to it,
it's fairly slow, and it works better if I batch up inserts with multiple
value sets.  But enabling transactions and doing the same thing, write
speed goes way up.  And now with the WAL journal, that might also affect
the speed test, especially in auto-commit mode.
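
(To illustrate the batching point, a minimal sketch in the sqlite3 shell;
the log table here is made up for the example:)

  CREATE TABLE log (ts INTEGER, msg TEXT);

  -- one implicit transaction per statement: every INSERT pays its own commit
  INSERT INTO log VALUES (1, 'a');
  INSERT INTO log VALUES (2, 'b');

  -- batched: many INSERTs share one explicit transaction and one commit
  BEGIN;
  INSERT INTO log VALUES (3, 'c');
  INSERT INTO log VALUES (4, 'd');
  INSERT INTO log VALUES (5, 'e');
  COMMIT;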

> What you say?
>
> Stephan

Re: LevelDB benchmark

Stephan Wehner
In reply to this post by Simon Slavin-3
On Wed, Jul 27, 2011 at 6:44 PM, Simon Slavin <[hidden email]> wrote:

>
> On 28 Jul 2011, at 2:22am, Stephan Wehner wrote:
>
>> There are some benchmark's at
>> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>>
>> I don't have anything to point to, but I thought sqlite3 does better
>> than stated there.
>
> i looked through their source code, trying to see if they defined transactions.  But I couldn't even find an INSERT command.
>

Well, LevelDB is much simpler than sqlite3: it's a key-value store.

Stephan


> Simon.



--
Stephan Wehner

-> http://stephan.sugarmotor.org (blog and homepage)
-> http://loggingit.com
-> http://www.thrackle.org
-> http://www.buckmaster.ca
-> http://www.trafficlife.com
-> http://stephansmap.org -- http://blog.stephansmap.org
-> http://twitter.com/stephanwehner / @stephanwehner

Re: LevelDB benchmark

Simon Slavin-3

On 28 Jul 2011, at 2:53am, Stephan Wehner wrote:

> On Wed, Jul 27, 2011 at 6:44 PM, Simon Slavin <[hidden email]> wrote:
>>
>> On 28 Jul 2011, at 2:22am, Stephan Wehner wrote:
>>
>>> There are some benchmark's at
>>> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>>>
>>> I don't have anything to point to, but I thought sqlite3 does better
>>> than stated there.
>>
>> i looked through their source code, trying to see if they defined transactions. But I couldn't even find an INSERT command.
>
> Well, LevelDB is much simpler than sqlite3: it's a key-value store.

Okay, but if they include their source code for testing SQLite, I should be able to find the word 'INSERT' somewhere in it, right?

Simon.

Re: LevelDB benchmark

David Garfield
In reply to this post by Simon Slavin-3
They used REPLACE.  See
http://code.google.com/p/leveldb/source/browse/trunk/doc/bench/db_bench_sqlite3.cc#492

They used explicit transactions, and tested both single-REPLACE
transactions and transactions of 1000 REPLACEs.  Section 1A would be the
single-REPLACE transactions, while 2B is the batches.

--David Garfield

Simon Slavin writes:

>
> On 28 Jul 2011, at 2:22am, Stephan Wehner wrote:
>
> > There are some benchmark's at
> > http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
> >
> > I don't have anything to point to, but I thought sqlite3 does better
> > than stated there.
>
> i looked through their source code, trying to see if they defined transactions.  But I couldn't even find an INSERT command.
>
> Simon.

Re: LevelDB benchmark

Simon Slavin-3

On 28 Jul 2011, at 3:01am, David Garfield wrote:

> They used REPLACE.  See
> http://code.google.com/p/leveldb/source/browse/trunk/doc/bench/db_bench_sqlite3.cc#492
>
> They used explicit transactions, and tested with both single REPLACE
> transactions and 1000 REPLACE transactions.  Section 1A would be the
> single REPLACE transactions, while 2B is the batches.

Ah.  Thanks.  That explains it.

Simon.

Re: LevelDB benchmark

Martin Gadbois-2
In reply to this post by Stephan Wehner
On Wed, Jul 27, 2011 at 9:22 PM, Stephan Wehner <[hidden email]> wrote:

> There are some benchmark's at
> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>
> I don't have anything to point to, but I thought sqlite3 does better
> than stated there.
>
> In particular, 26,900 sequential writes per second and 420 random writes
> per second from section "1. Baseline Performance" look suspicious.
>
> What you say?
>
>
I wish they had compared BerkeleyDB too ...

--
Martin

Re: LevelDB benchmark

Stephan Wehner
In reply to this post by Simon Slavin-3
On Wed, Jul 27, 2011 at 7:00 PM, Simon Slavin <[hidden email]> wrote:

>
> On 28 Jul 2011, at 2:53am, Stephan Wehner wrote:
>
>> On Wed, Jul 27, 2011 at 6:44 PM, Simon Slavin <[hidden email]> wrote:
>>>
>>> On 28 Jul 2011, at 2:22am, Stephan Wehner wrote:
>>>
>>>> There are some benchmark's at
>>>> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>>>>
>>>> I don't have anything to point to, but I thought sqlite3 does better
>>>> than stated there.
>>>
>>> i looked through their source code, trying to see if they defined transactions. But I couldn't even find an INSERT command.
>>
>> Well, LevelDB is much simpler than sqlite3: it's a key-value store.
>
> Okay, but if they include their source code for testing SQLite I should be able to find the word 'INSERT' somewhere in it, right ?
>

Sorry, I misunderstood -- S

> Simon.



--
Stephan Wehner

-> http://stephan.sugarmotor.org (blog and homepage)
-> http://loggingit.com
-> http://www.thrackle.org
-> http://www.buckmaster.ca
-> http://www.trafficlife.com
-> http://stephansmap.org -- http://blog.stephansmap.org
-> http://twitter.com/stephanwehner / @stephanwehner

Re: LevelDB benchmark

reseok
In reply to this post by J Decker
They used

CREATE TABLE test (key blob, value blob, PRIMARY KEY(key))
CREATE INDEX keyindex ON test (key)


On random replaces, the extra index doubles the write operations.



J Decker schrieb:

> On Wed, Jul 27, 2011 at 6:22 PM, Stephan Wehner <[hidden email]> wrote:
>> There are some benchmark's at
>> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>>
>> I don't have anything to point to, but I thought sqlite3 does better
>> than stated there.
>>
>> In particular, 26,900 sequential writes per second and 420 random writes
>> per second from section "1. Baseline Performance" look suspicious.
>>
>
> Wow, that's a bad mark for sqlite; I dunno it's somewhat misleading,
> because I do know that if I use sqlite as a logging database, and
> stream data to it it's kinda slow, and works better if I bunch up
> inserts with multiple value sets.  But, enabling transactions, and
> doing the same thing, write speed goes way up.  And now with WAL
> journal, it might affect that speed test also in auto transact mode
> especially
>
>> What you say?
>>
>> Stephan

Re: LevelDB benchmark

Alexey Pechnikov-2
In reply to this post by Stephan Wehner
LevelDB uses an append-only log, but SQLite is tested without WAL :)
I checked, and some tests are 2.5x faster with WAL.


--
Best regards, Alexey Pechnikov.
http://pechnikov.tel/

Re: LevelDB benchmark

Alexey Pechnikov-2
In reply to this post by Stephan Wehner
Hm, in the test I find an index on the PK field:

CREATE TABLE test (key blob, value blob, PRIMARY KEY(key))
CREATE INDEX keyindex ON test (key)

Epic fail, I think :D


Default test on Intel(R) Atom(TM) CPU N450   @ 1.66GHz
fillseq      :     442.937 micros/op;    0.2 MB/s
fillseqsync  :    1678.168 micros/op;    0.1 MB/s (10000 ops)
fillseqbatch :      73.016 micros/op;    1.5 MB/s
...

And with WAL enabled, synchronous=NORMAL, and wal_autocheckpoint=4096
(the LevelDB log size is 4 MB by default), and without the index on the PK field (!):
fillseq      :     139.190 micros/op;    0.8 MB/s
fillseqsync  :     228.869 micros/op;    0.5 MB/s (10000 ops)
fillseqbatch :      56.131 micros/op;    2.0 MB/s
...
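
(For reference, the settings described above expressed as pragmas; this is an
illustration of that configuration, not the actual patch:)

  PRAGMA journal_mode=WAL;          -- write-ahead log instead of the rollback journal
  PRAGMA synchronous=NORMAL;        -- sync at checkpoints rather than at every commit
  PRAGMA wal_autocheckpoint=4096;   -- checkpoint every ~4096 pages (~4 MB at the default 1 KB page size)
  -- and the redundant "CREATE INDEX keyindex ON test (key)" is simply dropped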

--
Best regards, Alexey Pechnikov.
http://pechnikov.tel/

Re: LevelDB benchmark

Richard Hipp-3
In reply to this post by reseok
On Thu, Jul 28, 2011 at 12:27 AM, <[hidden email]> wrote:

> they used
>
> CREATE TABLE test (key blob, value blob, PRIMARY KEY(key))
> CREATE INDEX keyindex ON test (key)
>

Notice the inefficiencies inherent in this schema.

(1) A primary key on a BLOB?  Really?
(2) They create a redundant index on the primary key.  They would double
the write performance with no impact on read performance simply by omitting
the index.
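
(Just to illustrate: the PRIMARY KEY clause already builds the index SQLite
needs, so the blob-keyed schema could simply be

   CREATE TABLE test (key BLOB, value BLOB, PRIMARY KEY(key));
   -- no separate "CREATE INDEX keyindex ON test (key)" required

with no second index to maintain on every write.)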

I propose a new set of benchmarks, SQLite vs. LevelDB, with the following
schema:

   CREATE TABLE test(key INTEGER PRIMARY KEY, value BLOB);

I'm thinking SQLite will do much better in that case, and may well exceed
the performance of LevelDB in most cases.  (This is a guess - I have not
actually tried it.)

Of course, if you really do need a blob-to-blob mapping, then I suppose
LevelDB might be a better choice.  But not many applications do that kind of
thing.  What SQL database (other than SQLite) even allows an index or
primary key on a blob???

I hereby call on all you loyal readers out there to help me come up with a
more balanced comparison between SQLite and LevelDB.  The published
benchmark from Google strikes me more as a hit piece than a reasonable
comparison between the databases.

I'm on a business trip and am unable to help a great deal until early next
week.  So your cooperation will be greatly appreciated.  May I suggest the
following comparisons as a start:

(1) Rerun the Google benchmarks with (a) WAL enabled and (b) the redundant
index removed.

(2) Modify the Google benchmarks to test an INTEGER->BLOB mapping using an
INTEGER PRIMARY KEY on SQLite, instead of the BLOB->BLOB mapping.
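
(A rough sketch of what (2) could look like at the SQL level, assuming the
benchmark keeps its REPLACE-based writes; illustrative, not taken from any patch:)

   CREATE TABLE test (key INTEGER PRIMARY KEY, value BLOB);
   REPLACE INTO test (key, value) VALUES (?1, ?2);
   -- ?1 bound as an integer key, ?2 bound as the 100-byte blob value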

I suspect that I will come up with other suggestions once I have a chance to
dig a little deeper into the benchmarks.  If you have suggestions, please
publish them here.

Thanks for your help and support!



>
>
> on random replaces it doubles the write operations.
>
>
>
> J Decker schrieb:
> > On Wed, Jul 27, 2011 at 6:22 PM, Stephan Wehner <[hidden email]>
> wrote:
> >> There are some benchmark's at
> >> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
> >>
> >> I don't have anything to point to, but I thought sqlite3 does better
> >> than stated there.
> >>
> >> In particular, 26,900 sequential writes per second and 420 random writes
> >> per second from section "1. Baseline Performance" look suspicious.
> >>
> >
> > Wow, that's a bad mark for sqlite; I dunno it's somewhat misleading,
> > because I do know that if I use sqlite as a logging database, and
> > stream data to it it's kinda slow, and works better if I bunch up
> > inserts with multiple value sets.  But, enabling transactions, and
> > doing the same thing, write speed goes way up.  And now with WAL
> > journal, it might affect that speed test also in auto transact mode
> > especially
> >
> >> What you say?
> >>
> >> Stephan



--
D. Richard Hipp
[hidden email]

Re: LevelDB benchmark

Alexey Pechnikov-2
The LevelDB sources and tests can be checked out with
svn checkout http://leveldb.googlecode.com/svn/trunk/ leveldb-read-only

Build the SQLite test with
make db_bench_sqlite3
and the LevelDB test with
make db_bench

My patch for leveldb-read-only/doc/bench/db_bench_sqlite3.cc to disable the
redundant index and enable WAL is here:
http://pastebin.com/dM2iqdvj

And the same patch as above, plus integer keys instead of blobs:
http://pastebin.com/CnBeChWg

P.S. For blob-to-blob mapping we could use a table with an index on a hashed
key. A virtual table could simplify this.
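
(A minimal sketch of that idea; the table and column names are made up, and
the hash would be computed by the application:)

   CREATE TABLE kv (key_hash INTEGER, key BLOB, value BLOB);
   CREATE INDEX kv_hash ON kv (key_hash);

   -- lookup: narrow by the integer hash, then confirm the full blob key
   SELECT value FROM kv WHERE key_hash = ?1 AND key = ?2;

Note that this trades away ordered iteration over the original keys, which
the LevelDB benchmark also exercises.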

--
Best regards, Alexey Pechnikov.
http://pechnikov.tel/

Re: LevelDB benchmark

Afriza N. Arief
In reply to this post by Richard Hipp-3
On Fri, Jul 29, 2011 at 8:53 AM, Richard Hipp <[hidden email]> wrote:

> On Thu, Jul 28, 2011 at 12:27 AM, <[hidden email]> wrote:
>
> > they used
> >
> > CREATE TABLE test (key blob, value blob, PRIMARY KEY(key))
> > CREATE INDEX keyindex ON test (key)
> >
>
> Notice the inefficiencies inherent in this schema.
>
> (1) A primary key on a BLOB?  Really?
> (2) They create an redundant index on the primary key.  They would double
> the write performance with no impact on read performance simply be omitting
> the index.
>
> I propose a new set of benchmarks, SQLite vs. LevelDB, with the following
> schema:
>
>   CREATE TABLE test(key INTEGER PRIMARY KEY, value BLOB);
>
> I'm thinking SQLite will do much better in that case, and may well exceed
> the performance of LevelDB in most cases.  (This is a guess - I have not
> actually tried it.)
>
> Of course, if you really do need a blob-to-blob mapping, then I suppose
> LevelDB might be a better choice.  But not many applications do that kind
> of
> thing.  What SQL database (other than SQLite) even allows an index or
> primary key on a blob???


Actually, as per their blog post
(http://google-opensource.blogspot.com/2011/07/leveldb-fast-persistent-key-value-store.html),
they probably want to simulate "an ordered mapping from string keys to
string values" and to test "batch updates that modify many keys scattered
across a large key space".

Re: LevelDB benchmark

Simon Slavin-3

On 29 Jul 2011, at 2:51am, Afriza N. Arief wrote:

> Actually as per their blog-post (
> http://google-opensource.blogspot.com/2011/07/leveldb-fast-persistent-key-value-store.html
> ) .
> They probably want to simulate "an ordered mapping from string keys to
> string values" and to test "batch updates that modify many keys scattered
> across a large key space".

Would it improve the SQLite time if it were changed to strings instead of BLOBs?

Simon.


Re: LevelDB benchmark

Roger Binns

On 07/28/2011 06:57 PM, Simon Slavin wrote:
> Would it improve the SQLite time if it was changed to strings instead of BLOBs ?

Note that internally SQLite treats strings and blobs virtually identically.
Usually the same data structure and functions are used for them.  At the
end of the day they are both a bag of bytes.

The major difference is that strings also have an encoding which needs to be
taken into account should the bag of bytes need to be passed to a user-defined
function, collation, return code, etc.

Roger

Re: LevelDB benchmark

Simon Slavin-3

On 29 Jul 2011, at 3:34am, Roger Binns wrote:

> On 07/28/2011 06:57 PM, Simon Slavin wrote:
>> Would it improve the SQLite time if it was changed to strings instead of BLOBs ?
>
> Note that internally SQLite treats strings and blobs virtually identically.
> Usually the same data structure and functions are used for them.  At the
> end of the day they are both a bag of bytes.
>
> The major difference is that strings also have an encoding which needs to be
> taken into account should the bag of bytes need to be passed to a user
> defined function, collation, return code etc.

So that's a "no".  Actually I don't see how BLOBs can be used in an index anyway, since technically blobs have no ordering.  But the nature of SQL is that if you can't sort on it you can't index it, and that would mean you couldn't search for a BLOB.

Simon.

Re: LevelDB benchmark

Roger Binns

On 07/28/2011 07:39 PM, Simon Slavin wrote:
> Actually I don't see how BLOBs can be used in an index anyway, since technically blobs have no ordering.

memcmp provides an ordering, just as it does for strings without a collation.
(That is what SQLite implements; you can decide how that complies with "SQL".)
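
(A quick check in the sqlite3 shell, purely as illustration:)

   SELECT X'00FF' < X'01';   -- 1: bytes compared left to right, as memcmp does
   SELECT X'01' < X'0102';   -- 1: when one blob is a prefix, the shorter sorts first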

Roger

Re: LevelDB benchmark

Alexey Pechnikov-2
In reply to this post by Richard Hipp-3
With the integer->blob mapping patch I get these results:


$ ./db_bench_sqlite3
SQLite:     version 3.7.7.1
Date:       Fri Jul 29 05:32:05 2011
CPU:        2 * Intel(R) Atom(TM) CPU N450   @ 1.66GHz
CPUCache:   512 KB
Keys:       16 bytes each
Values:     100 bytes each
Entries:    1000000
RawSize:    110.6 MB (estimated)
------------------------------------------------
fillseq      :      77.394 micros/op;    1.3 MB/s
fillseqsync  :     133.326 micros/op;    0.7 MB/s (10000 ops)
fillseqbatch :      31.511 micros/op;    3.1 MB/s
fillrandom   :     518.605 micros/op;    0.2 MB/s
fillrandsync :     227.374 micros/op;    0.4 MB/s (10000 ops)
fillrandbatch :     411.859 micros/op;    0.2 MB/s
overwrite    :     793.869 micros/op;    0.1 MB/s
overwritebatch :     743.661 micros/op;    0.1 MB/s
readrandom   :      31.236 micros/op;
readseq      :      20.331 micros/op;
fillrand100K :    4872.027 micros/op;   19.6 MB/s (1000 ops)
fillseq100K  :    7249.182 micros/op;   13.2 MB/s (1000 ops)
readseq100K  :     634.887 micros/op;
readrand100K :     606.026 micros/op;


$ ./db_bench
LevelDB:    version 1.2
Date:       Fri Jul 29 11:20:59 2011
CPU:        2 * Intel(R) Atom(TM) CPU N450   @ 1.66GHz
CPUCache:   512 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
RawSize:    110.6 MB (estimated)
FileSize:   62.9 MB (estimated)
WARNING: Snappy compression is not enabled
------------------------------------------------
fillseq      :      10.107 micros/op;   10.9 MB/s
fillsync     :     276.920 micros/op;    0.4 MB/s (1000 ops)
fillrandom   :      21.275 micros/op;    5.2 MB/s
overwrite    :      30.717 micros/op;    3.6 MB/s
readrandom   :      48.781 micros/op;
readrandom   :      39.841 micros/op;
readseq      :       2.227 micros/op;   49.7 MB/s
readreverse  :       3.549 micros/op;   31.2 MB/s
compact      : 5274551.868 micros/op;
readrandom   :      35.392 micros/op;
readseq      :       1.743 micros/op;   63.5 MB/s
readreverse  :       2.927 micros/op;   37.8 MB/s
fill100K     :    6631.138 micros/op;   14.4 MB/s (1000 ops)
crc32c       :      11.447 micros/op;  341.2 MB/s (4K per op)
snappycomp   :       8.106 micros/op; (snappy failure)
snappyuncomp :      26.941 micros/op; (snappy failure)
acquireload  :       1.407 micros/op; (each op is 1000 loads)



--
Best regards, Alexey Pechnikov.
http://pechnikov.tel/