performance statistics

performance statistics

jalburger




All -

I am currently investigating porting my project from postgres to SQLite due
to anticipated performance issues (we will have to start handling lots more
data).  My initial speed testing of handling the expanded amount of data has
suggested that the postgres performance will be unacceptable.  I'm
convinced that SQLite will solve my performance issues; however, the speed
comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
is old.  This is the type of data I need, but I'd like to have more recent
data to present to my manager, if it is available.  Can anybody point me
anywhere that may have similar but more recent data?

Thanks in advance!

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225

Re: performance statistics

Jay Sprenkle
> All -
>
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues (we will have to start handling lots more
> data).  My initial speed testing of handling the expanded amount of data has
> suggested that the postgres performance will be unacceptable.  I'm
> convinced that SQLite will solve my performance issues, however, the speed
> comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
> is old.  This is the type of data I need, but I'd like to have more recent
> data to present to my manager, if it is available.  Can anybody point me
> anywhere that may have similar but more recent data?

This might be valuable for you:
http://sqlite.phxsoftware.com/forums/9/ShowForum.aspx

Re: performance statistics

D. Richard Hipp
In reply to this post by jalburger
[hidden email] wrote:
>
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that best maps to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will
be happiest if you use each of them for what it was
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[hidden email]>


Re: performance statistics

Serge Semashko
In reply to this post by jalburger
[hidden email] wrote:

> I am currently investigating porting my project from postgres to
> SQLite due to anticipated performance issues (we will have to start
> handling lots more data).  My initial speed testing of handling the
> expanded amount of data has suggested that the postgres performance will
>  be unacceptable.  I'm convinced that SQLite will solve my
> performance issues, however, the speed comparison data found on the
> SQLite site (http://www.sqlite.org/speed.html) is old.  This is the
> type of data I need, but I'd like to have more recent data to present
> to my manager, if it is available.  Can anybody point me anywhere
> that may have similar but more recent data?
>
> Thanks in advance!
>
> Jason Alburger HID/NAS/LAN Engineer L3/ATO-E En Route Peripheral
> Systems Support 609-485-7225

Actually I have quite the opposite experience :)

We started with using sqlite3, but the database has grown now to
something like 1GB and has millions of rows. It does not perform as fast
as we would like, so we looked for alternatives. We tried to convert
it to both mysql and postgresql and tried to run the same query we are
using quite often (the query is rather big and contains a lot of
conditions, but it extracts only about a hundred matching rows). The
result was a bit surprising.  MySQL just locked up and could not
provide any results.  After killing it and increasing the memory limits in its
configuration to use all the available memory, it managed to complete
the query, but it was still about 30% slower than sqlite3.  PostgreSQL,
on the other hand, was a really nice surprise: it was several times
faster than sqlite3!  Now we are converting to postgresql :)

I'm in no way a database expert, but the tests on the benchmarking page
seem a bit trivial and look like they only test the database API (data
fetching throughput), not the engine performance.  I would like to
see some benchmarks involving really huge databases and complicated
queries, and I wonder whether the results would be similar to those I have
observed...




Re: performance statistics

D. Richard Hipp
Serge Semashko <[hidden email]> wrote:

>>
> We started with using sqlite3, but the database has grown now to
> something like 1GB and has millions of rows. It does not perform as fast
> as we would like, so we looked for alternatives. We tried to convert
> it to both mysql and postgresql and tried to run the same query we are
> using quite often (the query is rather big and contains a lot of
> conditions, but it extracts only about a hundred matching rows). The
> result was a bit surprising. Mysql just locked down and could not
> provide any results. After killing it, increasing memory limits in its
> configuration to use all the available memory, it managed to complete
> the query but was still slower than sqlite3 (lost about 30%). Postgresql
> on the other hand was a really nice surprise and it was several times
> faster than sqlite3! Now we are converting to postgresql :)
>

PostgreSQL has a much better query optimizer than SQLite.
(You can do that when you have a multi-megabyte memory footprint
budget versus 250KiB for SQLite.)  In your particular case,
I would guess you could get SQLite to run as fast or faster
than PostgreSQL by hand-optimizing your admittedly complex
queries.
--
D. Richard Hipp   <[hidden email]>


Re: performance statistics

jalburger
In reply to this post by D. Richard Hipp




Well... the database and the applications accessing the database are all
located on the same machine, so distribution across multiple machines
doesn't apply here.  The system is designed so that only one application
handles all the writes to the DB.  Another application handles all the
reads, and there may be up to two instances of that application running at
any one time, so I guess that shows a small number of clients.  When the
application that reads the DB data starts, it reads *all* the data in the
DB and ships it elsewhere.

I anticipate 2 bottlenecks...

1. My anticipated bottleneck under postgres is that the DB-writing app
must parse incoming bursts of data and store them in the DB.  The machine
sending this data is seeing a delay in processing.  Debugging has shown
that the INSERTs (on the order of a few thousand) are where most of the time
is wasted.

2. The other bottleneck is data retrieval.  My DB-reading application must
read the DB record by record (it opens a cursor and reads rows one by one), build
the data into a message according to a system ICD, and ship it out.
postgres (postmaster) CPU usage is hovering around 85-90% at this time.

The expansion of data will force me to go from a maximum of 3,400 rows per table
to a maximum of 11,560 rows.

From what I gather in reading about SQLite, it seems to be better equipped
for performance.  All my testing of the current system points to postgres
(postmaster) being my bottleneck.

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225


                                                                           


Re: performance statistics

Denis Sbragion
In reply to this post by Serge Semashko
Hello Serge,

On Wed, March 1, 2006 16:11, Serge Semashko wrote:
...
> I'm in no way a database expert, but the tests on the benchmarking page
> seem a bit trivial and looks like they only test database API (data
> fetching throughput), but not the engine performance. I would like to
> see some benchmarks involving really huge databases and complicated
> queries and wonder if the results will be similar to those I have
> observed...

those benchmarks target the primary use of SQLite, which isn't the same as
that of other database engines, as perfectly explained by DRH himself.  Even though its
performance and rich feature list might make us forget what the intended
use of SQLite is, we must remember that it is first of all a compact, lightweight,
excellent *embedded* database engine.  SQLite simply isn't designed for huge
databases and complicated queries, even though most of the time it is able to
cope with both, being at least a bit more than an fopen() replacement.  Don't
be shy Dr. Hipp! :)

Bye,

--
        Denis Sbragion
        InfoTecna
        Tel: +39 0362 805396, Fax: +39 0362 805404
        URL: http://www.infotecna.it


Re: performance statistics

Ran-3
In reply to this post by D. Richard Hipp
In light of your answer, I wonder if it is possible to implement an
optimizer that does the hand-optimizing automatically, but of course BEFORE
the statements are actually used by SQLite.

So the idea is not to make the SQLite optimizer better, but to create a kind of
SQL optimizer that takes SQL statements as input and produces, as output,
SQL statements optimized specifically for SQLite.

Ran

On 3/1/06, [hidden email] <[hidden email]> wrote:

>
> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.
> --
> D. Richard Hipp   <[hidden email]>
>
>

Re: performance statistics

Denis Sbragion
In reply to this post by jalburger
Hello Jason,

On Wed, March 1, 2006 16:20, [hidden email] wrote:
...

> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.
>
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
...

though your application seems a good candidate for SQLite use, have you tried
surrounding each burst of inserts and reads with a single transaction?  With
PostgreSQL, but also with SQLite, performance might increase dramatically
with proper transaction handling in place.  Furthermore, with both a reader
and a writer active at the same time, the MVCC "better than row level locking"
mechanism might give you better performance than SQLite, but here the
devil is in the details.  A lot depends on how much the read and write operations
overlap each other.
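
Just to make the idea concrete, a minimal sketch of that batching (the table
and column names are made up, since the real schema isn't shown in this
thread; the same statements work in both SQLite and PostgreSQL):

    CREATE TABLE incoming(id INTEGER PRIMARY KEY, payload TEXT);

    -- Without an explicit transaction every INSERT pays its own commit cost;
    -- grouping the whole burst into one transaction pays that cost once.
    BEGIN;
    INSERT INTO incoming VALUES(1, 'first record of the burst');
    INSERT INTO incoming VALUES(2, 'second record of the burst');
    COMMIT;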

Bye,

--
        Denis Sbragion
        InfoTecna
        Tel: +39 0362 805396, Fax: +39 0362 805404
        URL: http://www.infotecna.it


Re: performance statistics

D. Richard Hipp
In reply to this post by jalburger
[hidden email] wrote:
> well....The database and the applications accessing the database are all
> located on the same machine, so distribution across multiple machines
> doesn't apply here.   The system is designed so that only one application
> handles all the writes to the DB.   Another application handles all the
> reads, and there may be up to two instances of that application running at
> any one time, so I guess that shows a small number of clients.   When the
> application that reads the DB data starts, it reads *all* the data in the
> DB and ships it elsewhere.

I think either SQLite or PostgreSQL would be appropriate here.  I'm
guessing that SQLite will have the speed advantage in this particular
case if you are careful in how you code it up.

>
> I anticipate 2 bottlenecks...
>
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

You will do well to gather your incoming data into a TEMP table then
insert the whole wad into the main database all in one go using
something like this:

    INSERT INTO maintable SELECT * FROM temptable;
    DELETE FROM temptable;

Actually, this same trick might solve your postgresql performance
problem and thus obviate the need to port your code.
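
Spelled out with a hypothetical schema (the real table layout isn't shown in
the thread), the staging flow would look roughly like this:

    CREATE TABLE maintable(id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TEMP TABLE temptable(id INTEGER PRIMARY KEY, payload TEXT);

    -- collect the incoming burst into the TEMP table...
    INSERT INTO temptable VALUES(1, 'first record of the burst');
    INSERT INTO temptable VALUES(2, 'second record of the burst');

    -- ...then flush it to the main table in one shot, as above
    INSERT INTO maintable SELECT * FROM temptable;
    DELETE FROM temptable;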

>
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
>
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

Unless each row is particularly large, this is not a very big database
and should not present a problem to either SQLite or PostgreSQL.  Unless
you are doing some kind of strange join that you haven't told us about.

If your data formatting takes a long time, the reader might block the
writer in SQLite.  The writer process will have to wait to do its write
until the reader has finished.  You can avoid this by making a copy of
the data to be read into a temporary table before formatting it:

    CREATE TEMP TABLE outbuf AS SELECT * FROM maintable;
    SELECT * FROM outbuf;
      -- Do your formatting and sending
    DROP TABLE outbuf;

Since PostgreSQL uses READ COMMITTED isolation by default, the
writer lock will not be a problem there.  But you will have the same
issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
does SERIALIZABLE for database connections running in separate
processes.
--
D. Richard Hipp   <[hidden email]>


Re: performance statistics

Jay Sprenkle
In reply to this post by Ran-3
On 3/1/06, Ran <[hidden email]> wrote:
> In light of your answer, I wonder if it is possible to implement such
> optimizer that does the hand-optimizing automatically, but of course BEFORE
> they are actually being used by SQLite.
>
> So the idea is not to make SQLite optimizer better, but to create a kind of
> SQL optimizer that gets as input SQL statements and gives as output
> optimized (specifically for SQLite) SQL statements.

I think the concept so far has been that the programmer is the query
optimizer so it stays fast and lightweight. ;)

Re: performance statistics

D. Richard Hipp
In reply to this post by Denis Sbragion
"Denis Sbragion" <[hidden email]> wrote:
> Furthermore having both a reader
> and a writer at the same time the MVCC "better than row level locking"
> mechanism might provide you better performances than SQLite, but here the
> devil's in the detail.

"D. Richard Hipp" <[hidden email]> wrote:
> Since PostgreSQL supports READ COMMITTED isolation by default, the
> writer lock will not be a problem there.  But you will have the same
> issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
> does SERIALIZABLE for database connections running in separate
> processes.

To combine and clarify our remarks:

If you use READ COMMITTED isolation (the default in PostgreSQL)
then your writes are not atomic as seen by the reader.  In other
words, if a burst of inserts occurs while a read is in process,
the read might end up seeing some old data from before the burst
and some new data from afterwards.  This may or may not be a
problem for you depending on your application.  If it is a problem,
then you need to select SERIALIZABLE isolation in PostgreSQL
in which case the MVCC is not going to give you any advantage
over SQLite.
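
For reference, a PostgreSQL client can ask for the stricter mode on a
per-transaction basis with standard SQL (the table name below is just a
placeholder):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- this read now sees one consistent snapshot, untouched by concurrent insert bursts
    SELECT * FROM maintable;
    COMMIT;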

--
D. Richard Hipp   <[hidden email]>


Re: performance statistics

Derrell Lipman
In reply to this post by D. Richard Hipp
[hidden email] writes:

> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.

In this light, I had a single query that took about 24 *hours* to complete in
sqlite (2.8.x).  I hand-optimized the query by breaking it into multiple (14,
I think) separate sequential queries, each generating a temporary table for the
next query to work with, and by building some indexes on the temporary tables.
The 24-hour query was reduced to a few *seconds*.

Query optimization is critical for large queries in sqlite, and sqlite can be
made VERY fast if you take the time to optimize the queries that are taking a
long time to execute.
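
The general shape of that decomposition, sketched here with made-up tables
and columns since the real queries aren't shown:

    -- made-up base tables, only so the sketch is self-contained
    CREATE TABLE orders(customer_id INTEGER, amount REAL);
    CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO orders VALUES(1, 700.0);
    INSERT INTO orders VALUES(1, 450.0);
    INSERT INTO customers VALUES(1, 'ACME');

    -- stage 1: materialize an intermediate result and index it
    CREATE TEMP TABLE stage1 AS
      SELECT customer_id, SUM(amount) AS total
        FROM orders
       GROUP BY customer_id;
    CREATE INDEX stage1_idx ON stage1(customer_id);

    -- stage 2: later queries join against the small indexed temp table
    -- instead of re-deriving everything inside one giant statement
    SELECT c.name, s.total
      FROM customers AS c JOIN stage1 AS s ON s.customer_id = c.id
     WHERE s.total > 1000;

    DROP TABLE stage1;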

Derrell

Re: performance statistics

Clay Dowling
In reply to this post by jalburger

[hidden email] said:
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the
> time
> is wasted.

Jason,

You might get better performance simply by wrapping the inserts in a
transaction, or wrapping a transaction around a few hundred inserts at a
time.  A transaction is a very expensive operation, and unless you group
your inserts into transactions of several inserts, you pay the transaction
price for each single insert.  That has a devastating impact on
performance no matter what database you're using, so long as it's ACID
compliant.

SQLite is a wonderful tool and absolutely saving my bacon on a current
project, but you can save yourself the trouble of rewriting your database
access by making a slight modification to your code.  This assumes, of
course, that you aren't already using transactions.

Clay Dowling
--
Simple Content Management
http://www.ceamus.com


Re: performance statistics

Denis Sbragion
In reply to this post by D. Richard Hipp
Hello DRH,

On Wed, March 1, 2006 16:53, [hidden email] wrote:
...
> If you use READ COMMITTED isolation (the default in PostgreSQL)
> then your writes are not atomic as seen by the reader.  In other
...
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

indeed.  Another trick which may be useful, and that we often used in our
applications, which sometimes have similar needs: use an explicit "status"
field to mark each record's situation.

Insert records as "processing by writer", then flip them to "ready to be
processed" with a single atomic update after a burst of inserts.  In the
reader, move all "ready to be processed" records to "to be processed by
reader" with another single atomic update, process those records, then mark
them as "processed", again with a single atomic update, when finished.  If
needed, delete the "processed" records afterwards.

This kind of approach requires just an index on the status field, and it is also
really useful when something goes wrong (application bug, power outage and so
on), because it becomes pretty easy to reprocess all the unprocessed records
just by looking at the status.  The end result should be pretty similar to
using temporary tables, but without the need for additional tables.
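
A rough sketch of that state machine in SQL; the table, column, and status
names are invented for illustration:

    CREATE TABLE msgqueue(
      id      INTEGER PRIMARY KEY,
      payload TEXT,
      status  TEXT          -- 'writing', 'ready', 'reading', 'done'
    );
    CREATE INDEX msgqueue_status_idx ON msgqueue(status);

    -- writer: insert the burst as 'writing', then publish it with one atomic update
    INSERT INTO msgqueue(payload, status) VALUES('some record', 'writing');
    UPDATE msgqueue SET status = 'ready' WHERE status = 'writing';

    -- reader: claim everything that is ready, process it, then mark it done
    UPDATE msgqueue SET status = 'reading' WHERE status = 'ready';
    SELECT id, payload FROM msgqueue WHERE status = 'reading';
    UPDATE msgqueue SET status = 'done' WHERE status = 'reading';

    -- if needed, purge processed records
    DELETE FROM msgqueue WHERE status = 'done';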

Bye,

--
        Dr. Denis Sbragion
        InfoTecna
        Tel: +39 0362 805396, Fax: +39 0362 805404
        URL: http://www.infotecna.it


Re: performance statistics

Ran-3
In reply to this post by Jay Sprenkle
My question is not about extending/improving SQLite but about having an
extra tool which helps to optimize the SQL written for SQLite.  So SQLite
indeed stays lightweight and fast, but the SQL it is fed is
automatically optimized.

Ran

On 3/1/06, Jay Sprenkle <[hidden email]> wrote:

>
> On 3/1/06, Ran <[hidden email]> wrote:
> > In light of your answer, I wonder if it is possible to implement such
> > optimizer that does the hand-optimizing automatically, but of course
> BEFORE
> > they are actually being used by SQLite.
> >
> > So the idea is not to make SQLite optimizer better, but to create a kind
> of
> > SQL optimizer that gets as input SQL statements and gives as output
> > optimized (specifically for SQLite) SQL statements.
>
> I think the concept so far has been that the programmer is the query
> optimizer so it stays fast and lightweight. ;)
>

Re: performance statistics

Andrew Piskorski
In reply to this post by D. Richard Hipp
On Wed, Mar 01, 2006 at 10:53:12AM -0500, [hidden email] wrote:
> If you use READ COMMITTED isolation (the default in PostgreSQL)

> If it is a problem,
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

Is that in fact true?  I am not familiar with how PostgreSQL
implements the SERIALIZABLE isolation level, but I assume that
PostgreSQL's MVCC would still give some advantage even under
SERIALIZABLE: It should allow the readers and (at least one of) the
writers to run concurrently.  Am I mistaken?

--
Andrew Piskorski <[hidden email]>
http://www.piskorski.com/

Re: performance statistics

Jay Sprenkle
In reply to this post by Ran-3
> My question is not about extending/improving SQLite but about having an
> extra tool which helps to optimize the SQL written for SQLite. So SQLite
> stays indeed lightweight and fast, but the SQL it is fed with is
> automatically optimized.

Like I said, the optimizer tool is the programmer.
In a lot of cases the SQL in a program doesn't change, so the best
place to optimize it would be when the program is designed, not at query time.
If anyone wrote a tool like that I'm sure it would be useful.

Re: performance statistics

Denis Sbragion
In reply to this post by Andrew Piskorski
Hello Andrew,

On Wed, March 1, 2006 17:31, Andrew Piskorski wrote:
> Is that in fact true?  I am not familiar with how PostgreSQL
> implements the SERIALIZABLE isolation level, but I assume that
> PostgreSQL's MVCC would still give some advantage even under
> SERIALIZABLE: It should allow the readers and (at least one of) the
> writers to run concurrently.  Am I mistaken?

PostgreSQL has always played the "readers are never blocked" mantra.  Nevertheless,
I really wonder how the strict serializable constraints could be satisfied
without blocking the readers while a write is in progress.

Bye,

--
        Denis Sbragion
        InfoTecna
        Tel: +39 0362 805396, Fax: +39 0362 805404
        URL: http://www.infotecna.it


Re: performance statistics

Jim Dodgen
In reply to this post by jalburger
Quoting [hidden email]:
>
> I anticipate 2 bottlenecks...
>
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

I would wrap the "bursts" in a transaction if you can (begin; and commit;
statements).

>
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
>

I do a similar thing in my application: what I do is snapshot (copy) the
database (a SQLite database is a single file) and then run my batch process
against the copy.
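
Since a SQLite database is a single file, an OS-level copy works; just as an
illustration, a similar snapshot can also be taken from inside SQL by
attaching a scratch file and copying the table into it (file and table names
here are hypothetical):

    CREATE TABLE maintable(id INTEGER PRIMARY KEY, payload TEXT);  -- stand-in for the real table

    ATTACH DATABASE 'snapshot.db' AS snap;
    CREATE TABLE snap.maintable AS SELECT * FROM maintable;
    DETACH DATABASE snap;
    -- run the batch process against snapshot.db, then delete the file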
 
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

My tables are a similar size.

>
> From what I gather in reading about SQLite, it seems to be better equipped
> for performance.  All my testing of the current system points to postgres
> (postmaster) being my bottleneck.