Database locking problems

Re: Database locking problems

Keith Medcalf

On Sunday, 20 January, 2019 17:19, Simon Slavin <[hidden email]> wrote:

>> If the issue is the deadlock, you get similar issues with all
>> DBMSes.

> I'm not perfectly sure of my logic here, but OP posted elsewhere that
> replacing BEGIN with BEGIN IMMEDIATE cures the problem.  I think this
> indicates that his problem isn't deadlock.

This indicates that the problem was indeed a deadlock.  A deadlock (in this case) occurs when two transactions are in progress and each holds a SHARED (READ) lock.  One of them (call it "A") successfully upgrades to a PENDING lock on the way to an EXCLUSIVE (WRITE) lock.  The other (call it "B"), rather than releasing its SHARED lock so that the PENDING lock can upgrade to EXCLUSIVE (WRITE) status, instead tries to acquire a PENDING lock of its own on the way to an EXCLUSIVE (WRITE) lock.

The EXCLUSIVE lock cannot be granted to A until the SHARED lock is released by B.  However, instead of releasing the SHARED lock, B has also tried to acquire a PENDING lock on the way to an EXCLUSIVE lock.  If the PENDING lock is granted to B then you will have two processes that are "deadlocked" (also known as a Mexican Standoff or any number of other colourful names) each waiting forever for the other to give up its SHARED lock so it can proceed.

Since this outcome would be double-plus-ungood, the second transaction (B) is terminated by tossing it to the wolves (SQLITE_BUSY is returned immediately to indicate the deadlock situation).  This causes B to roll back and release its SHARED lock, permitting A to proceed to completion.  (B's SHARED lock must also be forcibly released because it guards a view of the database that no longer exists once A has been permitted to proceed.)

> The problem is more about how SQLite implements BEGIN without
> IMMEDIATE.  As others have explained in this thread, SQLite does its
> best to avoid locking the database until it really needs to.  But it
> does need to obtain a shared lock, and this fails in the specific
> pattern OP uses.

> Which is why BEGIN IMMEDIATE exists.  So use BEGIN IMMEDIATE.  Done.

The procedure for avoiding deadlocks in multiprocessing code has been known for a long time.

To avoid deadlocks, concurrent code must be able to acquire all the locks it requires at once.  The "at once" part can be spread out in time by always acquiring the same set of locks in the same order.  If a process fails to acquire any one of them it must relinquish all the locks it has obtained before trying again (that is, it must rebuild its state anew from nothingness).

BEGIN IMMEDIATE acquires the SHARED and RESERVED (write-intent) locks from the get-go, and if it succeeds it cannot become a "deadlock victim", since it already holds every lock it will ever need to upgrade.  A bare BEGIN, however, can be a deadlock victim: it has acquired only a SHARED lock, and if it later tries and fails to acquire the write lock, it must relinquish all its locks (including the SHARED lock) before being permitted to try again.
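
In C, that pattern looks something like this (a sketch only; the table and the INSERT are placeholders):

    #include <sqlite3.h>

    /* Sketch: a write transaction that declares its intent up front.
       BEGIN IMMEDIATE takes the write-intent (RESERVED) lock at once,
       so this connection can never become the deadlock victim described
       above; on SQLITE_BUSY it gives everything up and the caller retries. */
    static int do_write(sqlite3 *db)
    {
        int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
        if (rc != SQLITE_OK)
            return rc;                   /* busy: back off, then try again */
        rc = sqlite3_exec(db, "INSERT INTO t VALUES (1, 2, 3)",  /* placeholder work */
                          NULL, NULL, NULL);
        if (rc == SQLITE_OK)
            rc = sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
        if (rc != SQLITE_OK)
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        return rc;
    }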

Fully opportunistic locking systems (such as those found in network filesystems or client/server database servers) avoid deadlocks by terminating a victim, chosen at random unless a more appropriate victim can be determined.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a lot about anticipated traffic volume.





Re: Database locking problems

Andrew.Goth
In reply to this post by Keith Medcalf
Keith Medcalf wrote:

> Andy Goth wrote:
>> There are two possibilities:
>>
>> 1. Transactions do work, but I'm misusing them and must learn how to be
>> more careful.  In this case, I will update documentation to properly
>> explain their use to others.
>>
>> 2. Transactions don't work, at least not for my task.  In this case, I
>> will do my best to investigate the problem and suggest a correction.
>>
>> Either way, the right thing for me to do is continue digging in.
>>
>> Going from BEGIN to BEGIN IMMEDIATE indeed fixed the test program I
>> wrote, but my application is still broken, and I don't know what's
>> different about it.  I'm working on instrumenting the fcntl() calls to
>> log the sequence of operations.
>
> Make sure you are doing nothing foolish like using "shared cache".

I investigated shared cache because the notifier sounded like a fun time, but I quickly realized the cache sharing is only between connections opened within a single process.  While I need to be prepared for multiple threads within an application, my problem at the moment is with multiple processes, each with their own connection, to a shared database file.

> Make sure that you are using the default SERIALIZED or FULLMUTEX thread
> management on all connections whether you think you know better or do
> not need it or whatever.  A few microseconds of CPU might save you many
> MONTHS of trying to locate a defect in your code.

I am using the default serialized mode.  At first I was explicitly asking for FULLMUTEX, but when I traced through the execution I saw that it was merely reinforcing the default I was getting anyway.
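
For reference, asking for it explicitly looks like this (a sketch; the flag merely restates the default of a stock SQLITE_THREADSAFE=1 build):

    #include <sqlite3.h>

    /* Open a connection in serialized (FULLMUTEX) threading mode. */
    static int open_serialized(const char *path, sqlite3 **db)
    {
        return sqlite3_open_v2(path, db,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                               SQLITE_OPEN_FULLMUTEX,   /* serialized mode */
                               NULL);
    }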

> If you think that you need to use either "shared cache" or some
> threading mode other than the default serialized, then do that ONLY
> AFTER your application is working using the defaults.  Do not fall into
> the premature optimization trap.

It's vastly more cost-effective to optimize engineer time than CPU time.

> Place all read and write statements which require a consistent database
> context in the same transaction.

Indeed, this is half the reason I'm using transactions to begin with.  The other half is to avoid having to wait for a disk sync for every row I insert.

> IF THE TRANSACTION WILL OR MAY WRITE TO THE DATABASE AT SOME POINT
> BEFORE COMMIT/ROLLBACK then start the transaction with BEGIN IMMEDIATE.
>
> [...]
>
> An attempt to UPGRADE a READ transaction to a WRITE transaction will
> DEADLOCK (and immediately return an error) if another WRITE TRANSACTION
> is pending.  (Hence always signal your intent when commencing a
> transaction by using the appropriate BEGIN syntax in order to avoid this
> state of affairs).

This is the major thing I was missing and was exactly the fault with the test program I posted.  Adding "IMMEDIATE" completely fixed it.

Oh, a bit more on that.  I still could get SQLITE_BUSY in that test program, but only if I made it fork() so many times that the cumulative sync times added up to more than the busy timeouts I was setting.  The later processes would give up if they had to wait for a hundred predecessors to insert their row.  This is expected and correct operation, proving that SQLite really is serious about syncing to disk and that it is honoring the specified timeout.  Of course, to make this happen, I was going off the deep end and forking hundreds of processes at once.  Then when I increased the timeout to match, every last one of the processes was able to get its business done without complaint despite them all starting at the same time.  Very nice work, SQLite.
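
The timeout in question is the one set through the busy handler; a one-liner, with an arbitrary 60-second value:

    /* Let each forked writer wait up to 60 seconds for its turn before
       sqlite3_step()/sqlite3_exec() give up and return SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 60000);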

> WAL journalling only works where all database user processes are local
> to each other (ie, between connections all originating from the same
> computer -- where "same computer" means all processes have access to the
> same shared memory, so that you may determine whether a SYSPLEX or NUMA
> architecture constitutes "same computer" or not, by whether or not they
> can all access the same "shared memory" region).

I'm not using WAL.  The nature of the application is that the initial database population phase is dominated by (sometimes simultaneous) writes, but after that point all access is purely read-only.

> File locking only works reliably on local filesystems.  It is unreliable
> for *ALL* remote filesystems, even if the target "remote" filesystem is
> local.

Sadly, I am going to need NFS before I'm done, but since I know NFS is a problem, I'm sticking with local filesystems until things are stable.  For this specific application, I'll bypass NFS during the population phase by keeping the database in /tmp, then I'll move it to NFS for read-only access.  I have another application on the horizon that will need to be able to write over NFS, but I think it will be able to tolerate the theoretical possibility of losing data once in a blue moon.  It'll be an append-only log where it's not a big deal if a record occasionally goes missing just because two users happened to run at the same time.  If anyone disagrees, they can have me write a concurrency server to take the place of fcntl() locking.

NFS is a really massive problem for me, more than you realize.  I had everything working just fine over a year ago as a collection of Perl and Tcl scripts, but then had to rewrite it all because of horrid bugs in the NFS driver on one particular computer: it would break named pipes, randomly unmount filesystems, and change current working directories to ***EMPTY STRING*** for no damn reason.  And that's all on top of the expected problems like broken file locking.  You'd better believe I'm upset with the admin for failing to address these problems.  Just because I've been gradually making progress working around them doesn't mean they're not problems.

The worst of it is knowing that the real fix is to just cut the number of servers in half and put the disk right inside the machine, since we don't even need NFS for what we're doing.  It's only there because it's always been there, and why fix what isn't broken?  And if a workaround exists, it's not broken!  Even if the workaround is to babysit long-running processes and be prepared to restart them over and over until a run finally completes without getting zapped by the filesystem driver for no discernible reason.

> A local filesystem is defined as one wherein the filesystem code resides
> and is executed "on the same computer" as the application.  See the
> requirement 2 above to determine the meaning of "same computer".  The
> cable between the "computer running the filesystem code" and the "block
> storage device" may have a length varying from inches to many thousands
> of miles.  The command language the "filesystem" uses to access the
> underlying block storage is irrelevant (it may be ATA, SATA, SCSI, SAS,
> iSCSI, SAC (SCSI over Avian Carriers) or what ever your heart may
> desire).

I think I'm going to recommend that the admin switch from NFS to SAC in order to improve our latency and reliability.  Plus, birds tend to leave nice audit logs wherever they go, whereas right now I can't tell what's happening to our data.

Richard Damon wrote:
> The experimental concurrency option might also help, in that a write
> operation doesn't need an exclusive lock for the whole database, but
> does introduce more points where a transaction might fail and need to be
> rolled back.

I honestly don't see any reason why using BEGIN IMMEDIATE shouldn't be the full solution for my problem.  It makes perfect sense.  Anything more is overkill and would constitute me falling in love with sophistication at the expense of getting the job done.

Yet, I'm still having problems in my full application.  I need to continue with that logger, I guess.  Today has mostly been family time, so I've not made much progress.

Re: Database locking problems

Andrew.Goth
Okay, I put in some instrumentation.  Basically I print out all database queries as they happen, along with all calls to sqlite3OsLock() and sqlite3OsUnlock() (including their lockType argument and any abnormal return code).  Also I print out how many times sqlite3InvokeBusyHandler() has to call the busy handler.
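
The lock instrumentation itself requires patching the SQLite sources, but the query half of the logging can be had without patching; a sketch using sqlite3_trace_v2():

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print each SQL statement as it begins executing. */
    static int trace_cb(unsigned type, void *ctx, void *stmt, void *sql)
    {
        (void)ctx; (void)stmt;
        if (type == SQLITE_TRACE_STMT)
            fprintf(stderr, "[sql] %s\n", (const char *)sql);
        return 0;
    }

    /* Call once, right after opening the connection. */
    static void enable_query_log(sqlite3 *db)
    {
        sqlite3_trace_v2(db, SQLITE_TRACE_STMT, trace_cb, NULL);
    }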

For the moment, the solution that is working for me is to disable syncing with PRAGMA synchronous = OFF.  This is acceptable in this particular application because a power failure or OS crash will necessitate restarting the data gathering process anyway.  I'll explain later why this change helps.

In looking at the logs, I'm seeing several unlucky locking patterns.  Let's just pick one.  Process A gets starved by process B since B does many write transactions in a row, each time successfully getting an EXCLUSIVE lock.  When B calls fdatasync() (actually fsync()), it blocks while holding EXCLUSIVE.  A wakes up, but A can't immediately get a SHARED lock because B holds EXCLUSIVE.  A goes to sleep while holding no locks, then B wakes up when the fsync() completes.  B then releases locks but grabs EXCLUSIVE again to complete its next transaction, and the cycle repeats.  A still can't get its SHARED lock, so it goes to sleep again, and then B continues to monopolize the lock.  This goes on long enough to exhaust A's patience, at which point SQLITE_BUSY propagates to the application.

If A tries to get SHARED while B holds EXCLUSIVE, I'd like A's desire for SHARED to be registered somehow so that A is granted SHARED the moment B drops EXCLUSIVE.  Then when B tries to get EXCLUSIVE a second time, it is forced to wait in PENDING until A releases SHARED.

I imagine one way to do this would be for A to block inside fcntl() rather than usleep(), so that the OS can atomically switch the lock over to A the very instant B releases it.  But as far as I know, the only API that can give fcntl() locking a timeout is setitimer() and SIGALRM.  I wouldn't want SQLite to mess with that, not without being given permission by a special preprocessor #define.  SQLITE_ENABLE_SETLK_TIMEOUT, perhaps?
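
For illustration, a minimal sketch of that idea; lock_with_timeout() is a hypothetical helper, not anything in SQLite, and it has the usual alarm race if the timer fires before fcntl() is entered:

    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; }   /* exists only to interrupt fcntl() */

    /* Block in fcntl(F_SETLKW) for at most `ms` milliseconds.
       Returns 0 on success, -1 (errno == EINTR) on timeout. */
    static int lock_with_timeout(int fd, int ms)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;                  /* no SA_RESTART, so fcntl fails with EINTR */
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv = { {0, 0}, {ms / 1000, (ms % 1000) * 1000} };
        setitimer(ITIMER_REAL, &tv, NULL);

        struct flock lk;
        memset(&lk, 0, sizeof lk);
        lk.l_type = F_WRLCK;                       /* whole-file write lock */
        lk.l_whence = SEEK_SET;

        int rc = fcntl(fd, F_SETLKW, &lk);         /* OS hands the lock over atomically */

        struct itimerval off = { {0, 0}, {0, 0} };
        setitimer(ITIMER_REAL, &off, NULL);        /* cancel the alarm */
        return rc;
    }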

In the above case, A is just trying to get SHARED.  This is a difference from the test program I posted, which only needs write transactions.  My real application needs to do a read transaction as part of database initialization in order to confirm the schema version is correct.  That's probably a waste of time, but it's in there for now.

But I think the more important difference is that B is doing many transactions in a tight loop.  I guess I could put a sleep in the loop to give other processes a chance.

I saw another pattern that I'm having a harder time explaining.  It looked almost like the process holding EXCLUSIVE got stuck waiting for the process trying to get EXCLUSIVE.  The latter would eventually time out, at which point the former would proceed.  Shrug!  I should investigate this further, but look at the time.

Okay, why does disabling syncing help?  Taking fdatasync() out of the picture means B doesn't have to go to sleep holding EXCLUSIVE.  It grabs EXCLUSIVE, tells the OS what it wants written to disk, then releases EXCLUSIVE without waiting to confirm that all the bits landed on the platters.

Re: Database locking problems

Gary R. Schmidt
On 21/01/2019 18:46, [hidden email] wrote:
> Okay, I put in some instrumentation.  Basically I print out all database queries as they happen, along with all calls to sqlite3OsLock() and sqlite3OsUnlock() (including their lockType argument and any abnormal return code).  Also I print out how many times sqlite3InvokeBusyHandler() has to call the busy handler.
>
> For the moment, the solution that is working for me is to disable syncing with PRAGMA synchronous = OFF.  This is acceptable in this particular application because a power failure or OS crash will necessitate restarting the data gathering process anyway.  I'll explain later why this change helps.
>
> In looking at the logs, I'm seeing several unlucky locking patterns.  Let's just pick one.  Process A gets starved by process B since B does many write transactions in a row, each time successfully getting an EXCLUSIVE lock.  When B calls fdatasync() (actually fsync()), it blocks while holding EXCLUSIVE.  A wakes up, but A can't immediately get a SHARED lock because B holds EXCLUSIVE.  A goes to sleep while holding no locks, then B wakes up when the fsync() completes.  B then releases locks but grabs EXCLUSIVE again to complete its next transaction, and the cycle repeats.  A still can't get its SHARED lock, so it goes to sleep again, and then B continues to monopolize the lock.  This goes on long enough to exhaust A's patience, at which point SQLITE_BUSY propagates to the application.
>
Long ago, when I was at PowerFlex, on an OS long dead (Siemens-Nixdorf
SINIX) using the MIPS RM series of processors, we saw a similar problem.

In that case it was a batch process locking out interactive users, but
*only* on the top-of-the-line machine; the lower-end, slower CPUs were fine.

One of S-N's finest engineers informed the customer that we "didn't know
anything about file-locking on UNIX(TM) systems," which was slightly
annoying, to say the least.

So we stared into space, and we drew diagrams on the whiteboard, and I
scribbled on my notepad, and stared at my locking code, and eventually
decided that it was probably down to the batch process on the faster CPU
just not giving up the lock when the interactive processes were ready to
run.  I.e. "Here's the lock...  Oh, I've still got some time left, I'll
hang on to it...  My time has run out but I've still got the lock, nyah,
nyah, nyah."

We added "sleep(0)" after the "unlock()" call.

The problem went away.

This may help you, it may not.

        Cheers,
                Gary B-)

Re: Database locking problems

Rowan Worth-2
In reply to this post by Andrew.Goth
On Mon, 21 Jan 2019 at 15:46, <[hidden email]> wrote:

> For the moment, the solution that is working for me is to disable syncing
> with PRAGMA synchronous = OFF.  This is acceptable in this particular
> application because a power failure or OS crash will necessitate restarting
> the data gathering process anyway.
>

If the data gathering process is creating a database for later consumption
by other applications, and you potentially want to use this on a network
file system, note that you want to turn synchronous back ON and run one
final transaction to update the database on each connection you've opened.

This is because sqlite ignores errors from close() - which doesn't matter
in normal circumstances because any i/o errors should have been collected
by fdatasync(). But when you're not syncing it opens the possibility of a
silently corrupt data set, because even though the file server reported the
error sqlite suppressed it.
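
Something along these lines on each such connection, before handing the file off (a sketch; the bookkeeping table is made up):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Run once per connection that wrote with synchronous = OFF,
       before the database is handed off for consumption elsewhere. */
    static void final_synced_commit(sqlite3 *db)
    {
        sqlite3_exec(db, "PRAGMA synchronous = FULL", NULL, NULL, NULL);
        sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
        sqlite3_exec(db, "UPDATE meta SET generation = generation + 1",
                     NULL, NULL, NULL);            /* hypothetical table */
        if (sqlite3_exec(db, "COMMIT", NULL, NULL, NULL) != SQLITE_OK)
            /* the synced COMMIT is where previously suppressed i/o errors surface */
            fprintf(stderr, "final sync failed: %s\n", sqlite3_errmsg(db));
    }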


> In looking at the logs, I'm seeing several unlucky locking patterns.
> Let's just pick one.  Process A gets starved by process B since B does many
> write transactions in a row, each time successfully getting an EXCLUSIVE
> lock.  When B calls fdatasync() (actually fsync()), it blocks while holding
> EXCLUSIVE.  A wakes up, but A can't immediately get a SHARED lock because B
> holds EXCLUSIVE.  A goes to sleep while holding no locks, then B wakes up
> when the fsync() completes.  B then releases locks but grabs EXCLUSIVE
> again to complete its next transaction, and the cycle repeats.  A still
> can't get its SHARED lock, so it goes to sleep again, and then B continues
> to monopolize the lock.  This goes on long enough to exhaust A's patience,
> at which point SQLITE_BUSY propagates to the application.
>

Yeah, I've seen a tight loop of write transactions starve readers before -
since they've most likely backed off to sleep for 100ms at a time, they have
to get really lucky to wake up while the writer is idle.  It doesn't strike
me as a common workload though?  Like if you need that kind of constant
throughput without disrupting readers, it may be time to consider an RDBMS.


> I saw another pattern that I'm having a harder time explaining.  It looked
> almost like the process holding EXCLUSIVE got stuck waiting for the process
> trying to get EXCLUSIVE.  The latter would eventually timeout, at which
> point the former would proceed.  Shrug!  I should investigate this further,
> but look at the time.
>

I don't understand this one, and it doesn't really add up because:

1. A process holding EXCLUSIVE has the highest lock possible; it's not
waiting for anyone
2. Why is any other process blocked trying to acquire EXCLUSIVE at this
point? It shouldn't even be able to get a SHARED/RESERVED lock.

-Rowan

Re: Database locking problems

Rowan Worth-2
In reply to this post by Keith Medcalf
On Mon, 21 Jan 2019 at 07:21, Keith Medcalf <[hidden email]> wrote:

> In DELETE or TRUNCATE (that is, all modes except WAL) a READ transaction
> in progress blocks a WRITE transaction and a WRITE transaction in progress
> blocks all other attempts to commence a transaction of any type on any
> other connection.
>

Nitpick: an active READ transaction blocks the COMMIT stage of a WRITE
transaction. An active WRITE transaction blocks all attempts to start
another WRITE transaction, and once it is ready to COMMIT blocks attempts
to start READ transactions.

-Rowan

Re: Database locking problems

James K. Lowden
In reply to this post by Keith Medcalf
On Sun, 20 Jan 2019 17:01:25 -0700
"Keith Medcalf" <[hidden email]> wrote:

> SQLite3 however has latencies on the order of microseconds

Is that really true?  Are there machines for which SQLite's throughput
can be measured in transactions per millisecond?  

I think you're referring to the latency of the function-call overhead,
as opposed to using a network interface.  But since DBMSs are basically
I/O machines, and the most interesting operations involve I/O, it's not
clear to me why function-call overhead is a relevant measure.  

> [SQLite] does not have the same opportunity for opportunistic
> behaviour as does a client/server database which may be serving
> thousands of concurrent (but different) applications.

That I think is the relevant measure.  It's the number of clients, not
latency, that makes SQLite's lock-the-whole database feasible.  On a
large machine supporting thousands of clients, the latency advantage
would be overwhelmed by the concurrency disadvantage, depending of
course on the rate of updates.  

--jkl

Re: Database locking problems

James K. Lowden
In reply to this post by Andrew.Goth
On Sun, 20 Jan 2019 21:51:19 +0000
<[hidden email]> wrote:

> > insert into t
> > select :pid, nrows, N
> > from (select 1 as N union select 2 union select 3) as cardinals
> > cross join (select :pid, count(*) as nrows from t) as how_many;
> >
> > By using a single SQL statement, you avoid a user-defined
> > transaction and any proprietary transaction qualifiers.
>
> There's too much back-and-forth between the database and my logic to
> put it all into a single statement.  Thus, transactions are
> necessary.  Transactions exist to allow multiple statements to become
> an atomic unit, so eschewing them is basically the same thing as
> admitting they don't work.  

I don't suggest you eschew user-defined transactions, and I don't admit
they don't work.  I suggest not relying on proprietary features.  BEGIN
IMMEDIATE may do what you want, but at the cost of not learning
how to accomplish the same thing using standard SQL.  

In general, the formulation

        select ...
        do stuff
        insert ...

will get you in trouble in most SQL setups, because selected data are
subject to change, whether or not the logic is in a user-defined
transaction.  If the DBMS is set up with serialized isolation,
concurrency will be compromised because you're holding a transaction
open across application logic.  That's especially true when it would be
most convenient: when "do stuff" involves user interaction, and said
user might choose that moment for a coffee break.  

The most common solution is to check your assumptions at the door,
otherwise known as "optimistic concurrency". At time of insert,
make sure the previously obtained data still obtain, i.e., that nothing
has changed meanwhile. If it has, the precondition for the update has
been violated, and you start over.  If it hasn't -- and, in practice,
usually it hasn't -- you're good to go.  

In your case, you might be able to do something like

        select count(*) as nrows from t
        do stuff
        begin transaction
        insert into t select ... where nrows = (select count(*) from t)
        verify rows affected == 1
        insert into t select ... where nrows = (select count(*) from t)
        verify rows affected == 1
        insert into t select ... where nrows = (select count(*) from t)
        verify rows affected == 1
        commit

putting whatever the relevant test is in the WHERE clause.  The test
can be simplified in SQLite to just the first insert, because with
SQLite's single-writer design, they're redundant once the transaction
has begun to take effect (ie, once other writers really are blocked).  
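
In SQLite's C API, the "verify rows affected == 1" step might look like this (a sketch only; the exact INSERT and the shape of t are assumed from the earlier test program, and the row-count check stands in for whatever the relevant precondition is):

    #include <sqlite3.h>

    /* Optimistic insert: succeeds only if the row count is still what we saw. */
    static int optimistic_insert(sqlite3 *db, int pid, int nrows)
    {
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        char *sql = sqlite3_mprintf(
            "INSERT INTO t SELECT %d, %d, 1 FROM (SELECT 1) "
            "WHERE %d = (SELECT count(*) FROM t)", pid, nrows, nrows);
        int rc = sqlite3_exec(db, sql, NULL, NULL, NULL);
        sqlite3_free(sql);
        if (rc != SQLITE_OK || sqlite3_changes(db) != 1) {
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
            return 0;                  /* precondition violated: re-read and retry */
        }
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
        return 1;
    }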

User-defined transactions weren't invented to deal with
read-write-write errors.  They were invented to enforce referential
integrity.  SQL has no syntax to update several related tables at the
same time.  If updates to different tables would leave the database in
an inconsistent state, some mechanism is needed to convert those
several SQL statements into a single all-or-nothing update.  Thus were
born user-defined transactions.  

--jkl


Re: Database locking problems

Richard Damon
In reply to this post by James K. Lowden
On 1/21/19 4:38 PM, James K. Lowden wrote:

> On Sun, 20 Jan 2019 17:01:25 -0700
> "Keith Medcalf" <[hidden email]> wrote:
>
>> SQLite3 however has latencies on the order of microseconds
> Is that really true?  Are there machines for which SQLite's throughput
> can be measured in transactions per millisecond?  
>
> I think you're referring to the latency of the function-call overhead,
> as opposed to using a network interface.  But since DBMSs are basically
> I/O machines, and the most interesting operations involve I/O, it's not
> clear to me why function-call overhead is a relevant measure.  
>
>> [SQLite] does not have the same opportunity for opportunistic
>> behaviour as does a client/server database which may be serving
>> thousands of concurrent (but different) applications.
> That I think is the relevant measure.  It's the number of clients, not
> latency, that makes SQLite's lock-the-whole database feasible.  On a
> large machine supporting thousands of clients, the latency advantage
> would be overwhelmed by the concurrency disadvantage, depending of
> course on the rate of updates.  
>
> --jkl

SQLite is NOT intended to be used over a network, and in fact there are
warnings about accessing the database file over a network file system,
as this often doesn't support the assumed locking primitives. SQLite is
intended to be accessed over the local file system. Some operations can
be on the order of microseconds if the data resides in cache, slightly
longer if the database is in flash memory, and perhaps hundreds of
microseconds to milliseconds if the database is on spinning rust and the
operation needs to access the drive to get the needed data.

--
Richard Damon


Re: Database locking problems

James K. Lowden
On Mon, 21 Jan 2019 18:12:25 -0500
Richard Damon <[hidden email]> wrote:

> Some operations can be order of microseconds if the data resides in
> cache,

Thank you, I hadn't considered that.  I was thinking that seek times on
"spinning rust" -- which is the only economically feasible technology
for large databases, typical of a database with many concurrent
users -- are still measured in milliseconds.  And the larger the
database, the lower the cache hit ratio.

> SQLite is NOT intended to be used over a network

No emphasis required.  I didn't imply network access.  I posited
multiple clients, of which there could be many on a large shared-logic
machine.  

--jkl