Inserts silently failing


Brian Macy
I have an application where multiple engines perform heavy write OLTP transactions on a repository.  The engine supports many data stores and I recently added SQLite support.

WAL is great; for small single-node workloads SQLite is very fast and simple.  I had turned off synchronous (=0) since, for this use case, having to start over from a corrupted database is not a big deal, but I have since set it back to normal mode, which lessens the problem but doesn’t make it go away.  Due to the heavy writes I have a lot of contention (my busy handler works well) and I have to force checkpoints to keep the log under control.
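
Roughly, the setup looks like the sketch below (illustrative only, not the real code; the function names, busy handler, and retry limit are made up):

    #include <sqlite3.h>

    static int busy_cb(void *arg, int n_tries)      /* made-up busy handler */
    {
        (void)arg;
        return n_tries < 100;    /* non-zero = keep retrying, zero = give up with SQLITE_BUSY */
    }

    int open_store(const char *path, sqlite3 **pdb)
    {
        int rc = sqlite3_open(path, pdb);
        if (rc != SQLITE_OK) return rc;
        sqlite3_busy_handler(*pdb, busy_cb, NULL);
        sqlite3_exec(*pdb, "PRAGMA journal_mode=WAL", NULL, NULL, NULL);
        sqlite3_exec(*pdb, "PRAGMA synchronous=NORMAL", NULL, NULL, NULL);  /* was OFF (=0) */
        return SQLITE_OK;
    }

    /* forced checkpoint to keep the -wal file under control */
    int force_checkpoint(sqlite3 *db)
    {
        int in_log = 0, done = 0;
        return sqlite3_wal_checkpoint_v2(db, NULL, SQLITE_CHECKPOINT_TRUNCATE,
                                         &in_log, &done);
    }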

Recently I removed a bunch of latency from my application.  Suddenly I’m getting odd errors where a record that was just inserted can’t be read by the same process.  In fact, even waiting till completion of the run, the record never exists.  I confirmed the insert is getting called with the correct parameters, I logged everything coming back on the error log callback, and I verified that the DB is always in auto commit.
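
(By the error log callback I mean the global one; a minimal sketch with an illustrative handler name -- it has to be registered before the library is initialized:)

    #include <sqlite3.h>
    #include <stdio.h>

    static void log_cb(void *arg, int errcode, const char *msg)
    {
        (void)arg;
        fprintf(stderr, "sqlite3 log (%d): %s\n", errcode, msg);
    }

    /* must run before sqlite3_initialize() / the first sqlite3_open() */
    int setup_error_log(void)
    {
        return sqlite3_config(SQLITE_CONFIG_LOG, log_cb, NULL);
    }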

What can I do to debug better?  Can checkpoint RESTART/TRUNCATE cause data loss?

Brian Macy


Re: Inserts silently failing

Simon Slavin-3

On 12 Feb 2017, at 2:27pm, Brian Macy <[hidden email]> wrote:

> Recently I removed a bunch of latency from my application.  Suddenly I’m getting odd errors where a record that was just inserted can’t be read by the same process. In fact, even waiting till completion of the run, the record never exists.  I confirmed the insert is getting called with the correct parameters, I logged everything coming back on the error log callback, and I verified that the DB is always in auto commit.

Are you intentionally using transactions by executing BEGIN and COMMIT?  Or is each row of each table entirely independent so it doesn’t matter?
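
For example, the explicit-transaction case looks roughly like this (a sketch; insert_rows() stands in for the application's own insert code):

    #include <sqlite3.h>

    int insert_rows(sqlite3 *db);   /* hypothetical: the application's own inserts */

    int insert_batch(sqlite3 *db)
    {
        int rc = sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        if (rc != SQLITE_OK) return rc;
        rc = insert_rows(db);
        if (rc != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);   /* nothing from the batch persists */
            return rc;
        }
        return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }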

> What can I do to debug better?

Are you checking the values returned from each of the SQLite function calls to make sure they’re SQLITE_OK?
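
Something along these lines, checking every call (a sketch, illustrative names; note that a successful INSERT step returns SQLITE_DONE, not SQLITE_OK):

    #include <sqlite3.h>
    #include <stdio.h>

    int insert_one(sqlite3 *db, sqlite3_stmt *stmt, int value)
    {
        int rc;
        sqlite3_reset(stmt);

        rc = sqlite3_bind_int(stmt, 1, value);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "bind failed: %s\n", sqlite3_errmsg(db));
            return rc;
        }
        rc = sqlite3_step(stmt);
        if (rc != SQLITE_DONE) {          /* SQLITE_DONE means the INSERT finished */
            fprintf(stderr, "step failed: %s\n", sqlite3_errmsg(db));
            return rc;
        }
        return SQLITE_OK;
    }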

Simon.

Re: Inserts silently failing

Keith Medcalf
In reply to this post by Brian Macy


On Sunday, 12 February, 2017 07:28, Brian Macy <[hidden email]> said:

> Recently I removed a bunch of latency from my application.  Suddenly I’m
> getting odd errors where a record that was just inserted can’t be read by
> the same process.  

SQLite does not do "per process" locking/transactions -- it does locking/transactions/isolation per connection (the handle returned by one of the sqlite3_open* calls).  You are not "spawning" new processes that share the same connection handle, are you?
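
In other words, each engine/worker wants its own handle, e.g. (a sketch, illustrative names):

    #include <sqlite3.h>

    /* every worker opens its own connection; locks, transactions and the
       autocommit flag belong to this handle, not to the process */
    int open_worker_connection(const char *path, sqlite3 **pdb)
    {
        return sqlite3_open_v2(path, pdb,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    }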

> In fact, even waiting till completion of the run, the
> record never exists.  I confirmed the insert is getting called with the
> correct parameters, I logged everything coming back on the error log
> callback, and I verified that the DB is always in auto commit.

Again, autocommit is not per database (nor per process) but is per connection.  Does the database connection close properly (return SQLITE_OK)?
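
Both are easy to check on the connection itself; a rough sketch (illustrative function name):

    #include <sqlite3.h>
    #include <stdio.h>

    void check_and_close(sqlite3 *db)
    {
        if (!sqlite3_get_autocommit(db))
            fprintf(stderr, "connection still has an open transaction\n");

        int rc = sqlite3_close(db);   /* SQLITE_BUSY if statements are left unfinalized */
        if (rc != SQLITE_OK)
            fprintf(stderr, "close failed: %s\n", sqlite3_errstr(rc));
    }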



