capturing and testing a hot journal

Charles Parnot
Hi all,

For testing purposes of our application (a Mac app), I am generating what I thought would be a database with a “hot” journal using this approach (on an existing database):

- open the database (and PRAGMA journal_mode = TRUNCATE;)
- open a transaction: BEGIN IMMEDIATE TRANSACTION;
- add some rows: INSERT etc…
- **make a copy of the db and journal files** (while still hot?)
- close the transaction

Then I open the copied database+journal (naming the files appropriately), again in TRUNCATE journal mode. As expected, the content of the database does not include the inserted rows. However, the journal file is not emptied, even after closing the database. Based on the documentation (http://www.sqlite.org/lockingv3.html#hot_journals), I would have expected the journal file to be emptied because it is “hot”.

There are 2 options here:

- the journal file is actually not “hot” and I misunderstood the conditions that make it hot
- there is a bug in SQLite

Obviously, I strongly suspect I am misunderstanding things, and don’t think it is an SQLite bug. Despite intensive Googling and more testing, I am not sure what makes the journal non-hot.

Thanks for your help!

Charles


NB: You might be wondering why I am doing the above. I realize SQLite already has much more advanced tests for “hot” db+journals (running custom versions of filesystems to generate all kinds of edge cases). The test case I am generating is just for a simple edge case of our Dropbox-based syncing (see: https://github.com/cparnot/PARStore and http://mjtsai.com/blog/2014/05/21/findings-1-0-and-parstore/). For a given database file, there is only one device that can write to it, all other devices being read-only (not in terms of filesystem, but SQLite-wise). But it is possible that Dropbox will copy a database and journal files that are not consistent with each other, which can create problems. For instance, a read-only device could open the (still old) database with a new non-empty journal file, SQLite would then empty that journal file, and Dropbox could in turn propagate that emptied journal back and empty the writer's journal file before the writer client had finished the transaction. I am not (yet) going to test for and try to protect against more complicated (and rarer) edge cases where the database is in the middle of writing a transaction (which I suspect will only happen in case of crashes, not because of Dropbox, and in that case the recovery of the database by the read-only client would actually be beneficial).

--
Charles Parnot
[hidden email]
http://app.net/cparnot
twitter: @cparnot

Your Lab Notebook, Reinvented.
http://findingsapp.com


Re: capturing and testing a hot journal

Richard Hipp
On Sat, Jul 12, 2014 at 4:37 AM, Charles Parnot <[hidden email]>
wrote:

> Hi all,
>
> For testing purposes of our application (a Mac app), I am generating what
> I thought would be a database with a “hot” journal using this approach (on
> an existing database):
>
> - open the database (and PRAGMA journal_mode = TRUNCATE;)
> - open a transaction: BEGIN IMMEDIATE TRANSACTION;
> - add some rows: INSERT etc…
> - **make a copy of the db and journal files** (while still hot?)
>

Normally you need to either (1) reduce the page cache size using "PRAGMA
cache_size=5" or else (2) do a VERY large transaction, so that SQLite
spills content to disk; otherwise the technique above will not produce a
hot journal.
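
A minimal sketch of that in Python; the database path, table name and row
count below are illustrative, not from this thread:

import shutil
import sqlite3

DB = "test.db"

# autocommit mode, so BEGIN/COMMIT/ROLLBACK are under our control
con = sqlite3.connect(DB, isolation_level=None)
con.execute("PRAGMA journal_mode = TRUNCATE")
con.execute("PRAGMA cache_size = 5")   # tiny page cache forces an early spill
con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload BLOB)")

con.execute("BEGIN IMMEDIATE")
# Insert far more data than five pages can hold, so SQLite writes the
# original page images out to test.db-journal while the transaction is open.
con.executemany("INSERT INTO t (payload) VALUES (?)",
                [(b"x" * 1024,) for _ in range(10000)])

# Capture the pair while the transaction is still open; the copied journal
# must keep the "<database name>-journal" naming to be usable later.
shutil.copy(DB, "captured.db")
shutil.copy(DB + "-journal", "captured.db-journal")

con.execute("ROLLBACK")   # or COMMIT; the captured pair is what matters
con.close()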



--
D. Richard Hipp
[hidden email]

Re: capturing and testing a hot journal

Simon Slavin
In reply to this post by Charles Parnot

On 12 Jul 2014, at 9:37am, Charles Parnot <[hidden email]> wrote:

> - the journal file is actually not “hot” and I misunderstood the conditions that make it hot

That one.  The files on disk aren't 'hot' (as I think you mean it) while you're in a transaction.

Your file system is not pushing journal changes at the file level.  It doesn't need to do that while a transaction is open, since while the transaction is open the database is locked so nothing else can use it anyway, and if your app crashes the whole transaction will be ignored.

SQLite could be written to push transactions to the journal file on each change, but that would involve lots of writing to disk, so it would make SQLite slower, and for no gain.

> [snip] The test case I am generating is just for a simple edge case of our Dropbox-based syncing


Yes, Dropbox can be a problem for open SQLite databases.  Because it is a file-level duplication system which does not understand locks, there's no good way to make Dropbox work with open SQLite databases, or as a mediator for concurrent multi-user changes to a database.  I had to explain to some users that a database change is not 'safe' until the database is closed.

One thing that's worth testing is to make sure that recovery after crashes always yields a database with either pre- or post-transaction data rather than something corrupt which can't be opened.  I don't know much about how Dropbox works.  Could it perhaps end up with a database file from one computer but a journal file from another?

Simon.

Re: capturing and testing a hot journal

Kees Nuyt
On Sun, 13 Jul 2014 18:00:59 +0100, Simon Slavin <[hidden email]>
wrote:

> I had to explain to some users that a database
> change is not 'safe' until the database is closed.

As far as I know, a database change is safe after a successful COMMIT.
Commit also releases locks.
 
> One thing that's worth testing is to make sure that recovery
> after crashes always yields a database with either pre- or post-
> transaction data rather than something corrupt which can't be
> opened.  I don't know much about how DropBox works.  Could it
> perhaps end up with a database file from one computer but
> journal file from another ?

Indeed, or a journal file and a database file of different
points in time.

One could expect Dropbox to respect locks, but it doesn't seem
to do that. It also doesn't seem to synchronize a directory in
an atomic fashion, which would be necessary to maintain
consistency for SQLite or any other software that works on
time-coordinated sets of files.

In my opinion Dropbox should not be used on directories with
SQLite databases at all. It would be better to only allow
Dropbox access to directories with backups, and to use an
application-level synchronisation/recovery mechanism to
reconstruct the main database from the backup when needed.
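
A minimal sketch of that arrangement in Python, using the sqlite3 module's
backup API (Connection.backup, available since Python 3.7); the paths are
purely illustrative:

import sqlite3

LIVE_DB = "live.db"              # the working database, outside the Dropbox folder
SNAPSHOT = "Dropbox/backup.db"   # the copy that Dropbox is allowed to see

def publish_snapshot():
    src = sqlite3.connect(LIVE_DB)
    dst = sqlite3.connect(SNAPSHOT)
    try:
        # Connection.backup copies the database page by page under the
        # library's own locking, so SNAPSHOT ends up as a consistent,
        # self-contained copy of LIVE_DB.
        src.backup(dst)
    finally:
        dst.close()
        src.close()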

--
Regards,

Kees Nuyt


Re: capturing and testing a hot journal

Simon Slavin

On 14 Jul 2014, at 11:19am, Kees Nuyt <[hidden email]> wrote:

> On Sun, 13 Jul 2014 18:00:59 +0100, Simon Slavin <[hidden email]>
> wrote:
>
>> I had to explain to some users that a database
>> change is not 'safe' until the database is closed.
>
> As far as I know, a database change is safe after a successfull COMMIT.
> Commit also releases locks.

That's what the documentation says, and it's a safe way to operate if all your access to the file is via one API.  Unfortunately, the drivers for many storage media lie to the operating system and do not flush changes to disk when told to.  On a test system running Windows 98, using a C program that wrote a text file, I was able to prove that doing all the locking and flushing the documentation required still did not properly update the file on disk.  However, the file was always updated within a few seconds after the file was closed, so I have used that as a yardstick ever since.

Simon.

Re: capturing and testing a hot journal

Kees Nuyt

On Mon, 14 Jul 2014 12:09:46 +0100, Simon Slavin <[hidden email]>
wrote:

> That's what the documentation says, and it's a safe way to
> operate if all your access to the file is via one API.
> Unfortunately, the drivers for many storage media lie to the
> operating system and do not flush changes to disk when told to.
> On a test system running Windows 98, using a C program writing a
> text file, I was able to prove that doing all the locking and
> flushing the documentation required still did not properly
> update the file on disk.  However, the file was always updated
> by a few seconds after the file was closed so I have used that
> as a yardstick ever since.

Aha, I see. Yes, ill-behaving filesystems can do that.
The question is whether experiences on Windows 98 are still
relevant for rules of thumb in 2014.

--
Regards,

Kees Nuyt

Re: capturing and testing a hot journal

Simon Slavin

On 14 Jul 2014, at 12:53pm, Kees Nuyt <[hidden email]> wrote:

> Aha, I see. Yes, ill-behaving filesystems can do that.
> The question is whether experiences on Windows 98 are still
> relevant for rules of thumb in 2014.

I mentioned Windows 98 to let you know how out-of-date my test was.  I no longer have a job which involves testing like that so I can't do an updated one.

At least one of the things lying about updates was the hard disk driver.  (Samsung, if I recall correctly, though I doubt any competing manufacturer was any better.)  I bet they're still using more or less the same driver.  This was, of course, a hard disk intended for use in a perfectly normal desktop computer, not for use in a server.

By the way, for anyone reading this who might want to know why everything lies: doing proper updates, and checking that the hardware has actually made the change before the software moves on, slows down the operation of your computer /a lot/.  For a dedicated server you might want it.  For your desktop computer you really don't.  I once set up server-class hardware with a server-class hard disk with the jumper settings set to "Yes, really wait for writes to happen before acknowledging them."  The computer took over 10 minutes to boot and another 10 minutes before I had Word loaded.  Once it was running with a reasonable number of apps open, I think I managed to get almost three characters a second when typing a Word document.

Simon.

Re: capturing and testing a hot journal

Michael Schlenker
In reply to this post by Kees Nuyt
On 14.07.2014 13:53, Kees Nuyt wrote:

> Aha, I see. Yes, ill-behaving filesystems can do that.
> The question is whether experiences on Windows 98 are still
> relevant for rules of thumb in 2014.
>
As most people probably keep the write caches on their hard drives 'ON':
yes, they are. If even your hardware lies in the name of performance, you
should probably be a little paranoid. So it is not just filesystems; it is
hardware too.

So if you do a successful commit and the OS and hardware don't lie to
you, your change is safe. But there are lies of varying magnitude at
work. So if it is really important, wait a few minutes until the OS has
surely flushed all its buffers and the HDDs have done the same.
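
For what it's worth, SQLite itself exposes two knobs for pushing writes
through those caches: PRAGMA synchronous and, on OS X, PRAGMA fullfsync
(which requests F_FULLFSYNC). A minimal sketch, not from this thread; they
narrow the window described above but cannot close it if the hardware
still lies:

import sqlite3

con = sqlite3.connect("app.db")           # illustrative path
con.execute("PRAGMA synchronous = FULL")  # sync journal and database at the critical moments
con.execute("PRAGMA fullfsync = ON")      # request F_FULLFSYNC; only has an effect on OS X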

Michael

--
Michael Schlenker
Software Architect

CONTACT Software GmbH           Tel.:   +49 (421) 20153-80
Wiener Straße 1-3               Fax:    +49 (421) 20153-41
28359 Bremen
http://www.contact.de/          E-Mail: [hidden email]

Registered office: Bremen
Managing directors: Karl Heinz Zachries, Ralf Holtgrefe
Registered in the commercial register of the Amtsgericht Bremen under HRB 13215

Re: capturing and testing a hot journal

Drago, William @ CSG - NARDA-MITEQ
In reply to this post by Charles Parnot
This may be a bit simplistic, but it does give me a reasonable degree of confidence that hot journal files are being handled correctly in my application.

I simply put a 1/0 on the line before my commit to purposely crash my app. Sure enough, there's a journal file after the crash (I have a rather large transaction consisting of, among other things, about 35 inserted rows, each containing a blob).

When I restart my app it looks for the presence of a journal file and will open and read the db so that SQLite can deal with it. It also displays a message letting the user know that something went wrong during the last run.

I do this with a test db of course, not the real one.
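
A minimal sketch of that test in Python; the database path, table name and
blob size are illustrative, and a hard os.abort() stands in for the 1/0,
since in Python an ordinary exception would let the connection roll back
cleanly instead of leaving the journal behind:

import os
import sqlite3

def crash_mid_transaction(path="crashtest.db"):
    con = sqlite3.connect(path, isolation_level=None)   # autocommit; we manage BEGIN/COMMIT
    con.execute("CREATE TABLE IF NOT EXISTS measurements "
                "(id INTEGER PRIMARY KEY, blob_data BLOB)")
    con.execute("BEGIN IMMEDIATE")
    con.executemany("INSERT INTO measurements (blob_data) VALUES (?)",
                    [(os.urandom(65536),) for _ in range(35)])   # ~35 blob rows
    os.abort()                 # die before COMMIT; the journal stays on disk
    con.execute("COMMIT")      # never reached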

-Bill





Re: capturing and testing a hot journal

mm.w
Seriously? You should fix and solve why the software crashed in the first
place; reality check, please.

"But it is possible that Dropbox will copy a database and journal
files that are not consistent with each other, which can create
problems"

Fix the sync process; that's easy.

Best.




Re: capturing and testing a hot journal

William Drago
On 7/14/2014 6:38 PM, mm.w wrote:
> seriously? you should fix and solve why the soft crashed in the first
> place, reality check please.

The software doesn't crash on its own; I'm forcing it to
crash with a divide-by-zero for test purposes. This doesn't
happen in actual use and there's no reason other than a
power failure for a transaction to not commit successfully.
But that doesn't mean I shouldn't handle a failed
transaction if it ever does happen.


-Bill



Re: capturing and testing a hot journal

Simon Slavin

On 15 Jul 2014, at 2:20am, William Drago <[hidden email]> wrote:

> The software doesn't crash on its own; I'm forcing it to crash with a divide-by-zero for test purposes. This doesn't happen in actual use and there's no reason other than a power failure for a transaction to not commit successfully. But that doesn't mean I shouldn't handle a failed transaction if it ever does happen.

If all you're trying to do is spot crashes then you don't have to implement your own semaphore system or locking system.  Use

PRAGMA journal_mode = DELETE

which is the default.  Then you know that if a journal file exists, either a process is in the middle of a transaction, or a process that was in the middle of a transaction has crashed.

All you need to do is check to see if a file exists with the name of the journal file.  Presumably you'd be wanting to do this when your application starts up, before it opens the database.
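
A minimal sketch of that startup check in Python, assuming a rollback
journal (DELETE or TRUNCATE mode, not WAL) and an illustrative warn()
callback supplied by the application:

import os
import sqlite3

def open_with_crash_check(path, warn):
    journal = path + "-journal"
    had_journal = os.path.exists(journal)   # leftover journal at startup
    con = sqlite3.connect(path)
    # The first read forces SQLite to deal with a hot journal by rolling it back.
    con.execute("SELECT count(*) FROM sqlite_master").fetchone()
    if had_journal:
        warn("A journal file was present at startup: either another process "
             "was mid-transaction, or a previous run crashed and its "
             "unfinished transaction has now been rolled back.")
    return con

# e.g. con = open_with_crash_check("app.db", warn=print)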

Simon.

Re: capturing and testing a hot journal

mm.w
OK, sorry, I did not read it all through. Could you simply SHA-1 the local
and the remote copy, and if they differ, commit again?
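
For what the comparison itself looks like, a minimal Python sketch with
illustrative paths; it only answers "are the two files byte-identical?",
not whether re-committing is the right reaction:

import hashlib

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. sha1_of("local/app.db") == sha1_of("Dropbox/app.db")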



Re: capturing and testing a hot journal

mm.w
Yes, indeed, the "journal tweaking" would work solely for this special file
case. Comparing local and remote is how git works, for instance, like many
other sync programs. I don't know the API, but does the box thing notify
you "on start transaction" and then "on close"? If not, it sucks.



Re: capturing and testing a hot journal

mm.w
I forgot: if they differ, commit again, or restore if the local copy is unusable (the lossy scenario).



Re: capturing and testing a hot journal

Simon Slavin
In reply to this post by mm.w

On 15 Jul 2014, at 2:53pm, mm.w <[hidden email]> wrote:

> OK, sorry, I did not read it all through. Could you simply SHA-1 the local
> and the remote copy, and if they differ, commit again?

You won't have anything to commit.  If your application really had crashed it wouldn't have any transaction data to commit.  If your application had not crashed the transaction would always have worked.

Anything that might sync a file automatically can make a mistake like this:

1) computer A and computer B both have local copies of the database open
2) users of computer A and computer B both make changes to their local copies
3) computer A and computer B both close their local copies

Now the automatic syncing routine kicks in and notices that both copies have been modified since the last sync.  Whichever copy it chooses, the changes made to the other copy are still going to be lost.

Also, since the sync process doesn't understand that the journal file is intimately related to the database file, it can notice one file was updated and copy that across to another computer, and leave the other file as it was.  While SQLite will notice that the two files don't match, and will not corrupt its database by trying to update it with the wrong journal, there's no way to tell whether you are going to get the data before the last transaction was committed or after.

My recommendation to the OP is not to do any programming around this at all, since whatever programming you come up with will not be dependable.  The routines for checking unexpected journal files in SQLite are very clever.  Just leave SQLite to sort out rare crashes by itself, which it does pretty well.

If, on the other hand, crashes aren't rare then I agree with the other poster to this thread who said that time is better spent diagnosing the cause of your crashes.

Simon.

Re: capturing and testing a hot journal

mm.w
Simon, your design idea does not reflect any reality; this is weak. There
is a lack of experience on the topic, and we can feel it.

"You won't have anything to commit.  If your application really had crashed
it wouldn't have any transaction data to commit.  If your application had
not crashed the transaction would always have worked."

Nope: you can have a partial upload, a broken socket pipe, et cetera. You
are only assuming that a version of the file is not already remote, and
assuming that after a crash you might be able to recover the local copy
anyway.


There are two scenarios to check:

local = remote after any network transaction
local = remote

After an incident:
 + if there is no remote copy, test the integrity of the local copy
 + if there is a remote copy, make sure both are safe
 + if there is only a remote copy, restore/force a sync, as you got an interrupt (it happens
with box)

1. The network flow could be interrupted; no power failure is needed for
that to happen. You can also face the case of an undetected broken pipe,
which is the reason you need to be notified by the network pooler API you
use.

2. The journal tweaking only concerns the SQLite file and is specific to
it, so it is the wrong design. Make it work for anything by using the
"common regular" system of hashing/signing local against remote to ensure
the integrity of the data; at least, that is the only purpose of this
discussion: how can I be sure, whatever happens, that my data is in good
shape somewhere.








Re: capturing and testing a hot journal

mm.w
But that does not mean that, in rare cases, one of the files cannot be
busted; that is the reason for "backups": being able to recover the last
seen good state (the lossy case; shit happens).



Re: capturing and testing a hot journal

R Smith
In reply to this post by mm.w

On 2014/07/15 19:06, mm.w wrote:
> Simon, your design idea does not reflect any reality; this is weak. There
> is a lack of experience on the topic, and we can feel it.

Strange, I feel nothing of the sort, and the only weak thing I can see involves the correlation between the computer and social skill
sets you wield. Maybe you had a very different use-case in mind than what is normal for SQLite? That is of course allowed.

> "You won't have anything to commit.  If your application really had crashed
> it wouldn't have any transaction data to commit.  If your application had
> not crashed the transaction would always have worked."
>
> nope you can have a partial upload, a broken socket pipe et cetera, and you
> only assume a version of the file is not already remote and assume that
> after crash you might be able to recover local anyway.

Partial uploads... broken pipes... these are all networking-related issues and have nothing to do with file commitment in any SQLite
code. When you make a server-client system which uploads a stream or downloads it, or in any way sends it somewhere or manages
synchronicity, it is the responsibility of either the client or the server to commit those data bits to disk, not the pipe's
responsibility. If the pipe dies halfway, the app will know it, and no amount of half-commits can happen. The only time the SQLite
engine can "break" a file by not completing a commit is if the program itself crashes or the physical media errors out, just as
Simon said, none of which involve programmed-logic solutions. Report the error and die: this is the way the Force guides us.


> There are two scenarios to check:
>
> local = remote after any network transaction
> local = remote
>
> After an incident:
>   + if there is no remote copy, test the integrity of the local copy
>   + if there is a remote copy, make sure both are safe
>   + if there is only a remote copy, restore/force a sync, as you got an interrupt (it happens
> with box)
>
> 1. The network flow could be interrupted; no power failure is needed for
> that to happen. You can also face the case of an undetected broken pipe,
> which is the reason you need to be notified by the network pooler API you
> use.
>
> 2. The journal tweaking only concerns the SQLite file and is specific to
> it, so it is the wrong design. Make it work for anything by using the
> "common regular" system of hashing/signing local against remote to ensure
> the integrity of the data; at least, that is the only purpose of this
> discussion: how can I be sure, whatever happens, that my data is in good
> shape somewhere.

This is about establishing whether a file transfer between some syncing services is successful and current; it has not a single thing to
do with SQLite's ability to commit changes to the file or to judge the need for a roll-back. When SQLite starts, the file is either
broken or not, end of. This should be checked at a higher level and has no bearing on anything to do with SQLite.



Re: capturing and testing a hot journal

mm.w
Yes, that's exactly what I said; thank you for confirming, my dear 8)

" If the pipe dies halfway then the app would know it" sorry LOL

