Atomic DELETE index optimisation?


Atomic DELETE index optimisation?

Dinu
Hi all,
It became apparent to me from performance measurements that the DELETE
operation is very slow when operating on large recordsets with indexes
involved. My deduction is that SQLite updates the indexes for every
deleted row, which is painstakingly slow...

Since the DELETE should be atomic, the index update could also be
atomic, and a mass pruning of the index tree is hugely faster than
individual removals.

My question is: is my deduction correct? Is there any way to fix/improve
this in userland? Or are there prospects for this to be improved in
SQLite in the foreseeable future?

Thanks,
Dinu

_______________________________________________
sqlite-users mailing list
[hidden email]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Reply | Threaded
Open this post in threaded view
|

Re: Atomic DELETE index optimisation?

Dinu
Update: we are also using WAL; I have noticed that during a huge delete, the
WAL also grows huge. Could this also be a reason for the slowness, namely that
SQLite duplicates every page that hosts a delete? Is there any way to
work around this?




Re: Atomic DELETE index optimisation?

Keith Medcalf

Your page cache is probably too small to contain the working set of database pages that require modification.  There should be no difference between "deleting" and "inserting": both require changes to all applicable B+ and B-Trees.

Though of course, it can be VERY slow if you have secure_delete enabled, since that requires overwriting the row rather than merely removing it from the applicable trees.

The WAL file contains images of changed pages AFTER they are changed.  So firstly, the fact that you see pages added to the WAL file means the changed pages cannot be maintained in the page cache (it is too small).  Secondly, it can mean that you have the database and WAL file stored on slow media (i.e., a 360 RPM floppy disk rather than a 2600 MB/s NVMe disk).

One of the resources on the computer is being consumed 100%.  Once you figure out what that resource is, you can address the issue.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a lot about anticipated traffic volume.



Re: Atomic DELETE index optimisation?

Simon Slavin-3
In reply to this post by Dinu
On 17 Dec 2017, at 8:53pm, Dinu Marina <[hidden email]> wrote:

> It became apparent to me from performance measurements that the DELETE operation is very slow when operating on large recordsets with indexes involved. My deduction is that SQLite updates the indexes for every deleted row, which is painstakingly slow...
>
> Since the DELETE should be atomic, the index update could also be atomic, and a mass pruning of the index tree is hugely faster than individual removals.
>
> My question is: is my deduction correct? Is there any way to fix/improve this in userland? Or are there prospects for this to be improved in SQLite in the foreseeable future?

The data for the table is held in one tree, occupying one set of pages.
The data for each index is held in a separate tree, occupying its own set of pages.
So if you have N indexes on the table, N+1 sets of pages need to be updated.

If a DELETE deletes more than one row, then SQLite does the caching you’d expect when working out which pages to update.  SQLite does this efficiently, and work has gone into optimizing it for speed.

If the timing of DELETE statements is important to you, use journal mode of PERSIST, MEMORY or WAL.  You might want to compare PERSIST and WAL and find out which performs better for you.  Remember that journal mode is saved with the database: you only need to set it once for the file, and SQLite will retrieve the setting when it reopens the file.

If you are doing more than one DELETE at a time, enclose them all in a transaction:

BEGIN
DELETE …
DELETE …
COMMIT

This can be far faster than individual commands, since the 'dirty' pages are written back once at the end of the transaction rather than the end of each command.
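The batching advice above can be sketched with Python's sqlite3 module; the table and column names here are invented for illustration:

```python
import sqlite3

# A toy stand-in for the real schema; "items" and "flag" are invented names.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.execute("CREATE INDEX idx_items_flag ON items(flag)")

conn.execute("BEGIN")
conn.executemany("INSERT INTO items (flag) VALUES (?)",
                 [(i % 2,) for i in range(10000)])
conn.execute("COMMIT")

# Several DELETEs inside one explicit transaction: the dirty pages are
# written back once at COMMIT instead of at the end of every statement.
conn.execute("BEGIN")
conn.execute("DELETE FROM items WHERE flag = 1 AND id <= 5000")
conn.execute("DELETE FROM items WHERE flag = 1 AND id > 5000")
conn.execute("COMMIT")

remaining = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(remaining)  # the 5000 flag=0 rows survive
```

With autocommit mode (`isolation_level=None`) the BEGIN/COMMIT pairs map one-to-one onto SQLite transactions, matching the SQL shown above.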

On 17 Dec 2017, at 9:06pm, Dinu Marina <[hidden email]> wrote:

> Update: we are also using WAL; I have noticed during a huge delete, the WAL also grows huge. Could this also be a reason for slowness, that SQLite duplicates every block that hosts a delete? Is there any way to work around this?

Correct.  While a transaction with DELETE is being carried out, SQLite must temporarily store both the old and the new versions of all pages affected.  This is to allow recovery if your computer crashes or loses power before the transaction is complete.

Simon.

Re: Atomic DELETE index optimisation?

Dinu
In reply to this post by Keith Medcalf
Ok, so to address the size of the problem properly:

We have ~32M records out of which we are deleting about ~24M.
The raw size of the DB is about 20G out of which 15G goes away. Under these
circumstances any page cache becomes irrelevant.
The delete operation, which is a single delete run from the sqlite3 tool, is
running for 3 hours now.
The WAL size is in excess of 9G atm.
The WAL size is growing ever more slowly (it was about 10M/s, now it's 1M every 2s,
and slowing down). This indicates to me a non-linear process, which I can link
only to the B-trees; they are the only non-linear component I can think of that
could cause this slowing-down-to-a-crawl. The CPU is pegged while the HDD
is nearly idle, which also hints at the same issue.

In reply to your remarks:
- I understand the B-trees need to be modified. However, if you prune
(and maybe rebalance) and write to disk one node at a time, that is hugely
stressful and inefficient when you are pruning half of a 32M-node tree.
Since the operation is atomic, the indexes could be updated one time and one
time only.

So, what to do? I think this delete may never end... in the time it took to
write this reply, the WAL growth has slowed to 1M every 4s.



--
Sent from: http://sqlite.1065341.n5.nabble.com/

Re: Atomic DELETE index optimisation?

Clemens Ladisch
Dinu wrote:
> The WAL size is in excess of 9G atm.
> The WAL size is growing ever more slowly (it was about 10M/s, now it's 1M every 2s,
> and slowing down). This indicates to me a non-linear process, which I can link
> only to the B-trees; they are the only non-linear component I can think of that
> could cause this slowing-down-to-a-crawl.

The WAL file contains the new version(s) of any modified page; for every page
access, the database has to search in the WAL for the newest version.

Consider using DELETE/TRUNCATE/PERSIST journal mode instead; any changed
pages are copied to the journal before being modified, so there is never
more than one copy of a page in the journal.  And during normal operation,
the database never has to look at the contents of an old page.

Alternatively, try copying all the other rows to a new table, and dropping
the old one.


Regards,
Clemens

Re: Atomic DELETE index optimisation?

Simon Slavin-3
In reply to this post by Dinu


On 17 Dec 2017, at 10:01pm, Dinu <[hidden email]> wrote:

> The CPU is pegged while the HDD
> is nearly idle, which also hints at the same issue.

Apparently your program/OS is trying to do everything in cache.  This may be self-defeating for this operation.

Your description of what’s happening is reasonable, and suggests SQLite is working the way it should do.

> We have ~32M records out of which we are deleting about ~24M.

It might be faster to create a new table with the rows which survive, then delete the whole old table.

1) Rename old table.
2) CREATE new table.
3) Copy the rows which survive from the old table.  Depending on your selection criteria you might be able to use

INSERT INTO MyTable SELECT * FROM OldTable WHERE survive = 1;

4) DROP OldTable
5) Create the indexes you need on MyTable

If you have FOREIGN KEYs it may be problematic to do this.
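A minimal sketch of the five steps above in Python's sqlite3; the schema, the `survive` column, and the index name are hypothetical stand-ins:

```python
import sqlite3

# Hypothetical schema: MyTable with a "survive" flag; index name invented.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, survive INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_survive ON MyTable(survive)")
conn.executemany("INSERT INTO MyTable (survive, payload) VALUES (?, ?)",
                 [(1 if i % 4 == 0 else 0, "row %d" % i) for i in range(1000)])

conn.execute("BEGIN IMMEDIATE")
conn.execute("ALTER TABLE MyTable RENAME TO OldTable")              # 1) rename old table
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, "
             "survive INTEGER, payload TEXT)")                      # 2) create new table
conn.execute("INSERT INTO MyTable SELECT * FROM OldTable "
             "WHERE survive = 1")                                   # 3) copy the survivors
conn.execute("DROP TABLE OldTable")                                 # 4) drop old table (and its index)
conn.execute("CREATE INDEX idx_survive ON MyTable(survive)")        # 5) recreate indexes
conn.execute("COMMIT")

kept = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(kept)  # 250 surviving rows
```

Dropping the old table at step 4 also drops its indexes, which frees the original index name for reuse at step 5.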

Simon.

Re: Atomic DELETE index optimisation?

Dinu
In reply to this post by Dinu
By "one time only" I mean that, in my understanding, the way most DBs handle a
DELETE is this: cache the ROWIDs while deleting data rows from the main table and
from the indexes, then, once all ROWIDs are collected, sort the ROWID stream
and prune the trees from the sorted stream. This is both highly efficient
(just like inserts, deletes of already-ordered records are very efficient)
and highly parallelizable.




Re: Atomic DELETE index optimisation?

Dinu
For the table-swap operation, that is out of the question; the DB is running
on a live system that has multiple clients. Foreign keys or not, any
structure change breaks all clients (the "database schema has changed" error).
Too high a price to pay for a delete that may happen routinely.




Re: Atomic DELETE index optimisation?

Simon Slavin-3


On 17 Dec 2017, at 10:18pm, Dinu <[hidden email]> wrote:

> For the table swap operation, that is out of the question, the DB is running
> on a live system that has multiple clients. Foreign keys or not, any
> structure change crashes all clients (DB structure has changed). Too high a
> price to pay for a delete that may happen routinely.

Okay.  In that case you might try this.  The reason it may be fast is that SQLite has DELETE without a WHERE clause optimized (the "truncate optimization").  It happens very quickly.

BEGIN IMMEDIATE
create a temporary table with the same columns as MyTable, but no indexes
copy the rows you need to keep to the temporary table
DELETE FROM MyTable
copy all rows from the temporary table back to the original table
COMMIT

As with the last suggestion, the only reason I think these may be faster is that you are keeping less than half the rows in the original table.
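A sketch of this temp-table approach, assuming a hypothetical `MyTable` with a `keep` flag marking the survivors:

```python
import sqlite3

# Hypothetical MyTable with a "keep" flag marking the rows to retain.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, keep INTEGER)")
conn.executemany("INSERT INTO MyTable (keep) VALUES (?)",
                 [(1 if i % 10 == 0 else 0,) for i in range(1000)])

conn.execute("BEGIN IMMEDIATE")
# Stash the survivors in an unindexed temporary table.
conn.execute("CREATE TEMP TABLE keeper AS SELECT * FROM MyTable WHERE keep = 1")
# DELETE with no WHERE clause can use the fast truncate optimization.
conn.execute("DELETE FROM MyTable")
conn.execute("INSERT INTO MyTable SELECT * FROM keeper")
conn.execute("DROP TABLE keeper")
conn.execute("COMMIT")

kept = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(kept)  # 100 rows kept
```

Because the main table itself is never renamed or dropped, its schema (and the readers' prepared statements) survive the operation.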

Simon.

Re: Atomic DELETE index optimisation?

Dinu
Thanks, it's something worth trying.
However, no, that can't be the only reason. Populating the table from a script
with the same data takes less than 10 minutes, including side processing... so
the disproportion between insert and delete performance is huge.




Re: Atomic DELETE index optimisation?

Dinu
INSERTs take constant time, as they should; the DELETE is quite
obviously making blatantly non-linear progress.




Re: Atomic DELETE index optimisation?

Keith Medcalf
In reply to this post by Dinu

(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)? (Don't know if it will make any difference in your particular case).

(2) The page_cache size makes a HUGE difference.  Modified pages are ONLY flushed to the WAL file when they overflow the cache.  The fewer modified pages there are to flush, the less I/O is performed.  The default page cache is only about 2000 pages, which is not much at all.  If you do not see I/O to the drive, it is probably going to the OS filesystem cache; this is still slower than going directly to the page cache to access the tree data, and it will appear as CPU usage rather than I/O usage.  The fact that it started out writing LOTS of modified pages, which slowly became fewer and fewer, indicates that the page cache is too small and that the B-Tree structure is continuously being reloaded.

What is the version of SQLite?
The Operating System and its bitness (32/64)?
The bitness (32/64) of the shell?
How much RAM is in the machine?
How much RAM is the process using (both the VM commit size and the Working Set size)?
Have you set any pragmas other than JOURNAL_MODE=WAL?
Do you have any Foreign Keys defined on or against the table you are deleting from?
If so, is foreign key enforcement turned on?
If so, are there recursive foreign keys?
Are the proper indexes defined for any foreign keys?  On both parent and child?
Are there any ON DELETE triggers?

The simplest fix might very well be to compile your own shell with SQLITE_ENABLE_UPDATE_DELETE_LIMIT, and delete the records in smaller batches by putting a LIMIT on the DELETE statement and running it until all the records are deleted.  Or perhaps drop all the indexes, delete the records, then re-create the indexes.
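Without recompiling, a common workaround for the missing DELETE ... LIMIT is a rowid subquery. A sketch of batched deletion under that assumption (the schema and the `obsolete` flag are invented):

```python
import sqlite3

# Invented schema; "obsolete" marks the ~3/4 of rows to remove.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, obsolete INTEGER)")
conn.execute("CREATE INDEX idx_items_obsolete ON items(obsolete)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO items (obsolete) VALUES (?)",
                 [(0 if i % 4 == 0 else 1,) for i in range(10000)])
conn.execute("COMMIT")

BATCH = 1000
while True:
    # Each batch autocommits as its own short transaction, so the WAL
    # never grows unboundedly and readers are not blocked for the whole purge.
    cur = conn.execute(
        "DELETE FROM items WHERE rowid IN "
        "(SELECT rowid FROM items WHERE obsolete = 1 LIMIT ?)", (BATCH,))
    if cur.rowcount < BATCH:
        break

remaining = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(remaining)  # 2500 non-obsolete rows remain
```

The loop stops when a batch deletes fewer rows than the limit, i.e. when the last matching rows are gone.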




Re: Atomic DELETE index optimisation?

Richard Hipp-3
In reply to this post by Dinu
On 12/17/17, Dinu <[hidden email]> wrote:
>
> We have ~32M records out of which we are deleting about ~24M.

Can you try this:

(1) BEGIN TRANSACTION
(2) DROP all indexes from the table
(3) Do the DELETE operation
(4) CREATE all indexes again
(5) COMMIT
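A sketch of this recipe in Python's sqlite3 (the index and column names are made up; in a standard build the DROP INDEX and CREATE INDEX statements do participate in the enclosing transaction):

```python
import sqlite3

# Invented schema and index names.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, obsolete INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_t_obsolete ON t(obsolete)")
conn.execute("CREATE INDEX idx_t_payload ON t(payload)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO t (obsolete, payload) VALUES (?, ?)",
                 [(0 if i % 4 == 0 else 1, "p%d" % i) for i in range(1000)])
conn.execute("COMMIT")

conn.execute("BEGIN")                                        # (1)
conn.execute("DROP INDEX idx_t_obsolete")                    # (2) drop all indexes
conn.execute("DROP INDEX idx_t_payload")
conn.execute("DELETE FROM t WHERE obsolete = 1")             # (3) delete with no index upkeep
conn.execute("CREATE INDEX idx_t_obsolete ON t(obsolete)")   # (4) rebuild indexes in bulk
conn.execute("CREATE INDEX idx_t_payload ON t(payload)")
conn.execute("COMMIT")                                       # (5)

remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 250 rows left
```

Rebuilding an index from scratch over the surviving rows is a bulk, sorted operation, which avoids the per-row index pruning the thread is complaining about.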
--
D. Richard Hipp
[hidden email]

Re: Atomic DELETE index optimisation?

Keith Medcalf
In reply to this post by Keith Medcalf

I had a similar problem with a multi-terabyte database once upon a time.  

The solution was simply to run the DELETE every time you insert, thus keeping the database continuously "pruned" by continuous removal of a small number of records rather than periodically trying to get rid of a lot all at once.
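A sketch of that continuous-pruning pattern; the retention policy, schema, and names here are invented:

```python
import sqlite3

# Invented schema; KEEP is a hypothetical retention window.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, ts INTEGER)")
conn.execute("CREATE INDEX idx_log_ts ON log(ts)")

KEEP = 100

def insert_and_prune(ts):
    # Every insert also trims the expired rows, so no single DELETE
    # ever has to remove millions of index entries at once.
    conn.execute("BEGIN")
    conn.execute("INSERT INTO log (ts) VALUES (?)", (ts,))
    conn.execute("DELETE FROM log WHERE ts <= ?", (ts - KEEP,))
    conn.execute("COMMIT")

for t in range(1000):
    insert_and_prune(t)

rows = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(rows)  # the table holds only the last 100 timestamps
```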




Re: Atomic DELETE index optimisation?

Dinu
In reply to this post by Keith Medcalf
SQLite 3.19.3
CentOS 7 (64-bit)
RAM: 6G total, 4.4G buff/cache, 0.6G unused
SQLite memory: 133M RES, 0 SWAP, 0 DIRTY; don't know if it's still relevant,
the query seems to have entered another execution phase, it looks like it's
now flushing the WAL.
No other PRAGMAs.
No FKs, no triggers.







Re: Atomic DELETE index optimisation?

Dinu
In reply to this post by Keith Medcalf
Keith Medcalf wrote
> I had a similar problem with a multi-terabyte database once upon a time.  

Na, they are not single-use rows, otherwise I'd have used a FIFO :)
Every now and then, a large portion of the table becomes obsolete due to
external factors.




Re: Atomic DELETE index optimisation?

Dinu
In reply to this post by Richard Hipp-3
Richard Hipp-3 wrote
> Can you try this:

Thanks Richard, as mentioned earlier, any structure change is unacceptable
due to concurrent reader clients.




Re: Atomic DELETE index optimisation?

Simon Slavin-3
In reply to this post by Dinu


On 17 Dec 2017, at 10:57pm, Dinu <[hidden email]> wrote:

> Na, they are not single-use rows, otherwise I'd have used a FIFO :)
> Every now and then, a large portion of the table becomes obsolete due to
> external factors.

Are you issuing one DELETE command with a WHERE clause which deletes lots of rows or are you issuing lots of DELETE commands ?

Simon.

Re: Atomic DELETE index optimisation?

Keith Medcalf
In reply to this post by Dinu

Ok, just checking.  The database I was referring to was an Historian Collector, so the amount of data it needed to maintain was time-limited; it was more or less a FIFO by tag.
