How to Optimize Filter DB for Faster Processing
Posted: Mon Apr 22, 2019 11:31 am
Hi Quade,
So I monkeyed around with the filters.db3 file using SQLite Studio and now filter processing is *really* slow. I realize I have screwed with perfection, but my aim is worthy.
I have a pretty extensive list of filters that I've massaged over the years. In frustration at not being able to easily edit the filters inline (now fixed in 6.81+), I resorted to opening the filters.db3 file in SQLite Studio, exporting to Excel, sorting and editing the filter text into individual SQLite records (versus using the pipe character to combine terms), and then re-importing them back into the filters.db3 file. That worked OK but was a lot of work.
So a list of filters that looked like:
Subject Accept filter3|filter1|filter2
Subject Reject filter5|filter4
Becomes:
Subject Accept filter1
Subject Accept filter2
Subject Accept filter3
Subject Reject filter4
Subject Reject filter5
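For illustration, the pipe-splitting step above can be done in a few lines of Python instead of a round trip through Excel. The (field, action, terms) tuples below are a hypothetical stand-in for whatever columns the real filters.db3 table actually uses:

```python
# Hypothetical rows mirroring the pipe-delimited filter entries above;
# the actual filters.db3 column layout may differ.
rows = [
    ("Subject", "Accept", "filter3|filter1|filter2"),
    ("Subject", "Reject", "filter5|filter4"),
]

# Split each pipe-delimited term list into one entry per term, sorted
# so related filters end up adjacent and easy to scan.
split_rows = sorted(
    (field, action, term)
    for field, action, terms in rows
    for term in terms.split("|")
)

for field, action, term in split_rows:
    print(field, action, term)
```

Sorting the split entries reproduces the "Becomes" list above, which is what makes individual terms easy to locate later.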
The reason I did this is to make troubleshooting long filter lists easier. I can enable/disable 50% of the filter entries at a time and winnow out which filter is causing undesirable results. That has worked well. It is almost impossible to locate a filter term using the "as entered" list of filter text when you have more than, say, 20 entries. Having the entries sorted (at least initially) makes finding specific entries much easier.
Now Newsbin processes really slowly when loading a GOG from the Groups tab, or selecting filters using the drop-down menu. I suspect this has to do with the index in the filters.db3 file not being optimized?
So, my questions:
1. Am I correct that this is an indexing issue? If so, can you offer a suggestion as to how to re-create the proper index to speed up filter processing? Is there a SQL command to recreate the index properly?
2. Am I better off using the pipe to combine terms, thus shortening the list of entries for a given filter as I had before, or does it not matter that the entries are now split into individual records in the filters.db3 file? Does this matter in terms of filter processing performance?
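On the indexing side of question 1, SQLite does have built-in commands for rebuilding indexes and refreshing the query planner's statistics. A minimal sketch using Python's built-in sqlite3 module is below; the table and index names are made up for illustration, since the real filters.db3 schema may differ, and an in-memory database stands in for the real file:

```python
import sqlite3

# autocommit mode so VACUUM can run without an open transaction;
# ":memory:" stands in for the real filters.db3 path.
con = sqlite3.connect(":memory:", isolation_level=None)

# Hypothetical schema for demonstration only.
con.execute("CREATE TABLE filters (field TEXT, action TEXT, term TEXT)")
con.execute("CREATE INDEX idx_filters ON filters (field, term)")

con.execute("REINDEX")  # rebuild every index from the table data
con.execute("ANALYZE")  # refresh the statistics the query planner uses
con.execute("VACUUM")   # rewrite the file, defragmenting tables and indexes

# Sanity check that the database is still consistent afterwards.
print(con.execute("PRAGMA integrity_check").fetchone()[0])
```

Whether a rebuilt index would actually fix the slowdown depends on how Newsbin queries the table, which only Quade can say; this just shows the standard SQLite maintenance commands.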
I'd like to get back to speedy filters!
Thanks, I appreciate the help. I really do.
Scott