This discussion is using relative terms such as "fast" and "slow" without mentioning precise timing measurements. Although I don't have any such measurements on hand, I can say this from stress testing this amazing app more and more: I recently created a massive 1 terabyte download list composed of nothing but small audio files, specifying two servers and two sets of server headers... and IT DOES TAKE SEVERAL MINUTES, in this month's latest build, for the list to load on subsequent program launches.
But.... sometimes sort routines do not scale well with ever-larger data sets. Some operating systems and languages, such as standard C on classic Mac OS for 20 years, lacked a built-in sort routine, so programmers learned to select and craft the right ones themselves!! OSX is different. And sometimes the so-called fastest routines become horribly SLOW when the list is already mostly sorted (naive quicksort is the classic offender). So engineers rolling their own sorts should consider a shell sort, tuned for 7 million elements or so but adapting to smaller sets as well; see the sketch just below.
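Here is a minimal sketch of that idea in C++ (the gap sequence and the auto-skip of oversized gaps are my own illustrative choices, nothing to do with Newsbin's internals):

[code]
#include <cstddef>
#include <vector>

// Minimal shell sort sketch. The Ciura gap sequence is a common practical
// starting point; it is extended by roughly 2.25x per step for very large
// inputs. Unlike a naive quicksort, shell sort has no pathological case on
// already-sorted input -- the gapped passes simply finish quickly.
void shell_sort(std::vector<int>& a) {
    std::vector<std::size_t> gaps = {1, 4, 10, 23, 57, 132, 301, 701};
    while (gaps.back() < a.size() / 2)
        gaps.push_back(gaps.back() * 9 / 4);   // grow for multi-million-element sets

    for (auto g = gaps.rbegin(); g != gaps.rend(); ++g) {
        std::size_t gap = *g;
        if (gap >= a.size()) continue;         // auto-adjust for smaller sets
        // Gapped insertion sort: nearly free when the data is almost in order.
        for (std::size_t i = gap; i < a.size(); ++i) {
            int tmp = a[i];
            std::size_t j = i;
            for (; j >= gap && a[j - gap] > tmp; j -= gap)
                a[j] = a[j - gap];
            a[j] = tmp;
        }
    }
}
[/code]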
I did not install any debug tools (this is a very squeaky-clean usenet slurp box, with even the Windows indexer and disk defrag disabled), so as a casual user of Newsbin I could not tell you the percentage of time spent in disk I/O (even if RAM cached), vs. time in the 3rd-party SQLite db3, vs. Windows system calls, vs. application CPU, vs. over-aggressive use of semaphores and mutexes leaving hung threads waiting to read or write overly protected global shared fields. Regarding semaphores: I go the other way when I write code, designing as much as I can so that a CPU thread never stalls on account of my own actions and threads. Sometimes it is 8 times as much work to avoid hardware-based mutexes (scoped_lock), and on iPad/iPhone you have to go out of your way in your classes to stop the compiler from automatically protecting global structures against other threads. Even between two CPUs on separate chips, a clever driver programmer can avoid hardware-assisted semaphores, disabled interrupts, and other super-slow contention mechanisms via Dekker's algorithm: a pure software mutex, so long as a few cache-coherent RAM lines exist:
http://en.wikipedia.org/wiki/Dekker%27s_algorithm Though Dekker's was invented in 1962 (and independently by myself, before I learned it had a name), no one ever believes it is possible. Even one of the 4 heads of ATI (a few months before AMD bought them) and the ATI chief technology officer doubted it would ever work (I showed them my source), until after I flew back to the US and mailed them links to Dekker's paper. In fact, even better than Dekker's (which historically had to be written in assembly to control memory ordering) is thread collision reentrancy DETECTION with reverse rollout, which lets a person write disk drivers that NEVER EVER disable interrupts for the first two pending colliding ranges of I/O. You can get millions of disk block accesses PER SECOND from a RAM disk that way!
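For the curious, Dekker's two-thread mutex fits in about a dozen lines. Here is a minimal sketch in modern C++, where std::atomic's default sequentially consistent ordering stands in for the hand-written assembly fences of the old days (all names are mine, purely illustrative):

[code]
#include <atomic>

// Dekker's algorithm: mutual exclusion for exactly two threads using only
// plain cache-coherent shared memory -- no hardware lock instructions.
std::atomic<bool> wants_to_enter[2] = {false, false};
std::atomic<int>  turn{0};                 // which thread must yield on conflict

void lock(int self) {                      // self is 0 or 1
    const int other = 1 - self;
    wants_to_enter[self] = true;
    while (wants_to_enter[other]) {
        if (turn != self) {
            wants_to_enter[self] = false;  // back off: the other side has priority
            while (turn != self) { /* spin until it is our turn */ }
            wants_to_enter[self] = true;
        }
    }
    // caller's critical section runs after lock() returns
}

void unlock(int self) {
    turn = 1 - self;                       // hand priority to the other thread
    wants_to_enter[self] = false;
}
[/code]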
There's a point to all this rambling... the point.... The faster and faster the code, the far far more dangerous the corruption risk when you don't think of every possible thread race condition, paradox, or plain old mistake hidden in fringe cases. Though I don't make such mistakes, the stress and ulcers are not worth it when you can just MAKE CODE SLOWER AND SAFER to prevent database corruption.
I would rather 6.34 stay slow (several minutes to open a 1 TB list of audio files at launch) than EVER RISK running any of the databases in faster but corruption-prone, non-debug modes.
STABILITY AND INTEGRITY OVER SPEED, unless it is 100% possible to avoid a single problem. And even then, if the OS or the app crashes,
I would RATHER HAVE RESUMABLE database files. In two types of crashes I saw no adverse problem reusing the possibly corrupt SQL files, and I was very very very happy about that. Hurray! I too designed and tested databases ages ago, which I wrote with forced app crashes in the middle of writes (I used lazy atomic double writes, with an atomic file rename after completion; see the sketch below), but modern databases usually have several forms of journaling to protect their filesets. I am happy to see that the type of crash I had during downloading did not seem to harm the downloads much upon relaunching Newsbin. (yeah yeah yeah... officially it's not desired or supported... but it seems to work fine)
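That write-then-atomic-rename trick looks roughly like this in C++17 (the helper name, the temp-file suffix, and the plain flush are all illustrative; this is the general pattern, not Newsbin's or SQLite's actual code):

[code]
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <string>
#include <system_error>

// Crash-safe save: write the complete new contents to a temp file, flush it,
// then atomically rename it over the old file. A crash at any moment leaves
// either the whole old file or the whole new file on disk -- never a
// half-written hybrid (true on POSIX; Windows semantics are close enough
// for this sketch).
bool atomic_save(const std::string& path, const std::string& contents) {
    const std::string tmp = path + ".tmp";
    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        if (!out) return false;
        out << contents;
        out.flush();                        // push the bytes to the OS first
        if (!out) return false;
    }
    std::error_code ec;
    std::filesystem::rename(tmp, path, ec); // the atomic commit point
    if (ec) {
        std::remove(tmp.c_str());           // clean up the orphaned temp file
        return false;
    }
    return true;
}
[/code]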
I fear FASTER requests such as the original poster's would wipe that away. I do hope everything flushes to disk at least every 15 minutes or so, though, rather than waiting for a graceful app closure.
But if it's a simple matter of selecting a different sort routine for when you are sorting 2 million messages at launch, then so long as nothing riskier is applied, I guess a speedup is OK. The reason? I shut down and relaunch Newsbin every 5 hours, and back up all the Newsbin data files each time, so the many-minute wait for app start does affect me: I fear that if I walk away during those several minutes and it starts downloading at full speed, Comcast will get grumpy with me yet again. I avoided installing a bandwidth shaper on this box, and run none on my routers, but my fear is my hacked, privately owned DOCSIS 3 cablemodem, which I ran at 99 megabit for a year on a non-Comcast line and then installed on a Comcast business line MEANT to run at less than half that. If I walk away and Newsbin launches fully and downloads, the PowerBoost PLUS my own abuse could flag me. I once ran with no speed cap for 2.2 days (after not downloading a SINGLE byte for the 9 days prior; the line was connected only to a TiVo), and Comcast killed my line to a dead trickle for a while after I ran balls-out past my officially provisioned data rate for those 2.2 days testing this amazing newsreader. After getting slapped down last month on a BUSINESS LINE, I don't ever want to go too fast again. So the wait for the list load, before I can disable full speed, causes anxiety in case I forget for too long.
I read that some people get about 160 megabit on a single unbonded DOCSIS modem; sounds so fun. I bet THIS PROGRAM could handle the load, if a provider or server capable of it existed and if message files were specially designed to be larger. Verizon fibre GPON areas offer 300 megabit down!!! (I have a long-haul fibre running past my home that goes back to the colocation facility I use, but I am not tied into it yet ($$$), and it's on the other side of the street.) But there's no Verizon 300 megabit around these parts.
Oddly, I could have sworn I saw the Newsbin speed limiter start in limited mode on one launch out of 40 or so, but it might have been my imagination. If its on-off state stayed sticky, that would be a welcome feature. Or maybe this was user error on my part.
SUMMARY: even though I am a true speed demon, personally affected by multi-minute file list loads at startup, as detailed above......
I WANT STABILITY ONLY, *NOT FASTER*. That's if anyone is taking votes.