Seeing the results of the XP question, it seems that many of you are running Newsbin on relatively low-end machines. I dug through my junk pile and found an old AMD 64 laptop: 2 GHz single core with an 80 GB laptop drive, and an Ethernet interface that only does 100 Mbps. I have a test server on my LAN which will deliver files at up to 300-400 Mbps. It's a CentOS Linux server with its built-in NNTP server, running under VMware on a 4-core i7 machine. I post problematic file sets to it as people report them so I can re-test them in Newsbin before a release.
On my main work machine, which is a powerful beast, downloads run at over 200 Mbps, more often than not faster, and that includes repairs and unrars. If it's not currently repairing or unraring, download speed might be twice that, 400 Mbps or so. CPU load is probably 15% total across the cores. It typically takes about 15 minutes to run through all the downloads on the server.
I ran the same test on this laptop and it wasn't good. It took all afternoon to complete. Downloads ran at about 60 Mbps for a time, until AutoPAR scanning kicked in. The combination of downloading and AutoPAR scanning dropped the download speed to 10 Mbps or less. CPU load was never pegged; it might hit 60%. When the AutoPAR scan, repair, or unrar finished, download speed would pop back up to 50-60 Mbps but never get much higher than that. The problem on this laptop isn't CPU, it's the disk drive. I disabled AutoPAR so no PAR scanning, repair, or unrar happened, and the speed became a fairly consistent 60 Mbps. The cache line down on the Status tab remained at 0/100, meaning Newsbin simply can't shift the data to disk fast enough.
One experiment I did was to pause the download, wait for the cache to build back up, and then unpause it. Doing that, Newsbin flatlines the data transfer at about 60 Mbps until the cache runs out. Then it gets spiky as the lack of places to store the downloaded data stalls the download. This also points at the drive as the real bottleneck. At 60 Mbps with no AutoPAR, CPU load is 40-50%, which I consider a smidge high even though this machine is really old. There's not much I can do in software about shitty disk IO. The bottom line is that Newsbin has to offload data to disk at least as fast as it downloads it, and the drive in this machine simply can't take it. The key takeaway: on really low-end machines like this, you might want to disable AutoPAR and use an external tool to post-process the files. In that case you'll want to download all the PAR files too.
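To make the stall mechanism concrete, here's a minimal sketch of the kind of bounded write cache at work. This isn't Newsbin's actual code; the names, block size, and speeds are assumptions for illustration. One thread downloads blocks into a fixed-size cache and a second thread drains them to disk. Once the writer falls behind, the cache fills up (the 0/100 you see on the Status tab) and the downloader has nowhere to put data, so the download stalls.

```python
import queue
import threading
import time

CACHE_BLOCKS = 100               # cache slots, like the N/100 on the Status tab
BLOCK_BYTES = 384 * 1024         # assumed block size, just for the math below

# Assumed speeds for illustration: a 60 Mbps line vs. a slow laptop drive.
DOWNLOAD_BPS = 60_000_000 / 8    # ~7.5 MB/s coming off the network
DISK_BPS = 5_000_000             # ~5 MB/s the seek-bound drive can actually sink

cache = queue.Queue(maxsize=CACHE_BLOCKS)    # the bounded write-back cache

def downloader(blocks_total):
    # Producer: pull blocks off the network and park them in the cache.
    # When the cache is full, put() blocks and the download stalls.
    for n in range(blocks_total):
        time.sleep(BLOCK_BYTES / DOWNLOAD_BPS)   # time to download one block
        cache.put(n)                             # <- stalls here at 0/100 free
    cache.put(None)                              # sentinel: no more blocks

def disk_writer():
    # Consumer: drain the cache to disk, slower than the producer fills it.
    while True:
        block = cache.get()
        if block is None:
            break
        time.sleep(BLOCK_BYTES / DISK_BPS)       # time to write one block

writer = threading.Thread(target=disk_writer)
writer.start()
downloader(blocks_total=200)
writer.join()
```

Everything in this post maps onto that sketch: a faster drive raises DISK_BPS, disabling AutoPAR removes the scan reads that compete with these writes, and pausing the download is just letting the consumer drain the cache before the producer starts up again.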
Key points:
- Newsbin 6.33 worked fine with no crashing under XP patched to SP3
- The multi-core repair DLL worked fine on this single-core AMD with no crashing.
- On this low-end machine, disk IO dominated everything. Newsbin simply couldn't shift data to disk as fast as it could download and decode it.
- Depending on what you want more, high download speed or automatic post-processing, you might consider disabling AutoPAR (see the sketch after this list). AutoPAR itself worked fine, but the disk IO it requires stalled the download. If you just queue things up, come back later, and don't really care how fast they download, you can leave AutoPAR active.
- The new feature in 6.4 that'll let you disable downloading during unrar/repair might be useful here.
- If you have a high-speed internet connection, a fast machine will really maximize your download speed. On slower internet, these issues probably don't matter.
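If you do disable AutoPAR and post-process externally, something like the following would do it. This is a hypothetical sketch, not anything Newsbin ships: the folder path is made up, and it assumes the separate par2cmdline and unrar command-line tools are installed and on your PATH.

```python
import subprocess
from pathlib import Path

# Hypothetical folder where Newsbin dropped a finished (but unprocessed) set.
download_dir = Path(r"C:\Downloads\some_file_set")

# Repair first ("r" = repair in par2cmdline); needs the PAR2 files on disk.
par2_file = next(download_dir.glob("*.par2"))
subprocess.run(["par2", "r", par2_file.name], cwd=download_dir, check=True)

# Then extract; "x" keeps archive paths, "-o+" overwrites without prompting.
rar_file = next(download_dir.glob("*.rar"))
subprocess.run(["unrar", "x", "-o+", rar_file.name], cwd=download_dir, check=True)
```

This is also why you'd want to download all the PAR files: the external repair step has nothing to work with otherwise.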
I consider the break between high and low speed to be 10 Mbps. If you have a 10 Mbps or slower connection, none of this probably impacts you.
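The back-of-the-envelope arithmetic behind that cutoff, using the speeds from this post:

```python
# Line speed in megabits/s vs. what the disk has to absorb in megabytes/s.
def mb_per_sec(mbps: float) -> float:
    return mbps / 8  # 8 bits per byte; ignores protocol/yEnc overhead

for line_speed in (10, 60, 200, 400):
    print(f"{line_speed:>3} Mbps download -> {mb_per_sec(line_speed):5.2f} MB/s of disk writes")

# 10 Mbps is only ~1.25 MB/s, which even this laptop's old drive can sink.
# 400 Mbps is ~50 MB/s, which takes a fast drive on top of a fast CPU.
```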