John said:
Theory is fine, but the idea that real-world results should not be
posted, or that posting such results is a waste of time, is strange
IMO.
I find benchmarks to be the most useful when they isolate a single
aspect of the hardware.
A sustained transfer benchmark, the kind HDTune does, shows some
limit inside the hardware. The hardware cannot be made to go faster
than that. So if someone asks me for an estimate of how fast their
USB2 disk enclosure can go, I can say with some confidence "it
can't go any faster than about 30MB/sec". Now, if that person
transfers 100,000 4KB files, they're not even going to get close
to that number. Instead, what they'll see is a few hundred head
seeks a second, times a 4KB write each, and the result would be
<1MB/sec of transfer. So I cannot really *bound* their performance
with any precision. On the one hand, I can tell them it could go
as fast as 30MB/sec (if they transfer a DVD-sized file to their
disk), but if they use 100,000 small files, the transfer rate
could be very low indeed. In fact, I cannot tell them with any
precision what number less than 30MB/sec to expect.
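To put rough numbers on that, here is a quick back-of-the-envelope
sketch in Python. The 200 seeks/sec is just an assumption standing
in for "a few hundred", not a measurement:

sustained_mb_s = 30.0      # unidirectional limit, the HDTune-style figure
seeks_per_sec = 200.0      # "a few hundred" head seeks per second (assumed)
file_size_kb = 4.0         # one seek per small file, 4KB written each time

# each small file costs one seek, so throughput = seek rate x file size
seek_limited_mb_s = seeks_per_sec * file_size_kb / 1024.0

print(f"Upper bound (one DVD-sized file): {sustained_mb_s:.0f} MB/sec")
print(f"Seek-limited estimate (100,000 x 4KB files): {seek_limited_mb_s:.2f} MB/sec")

That prints 30 MB/sec and about 0.78 MB/sec, which is where the
"<1MB/sec" comes from.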
What other interesting test cases can I come up with? Let's take
defrag. I used the Performance plugin, which has those little graphs
in it. I added a few counters to the graph, and watched while defrag
was running. The rate of data being written was only 1MB/sec,
and this is for an internal drive. Terrible performance.
The second counter I used records the number of write operations
per second. The write rate recorded was hovering around
120 writes per second. The disk is a 7200RPM disk. That is 120 revolutions
per second. The disk is doing one whole write operation for
each revolution. In other words, it is working its little heart
out, but making poor progress because of the size of the writes
that Microsoft is using.
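For what it's worth, the arithmetic works out like this (the 120
writes/sec and the ~1MB/sec are the counter readings; everything
else is derived from them):

rpm = 7200
writes_per_sec = 120          # the write-operations counter reading
bytes_per_sec = 1_000_000     # roughly 1MB/sec of data written

revs_per_sec = rpm / 60                          # 120 revolutions per second
writes_per_rev = writes_per_sec / revs_per_sec   # about 1 write per revolution
avg_write_kb = bytes_per_sec / writes_per_sec / 1024

print(f"{revs_per_sec:.0f} rev/sec, {writes_per_rev:.1f} write(s) per revolution")
print(f"average write size is roughly {avg_write_kb:.0f} KB")

So the average write is only around 8KB, one per revolution.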
On the one hand, the performance is pathetic. I'm only getting
1MB/sec of data written. But the write operations are running as
fast as the (cache-disabled) hardware will allow. Defrag
has a set of "safe" APIs in Windows, with the intention that
the disk won't end up broken if the computer crashes in the
middle of a defragmentation. So the objective is "safeness"
rather than "performance". If they wanted to, defrag could be
made to go much faster, but if the power went off, you'd be
screwed.
Those are the things I'm interested in studying. So I spent a
few minutes trying to understand why defrag still hadn't finished
the next morning when I woke up.
Say I take a random folder with 562 files in it, and copy it
from one drive to another, and I get 17MB/sec as my transfer
rate. Now, you take a folder with a different set of files
and you get 21MB/sec. What factor are we isolating? How are
we keeping all the uninteresting factors under control? I cannot
tell from our two results what is happening. Is your hardware
faster than mine? Is your disk less fragmented? Is your
average file size larger than mine (fewer head seeks)? Many
factors are now uncontrolled. As a cynic, I could comment
that I would expect the results to be anywhere between
1MB/sec and 30MB/sec, depending on exactly what was happening.
That kind of benchmark is useless to me. But notice how the
two "bound" values are of use to me. The transfer *probably*
won't go slower than 1MB/sec or so. But that leaves such a
range of values that it doesn't help anyone to know that.
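To illustrate how just one uncontrolled factor, the average file
size, can push the result anywhere between those two bounds, here
is a toy model. The 10ms per-file seek cost is an assumption for
illustration only:

sustained_mb_s = 30.0     # sustained media rate (the upper bound)
seek_time_s = 0.010       # assumed ~10ms of seek + rotational latency per file

def copy_rate_mb_s(avg_file_mb):
    # time per file = one seek plus streaming the file at the sustained rate
    time_per_file = seek_time_s + avg_file_mb / sustained_mb_s
    return avg_file_mb / time_per_file

for size_mb in (0.004, 0.1, 1.0, 10.0, 4700.0):   # 4KB file up to a DVD-sized file
    print(f"average file {size_mb:>8} MB -> ~{copy_rate_mb_s(size_mb):5.1f} MB/sec")

With those assumed numbers, the predicted rate runs from well under
1MB/sec for 4KB files up to nearly 30MB/sec for a DVD-sized file,
without touching any of the other uncontrolled factors.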
I already know what the sustained transfer figure is for
USB 1.1, because competent people out there have already measured
it. I don't seek to reproduce every test I read about. It is not
like I spend a lot of time running USB 1.1 hardware. And as for
file storage performance, I don't have the money to spend on
the "fastest of everything". It goes at whatever speed it goes
at, for $100.
One other comment about file transfer benchmarks. The performance
is a function of both the source and the destination devices.
If the devices share the same bus bandwidth, that invalidates
the results. When Anandtech does this kind of testing, they
set up a RAMdisk on the computer to hold the source files.
Then the transfer to the destination disk measures only
the characteristics of the destination. That is because the
bandwidth of the RAMdisk is 3000MB/sec+, so it is not a factor.
On a number of motherboards, if you transfer files from one USB2
device to a second USB2 device, the two devices share the same
(theoretical) 60MB/sec of total bandwidth. If the practical transfer rate is actually
30MB/sec, as measured in HDTune (a unidirectional test), then when
copying files from one USB2 device to another, you could well see
15MB/sec as the measured best-case performance. There are a few
motherboards where the Southbridge has two USB2 controllers on
it. Of the dozen USB ports available on the motherboard, half are
on one controller and half on the other. If the source and
destination USB2 devices are on different controllers, you'll
see better results.
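A rough sketch of why the shared bus halves the device-to-device
copy rate (the 30MB/sec is the practical unidirectional figure
from above; the clean 50/50 split is the assumption):

practical_bus_mb_s = 30.0   # practical unidirectional USB2 rate (HDTune-style)

# Same controller: reads from the source and writes to the destination
# share one bus, so each direction gets roughly half of it.
same_controller_copy = practical_bus_mb_s / 2

# Separate controllers: each device has its own bus bandwidth, so the copy
# is limited by the slower device rather than by the shared bus.
separate_controllers_copy = practical_bus_mb_s

print(f"Same controller : ~{same_controller_copy:.0f} MB/sec best case")
print(f"Two controllers : up to ~{separate_controllers_copy:.0f} MB/sec")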
Paul