Why? Because you heard someone else say so?
Because you _have_ to make a pissing contest out of _everything_?
Because it's true.
What's the point of spreading half-truths?
It's been proven true by many people, and if you had
bothered to read up on tech or test it yourself you'd have
also seen this.
Let's put it another way...
Why SATA or SATA II?
The increase in transfer rate from the faster bus, right?
What happens when a faster bus's sole interface to the
system is a SLOWER bus? What happens when that slower bus
has other traffic contending for it? Apparently you believe
there's a magic way to give a device more time on a bus than
actually exists.
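To put rough numbers on it (spec figures only, purely a
back-of-envelope sketch; real-world rates are lower than any
of these):

    # Nominal spec maximums: plain 32-bit/33MHz PCI vs the SATA links.
    pci_ceiling = 133   # MB/s, shared by EVERY device on the PCI bus
    sata_links  = {"SATA (1.5Gb/s)": 150, "SATA II (3Gb/s)": 300}

    for name, link in sata_links.items():
        print(f"{name}: link {link} MB/s, but behind PCI it can never "
              f"move more than {min(link, pci_ceiling)} MB/s")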
If you must know, then the truth behind your desperate attempt to
appear to have a clue & get the last word is "yes & no." A bus can
be quite crowded without affecting transfer rates out of the disk
subsystem (as is usually the case).
It can also affect the rate.
As I wrote, "yes and no".
Even so, the transfer rates of the disks involved are
still the same.
Actually, NO.
Their _potential_ transfer rate is still the same, from the media
to the drive's internal cache. But data can't leave that cache
until the drive controller's buffer is ready to accept more, and
that only happens after what it already holds has been moved off
over the PCI bus.
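The chain looks like this, and the slowest stage wins. A minimal
sketch with made-up but plausible numbers (the drive and bus
figures are assumptions, not measurements):

    # platters -> drive cache -> controller buffer -> PCI bus -> memory
    media_sustained = 65    # MB/s a decent 10K drive might pull off the platters
    sata_link       = 300   # MB/s, SATA II link speed
    pci_quiet       = 100   # MB/s a lone device might realistically get on PCI
    pci_contended   = 45    # MB/s left once a NIC, sound card, etc. share the bus

    for label, bus_share in [("quiet PCI bus", pci_quiet),
                             ("contended PCI bus", pci_contended)]:
        effective = min(media_sustained, sata_link, bus_share)
        print(f"{label}: effective transfer = {effective} MB/s")

On a quiet bus the disk is the limit; on a contended one the bus
is. Hence "yes and no".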
What matters as far as _overall_system_throughput_ goes is
how _busy_ the bus is. There is only a small group of workloads
where you can saturate PCI, and even fewer where you can saturate
it long enough to matter. In those cases, better HW that addresses
this is readily available.
Yes, a small group of very common workloads, like using a NIC
and a sound card, devices found in every system. Using more than
one drive simultaneously will push that limit too. You can't
force 100% efficiency out of the PCI bus either; you will not
get 133MB/s sustained.
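Tally it up (illustrative numbers, not measurements; assume
roughly 100MB/s is the best a real PCI bus sustains):

    # A perfectly ordinary load, all sharing the one PCI bus.
    nic           = 60      # MB/s, a busy gigabit NIC on PCI
    sound         = 1       # MB/s, small, but its latency needs still cost bus time
    two_drives    = 2 * 55  # MB/s, two drives streaming at once
    pci_realistic = 100     # MB/s, well under the 133 MB/s theoretical peak

    demand = nic + sound + two_drives
    print(f"Demand {demand} MB/s vs ~{pci_realistic} MB/s available "
          f"-> short by {demand - pci_realistic} MB/s")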
But this has no real relevance to the comparative performance of
10K vs SATA II. You bring up a multi-disk, multi-device,
high-workload issue that is disk-technology independent.
You wrote "disk limits transfer rates, not the interface".
OFTEN that is true but it's not at all uncommon for it to be
untrue. Ever notice a newer bus called PCI Express? What
did you think the purpose was behind it?
You haven't considered chipset inefficiencies either.
Take the Asus A7V333, for example: it is common knowledge to
anyone who has owned one that a drive on a PCI IDE card shows
lower throughput, even in isolated benches, than the same drive
connected to the southbridge. The same is the case with a PCI
SATA card and an equivalent SATA HDD.
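If you want to see it yourself, a crude sequential-read timing
like the one below is enough to show differences of that size.
This is only a sketch (the device path is an example, it needs
raw-read privileges, and it is nowhere near a proper benchmark):

    import os, time

    DEV   = "/dev/sda"       # example device node; substitute your own disk
    CHUNK = 1024 * 1024      # 1 MB per read
    TOTAL = 256 * CHUNK      # read 256 MB in total

    fd = os.open(DEV, os.O_RDONLY)
    start = time.time()
    done = 0
    while done < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:          # end of device
            break
        done += len(buf)
    os.close(fd)

    secs = time.time() - start
    print(f"{done / CHUNK:.0f} MB in {secs:.2f} s "
          f"= {done / CHUNK / secs:.1f} MB/s")

Run it once with the drive on the southbridge and once on the
add-in card, and compare the numbers.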
Like it or not, this can happen and IS observed even when the
drive is isolated. Some uses may not stress the PCI bus
otherwise, but others do, and it's not hard at all to do so.
Ever noticed common problems like sound card stuttering?
The OP asked a simple question based on confusion about what 10k &
SATAII bring to the table as far as performance goes. Citing
everything you ever heard, thought about, or learned about disk
technologies & purchases does not make for an honest, direct &
accurate answer to either me or the OP.
Yes, you provided a simple yet inaccurate answer. If you
don't use a system aggressively enough to ever run into the
problem, that's fine. One might think, given the OP's interest
in these higher-performance options, that the plan was actually
to use the system aggressively.
Information is useful, and this was nothing personal; I
would've written the same thing regardless of who wrote the
inaccurate statement.
It is VERY, VERY easy to see a difference. Perhaps you never
bothered to benchmark on multiple platforms or interfaces, so
you simply never noticed it. Instead of being upset about the
truth, you might consider learning a little; it might just
benefit you too.