nospam said:
I've been mucking w/ IOMeter and have had some better success. By
increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
Although this is still far from 150MB/s,
You won't ever see 150MB/s from a single drive; 150MB/s is the channel's maximum transfer rate, not the drive's. Did you bother to read my and Arnie's posts?
The 150MB/s only comes into play when more than one drive (or, in your case, all your drives) is connected to a single SATA port by means of a port multiplier.
In that case you are limited to 150MB/s, minus overhead.
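For reference, the 150MB/s number is just the 1.5Gb/s line rate with SATA's 8b/10b encoding overhead stripped off:

\[
\frac{1.5\,\mathrm{Gbit/s} \times 8/10}{8\,\mathrm{bit/byte}} = 150\,\mathrm{MB/s}
\]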
it's much better than the 40-45MB/s that I was getting w/ it set to 1,
as well as in my throughput tester.
Looking at the IOMeter source, it appears they issue asynchronous writes with
WriteFile - actually having multiple writers for one file. I'm just using fwrite.
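That's the key difference: with plain fwrite you have exactly one blocking request in flight, so the drive spends most of its time waiting on you rather than the other way around. A rough, untested sketch of the overlapped-WriteFile pattern follows; the file name, queue depth, block size and pass count are made up for illustration, and this is not IOMeter's actual code.

/* Sketch: keep several writes in flight at once with overlapped WriteFile. */
#include <windows.h>
#include <stdio.h>

#define QUEUE_DEPTH 16                 /* "outstanding I/Os" */
#define BLOCK_SIZE  (64 * 1024)        /* 64 KiB per request */

int main(void)
{
    /* FILE_FLAG_OVERLAPPED enables async I/O; FILE_FLAG_NO_BUFFERING keeps the
     * OS cache out of the measurement (and requires sector-aligned buffers). */
    HANDLE h = CreateFileA("D:\\testfile.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS,
                           FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    OVERLAPPED ov[QUEUE_DEPTH] = {0};
    void      *buf[QUEUE_DEPTH];
    LONGLONG   offset = 0;

    for (int i = 0; i < QUEUE_DEPTH; i++) {
        /* VirtualAlloc returns page-aligned memory, enough for NO_BUFFERING */
        buf[i] = VirtualAlloc(NULL, BLOCK_SIZE, MEM_COMMIT | MEM_RESERVE,
                              PAGE_READWRITE);
        ov[i].hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
    }

    for (int pass = 0; pass < 4096; pass++) {    /* 4096 * 16 * 64 KiB = 4 GiB */
        /* issue QUEUE_DEPTH writes back to back, without waiting in between */
        for (int i = 0; i < QUEUE_DEPTH; i++) {
            ov[i].Offset     = (DWORD)(offset & 0xFFFFFFFF);
            ov[i].OffsetHigh = (DWORD)(offset >> 32);
            if (!WriteFile(h, buf[i], BLOCK_SIZE, NULL, &ov[i]) &&
                GetLastError() != ERROR_IO_PENDING) {
                fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());
                return 1;
            }
            offset += BLOCK_SIZE;
        }
        /* then collect all QUEUE_DEPTH completions */
        for (int i = 0; i < QUEUE_DEPTH; i++) {
            DWORD done;
            GetOverlappedResult(h, &ov[i], &done, TRUE);  /* TRUE = block until done */
        }
    }

    CloseHandle(h);
    return 0;
}

The point is only that QUEUE_DEPTH requests are outstanding at any one time, which is presumably what the "outstanding I/Os = 16" setting controls in IOMeter.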
I also tried reading w/ 16 outstanding I/Os and I'm getting huge throughputs -
That's ~42MB/s per drive.
Still not very fast for a modern-day drive, when you'd expect something more in the 50s.
I thought the max for this card was around 150MB/s.
What exactly did you not understand in our posts?
Are you even listening, or are you just a compulsive, habitual top-poster who doesn't
actually read, but paints pictures in his head and starts rambling when the pictures
don't make sense to him?
There is only one drive per channel, and a drive is by definition always slower
than the channel it is connected to, since controllers are designed to last a few
years and not be outdated as soon as a newer, faster drive comes out.
So the 1.5Gb/s / 150MB/s rate won't figure anywhere in your calculations.
The STR of the drives does: the aggregated STR of 6 drives, in your case.
The bottleneck, if any, will be your system bus, not the channel(s).
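To put rough numbers on that, taking your own read figure of ~42MB/s per drive and assuming (my assumption, you haven't said what slot the card is in) that the controller sits on a plain 32-bit/33MHz PCI bus:

\[
6 \times 42\,\mathrm{MB/s} \approx 250\,\mathrm{MB/s}
\qquad\text{vs.}\qquad
133\,\mathrm{MB/s}\ \text{(theoretical 32-bit/33MHz PCI)}
\]

In that case the bus, not the six SATA channels, sets the ceiling; on PCI-X or PCI Express the picture changes.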
Again, my setup is 6 pairs of raid1.
Yes, we got that.