SATA Write Throughput

I've got an Adaptec Serial ATA RAID 21610SA controller card with 6 pairs of
RAID1 disks attached. The card is rated at 1.5Gbps. Writing to one pair, I
get about 30MB/s, which I would expect. Writing to 2 pairs at the same
time, I get a total of 40MB/s. I get 40MB/s when I write to 3, 4, 5, or 6
pairs at the same time. It seems 40MB/s is the hard limit for this card. I
was hoping to get something close to 6*30MB/s = 180MB/s for the system. Any
ideas?
 

What motherboard is this, and which PCI slot is the RAID card plugged into?
How exactly did you perform the writes? Can you elaborate?
 
Previously nospam said:
I've got an Adaptec Serial ATA RAID 21610SA controller card with 6 pairs of
RAID1 disks attached. The card is rated at 1.5Gbps. Writing to one pair, I
get about 30MB/s, which I would expect. Writing to 2 pairs at the same
time, I get a total of 40MB/s. I get 40MB/s when I write to 3, 4, 5, or 6
pairs at the same time. It seems 40MB/s is the hard limit for this card. I
was hoping to get something close to 6*30MB/s = 180MB/s for the system. Any
ideas?

I have seen a similar limit on an Adaptec 8-disk SATA controller.
The disks are now on a pair of Promise 150TX4 cards with Linux software RAID.
Writing is not much faster, but reading is faster than with the
Adaptec. IMO Adaptec SATA controllers are best used as paperweights.

Arno
 
Motherboard is an Intel server board SE7501BR. The RAID card is plugged into
the first 64-bit slot. The system SCSI is plugged into the integrated
Ultra320 SCSI controller. We've also got a fibre channel card plugged into
the 3rd 64-bit slot. Windows is reporting:
RAID card - slot1, pcibus4, device2, function0.
SCSI - pcibus3, device3, function0.
Fibre - slot3, pcibus3, device2, function0.

I was using the IOMeter app, but thought it was overly complicated and
wasn't exactly sure what it was doing, so I wrote my own. It basically
creates a thread for each RAID1 pair. Each thread opens a file and writes
1MB blocks into it until the file is 1GB, and keeps doing that. I've
varied the block size, changed the file size, and tried creating a new
file versus reusing the old filename, but nothing made a
significant difference.
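
Each writer thread boils down to roughly the following (a simplified sketch
of what my tester does - error handling, timing, and the thread-creation
code are left out, and the path is just a placeholder):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (1024 * 1024)            /* 1MB per fwrite call */
#define FILE_SIZE  (1024LL * 1024 * 1024)   /* stop when the file hits 1GB */

/* One of these runs per RAID1 pair, each writing a file on its own array. */
void write_test(const char *path)
{
    char *block = malloc(BLOCK_SIZE);
    long long written = 0;
    FILE *f = fopen(path, "wb");

    memset(block, 0xAA, BLOCK_SIZE);        /* contents don't matter for throughput */
    while (written < FILE_SIZE) {
        fwrite(block, 1, BLOCK_SIZE, f);    /* plain buffered, synchronous write */
        written += BLOCK_SIZE;
    }
    fclose(f);
    free(block);
}

Each thread gets a different path, so each RAID1 pair is written independently.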

A side note: we're using Windows 2003. We were using Windows XP before -
got the same throughput, but were seeing RAID pairs occasionally go offline.
We would be writing to a RAID pair, and then it would disappear. I can't
remember what the errors were in the event viewer. Adaptec said they
didn't support XP, and we haven't seen the errors since going to Win2003.

-Lars
 
That's kind of sad that software RAID is faster than the hardware RAID for
SATA. I wonder how Promise got their quoted 150MB/s transfer rate for your card.

We've also tried creating a single RAID10 disk group of 12 disks
through our Adaptec RAID card. We still got miserable write performance.

-Lars
 

I'm assuming the motherboard is an SE7501BR2?
Your setup seems fine. You may move the RAID card to PCI slot 2,
or swap it with the fibre card in slot 3.

Is your BIOS revision "Build P20-0079"?
If not, you may need to upgrade.

Try to review your custom benchmark. Did you try to run it
against the SCSI drive(s)? They are still on the PCI-X/100 bus.
Or try to benchmark read operations for comparison.

You may also want to verify/tweak the BIOS settings.

What performance do you get from the fibre channel?
 
Previously nospam said:
That's kind of sad that software RAID is faster than the hardware RAID for
SATA. I wonder how Promise got their quoted 150MB/s transfer rate for your card.

That is just the interface rate. You don't get that unless you do
striping with several disks.

Arno
 
Arno Wagner said:
That is just the interface rate.

Which is not a (user)data rate.
You don't get that unless you do striping with several disks.

Which won't be on the same channel unless connected through a port multiplier.
In which case you still don't get 150MB/s, as bus protocol and command
overhead have to be accounted for.
 
I've been mucking with IOMeter and have had some better success. By
increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
Although this is still far from 150MB/s, it's much better than the 40-45MB/s
that I was getting with it set to 1, as well as in my throughput tester.
Looking at the IOMeter source, it appears they use asynchronous writes via
WriteFile - actually having multiple writers for one file. I'm just using
fwrite.
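
From skimming the source, the overlapped approach looks roughly like the
sketch below. This is just my rough reconstruction of the idea, not
IOMeter's actual code; the queue depth and block size are simply the values
I've been testing with, and error handling is stripped out.

#include <windows.h>

#define QUEUE_DEPTH 16                /* outstanding I/Os, like the IOMeter setting */
#define BLOCK_SIZE  (1024 * 1024)     /* 1MB per request */

void async_write_test(const char *path, long long total_bytes)
{
    /* FILE_FLAG_NO_BUFFERING bypasses the Windows cache; it requires
       sector-aligned buffers, which VirtualAlloc's page alignment satisfies. */
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                           FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING, NULL);
    char *buf = VirtualAlloc(NULL, BLOCK_SIZE, MEM_COMMIT, PAGE_READWRITE);
    OVERLAPPED ov[QUEUE_DEPTH] = {0};
    long long offset = 0;
    int i;

    /* Fill the queue: start QUEUE_DEPTH writes before waiting on any of them. */
    for (i = 0; i < QUEUE_DEPTH; i++) {
        ov[i].hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);
        ov[i].Offset     = (DWORD)(offset & 0xFFFFFFFF);
        ov[i].OffsetHigh = (DWORD)(offset >> 32);
        WriteFile(h, buf, BLOCK_SIZE, NULL, &ov[i]);   /* returns immediately */
        offset += BLOCK_SIZE;
    }

    /* As each request completes, immediately issue the next one in its slot. */
    while (offset < total_bytes) {
        for (i = 0; i < QUEUE_DEPTH && offset < total_bytes; i++) {
            DWORD done;
            GetOverlappedResult(h, &ov[i], &done, TRUE);   /* wait for slot i */
            ov[i].Offset     = (DWORD)(offset & 0xFFFFFFFF);
            ov[i].OffsetHigh = (DWORD)(offset >> 32);
            WriteFile(h, buf, BLOCK_SIZE, NULL, &ov[i]);
            offset += BLOCK_SIZE;
        }
    }

    /* Drain whatever is still in flight. */
    for (i = 0; i < QUEUE_DEPTH; i++) {
        DWORD done;
        GetOverlappedResult(h, &ov[i], &done, TRUE);
        CloseHandle(ov[i].hEvent);
    }
    CloseHandle(h);
}

The point is that up to 16 writes are in flight at once, so the controller
always has work queued, instead of waiting for each fwrite to return before
the next block is issued.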

I also tried reading with 16 outstanding I/Os and I'm getting huge
throughputs - 256MB/s. I thought the max for this card was around 150MB/s.
Again, my setup is 6 pairs of RAID1.

-Lars
 

Card spec says:
"Data Transfer Rate - Up to 1.5 Gbits/sec"
That is per single SATA port, not for the whole card.
The card is "64-bit/66 MHz PCI".

So what is your write performance with concurrent writes to all
6 RAID1 pairs?

You may compare that with:
http://www.pcpro.co.uk/reviews/61847/adaptec-serial-ata-raid-21610sa.html

and read some interesting info:
http://www.tweakers.net/reviews/557/1
 
Peter said:
Card spec says:
"Data Transfer Rate - Up to 1.5 Gbits/sec"
That is per single SATA port, not for the whole card.
The card is "64-bit/66 MHz PCI".

So what is your write performance with concurrent writes to all 6 RAID1 pairs?

Would "I'm able to get 78MB/s" ring a bell?
 
nospam said:
I've been mucking with IOMeter and have had some better success. By
increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
Although this is still far from 150MB/s,

You won't ever see 150MB/s for a single drive; the 150MB/s is the channel
clock rate. Did you bother to read my and Arnie's posts?
The 150MB/s only comes into play when more than one drive (or in your case
all your drives) are connected to a single SATA port, by means of a port multiplier.
In that case you are limited to 150MB/s, minus overhead.
it's much better than the 40-45MB/s that I was getting with it set to 1,
as well as in my throughput tester.
Looking at the IOMeter source, it appears they use asynchronous writes via
WriteFile - actually having multiple writers for one file. I'm just using fwrite.

I also tried reading with 16 outstanding I/Os and I'm getting huge throughputs -

That's ~42MB/s per drive.
Still not very fast for a modern-day drive, when you'd expect something more in the 50MB/s range.
I thought the max for this card was around 150MB/s.

What exactly did you not understand in our posts?
Are you even listening or are you just the compulsive-habitual top
poster that doesn't actually read but paints pictures in his head
and starts rambling when the pictures don't make sense to him?

There is only 1 drive per channel, and a drive is by definition always slower
than the channel it is connected to, as controllers are designed to last
a few years and not be outdated as soon as a newer, faster drive comes out.

So the 1.5Gb/s / 150MB/s rates won't figure anywhere in your calculations.
The STRs of the drives do - the aggregated STR of 6 drives, in your case.
The bottleneck - if any - will be your system bus, not the channel(s).
Again, my setup is 6 pairs of RAID1.

Yes, we got that.
 
Write performance to all 6 RAID1 pairs:
Using IOMeter - 6 workers, each worker with its own disk pair, 16
outstanding I/Os, 100% writes, 100% sequential, 1MB transfer size
= 78MB/s total

Same setup, but 100% reads, was 256MB/s.

-Lars
 

What disks do you use?

So what is your goal: trying to figure out why write performance is not
higher than 78MB/s, or getting it higher than 78MB/s?

You may try the things I have suggested before:
"I'm assuming the motherboard is an SE7501BR2?
Your setup seems fine. You may move the RAID card to PCI slot 2,
or swap it with the fibre card in slot 3.
Is your BIOS revision "Build P20-0079"?
If not, you may need to upgrade.
Try to review your custom benchmark. Did you try to run it
against the SCSI drive(s)? They are still on the PCI-X/100 bus.
Or try to benchmark read operations for comparison.
You may also want to verify/tweak the BIOS settings.
What performance do you get from the fibre channel?"

If you need better write performance, you may also try a
RAID10 config.
 
Folkert Rienstra said:
You won't ever see 150MB/s for a single drive; the 150MB/s is the channel
clock rate. Did you bother to read my and Arnie's posts?
The 150MB/s only comes into play when more than one drive (or in your case
all your drives) are connected to a single SATA port, by means of a port multiplier.
In that case you are limited to 150MB/s, minus overhead.

That's ~42MB/s per drive.
Still not very fast for a modern-day drive, when you'd expect something more
in the 50MB/s range.

Not exactly: ~42MB/s per RAID1 pair. With a RAID1 pair you should get close
to double the single-drive read speed.
Therefore, that's around 21MB/s per drive.
What exactly did you not understand in our posts?
Are you even listening or are you just the compulsive-habitual top
poster that doesn't actually read but paints pictures in his head
and starts rambling when the pictures don't make sense to him?
No. I didn't understand that it was per disk port. So now my throughputs are
looking comparatively worse.
There is only 1 drive per channel, and a drive is by definition always slower
than the channel it is connected to, as controllers are designed to last
a few years and not be outdated as soon as a newer, faster drive comes out.

So the 1.5Gb/s / 150MB/s rates won't figure anywhere in your calculations.
The STRs of the drives do - the aggregated STR of 6 drives, in your case.
The bottleneck - if any - will be your system bus, not the channel(s).
With a fibre channel card in the same slot that the RAID card was in, I can
get 190MB/s read and write speeds, so the system bus is not the bottleneck.
Yes, we got that.
Peter asked again, so I responded.
 
Peter said:
What disks do you use?
Hitachi Deskstar 400GB drives.

So what is your goal: trying to figure out why write performance is not
higher than 78MB/s, or getting it higher than 78MB/s?
My goal is to get a total of 6*30 = 180MB/s write speed.
You may try the things I have suggested before:
"I'm assuming the motherboard is an SE7501BR2?
Your setup seems fine. You may move the RAID card to PCI slot 2,
or swap it with the fibre card in slot 3.
Is your BIOS revision "Build P20-0079"?
If not, you may need to upgrade.
Try to review your custom benchmark. Did you try to run it
against the SCSI drive(s)? They are still on the PCI-X/100 bus.
Or try to benchmark read operations for comparison.
You may also want to verify/tweak the BIOS settings.
Tried all that stuff. I'm thinking this SATA controller card is a POS.
What performance do you get from the fibre channel?" See other post.

If you need better write performance, you may also try a
RAID10 config.
Also tried that. See previous post.
 
Tried all that stuff. I'm thinking this SATA controller card is a POS.

Then you might be right. I didn't see any reference to good write
performance using this card.
See other post.

Which post?
 
nospam said:
Not exactly: ~42MB/s per RAID1 pair.
Doubtful.

With a RAID1 pair you should get close to double the single-drive read speed.

Nope. That is RAID0.
Only if the RAID driver alternates consecutive IOs between the
drive pair can you get some RAID0 type performance on big files.
Whether your RAID controller does that remains to be seen.
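
Conceptually it's just round-robin dispatch over the two mirror members,
something like this toy sketch (purely a hypothetical illustration of the
idea, not the Adaptec firmware's actual logic):

#include <stdint.h>
#include <stdio.h>

#define NUM_MIRRORS 2

/* Stand-in for a per-disk read; a real driver would queue a request here. */
static void read_from_disk(int disk, uint64_t lba, uint32_t sectors)
{
    printf("disk %d: read %u sectors at LBA %llu\n",
           disk, (unsigned)sectors, (unsigned long long)lba);
}

/* Alternate consecutive read requests between the two mirror halves. */
static void raid1_read(uint64_t lba, uint32_t sectors)
{
    static unsigned io_index = 0;
    read_from_disk(io_index++ % NUM_MIRRORS, lba, sectors);
}

int main(void)
{
    /* Two back-to-back 1MB reads (2048 sectors of 512 bytes) land on
       different disks, so they can proceed in parallel. */
    raid1_read(0,    2048);
    raid1_read(2048, 2048);
    return 0;
}
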
Therefore, that's around 21MB/s per drive.


No. I didn't understand that it was per disk port.

That is downright silly - 120MB/s for a multichannel RAID card?
You're joking.

So how come then you expected 180MB/s from a '1.5Gb/s' (~120MB/s data) card?
So now my throughputs are looking comparatively worse.
With a fibre channel card in the same slot that the RAID card was in, I can
get 190MB/s read and write speeds, so the system bus is not the bottleneck.

That remains to be seen with 12 drives @ ~50MB/s each.
(Unless it does the RAID1 internally on the card.)
 