RAID-5: StripeSize and allocation unit size


Laery

Hi,

I have a Promise SX4 controller with 128MB ECC RAM and 4 Maxtor 80GB
drives in RAID-5, on a Gigabyte 7DPXDW-P with 512MB ECC RAM and one
Athlon MP 2600+.

I've been testing different settings but always get low read and write
performance. (I know that writing is supposed to be slow.)

Which is the best combination of stripe size and NTFS cluster size if
90% of the files are above 80MB and read performance is important?

Currently I have a stripe size of 16KB and an NTFS cluster size of
32KB in one big partition.

Is this the correct setting?

Regards
TheLaery
 
Hi,

I have a Promise SX4 controller with 128MB ECC RAM and 4 Maxtor 80GB
drives in RAID-5, on a Gigabyte 7DPXDW-P with 512MB ECC RAM and one
Athlon MP 2600+.

I have the same controller, also with 128MB, with 4 Maxtor DiamondMax
Plus 9 120GB drives in RAID-5.
I've been testing different settings but always get low read and write
performance. (I know that writing is supposed to be slow.)

What kind of performance do you get and in what kind of tests? What do
you call low?

I have done some tests copying files of about 700MB in size and the
results weren't bad at all.
Which is the best combination of stripe size and NTFS cluster size if
90% of the files are above 80MB and read performance is important?

Usually for a RAID array you should choose the maximum possible
stripe size. In this case the maximum isn't all that big, so certainly
choose the maximum for this card.
(Anandtech once did some testing with stripe sizes that showed you
want to use large ones.)
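To see why stripe size matters for large sequential files, here is a small sketch of the RAID-5 geometry being discussed. The 4-drive count and the 16KB/64KB stripe sizes are taken from the thread; the arithmetic (one chunk per stripe goes to parity, so N-1 chunks carry data) is standard RAID-5, not a benchmark of this controller.

```python
# Sketch: how stripe size maps onto a RAID-5 full stripe (4-drive array as
# in this thread). With N drives, each full stripe holds (N - 1) data chunks
# plus one parity chunk, so a sequential transfer moves
# stripe_size_kb * (N - 1) KB of data per full stripe.

def full_stripe_data_kb(stripe_size_kb: int, num_drives: int) -> int:
    """Data capacity of one RAID-5 full stripe (parity chunk excluded)."""
    return stripe_size_kb * (num_drives - 1)

def stripes_touched(file_size_kb: int, stripe_size_kb: int, num_drives: int) -> int:
    """How many full stripes a sequential file of this size spans."""
    data_kb = full_stripe_data_kb(stripe_size_kb, num_drives)
    return -(-file_size_kb // data_kb)  # ceiling division

if __name__ == "__main__":
    drives = 4
    for stripe_kb in (16, 64):
        data = full_stripe_data_kb(stripe_kb, drives)
        spans = stripes_touched(80 * 1024, stripe_kb, drives)  # an 80MB file
        print(f"{stripe_kb}KB stripe: {data}KB data per full stripe, "
              f"80MB file spans {spans} full stripes")
```

With 16KB stripes an 80MB file is chopped into far more stripes than with 64KB ones, which is the intuition behind picking the largest stripe size the card supports for big-file workloads.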

I haven't played around with NTFS cluster size. I mainly use the array
for reliable storage, so I didn't feel the need to experiment with that.
Currently I have a stripe size of 16KB and an NTFS cluster size of
32KB in one big partition.

Is this the correct setting?

Only tests will tell :-)

If only read performance is really important, you could change the
memory setting so that the controller doesn't use its memory for writing.

Regards,
Marc

(If you want to mail me remove the "geen.spam." from the address)
 
Hi, thanks for responding.

Marc de Vries said:
I have the same controller, also with 128MB, with 4 Maxtor DiamondMax
Plus 9 120GB drives in RAID-5.


What kind of performance do you get and in what kind of tests? What do
you call low?
HDTach: 44 max, 13 min, 17 average (MB/s)
I have done some tests copying files of about 700MB in size and the
results weren't bad at all.


Usually for a RAID array you should choose the maximum possible
stripe size. In this case the maximum isn't all that big, so certainly
choose the maximum for this card.
(Anandtech once did some testing with stripe sizes that showed you
want to use large ones.)

I will look into it.
I haven't played around with NTFS cluster size. I mainly use the array
for reliable storage, so I didn't feel the need to experiment with that.


Only tests will tell :-)

If only read performance is really important, you could change the
memory setting so that the controller doesn't use its memory for writing.

The memory is already in write-through mode.
 
Hi, thanks for responding.


HDTach: 44 max, 13 min, 17 average (MB/s)

HDTach doesn't seem to be a very reliable program for testing
performance. Xbitlabs doesn't use it anymore because it is not
reliable, especially so for RAID arrays.

On the forum I see lots of people complaining about low performance
results on RAID arrays.

But I'm willing to test it on my machine. In read-only mode I can just
run it on my existing array without losing data, right?

Marc
 
HDTach doesn't seem to be a very reliable program for testing
performance. Xbitlabs doesn't use it anymore because it is not
reliable, especially so for RAID arrays.

On the forum I see lots of people complaining about low performance
results on RAID arrays.

But I'm willing to test it on my machine. In read-only mode I can just
run it on my existing array without losing data, right?

I just tested it yesterday on my desktop.

Well, you can forget about the HDTach results because they are bogus.

It reports around 20MB/s sequential read speed, which I can show is
not correct.

I use the RAID-5 array on the Promise controller as a data disk. I
also have a single Hitachi 60GB 7K250 as boot disk.

I copied a CD image file from the Hitachi disk to the array and
another image file from the array back to the boot disk. I monitored
those actions with Performance Monitor to check the read bytes/s and
write bytes/s.

When reading from the array I am writing to the boot disk, so in that
situation I am limited by the write speed of the boot disk: around
20-23MB/s. But HDTach shouldn't have that problem, so it should show
a much higher number.

When writing to the array I am reading from the boot disk, so it
shouldn't limit me as much as in the above situation. Then I get
around 30-35MB/s.
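The copy test described above can also be done with a simple script that times the copy itself instead of watching Performance Monitor. This is only a rough end-to-end sketch, limited by the slower of the two disks just like the manual test; the paths in the usage comment are placeholders, not the actual files from this thread.

```python
# Sketch: estimate copy throughput by timing a file copy, as a rough
# alternative to monitoring read/write bytes/s in Performance Monitor.
# The result is an end-to-end figure limited by the slower disk.

import os
import shutil
import time

def copy_throughput_mb_s(src: str, dst: str) -> float:
    """Copy src to dst and return the average throughput in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Example with hypothetical paths (boot disk -> array):
# print(copy_throughput_mb_s(r"C:\images\cd.iso", r"E:\cd.iso"))
```

Run it twice, once copying from the boot disk to the array (measures array write, bounded by boot-disk read) and once the other way (measures array read, bounded by boot-disk write), mirroring the two directions in the test above.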

Unless someone can explain to me why reading from a RAID-5 would be
slower than writing, I think this proves that the values from HDTach
are incorrect. :-)

Regards,
Marc
 