LittleRed
OK, I've had it. I cannot get a satisfactory answer from anyone
regarding RAID performance problems, so I am seeking help from the
masses.
Here is the problem:
I have two systems that I have noticed have low disk performance, both
IBM servers.
System 1 - an IBM 7100 server, which is quite a nice machine. I have two
separate RAID-5 arrays on two separate controllers (one a ServeRaid 4M,
the other a 4L). One is a four-disk array of 36GB drives, the other a
three-disk array of 72GB drives, all 10K Ultra160s. When I copy a 70GB
file from one array to the other,
it does so at an average of 5MB/s (yes, five). This figure is derived
from two sources - the disk performance counter in Perfmon and actually
timing the copy using a nice little utility called timethis.
I even attached a third array on another controller in a three-disk
RAID-0 configuration and got the same result.
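For reference, this is essentially what my timed copy amounts to (a minimal
Python sketch of what timethis measures for me; the paths are placeholders
for volumes on the two arrays):

```python
import os
import shutil
import time

def timed_copy(src, dst):
    """Copy src to dst and return (megabytes, seconds, MB/s)."""
    start = time.time()
    shutil.copyfile(src, dst)
    elapsed = time.time() - start
    mb = os.path.getsize(src) / (1024 * 1024)
    return mb, elapsed, mb / max(elapsed, 1e-9)

# Placeholder paths - one volume per array:
# mb, secs, rate = timed_copy(r"E:\testfile.dat", r"F:\testfile.dat")
# print("%.0f MB in %.1f s -> %.1f MB/s" % (mb, secs, rate))
```

The Perfmon disk counters and a wall-clock measurement like this agree on
the 5MB/s figure.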
System 2 - a brand new IBM x345 with a ServeRaid 5i controller and three
144GB 15K Ultra320 disks in a RAID-5 configuration (8k). I have tested
this array with ATTO, Bench32, Nbench and a timed file copy to and from
memory (using a 1GB ramdisk). The figures I get are 30MB/s read and
26MB/s write. Given that these disks are specified as having a sustained
read rate of over 80MB/s, you would expect at least that from any RAID
configuration.
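To rule out filesystem copy overhead, I have also timed plain sequential
reads, along these lines (a rough sketch; the test file has to be much
bigger than RAM, otherwise the OS cache inflates the number):

```python
import time

def sequential_read_rate(path, block_size=1024 * 1024, max_bytes=None):
    """Read path in large sequential blocks and return MB/s."""
    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
            if max_bytes is not None and total >= max_bytes:
                break
    elapsed = time.time() - start
    return (total / (1024 * 1024)) / max(elapsed, 1e-9)

# e.g. rate = sequential_read_rate(r"E:\bigfile.dat")
```

Measured this way the numbers come out the same as the benchmark tools
report.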
I have spent too much time on the phone to IBM support and nobody seems
to know exactly what figure I should get. It seems that IBM have never
actually benchmarked their RAID systems. All they can tell me is that a
RAID system should deliver 'superior' performance. Compared to what? A
floppy disk? One engineer even told me that I should expect lower
performance from a RAID array than from a single disk (err, that's not
what your brochure says).
Now I know there is a whole science to determining which RAID
configuration best suits different requirements, be it a database, a
file server and so on, but if you can't even get reasonable performance
from a simple file copy, what sort of performance are you going to get
on a busy database?
All I am trying to find out is what rate of throughput I should expect
from these systems, because in my opinion, what I am getting is not what
I paid for. It is also costing me a lot of time because I have to wait
over five hours to copy a file that should be done in about half an
hour.
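The arithmetic behind that complaint is simple enough (a quick sketch;
5MB/s is roughly what I measure, and 40MB/s is my assumption for a
reasonable copy rate on arrays like these - the real copy runs even
longer, presumably because the read and the write sides contend):

```python
def copy_hours(file_gb, rate_mb_s):
    """Hours needed to move file_gb gigabytes at rate_mb_s megabytes/second."""
    return (file_gb * 1024) / rate_mb_s / 3600.0

for rate in (5, 40):
    print("70 GB at %d MB/s -> %.1f hours" % (rate, copy_hours(70, rate)))
```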
Does anybody out there know why these figures are so low, or where I can
go to find the answers I am looking for? Perhaps some comparative
figures, or maybe a standard test that I can perform.
Please, any help would be appreciated.