3dO
Ok, this is a long one, but I'm really clueless on this one.
Two weeks ago I set up a "fileserver". As I'm a visual nut and not a
server nut, I went for a simple XP Pro machine with a RAID controller:
an Asus P5K-VM with an Intel C2D E4300, 1 GB RAM, a Promise FastTrak
TX4310 PCI RAID controller, 4 Samsung SpinPoint T166 500 GB drives
(RAID 5), and 1 Samsung SpinPoint T166 160 GB as the system disk.
Now, when I boot up the system the disks make a loud click right after
spin-up. Not having any experience with Samsungs, I don't know if this
is normal behaviour for these kinds of disks; it sounds scary when
you've always worked with Seagates. (They're very silent after this.)
The RAID 5 setup on the Promise card - which is PCI, with a default and
unchangeable stripe size of 16 KB - is the real problem.
When I work with a 4.01 GB video DVD image I get the following transfer rates:
local WRITE: copy from system disk to RAID 5 array: 28.9 MB/s sustained
local READ:  copy from RAID 5 array to system disk: 49.4 MB/s sustained
net WRITE:   copy from network to RAID 5 array: 14.7 MB/s
net READ:    copy from RAID 5 array to network: 24.5 MB/s
Local speeds are somewhat acceptable, but I mainly use the machine over
gigabit ethernet, and those network speeds just don't meet my
expectations. They're not sustained either, but come in bursts - more
like /\_/\_/\_/\ on the graph - with peaks up to 53 MB/s. It looks
like it receives network traffic, then stops receiving while it writes
it out to the array...
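For reference, the numbers above came from watching file copies by hand; a rough timed-copy sketch like this (Python, the paths are just placeholder examples, not my real ones) gives a repeatable number instead of eyeballing the transfer graph:

```python
import os
import shutil
import time

def timed_copy(src: str, dst: str, chunk: int = 4 * 1024 * 1024) -> float:
    """Copy src to dst in 4 MB chunks and return the average rate in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=chunk)
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed

if __name__ == "__main__":
    # Placeholder paths: system disk -> RAID 5 array
    rate = timed_copy(r"C:\images\video_dvd.iso", r"E:\images\video_dvd.iso")
    print(f"{rate:.1f} MB/s")
```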
Now, I already switched my RAID controller to another PCI slot because I
noticed it shared an IRQ with the onboard gigabit network controller I
use to connect it to my network, but that didn't help.
I tried all sorts of NTFS cluster sizes on the array partition, to no
avail.
I tried RAID 10... no difference (write was slightly better, but not
great).
So I roamed the net and found... nothing.
I really don't know what to do. I have the feeling there is some kind of
issue with sharing PCI resources between the PCI RAID controller and the
onboard network card, so I went back to the shop, and they were willing
to take the Promise RAID controller back because I was thinking about an
Areca 1210 PCI Express RAID controller and am not very pleased with the
Promise card. But... I'm not completely sure it isn't another problem
that's causing these horrible transfer rates.
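To sanity-check the shared-bus theory, here's some back-of-the-envelope math (all assumptions on my part: a classic 32-bit/33 MHz PCI bus with ~133 MB/s theoretical / ~100 MB/s practical bandwidth, and a host-based RAID card where data plus parity cross the bus on a write):

```python
# Back-of-the-envelope: traffic on a shared 32-bit/33 MHz PCI bus during a
# network write to a 4-disk RAID 5 on a host-based (software XOR) controller.
# All figures are illustrative assumptions, not measurements.

PCI_THEORETICAL_MBPS = 133.0   # 32 bits * 33 MHz
PCI_PRACTICAL_MBPS = 100.0     # rough real-world ceiling after arbitration

def raid5_write_bus_traffic(payload_mbps: float, disks: int = 4) -> float:
    """Bus traffic for a RAID 5 write: the payload crosses the bus once from
    the NIC into RAM, then data + parity (disks/(disks-1) of the payload)
    cross again on the way to the controller."""
    nic_in = payload_mbps
    raid_out = payload_mbps * disks / (disks - 1)
    return nic_in + raid_out

traffic = raid5_write_bus_traffic(14.7)  # my observed net write speed
print(f"bus traffic at 14.7 MB/s payload: {traffic:.1f} MB/s")
print(f"headroom vs practical PCI limit: {PCI_PRACTICAL_MBPS - traffic:.1f} MB/s")
```

So even with everything crossing the bus roughly twice, ~34 MB/s is nowhere near the ~100 MB/s practical ceiling, which makes me suspect arbitration/latency (or the card itself) rather than raw bus bandwidth - but that's just my guess.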
Does anybody have any idea? It would really be appreciated!
thx
3dO