Looking for a solid high-performance SATA RAID card

  • Thread starter: dg

dg

I am considering a RAID setup using 3 drives: 2 striped and 1 parity drive.
What are the most efficient, highest-performing, low-overhead, reliable
cards out there? Any tips are greatly appreciated.

Thanks,
--Dan
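
For anyone unfamiliar with what the parity drive buys you here: the parity
block is just the XOR of the corresponding data blocks, so any one lost
drive can be rebuilt from the survivors. A minimal Python sketch of the
idea (hypothetical data, no real controller involved):

def xor_blocks(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"block on drive 0"     # data block on the first striped drive
d1 = b"block on drive 1"     # data block on the second striped drive
parity = xor_blocks(d0, d1)  # what the parity drive stores

# Simulate losing drive 1 and rebuilding it from drive 0 plus parity:
recovered = xor_blocks(d0, parity)
assert recovered == d1

RAID-4 keeps all parity on one dedicated drive; RAID-5 computes the same
parity but rotates it across all the drives, which avoids making the
parity drive a write bottleneck.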
 
Previously dg said:
I am considering a RAID setup using 3 drives: 2 striped and 1 parity drive.

Why would you want RAID-4? Why not use RAID-5?
What are the most efficient, highest-performing, low-overhead, reliable
cards out there? Any tips are greatly appreciated.

They do not exist.
Speed, efficiency (=price to value ratio), reliability, pick any two.

With more information you could get some recommendations:

- What are you willing to pay? SCSI or ATA/SATA prices?
- How much space do you want?

Arno
 
Arno said:
They do not exist.
Speed, efficiency (=price to value ratio), reliability, pick any two.

With more information you could get some recommendations:

- What are you willing to pay? SCSI or ATA/SATA prices?
- How much space do you want?

I'd be interested to find out the best 2- and/or 4-drive SATA RAID card
that can take the load off the CPU. I know SCSI is the best for that,
but there must be a reasonably good SATA hardware RAID solution.

Like with animation: the CPU is incrementally writing images to the array
as it goes all out rendering them; it would be a shame to
have a bad array setup use even a small percentage of the processing power.
 
Why would you want RAID-4? Why not use RAID-5?

He probably means RAID5.
They do not exist.
Bullshit.

Speed, efficiency (=price to value ratio), reliability, pick any two.

You're the one that's mindlessly turned his 'efficient' into price/value.
With more information you could get some recommendations:
- What are you willing to pay?

Obviously depends on what is available, and that is what he is asking.
 
I'd be interested to find out the best 2- and/or 4-drive SATA RAID card
that can take the load off the CPU. I know SCSI is the best for that,
but there must be a reasonably good SATA hardware RAID solution.

That's the problem: everybody wants cheap, and you get it at a price, major
CPU utilization. This is the same argument that revolves around WinModems
vs. hardware modems. SCSI costs slightly more for a reason: great stand-alone
performance without taxing the main system. Sure, SATA is great for
novelty systems, but don't expect them to hold up under the workload a SCSI
RAID system can deliver.

Like with animation: the CPU is incrementally writing images to the array
as it goes all out rendering them; it would be a shame to
have a bad array setup use even a small percentage of the processing power.

Agreed. You know the answer. Go SCSI, young man; go SCSI and never look
back.



Rita
 
In most situations, you are better off with a 4-drive RAID 1+0 over a 3-drive
RAID 5. It costs about the same, and performs better on the desktop.
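
The capacity math behind "costs about the same", as a small sketch (the
74GB drive size is just an assumption to make the numbers concrete):

drive_gb = 74                      # assumed per-drive capacity
raid10_usable = 4 * drive_gb // 2  # 4-drive RAID 1+0: half goes to mirrors
raid5_usable = (3 - 1) * drive_gb  # 3-drive RAID 5: one drive's worth of parity
print(raid10_usable, raid5_usable) # 148 148 -- same usable space either way

So you buy one extra drive but get the same usable capacity, and in
exchange skip the RAID 5 parity overhead on writes.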
 
Yep, the more I think about it the more I like the 4-drive setup, RAID 0 and
1. An even cheaper setup, which would work in my situation, is to run a 2-
or 3-drive RAID 0 set and then do a scheduled backup to a cheaper, bigger
drive. Like striping 3 10,000rpm 74GB drives and backing up to a 250GB SATA
drive. Performance without too much risk is my goal.

--Dan
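
For the scheduled-backup half of that plan, a nightly rsync job is all it
takes; a minimal sketch, assuming a Linux host and hypothetical mount
points /mnt/stripe (the RAID 0 set) and /mnt/backup (the big single drive):

import subprocess

def backup_stripe_set() -> None:
    # -a preserves permissions and timestamps; --delete mirrors removals
    # so the backup tracks the stripe set. Schedule nightly from cron, e.g.:
    #   0 3 * * * /usr/bin/python3 /usr/local/bin/backup_stripe.py
    subprocess.run(
        ["rsync", "-a", "--delete", "/mnt/stripe/", "/mnt/backup/stripe/"],
        check=True,
    )

if __name__ == "__main__":
    backup_stripe_set()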
 
That's the problem: everybody wants cheap, and you get it at a price,
major CPU utilization. This is the same argument that revolves around
WinModems vs. hardware modems. SCSI costs slightly more

Mindless bigot lie. It costs a hell of a lot more in fact.
for a reason: great stand-alone performance without
taxing the main system. Sure, SATA is great for
novelty systems, but don't expect them to hold up
under the workload a SCSI RAID system can deliver.

More utterly mindless bigotry.
Agreed. You know the answer. Go SCSI,
young man; go SCSI and never look back.

More utterly mindless bigotry.
 
Paul said:
I'd be interested to find out the best 2- and/or 4-drive SATA RAID card
that can take the load off the CPU. I know SCSI is the best for that,
but there must be a reasonably good SATA hardware RAID solution.

Like with animation: the CPU is incrementally writing images to the array
as it goes all out rendering them; it would be a shame to
have a bad array setup use even a small percentage of the processing
power.

Take a look at 3Ware and LSI Logic (not the low-end LSI boards, the high end
of their SATA product line)--the LSI Logic products with onboard processors
are based on the same technology that is used in IBM's enterprise RAID
controllers--LSI bought that whole operation from IBM a while back. Intel
also sells RAID controllers based on the LSI Logic technology.
 
J. Clarke said:
Take a look at 3Ware and LSI Logic (not the low-end LSI boards, the high end
of their SATA product line)--the LSI Logic products with onboard processors
are based on the same technology that is used in IBM's enterprise RAID
controllers--LSI bought that whole operation from IBM a while back. Intel
also sells RAID controllers based on the LSI Logic technology.


Thanks for all the replies :)

I just thought of something... probably a newbie question: if the
architecture is only PCI, is there much point in striping 4 drives,
even if they are only 7,200rpm SATA? What should be the practical
limit for 7,200rpm and 10,000rpm striped drives before a PCI bus is
saturated by a SATA RAID card solution?

cheers
 
Previously Paul Gunson said:
Thanks for all the replies :)
I just thought of something... probably a newbie question: if the
architecture is only PCI, is there much point in striping 4 drives,
even if they are only 7,200rpm SATA?

Not really. Standard PCI has a _theoretical_ maximum speed of
135MB/s, while modern drives reach speeds of 50MB/s. However, it
depends. If you do software-striping on the on-board IDE interfaces
you might actually get more bandwidth than PCI has to offer.
Some chipsets do not route the IDE data over the PCI bus.
What should be the practical
limit for 7,200rpm and 10,000rpm striped drives before a PCI bus is
saturated by a SATA RAID card solution?

2 drives, I would say. You could go PCI-X. Promise has an
8-port SATA card for PCI-X. PCI-X has 600MB/s (theoretical)
bandwidth, if I remember correctly.

Arno
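
The arithmetic behind that "2 drives" estimate is easy to redo for other
configurations; a back-of-the-envelope sketch (the per-drive rates are
rough assumptions for drives of this era, not measurements):

PCI_32_33_MBS = 133  # 32-bit/33 MHz PCI, theoretical MB/s

for label, drive_mbs in [("7,200rpm SATA", 50), ("10,000rpm drive", 70)]:
    n = PCI_32_33_MBS // drive_mbs  # whole drives before the bus is the ceiling
    print(f"{label}: about {n} drives fill a 133 MB/s PCI bus")

In practice the usable share of the bus is lower still, so two fast drives
on plain PCI is already pushing it.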
 
Paul said:
Thanks for all the replies :)

I just thought of something... probably a newbie question: if the
architecture is only PCI, is there much point in striping 4 drives,
even if they are only 7,200rpm SATA? What should be the practical
limit for 7,200rpm and 10,000rpm striped drives before a PCI bus is
saturated by a SATA RAID card solution?

That's really application dependent. The "sustained transfer rate" listed
for hard disks is for sequential transfers, not for random access. If your
application does sequential transfers and the RAID is dedicated to that
dataset then the PCI bus could become a bottleneck with 2 or 3 drives.
Usually the application neither does sequential transfers nor is the RAID
dedicated to a single dataset, so the drive performance goes way down.

The LSI and 3Ware RAID controllers support 64-bit 66 MHz PCI, which has a
performance limit of around 600 MB/sec--from a practical viewpoint 32/33
PCI (the ordinary kind) can move about 50 MB/sec from one device to
another--that's the max that is usually achieved with network transfers
even when the networking technology used has far more bandwidth than that.
64/66 should be good for about 200. The trouble is finding a motherboard
that supports that--you're into the realm of purpose-made server and
workstation boards and the price goes way up.
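
Sanity-checking those bus figures takes one multiplication: width in bytes
times clock in MHz gives theoretical MB/s. A sketch (note that 64/66 PCI
lands nearer 533 MB/s than 600, which is what the corrections further down
the thread are about):

def pci_theoretical_mbs(width_bits: int, clock_mhz: float) -> float:
    # Theoretical bus bandwidth: bytes per clock times clocks per second.
    return width_bits / 8 * clock_mhz

print(pci_theoretical_mbs(32, 33.33))  # ~133 MB/s, ordinary 32/33 PCI
print(pci_theoretical_mbs(64, 66.67))  # ~533 MB/s, 64-bit/66 MHz PCI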
 
Arno Wagner said:
Not really. Standard PCI has a _theoretical_ maximum speed of
135MB/s, while modern drives reach speeds of 50MB/s.

60-70 MB/s.
However, it depends.
If you do software-striping on the on-board IDE interfaces
you might actually get more bandwidth than PCI has to offer.
Some chipsets do not route the IDE data over the PCI bus.

Better yet, do firmware RAID 0 on the chipset's internal SATA.
2 drives, I would say. You could go PCI-X. Promise has an 8-port
SATA card for PCI-X. PCI-X has 600MB/s (theoretical) bandwidth,
if I remember correctly.

Well, then you'd better have looked it up: that was PCI-X 66 (v1.0).
Even PCI (the wide 66-MHz variety) can manage that.
And it's 533MB/s, not 600MB/s, disregarding overhead.

PCI-X has several speeds and is an ongoing development, now at v2.0
with PCI-X 533 as the maximum; PCI-X 1066, which will be PCI-X v3.0,
is now being finalized.

"PCI-X 2133 is currently being evaluated by the PCI-SIG for future development".
 
J. Clarke said:
That's really application dependent. The "sustained transfer rate" listed
for hard disks is for sequential transfers, not for random access. If your
application does sequential transfers and the RAID is dedicated to that
dataset then the PCI bus could become a bottleneck with 2 or 3 drives.
Usually the application neither does sequential transfers nor is the RAID
dedicated to a single dataset, so the drive performance goes way down.

The LSI and 3Ware RAID controllers support 64-bit 66 MHz PCI,
which has a performance limit of around 600 MB/sec--

Huh? Are you just echoing that 'I can't help myself' Wagner character?
64 bits x 66MHz = 600MB/s?
 
Previously Folkert Rienstra said:
J. Clarke said:
Paul Gunson wrote: [...]
That's really application dependent. The "sustained transfer rate" listed
for hard disks is for sequential transfers, not for random access. If your
application does sequential transfers and the RAID is dedicated to that
dataset then the PCI bus could become a bottleneck with 2 or 3 drives.
Usually the application neither does sequential transfers nor is the RAID
dedicated to a single dataset, so the drive performance goes way down.

The LSI and 3Ware RAID controllers support 64-bit 66 MHz PCI,
which has a performance limit of around 600 MB/sec--
Huh? Are you just echoing that 'I can't help myself' Wagner character?
64 bits x 66MHz = 600MB/s?

Oooh, somebody here does not like people with a clue...

Arno
 
Arno said:
Previously Folkert Rienstra said:
J. Clarke said:
Paul Gunson wrote: [...]
That's really application dependent. The "sustained transfer rate" listed
for hard disks is for sequential transfers, not for random access. If your
application does sequential transfers and the RAID is dedicated to that
dataset then the PCI bus could become a bottleneck with 2 or 3 drives.
Usually the application neither does sequential transfers nor is the RAID
dedicated to a single dataset, so the drive performance goes way down.

The LSI and 3Ware RAID controllers support 64-bit 66 MHz PCI,
which has a performance limit of around 600 MB/sec--
Huh? Are you just echoing that 'I can't help myself' Wagner character?
64 bits x 66MHz = 600MB/s?

Oooh, somebody here does not like people with a clue...

If one wants to be picky it's 528. And Folknut has raised pickiness to a
high art, which is why he now resides in my killfile.
 
If one wants to be picky it's 528. And Folknut has raised pickiness to a
high art, which is why he now resides in my killfile.

True. For some people 'exact' numbers are all they have, since
approximations and estimations require actual understanding of
what is important and what is not.

Arno
 
J. Clarke said:
Arno said:
Previously Folkert Rienstra said:
"J. Clarke" (e-mail address removed)> wrote in message Paul Gunson wrote: [...]
That's really application dependent. The "sustained transfer rate" listed
for hard disks is for sequential transfers, not for random access. If
your application does sequential transfers and the RAID is dedicated to
that dataset then the PCI bus could become a bottleneck with 2 or 3
drives. Usually the application neither does sequential transfers nor is
the RAID dedicated to a single dataset, so the drive performance goes
way down.

The LSI and 3Ware RAID controllers support 64-bit 66 MHz PCI,
which has a performance limit of around 600 MB/sec--
Huh? Are you just echoing that 'I can't help myself' Wagner character?
64 bits x 66MHz = 600MB/s?

Oooh, somebody here does not like people with a clue...

Anyone with a *clue* would immediately recognize that 600
is not a multiple of PCI's 133MB/s, oh mighty clueless one.
If one wants to be picky it's 528.

Wrong again; *really* nitpicking is counting with 66.666667 MHz,
which makes it 533,333,336 bytes/s (533MB/s).

And counting with 10% overhead it is more like 480MB/s, considerably
less than that 600MB/s.
And Folknut has raised pickiness to a high art, which is why he now resides
in my killfile.

Which is why you don't learn anything, isn't it, John?
Which made you echo the stupidity of that babblemouth Wagner.
 