How I built a 2.8TB RAID storage array

  • Thread starter: Yeechang Lee
Yes. If you look at the CPUs on RAID cards, they're a lot less
powerful than the host CPU (even on the most expensive $1000+ cards).
However, that assumes there are CPU cycles available on the host
(i.e., it is *not* CPU bound, as the previous poster mentioned).

If your file server is CPU bound you're doing something seriously wrong.
 
Peter said:
If your file server is CPU bound you're doing something seriously wrong.

Who said this is limited to discussions of file servers? Database
servers (or other specialized application servers) may well be CPU bound
and directly connected to a large RAID.
 
Jon Forrest said:
That's because, other than performing the XOR operations
for writes, they don't have to do very much.


Right. Even when a server is busy, satisfying read requests and non-RAID-
5 requests shouldn't add much to the load. Most of the work is done by
the intelligence built into the ATA or SCSI electronics on the disk
itself. The latency imposed by the movement of the arms and platters
dwarfs the latency caused by a busy CPU.
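
For reference, the RAID-5 parity work mentioned above is just a
block-wise XOR. Here is a minimal sketch in C of the small-write parity
update (the function name and buffer layout are my own for illustration,
not any particular card's firmware):

    #include <stddef.h>
    #include <stdint.h>

    /* RAID-5 small write: read the old data and old parity, then compute
     *     new_parity = old_parity ^ old_data ^ new_data
     * one word at a time across the stripe unit. */
    static void raid5_update_parity(uint64_t *parity,
                                    const uint64_t *old_data,
                                    const uint64_t *new_data,
                                    size_t words)
    {
        for (size_t i = 0; i < words; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

On a modern host CPU that loop runs at roughly memory bandwidth, so
updating parity for a typical stripe unit takes microseconds, while a
disk seek takes milliseconds; that's why the XOR work barely registers.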

For a while I was a big fan of those cheap IDE pseudo-RAID 0 and 1
controllers, but I now realize they don't provide much benefit over
simply adding more IDE channels, since the controllers themselves do so
little. That's one reason why you can convert one of those
Promise IDE boards into a RAID controller by simply adding a resistor.

That was only possible with the original Ultra 66 and Ultra 100 boards.
On later boards Promise used different PCI IDs for the Ultra and
FastTrak models.
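
As an aside on how that detection works: a driver (or the card BIOS)
claims a board by matching its PCI vendor/device ID pair, which is what
the resistor mod changed. A rough sketch of such a match table (0x105a
is Promise's real PCI vendor ID; the device IDs and driver names below
are illustrative placeholders, not the actual Promise part numbers):

    #include <stddef.h>
    #include <stdint.h>

    #define PCI_VENDOR_ID_PROMISE 0x105a  /* Promise Technology */

    struct pci_id {
        uint16_t vendor, device;
        const char *driver;
    };

    /* Identical silicon reporting a different device ID binds to a
     * different driver: plain Ultra ATA vs. FastTrak RAID. */
    static const struct pci_id id_table[] = {
        { PCI_VENDOR_ID_PROMISE, 0x4d30, "ultra"    },
        { PCI_VENDOR_ID_PROMISE, 0x4d33, "fasttrak" },
    };

    static const char *match_driver(uint16_t vendor, uint16_t device)
    {
        for (size_t i = 0; i < sizeof id_table / sizeof id_table[0]; i++)
            if (id_table[i].vendor == vendor && id_table[i].device == device)
                return id_table[i].driver;
        return NULL;  /* no driver claims this device */
    }

So the resistor didn't add any RAID hardware at all; it just made the
same chip report the ID that the FastTrak software looks for.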
 