1.485 Gbit/s to and from HDD subsystem

  • Thread starter: Spoon
Spoon said:

This is not typical. There must be a bottleneck somewhere else in
your system.


my strong hunch is that it is typical

there are at least 40 people on these
two n/g's that have two hard drives
on their system, with one of them
being a 150GB Raptor

what is the best GB/min speed that
they've seen when copying 5GB or more
from one drive to the other?

all ears?
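
(fwiw, a rough sketch of how to put a GB/min number on a big
drive-to-drive copy - the two paths are only placeholders, and
the file should be larger than RAM so the page cache doesn't
flatter the result:)

    import os, shutil, time

    SRC = "/mnt/raptor/clip.bin"   # placeholder: a >= 5 GB file on the source drive
    DST = "/mnt/other/clip.bin"    # placeholder: destination on the second drive

    t0 = time.time()
    shutil.copyfile(SRC, DST)      # plain sequential copy, drive to drive
    elapsed = time.time() - t0

    gb = os.path.getsize(SRC) / 1e9
    print("%.1f GB in %.0f s -> %.2f GB/min" % (gb, elapsed, gb / (elapsed / 60.0)))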

out of honest curiosity, what do you
think real world sustained xfer rates
are for a single 150GB Raptor on
a current *fast* x86 machine?

all ears #2. :)

fwiw, i glanced at the ref you provided
on that 3Ware controller

given that it claims several times the
sustained rate that you need, my one
thought is that if you restrict yourself
to one function at a time, it may work

but i'd be leery of it

nothing like putting together an expensive
machine (with an expensive raid controller
and 12 HDDs) and then finding out that it
doesn't do the job!

i also wouldn't rule out a server mobo that
has top end PCI-X slots coz odds are that
there are some very capable PCI-X raid
cards out there

bill
 
Spoon said:
Could you provide the links to these other benchmarks?
I've come across this disheartening benchmark:
Maximum Performance for Linux Kernel 2.6
http://www.3ware.com/linuxbenchmarks.htm
cf. Test #8: Bonnie++ tuned w/xfs filesystem
They reach 410 MB/s read and (only) 200 MB/s write.

Holy mother of pearls! Only 200 MB/s with 12 drives?
Did I miss something?


The Linux 2.6 kernels around 2.6.5-2.6.8 have some really serious I/O
bottlenecks; not crossing the 410 MB/sec read mark may very well be the
kernel. We were seeing a hard speed limit around 400-500MB/sec (depending
on the hardware and the kernel configuration) no matter how many spindles we
threw at it (starting with 24 and going to 60.)

I don't know if that's improved in more recent kernels.

The 200 MB/sec write limit is more mystifying, although that may just be the
limit of the drives.
 
willbill said:
nice to know. :)

Heh :-)
is that SATA-1 or SATA-2?

Neither :-)

http://www.sata-io.org/namingguidelines.asp
http://en.wikipedia.org/wiki/Serial_ATA#SATA_3.0_Gb.2Fs
not that it makes much diff
coz SATA HDDs still haven't
gotten to the SATA-1 limit

What are you trying to say?
That SATA HDDs cannot reach 150 MB/s while SCSI drives can?

For example, the Barracuda 7200.10 can burst data at 250 MB/s. And if
you were talking about sustained rates, could you point me to a SCSI HDD
that can sustain 150 MB/s?
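
(For reference: SATA's 1.5 Gb/s link rate works out to 150 MB/s of
payload after 8b/10b encoding, and 3.0 Gb/s to 300 MB/s. Single drives
only reach those figures in bursts from their on-board cache, not in
sustained transfers from the platters.)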
you seem to be doing your homework. :)

I definitely don't want to buy a $500 RAID controller and 8 $300 HDDs
only to find out I can't make the damn thing work! :-)
the 15k SCSI drives offer three things
that you likely need/want for your high
end video requirement:
1) faster single drive xfer (15k xfers faster
than 10k or 7200) and
Yes.

2) especially faster read/write seeks (which
are likely to be important to you even
within some type of raid array) and

Is that important for sequential reads and sequential writes?
3) much much better server type performance
(assuming you'd like to do more than one
function at a time)

What do you mean by server type performance?

The RAID array would only be used to
1) capture a video stream (i.e. write e.g. 50-200 GB sequentially)
2) play a video stream (i.e. read e.g. 50-200 GB sequentially)

There will be no other disk activity to/from the array.
have you ever used SCSI?

No :-)
anyhow, disagreed for server type use
partly agreed for single user type use

OK. Depending on how you define server type use, I think I fall more
into the single user type use.
what's the current street price
for one of these?

Approximately $1200 :-)
i hadn't realized that you were
prepared to go as far as a 12
drive raid6 array (with at least
one of those being a hot spare)

I don't plan to use RAID-6, and I don't plan to need 12 drives.
(One can dream.)

Regards.
 
Spoon said:
Could you provide the links to these other benchmarks?

I've come across this disheartening benchmark:
Maximum Performance for Linux Kernel 2.6
http://www.3ware.com/linuxbenchmarks.htm

Goal of tests:
Maximum performance possible with 3ware 9000 Series RAID controller
under Linux 2.6.

System configuration:
Processor: Xeon 2.4 GHz (x2)
12 x 70 GB WDC WD740GD drives in hardware RAID-0
Stripe size: 64k
Kernel: 2.6.5
Controller: 3ware 9500S-12
Driver: 2.26.00.005
Driver cmds_per_lun setting: 254 (default)
Firmware: FE9X 2.02.00.009
OS Runlevel: 3
Bonnie++ version: 1.02c
Iozone version: 3.203
Motherboard: SE7501 CW2
RAM: 512 MB
Amount of I/O performed: 20 GB

(Note: The WD1500 is only 10-20% faster than the WD740GD.)

cf. Test #8: Bonnie++ tuned w/xfs filesystem

They reach 410 MB/s read and (only) 200 MB/s write.

Holy mother of pearls! Only 200 MB/s with 12 drives?

Did I miss something?

I've found the benchmark I was looking for.

http://www.gamepc.com/labs/view_content.asp?id=raptor150raid&page=4

Asus A8N-SLI Premium
Athlon64 X2 4400+
1 GB RAM
Areca ARC-1220 PCI Express x8 RAID controller
4 x WD1500ADFD in RAID-0

http://www.gamepc.com/labs/view_content.asp?id=raptor150raid&page=5

The sustained read throughput increases linearly with the number of HDDs
in the RAID-0 array.

1 HDD => 77.9 MB/s
2 HDD => 156.2 MB/s
3 HDD => 226.7 MB/s
4 HDD => 303.8 MB/s

AFAIU, these results were obtained using raw block devices. I wonder how
much the filesystem (NTFS in Windows) degrades performance. I'll need to
benchmark a few filesystems in Linux (XFS is a likely candidate).

Too bad they don't also provide sustained write throughput.
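
(A rough sketch of the kind of sustained-write test meant here; the
mount point is a placeholder, and the 20 GB figure just mirrors the
amount of I/O used in the 3ware tests:)

    import os, time

    PATH = "/mnt/array/testfile"        # placeholder mount point on the array
    BLOCK = b"\0" * (8 * 1024 * 1024)   # write in 8 MB chunks
    TOTAL = 20 * 1024**3                # 20 GB, well past RAM size

    t0 = time.time()
    with open(PATH, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(BLOCK)
            written += len(BLOCK)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually reached the disks
    elapsed = time.time() - t0

    print("%.0f MB/s sustained write" % (TOTAL / 1e6 / elapsed))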

Regards.
 
Spoon said:
I definitely don't want to buy a $500 RAID controller and 8 $300 HDDs
only to find out I can't make the damn thing work! :-)


Have you contacted the sales people for each of the raid controller
cards we've named? Ask each if they can name a specific combo of
mobo/CPU/controller and disks that meets your requirements. SuperMicro
and HP (once DEC StorageWorks) might be worth a call.

If they all say it's impossible or will cost a million bucks, I think
you have to give your boss the bad news and make other plans.

BTW, there are two bits of information I've not seen you state
that *do* make a difference:

- For how many minutes of data do you need to sustain the write speed?
- Is the data one-time and priceless, or is it something that can
be re-shot if your raid box screws up?
 
Al said:
Have you contacted the sales people for each of the raid controller
cards we've named? Ask each if they can name a specific combo of
mobo/CPU/controller and disks that meets your requirements. SuperMicro
and HP (once DEC StorageWorks) might be worth a call.

I have a few quotes in the 15,000-25,000 USD ball-park.
I was considering building an equivalent system for much less.
If they all say it's impossible or will cost a million bucks, I think
you have to give your boss the bad news and make other plans.

BTW, there are two bits of information I've not seen you state
that *do* make a difference:

- For how many minutes of data do you need to sustain the write speed?

A typical clip will last between 1 and 20 minutes, i.e. 10-200 GB.
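
(At the full 1.485 Gbit/s rate that's roughly 186 MB/s, i.e. about
11 GB of data per minute of clip.)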
- Is the data one-time and priceless, or is it something that can
be re-shot if your raid box screws up?

The clip itself is unimportant. What matters is that I be able to
capture one. Then be able to play it out in a loop. (To test video
encoders, test the signal, etc.)

Regards.
 
Spoon said:
What am I supposed to see? :-)

Even the Cheetah 15K.5 cannot sustain 150 MB/s.

(135 MB/s on outer tracks down to 82 MB/s on inner tracks.)


i figured you'd have the brains to
look around the site

their jan.'06 review of the 150GB Raptor
shows 88.3 MB/s outer, down to 60.2 MB/s inner

see: www.storagereview.com/articles/200601/WD1500ADFD_3.html
What's wrong with RAID-0?


well i was responding to your raid6 comment,
which has the worst write performance

raid0 has no fault tolerance

so the larger the array (i.e. # of disks),
the more often it will fail, and you'll
be down 12+ hours rebuilding it

if that's acceptable, then go with raid0

but both raid0 and raid7 have roughly
the best write performance
(good luck finding a raid7 controller)

at least a raid0 controller that will
give you the write performance that
you need won't cost an arm and a leg

the above ref also suggests keeping an
open eye for raid5 controllers coz they
are next best at write performance

these are just ideas for you to look into;
meaning i wouldn't be making any bets on
a raid5 card to meet your write needs

so kindly be polite with any response

what's clear to me is that you've still got
some serious homework before it's clear
if this can or can't be done

bill
 
Spoon said:
i figured you'd have the brains to
look around the site

their jan.'06 review of the 150GB Raptor
shows 88.3 MB/s outer, down to 60.2 inner

see: www.storagereview.com/articles/200601/WD1500ADFD_3.html

Hmm. Getting nasty, yet you cite a slower drive. And after all this
posting you still haven't provided a *solution* to the sustained 186
MB/s requirement.

Absolute silliness. There's nothing to look at. RAID 7 was a bunch of
unsafe mumbo jumbo *only* available from the now defunct Storage
Computer Corporation.
well i was responding to your raid6 comment,
which has the worst write performance

raid0 has no fault tolerance

Doesn't matter if he doesn't need it.


Spoon. This question is in the wrong group. Talk to *actual* storage
professionals that have *actually* used and built storage systems that
meet or exceed these requirements at comp.arch.storage.
 
teckytim said:
Spoon. This question is in the wrong group. Talk to *actual* storage
professionals that have *actually* used and built storage systems that
meet or exceed these requirements at comp.arch.storage.

Tim,

Thanks for the pointer. I will give it a try.

Regards.
 
On Tue, 12 Dec 2006 15:46:13 +0100, Spoon wrote:

They reach 410 MB/s read and (only) 200 MB/s write.

Holy mother of pearls! Only 200 MB/s with 12 drives?

Here's a small web forum thread with some Linux SW RAID benchmarks:

http://forums.2cpu.com/showthread.php?t=79364

He was hitting 484 MB/s reads & 335 MB/s writes.

--
DOS Air:
All the passengers go out onto the runway, grab hold of the plane, push it
until it gets in the air, hop on, jump off when it hits the ground again.
Then they grab the plane again, push it back into the air, hop on, et
cetera.
 
Spoon said:
I don't understand what point you are trying to make.

Can you elaborate?


you were the person who was attracted
to Raptor/SATA (presumably cost issues),
*not* me

i wasn't and i said so upfront

is there some disconnect here?

SCSI has had the superior performance
(over IDE and SATA) for almost forever
(10+ years)

i've been upfront about my limited
knowledge on the subject of high end raid

imho, the primary use of usenet n/g's
is ideas

and i have done my limited/honest best
to give ideas in this thread

the fact that you responded as you did to
both me and teckytim is in your favor

the one thing that teckytim said that
was worthwhile was doing a post on the
comp.arch.storage n/g

while i don't doubt his input on no raid7
controllers being available, i'd still
google on raid7 raid-7 raid_7 and "raid 7"
and see what turns up, coz it never ceases
to amaze me how wrong "experts" are

i can say that i've never seen either
a raid2 or a raid7 controller, which
is why i made the comment:
"good luck finding a raid7 controller"

so far raid0 may work for you, although
it seems to still be an open question
if its write performance will meet
your needs

my one other thought is that if you do find
a raid0 controller with the write performance
that you need, you might give some serious
thought to laying in a couple of extra 300GB
Seagate Cheetah drives and then only
allocate the 1st 60/70% to the partition
that you are going to use (allocate the rest
to a 2nd partition and test the speed diff)

to my mind, any high end raid controller should take
the outer rims (fastest) for the initial selection,
but i don't know that for sure (another question
to pose on the comp.arch.storage n/g)

bill
 
Spoon said:
Tim,

Thanks for the pointer. I will give it a try.

Regards.

Well...

1. If the software that you're using is competent, an average desktop
system today can stream data onto or off at least a half-dozen disks at
very close to their theoretical potential, even on their outermost
tracks. Five of today's better 7200 rpm desktop drives will handle your
bandwidth requirement even on their innermost tracks; if you restrict
your usage to middle or outer tracks, four or even three could suffice
(unless you need the extra space anyway).
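
(The arithmetic: 186 MB/s split across five drives is about 37 MB/s
each; across four, about 47 MB/s; across three, about 62 MB/s.)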

2. If you're not going to be streaming data for hours on end (i.e., the
disks get to take occasional significant breaks), conventional SATA (or
even plain old PATA) drives will work just fine (though 'near-line
enterprise' versions cost very little more if you'd feel more
comfortable with them). Don't bother with Raptors and don't even think
about high-end FC/SCSI - they're for seek-intensive continuous
operation, and the increase in per-disk streaming bandwidth doesn't
begin to justify the increase in cost.

3. One conceivable gotcha could be recalibration activity: I'm not
sure how completely that's been tamed. It used to be that a disk would
just decide to take a break for a second or more once in a while to
recalibrate (reevaluate its internal sense of where the tracks were),
which tended to disrupt the smoothness of streaming data. Back then
vendors sold special 'audio-visual' disks that purported to avoid this
problem, but I haven't heard anything about them recently. I suspect
that all disks are now more civilized about waiting for an opportune
moment (or that most of the need for recalibration may have disappeared
when the servo information became embedded with the data itself) - but
letting the array's temperature stabilize a bit after start-up before
putting it to use couldn't hurt.

4. If you really can tolerate interruption by the occasional disk
failure, RAID-0 is the way to go. If not, use RAID-10 (which will
maintain your read bandwidth even if you lose a drive, unlike RAID-5).

5. If you use RAID-0, software RAID will work virtually as well as
hardware RAID would (this is almost as true for RAID-10): just make
sure that the disks' write caches are enabled. In the unlikely event
that you wind up using PATA drives, each single cable/controller port may
not have sufficient bandwidth to support more than one - in which case
you'll need more than the typical two PATA motherboard connectors,
either via a MB with an additional on-board RAID controller or by using
an add-on card. Unless you'll be doing significant *other* activity
while streaming data it would probably be safe for one of your streaming
disks to share a cable with your system disk (though if that turned out
to be a problem you could run the system and other software off a CD or
USB drive).
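
To make that concrete, here is a minimal sketch using the standard
Linux tools (mdadm, hdparm, mkfs.xfs); the device names, drive count,
and filesystem choice are placeholders only, not a recommendation:

    # hypothetical example: 4-drive software RAID-0 with per-disk write caching on
    import subprocess

    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]   # placeholder devices

    for d in disks:
        # enable each drive's write cache (see point 5 above)
        subprocess.run(["hdparm", "-W1", d], check=True)

    # assemble the striped array
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=0",
                    "--raid-devices=" + str(len(disks))] + disks, check=True)

    # put a filesystem on it (XFS was mentioned elsewhere in the thread)
    subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)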

6. If you're not actually *processing* the data stream but just writing
it to disk as it comes in and then later reading it back out to
somewhere else, even a modest single-core processor won't break a sweat:
it just acts as a traffic cop, while the motherboard's DMA engines and
the disks do all the real work. Memory bandwidth won't be taxed,
either. (Note that both of these observations might change if you used
software RAID-5.)

7. PCI bandwidth, however, may be a problem. A plain old 32/33 PCI
maxes out at under 132 MB/sec of bandwidth (minus collision overhead) -
so even if the system bridges kept the disk transfers off the PCI (which
would not be the case if you needed to use a PCI card to connect some of
the disks) you couldn't stream the data in, or out, over the PCI (though
with bridges that supported dual Gigabit Ethernet as well as the disk
traffic you could do the job without touching the PCI at all - if
connecting via Ethernet were an option). 64/66 PCI might have enough
headroom to handle the combined interface and disk traffic and PCI-X
certainly should - so you shouldn't need to go to PCI Express unless you
want to.
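
(Theoretical peaks for reference: 32-bit/33 MHz PCI is about 133 MB/s,
64-bit/66 MHz about 533 MB/s, and 64-bit/133 MHz PCI-X about 1067 MB/s -
all shared across the bus and before protocol overhead.)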

8. Desktop disks don't take all that much power to run. A typical
contemporary 350W power supply will spin up 4 of them simultaneously
(which is by far the time of heaviest power draw - that's why spin-up
times are staggered in larger systems), unless it's heavily loaded by
some macho gaming processor and graphics card. Once they're spinning,
they take very little power indeed (especially if they're only streaming
data rather than beating their little hearts out doing constant long
seeks and short transfers): cooling won't be a significant problem
(though you do want to keep them comfortable).

- bill
 
Wow,
well here is something for the processor that may be of value or not:

AMD 580X CrossFire™ Chipset - Specifications

General

* The world's first single chip 2x16 PCI-E chipset
* Enhanced support for over-clocking, and PCI Express performance
* Fastest multi-GPU interconnect
* Coupled with SB600 for performance

CPU Interface

* Support for all AMD CPUs: Athlon™ 64, Athlon™ 64 FX,
Athlon™ 64 X2 Dual-Core, and Sempron™ processors
* Support for 64-bit extended operating systems
* Highly overclockable and robust HyperTransport™ interface

PCI Express Interface

* 2 x16 PCI Express lanes to support simultaneous operation of
graphics cards
* Additional 4 PCI-E General Purpose Lanes for peripheral support
* Compliant with the PCI Express 1.0a Specifications

Power Management Features

* Fully supports ACPI states S1, S3, S4, and S5
* Support for AMD Cool'n'Quiet™ technology for crisp and
quiet operation

Optimized Software Support

* Unified driver support on all ATI Radeon PCI Express discrete
graphics products
* Support for Microsoft® Windows® XP, Windows® 2000, and Linux

Universal Connectivity

* A-Link Xpress II i/f to ATI northbridges; providing high
bandwidth for high speed peripherals
* 10 USB 2.0 ports
* SATA Gen 2 PHY support at 3.0 Gb/s with E-SATA capability
* 4 ports SATA AHCI controller supports NCQ and slumber modes
* ATA 133 controller support up to UDMA mode 6 with 2 drives (disk
or optical)
* TPM 1.1 and 1.2 compliant
* ASF 2.0 support for manageability control
* HPET (high precision event timer), ACPI 3.0, and AHCI support for
Windows Vista
* Power management engine supporting both AMD and Intel platforms
and forward compliant to MS Windows Vista
* UAA (universal audio architecture) support for High-Definition
Audio and MODEM
* PCI v2.3 (up to 6 slots)
* LPC (Low Pin Count), SPI (new flash bus), and SM (System
Management) bus management and arbitrations
* "Legacy" PC compatible functions, RTC (Real Time Clock),
interrupt controller and DMA controllers
 