Striping data across platters in a single hard disk

  • Thread starter: tony
Al Dykes said:
And the MTBF would be higher and the component that has to be replaced
to fix is more expensive.

MTBF lower I believe you meant (more parts = more to potentially fail). I imagine
the actuator is the most unreliable portion too huh. Still though, it's twice as fast!
(I'd buy one! OK, not for twice the price... that, as said, could buy a second
drive).

Tony
 
I'm not convinced at all that a drive "twice as fast but costs more" would be
"low volume".

Jeez, you are dense! A drive that costs ten times as much and delivers no
more performance than two drives isn't going to sell very well.
 
And we're talking about a large potential performance gain with the
dual (or 3! ;) ) head/actuator setup too.


No, you're not. Stripe two cheap drives and you get all the gain for only
2x the price. You'll never get your contraption made at a lower cost.
 
No, you're not. Stripe two cheap drives and you get all the gain for only
2x the price. You'll never get your contraption made at a lower cost.

For the business/corporate market I'll agree that the contraption will
never be viable. But considering the niche market like that of the
quoted Raptor, I think we might not want to underestimate the amount
of money some groups of people are willing to spend for the
mine's-bigger-faster-l33t factor :P

After all, there are folks who will pay a couple of hundreds for some
fangled cooling kit just so that they can reduce the whirr of a fan to
the hmm of a pump, and folks who pay almost a thousand for the latest
graphic card just to be able to boast about another 1000pts in some
benchmark...
 
I was thinking the exact same thing today. The question becomes: "does
flash need a different interface than drives/peripherals (it's memory after all)?".

Intel has a project called Robson to do just that - put it on the motherboard, if
I'm not mistaken. I'd think yes, flash does need a different interface but
that work is mostly done AFAIK.
They are though because they indicate a separation layer between 2 magnetic
layers.

Yes they talk about "layers" but the one where they have a "separation
layer" with "gauzy isolation" is for the heads - there's a hint for the
media also but to me, that's where it got muddled.
 
For the business/corporate market I'll agree that the contraption will
never be viable. But considering the niche market like that of the
quoted Raptor, I think we might not want to underestimate the amount
of money some groups of people are willing to spend for the
mine's-bigger-faster-l33t factor :P

After all, there are folks who will pay a couple of hundreds for some
fangled cooling kit just so that they can reduce the whirr of a fan to
the hmm of a pump, and folks who pay almost a thousand for the latest
graphic card just to be able to boast about another 1000pts in some
benchmark...

There's no accounting for stupidity, but how many desktop users run
applications that are ever bottlenecked on the disk for any perceptible
percentage of the time?

If the IO Q depth rarely gets above 1, a dual head disk won't get you
squat; perfmon.exe will graph your bottlenecks. I *think* that new
SATA disks will do seek sorting for IO optimization. That's a decent
win that's done in controller logic and doesn't add any cost. (Sorry,
I don't know what the newfangled name is. My computers were doing
this in the 70's).
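The seek sorting described above is essentially the classic elevator (SCAN) algorithm: sweep the head in one direction, serving requests in block order, then reverse. A minimal sketch in Python, with made-up block numbers:

```python
def elevator_order(pending, head):
    """Order pending block addresses elevator-style (SCAN):
    finish the sweep upward past the head, then come back down,
    instead of serving requests first-come-first-served."""
    ahead = sorted(b for b in pending if b >= head)                   # upward sweep
    behind = sorted((b for b in pending if b < head), reverse=True)   # return sweep
    return ahead + behind

# FIFO would bounce the head back and forth; SCAN crosses each track once.
print(elevator_order([90, 10, 70, 20, 80], head=50))  # [70, 80, 90, 20, 10]
```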

Maybe you're going to be able to get two 2.5 inch disks in one 3.5
inch form factor. Add electronics that let the user choose RAID0
(performance) or RAID1 (mirroring) and you've got an interesting
product. Maybe. I don't know any system builder that would buy it if
it cost a nickel more than two separate disks.
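The RAID0/RAID1 option described above comes down to two different logical-to-physical block mappings. A minimal sketch, assuming two disks and a 64 KiB stripe size (both figures are illustrative, not from the post):

```python
STRIPE = 64 * 1024  # assumed 64 KiB stripe size

def raid0_locate(byte_addr, ndisks=2):
    """RAID0 (striping): alternate stripes across disks, so capacity
    and streaming bandwidth scale with ndisks, with no redundancy."""
    stripe = byte_addr // STRIPE
    disk = stripe % ndisks
    offset = (stripe // ndisks) * STRIPE + byte_addr % STRIPE
    return disk, offset

def raid1_locate(byte_addr, ndisks=2):
    """RAID1 (mirroring): every disk holds a full copy, so writes go
    to all disks and reads can be served by whichever disk is idle."""
    return [(d, byte_addr) for d in range(ndisks)]

print(raid0_locate(3 * STRIPE))  # stripe 3 lands on disk 1, offset 64 KiB
```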

Photoshop is the only common desktop application I know of that can
really use multiple disks. I've got three good disks in my system and
I know it makes a difference.
 
Al Dykes said:
If the IO Q depth rarely gets above 1, a dual head disk won't
get you squat;

Sure it will -- you potentially can get double the bandwidth
from the drive. I don't know what difference it will make
if you're spending most time seeking.
perfmon.exe will graph your bottlenecks. I *think* that
new SATA disks will do seek sorting for IO optimization.
That's a decent win that's done in controller logic and
doesn't add any cost. (Sorry, I don't know what the
newfangled name is. My computers were doing this in the 70's).

Decent computers have been running elevator algorithms for
disks for decades. I don't know when Linux _didn't_.
Photoshop is the only common desktop application I know of
that can really use multiple disks. I've got three good
disks in my system and I know it makes a difference.

Actually, any quarter-decent OS should know how if the user
has laid out their data correctly. Photoshop may just have
less stupid defaults. The general rule for expected heavy
disk traffic [streaming] is: swap/tmp/app/[OS] on one disk,
input on another and output on a third. Preferably on separate
controllers if EIDE (no disconnect).

-- Robert
 
Rob said:
And look at all the idiots who spend 6 times as much, per GB, for
a 74 GB Raptor compared to a more typical 300 GB 7200 rpm SATA
drive.

Having 300 GB doesn't do you much good if you only have 10 GB of data,
does it? I have the 37 GB Raptor on my work PC. It's still got 30GB
free after more than a year of use. I don't really care that the
GB/dollar ratio was low - the drive cost less than $100, which is not
much money for a very fast drive.

It's a much better investment than a high-end CPU, IMO.
 
The little lost angel said:
For the business/corporate market I'll agree that the contraption will
never be viable. But considering the niche market like that of the
quoted Raptor, I think we might not want to underestimate the amount
of money some groups of people are willing to spend for the
mine's-bigger-faster-l33t factor :P

After all, there are folks who will pay a couple of hundreds for some
fangled cooling kit just so that they can reduce the whirr of a fan to
the hmm of a pump, and folks who pay almost a thousand for the latest
graphic card just to be able to boast about another 1000pts in some
benchmark...

How about backing up across the LAN 3 times faster? Or installing a new
image across the LAN? Or multimedia streaming to multiple places....?
With Gb ethernet becoming ubiquitous, the hard drive is a bottleneck.

Tony
 
tony said:
With Gb ethernet becoming ubiquitous, the hard drive is a bottleneck.

I don't see GbE that ubiquitous now, or for the midterm future.
The wiring is there, but the machines at both ends often
have trouble saturating 100baseTX. In particular, servers
are often busy with many threads all seeking at once.

-- Robert
 
Robert Redelmeier said:
I don't see GbE that ubiquitous now, or for the midterm future.
The wiring is there, but the machines at both ends often
have trouble saturating 100baseTX. In particular, servers
are often busy with many threads all seeking at once.

Any new network installation or PC without Gb LAN is shortsighted (if not obsolete).

Fast hard drive, say ~30 MB/s = 240 Mb/s. Saturate 100BaseTX? One PC with
today's hard drive technology can saturate it (2 times over too) during backup
or other large file transfer scenarios (the transfer rate of 100Mb/s is a theoretical
maximum for the LAN also, so it's even worse). Go to GbE and the bottleneck is then
the hard drive.
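The arithmetic above checks out (a quick sanity check using the thread's own round figures):

```python
drive_MBps = 30              # sustained transfer of a fast drive, per the post
drive_Mbps = drive_MBps * 8  # = 240 Mb/s

fast_ethernet_Mbps = 100     # theoretical 100BaseTX maximum
gigabit_Mbps = 1000

# One drive oversubscribes Fast Ethernet ~2.4x...
print(drive_Mbps / fast_ethernet_Mbps)  # 2.4
# ...but fills less than a quarter of GbE, so the drive becomes the bottleneck.
print(drive_Mbps / gigabit_Mbps)        # 0.24
```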

Tony
 
tony said:
Fast hard drive, say ~30 MB/s = 240 Mb/s. Saturate 100BaseTX? One
PC with today's hard drive technology can saturate it (2 times
over too) during backup or other large file transfer scenarios
(the transfer rate of 100Mb/s is a theoretical maximum for the
LAN also, so it's even worse). Go to GbE and the bottleneck is
then the hard drive.

Nope. Because the server is still (as usual) the bottleneck.
You never get it to yourself. That's worse than it sounds,
because the disk pack will spend a lot of time seeking around for
all its users. It can't just divvy up 150 MByte/s into 5 * 30.
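The effect described above can be put in rough numbers. The seek time and chunk size below are assumptions for illustration, not figures from the thread:

```python
def per_client_MBps(total_MBps=150, clients=5, seek_ms=8.0, chunk_KB=256):
    """Per-client throughput when the disk round-robins between
    clients' streams and pays a full seek on every switch."""
    chunk_MB = chunk_KB / 1024
    transfer_s = chunk_MB / total_MBps     # time to stream one chunk
    cycle_s = transfer_s + seek_ms / 1000  # plus the seek to the next stream
    return chunk_MB / cycle_s / clients    # MB/s each client actually sees

# Far below the naive 150 / 5 = 30 MB/s split:
print(round(per_client_MBps(), 2))  # 5.17
```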

-- Robert
 
How about backing up across the LAN 3 times faster? Or installing a new
image across the LAN? Or multimedia streaming to multiple places....?
With Gb ethernet becoming ubiquitous, the hard drive is a bottleneck.


Not on a cheap machine, right now. Two disks will still accomplish
the same thing, cheaper.
 
Al said:
Not on a cheap machine, right now. Two disks will still accomplish
the same thing, cheaper.

Whoopee. I have demonstrated to a few people that a pair of
ordinary SATA or PATA drives will beat an exorbitantly priced
Raptor in almost everything they use their computers for - and
that only serves to make them even more excited about striping a
pair of Raptors.

The kind of people that stripe a pair of Raptors will still go
out and buy the hypothetical striped-platters drives - and stripe
a couple of /those/ together too.

You might as well tell a Ferrari owner that he could have bought
a more practical car for a fraction of the price - he'll still
want his damned sports car.
 
Robert Redelmeier said:
Nope. Because the server is still (as usual) the bottleneck.
You never get it to yourself. That's worse than it sounds,
because the disk pack will spend a lot of time seeking around for
all its users. It can't just divvy up 150 MByte/s into 5 * 30.

Who said anything about a server?

Maybe it's a P2P LAN and users wanna transfer huge files back
and forth. Or maybe there is a dedicated node for streaming
backups to. How many clients could you back up in a given amount
of time with 1000 vs. 100TX? If there was a server though, the
connection to it is probably going to be the next level up on the
food chain (10 GbE over fiber maybe if required and RAID).

You'd have to be really trying to not see the value of GbE and faster
drives, especially if the cost is comparable as is the case for GbE.

Tony
 
Al Dykes said:
Not on a cheap machine, right now. Two disks will still accomplish
the same thing, cheaper.

Well at this time we don't have the alternative, so who is to say? It's up
to engineering to come up with a cost effective solution (and conceptually
at least, it seems "easy"). 2 disks will get you to ~half the bandwidth of GbE,
BTW, so you'd need both the new technology drive and two of them striped to
approach saturating GbE. The original post was not a call/charter to produce
an exotic drive, but rather to engineer something practical perhaps to become
mainstream technology.

Tony
 
tony said:
Who said anything about a server?

Maybe it's a P2P LAN and users wanna transfer huge files
back and forth.

MS-Windows is far too insecure to contemplate opening machines up like that.
Or maybe there is a dedicated node for streaming backups to.

aka -- server!
You'd have to be really trying to not see the value of GbE
and faster drives, especially if the cost is comparable as
is the case for GbE.

Harddrive bandwidth is only part of the performance equation.
I suspect a diminishing part as seek times are slow to improve.

-- Robert
 
There's no accounting for stupidity, but how many desktop users run
applications that are ever bottlenecked on the disk for any perceptible
percentage of the time?

If you look around, you will find people who exclaim ecstatically that
their new drive/whatever now boots Windows in 20 seconds instead of
22, and applications load 1 second faster etc...

Most people don't need a 3 GHz PC for most of the things they do. But
the same people will refuse to buy anything less than the fastest.

We have a friend who bought a dual core system with a pair of Raptors
recently simply because it's "supposed to be the newest and
fastest"... As far as we know, he doesn't play games and the most
demanding task his system ever does is probably watching DVDs.

It's just like the folks in my country who illegally retrofit their
cars with turbos, superchargers, whatever performance enhancing
stuff... when my country's so small (26 miles across), they will never
get a road long, straight and empty enough to hit peak speed.

So it doesn't really matter what percentage of the time people are
ever bottlenecked. Guys just like to do incomprehensible things. :P
 
Robert Redelmeier said:
MS-Windows is far too insecure to contemplate opening machines up like that.

Nah. There's a lot of those deployments in both SOHO and residential
arenas. Sometimes a "server peer" (not a "real" server running a
server OS) acts as a "central peer" (poor man's server) where
server-like things happen (file serving, backup of clients).
aka -- server!

Not really. Just another node on the LAN accepting huge streams
of data from other nodes.
Harddrive bandwidth is only part of the performance equation.
I suspect a diminishing part as seek times are slow to improve.

Seek times on a desktop aren't really significant. That has more
effect on transactional servers that are dishing out records from
databases. Throughput is what counts on desktops (loading
huge graphic files, backing up, booting in 10 seconds ...).

Tony
 