RAID: identical disks?

  • Thread starter: void
Where does one find an "expected service life" rating?

Manufacturer's website.

WD - search for "component design life" for IDE, "service life" for SCSI.
Maxtor - search for "component design life"

In general, I'm sure that the "average service life" of deployed hard drives
exceeds those numbers, but I'm not aware of anyone publishing those results.
Businesses should not run hard drives longer than the warranty states.
 
Peter said:
Manufacturer's website.

WD - search for "component design life" for IDE,

Which gets one hit for a drive intended for mobile devices.
"service life" for SCSI.

WD hasn't made a SCSI drive in years. That search gets a number of hits on
SATA drives, none of which have a "service life" rating.
Maxtor - search for "component design life"

Which gets two hits both of which are for a "design life (min)".
In general, I'm sure that the "average service life" of deployed hard drives
exceeds those numbers, but I'm not aware of anyone publishing those
results. Businesses should not run hard drives longer than the warranty
states.

Why not? What downside is there that outweighs the cost of the replacement?
A _business_ should keep records of the drive failure rate and start a
phased replacement when the failure rate reaches the level at which
replacing the entire inventory becomes more cost effective than replacing
by attrition. But by that time it's probably time to consolidate the
servers anyway.
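That break-even reasoning can be sketched with a back-of-the-envelope cost model (all figures below are hypothetical illustrations, not vendor data):

```python
# Hypothetical cost model: wholesale replacement vs. replacement by attrition.
# All numbers are made-up illustrations, not vendor or survey data.

def attrition_cost(n_drives, annual_failure_rate, per_incident_cost):
    """Expected yearly cost of replacing drives only as they fail.
    per_incident_cost includes the drive plus labour and downtime."""
    return n_drives * annual_failure_rate * per_incident_cost

def wholesale_cost(n_drives, drive_cost, years_amortised):
    """Yearly cost of proactively replacing the whole inventory,
    amortised over the service life of the new drives."""
    return n_drives * drive_cost / years_amortised

n = 100
drive_cost = 80.0          # planned swap: just the drive
incident_cost = 400.0      # unplanned failure: drive + labour + downtime

# See where attrition starts losing as the failure rate climbs.
for afr in (0.02, 0.05, 0.10, 0.20):
    a = attrition_cost(n, afr, incident_cost)
    w = wholesale_cost(n, drive_cost, years_amortised=5)
    print(f"AFR {afr:.0%}: attrition ${a:.0f}/yr vs wholesale ${w:.0f}/yr")
```

With these made-up numbers, attrition wins at low failure rates and wholesale replacement wins once the failure rate climbs, which is exactly the crossover the records are meant to catch.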
 
Manufacturer's website.
WD - search for "component design life" for IDE, "service life" for SCSI.
Maxtor - search for "component design life"

Generally it is 5 years for electronics. The design lifetime is
the time when the failure rate starts to increase significantly.
Before that it is directly related to the MTBF.
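For the flat part of the curve, before wear-out, that relation to MTBF can be sketched with a constant-failure-rate (exponential) model, which is a simplifying assumption:

```python
import math

def annual_failure_rate(mtbf_hours, hours_per_year=8760):
    """Probability a drive fails within one year, assuming a constant
    failure rate (exponential model). Valid only before wear-out,
    i.e. inside the design lifetime."""
    return 1.0 - math.exp(-hours_per_year / mtbf_hours)

# A drive specced at 1,000,000 hours MTBF:
print(f"{annual_failure_rate(1_000_000):.2%}")  # roughly 0.87% per year
```

Past the design lifetime the constant-rate assumption breaks down and the observed failure rate climbs above this figure.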
In general, I'm sure that the "average service life" of deployed hard drives
exceeds those numbers, but I'm not aware of anyone publishing those results.
Businesses should not run hard drives longer than the warranty states.

I do not agree. You certainly can run any drive up to the
design lifetime, since only afterwards will you get an increased
risk. Businesses should however have a current backup and likely
should use RAID1 or RAID5 on all disks where a disk failure
would cause significant recovery effort.

Arno
 
Yep, but a WD and a Seagate of the same vintage will generally be pretty
closely matched in performance, space, and firmware optimizations,

Nope. Not firmware optimizations. As far as performance, that depends
on what aspect we're talking about, as well as whether the two can play
nice.
and if
there is a difference in capacity between one brand of 250 GB drive and
another, it's not so huge a difference that one would consider it
"limiting" in any but the most pedantic sense.

According to your tolerances.

What's pedantic is to overanalyze a simple statement. I never
implied an array, esp. a software ATA array, would likely be crippled
by mismatched drives, only that the "lowest common denominator" or
"weakest link", if you will, dictates how the array works.
While that is indeed a worst case, have you ever seen it actually happen?

Yes. Ranging from erroneous PFA failures to massive performance
degradation.
Most RAID today is software, not hardware,

on the low end.
and quite honestly Windows and
Linux and Novell don't _care_ whether a RAID is composed of different
brands and models

So? In most cases those OS's don't even know what's going on with the
storage on that kind of low level, as they shouldn't.
as long as the capacity and performance are about the
same.

Nope. They won't care about that either, esp. in the case of firmware
or firmware-assisted software RAID. The end user might care, though.
I believe that I said "exceedingly unlikely" myself. It can happen though.

Nope. You said "I don't consider such failure to be exceedingly
likely", which, since you like being pedantic, you should realize
does not convey exactly the same meaning as "exceedingly unlikely."
Where does one find an "expected service life" rating?

Directly from manufacturers as well as end-user decisions about life
cycle.
 
Curious George said:
When disks are mismatched, the best-case scenario is that performance,
space, and firmware optimizations are limited by the lesser drive. In
the worst-case scenario it causes compatibility problems. Fortunately,
on modern hardware, esp. with software or firmware-assisted software
RAID, this worst-case scenario is virtually a non-issue.


There is most definitely a U-shaped or bathtub curve to hardware
failure over time. However, both drives in a two-drive array dying
natural deaths within hours of one another is quite unlikely.
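Just how unlikely can be sketched under the (optimistic) assumption that failures are independent and exponentially distributed; the numbers below are illustrative, not measurements:

```python
import math

def p_both_fail_within(window_h, mtbf_h, horizon_h):
    """Rough probability that two independent drives both fail within
    a common window of length window_h sometime during horizon_h.
    Constant-rate (exponential) model; it deliberately ignores
    correlated causes such as heat or a bad power supply, which
    dominate in practice."""
    lam = 1.0 / mtbf_h
    # P(first drive fails during the horizon) times P(second drive
    # fails within +/- window_h of that moment) - a crude estimate.
    p_first = 1.0 - math.exp(-lam * horizon_h)
    p_second_near = 1.0 - math.exp(-lam * 2 * window_h)
    return p_first * p_second_near

# 500,000-hour MTBF drives, 24 h window, 5-year horizon:
print(f"{p_both_fail_within(24, 500_000, 5 * 8760):.2e}")
```

Under these assumptions the double-failure-within-hours probability comes out vanishingly small; any realistic risk comes from shared causes, which this independence model excludes by construction.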

Not if the system is out of spec, e.g. running too hot or on a bad power supply.
Rather than mixing models, some people buy parts from different suppliers
to hedge their bets, or simply count on premature failure to introduce
media of different ages, or proactively decommission arrays at the end
of their expected service life rather than wait for the catastrophic event.

Or just use two quite different models that are compatible.
 
Manufacturer's website.

Or the OEM manual in the case of IBM/Hitachi.
WD - search for "component design life" for IDE, "service life"
for SCSI. Maxtor - search for "component design life"

And design life with IBM/Hitachi.
In general, I'm sure that the "average service life" of
deployed hard drives exceeds those numbers, but
I'm not aware of anyone publishing those results.
Businesses should not run hard drives longer than the warranty states.

That is just plain mad, most obviously with 1-year warranties.
 
Curious George said:
Rod Speed (e-mail address removed) wrote
That wouldn't be a "natural death" now, would it?

The original wasn't about "natural death", it was clearly
about what gives the best result, regardless of what
happens. It isn't just about handling hard drive failure
gracefully, it's also about handling other failures
gracefully too, if that can be done essentially for free
by using two different drives instead of two identical ones.
Offering an alternative is not saying "you can't."

Never said you did, I just pointed out the advantage of that approach.
 
Curious said:
Nope. Not firmware optimizations.

Yes firmware optimizations. Tuning may be a little different from one to
the other but not so much so that a RAID is going to die horribly from the
mix.

Of course you may insist on absolute identity in which case you're a pedant
and not a technician.
As far as performance, that depends
on what aspect we're talking about, as well as whether the two can play
nice.

Do you have case histories where they do not?
According to your tolerances.

What's pedantic is to overanalyze a simple statement. I never
implied an array, esp. a software ATA array, would likely be crippled
by mismatched drives, only that the "lowest common denominator" or
"weakest link", if you will, dictates how the array works.

I see.
Yes. Ranging from erroneous PFA failures to massive performance
degradation.

Oh, you're talking hardware SCSI RAIDs. I see.
on the low end.
And?


So? In most cases those OS's don't even know what's going on with the
storage on that kind of low level, as they shouldn't.

Well, actually they do. All three of those have software RAID built in and
in many cases it outperforms hardware RAID.
Nope. They won't care about that either, esp. in the case of firmware
or firmware-assisted software RAID. The end user might care, though.

ROFL. The only difference between "firmware" and "software" is where it is
stored.
Nope. You said "I don't consider such failure to be exceedingly
likely", which, since you like being pedantic, you should realize
does not convey exactly the same meaning as "exceedingly unlikely."
ROFL.


Directly from manufacturers

Provide links please for "expected service life" for a dozen or so current
production models, since you assert that it can be obtained "directly from
manufacturers".
as well as end-user decisions about life
cycle.

Gotcha.
 
The original wasnt about "natural death", it was clearly
about what gives the best result, regardless of what
happens. It aint just about handling hard drive failure
gracefully, its also about handling other failures
gracefully too if that can be done essentially for free
by using two different drives instead of two identicals.

Clueless, nonsensical backpedaling.
Never said you did, I just pointed out the advantage of that approach.

No, you didn't. No advantage has been pointed out. You're simply
insisting on that method.
 
Yes firmware optimizations.

Nope. Not when the lowest common denominator applies.
Tuning may be a little different from one to
the other but not so much so that a RAID is going to die horribly from the
mix.

Still rambling on about imaginary arguments, I see.
Of course you may insist on absolute identity in which case you're a pedant
and not a technician.


Do you have case histories where they do not?
Yes.


I see.


Oh, you're talking hardware SCSI RAIDs. I see.

And ATA RAID, esp. early ATA RAID and specific WDs.

so "most" is meaningless. Most of what exactly? It's an overly
sweeping and ultimately self-defeating generalization.
Well, actually they do. All three of those have software RAID built in and
in many cases it outperforms hardware RAID.

Outperforms? Generally not in many recovery scenarios. Although, yes,
I'll take OS RAID over bottom-of-the-barrel cards. But there are two
problems here:
1. RAID in general (where you started) is not the same as OS software
RAID (where you ended up).
2. Do these kernels really know what's going on with the volume
management? I'm not so sure. I'm not so sure you are, either.
ROF,L. The only difference between "firmware" and "software" is where it is
stored.

ROFLOL. Duh!

Surely you jest if you think they're always functionally the same, esp.
in relation to the host, and that there aren't different classes of
subsystems with different capabilities and thoroughness of design?

Yeah. Ha ha, "exceedingly likely".
Provide links please for "expected service life" for a dozen or so current
production models, since you assert that it can be obtained "directly from
manufacturers".

What? You can't navigate a web site or read a manual? I've got better
things to do than spoon-feed crybabies.

Puhleeza. You got nothing, silly child.
 
Previously J. Clarke said:
Curious George wrote:
Yes firmware optimizations. Tuning may be a little different from one to
the other but not so much so that a RAID is going to die horribly from the
mix.
Of course you may insist on absolute identity in which case you're a pedant
and not a technician.

What good would identity do if a disk dies after some time? Would
you have to replace every disk in the array? Pretty stupid idea, IMO.
And no, RAID software is not tuned for particular disks. It takes
just the best each disk can give it, and if it is smart it issues
parallel requests to a limited degree. There is no disk dependence
in the software optimisations here.
Do you have case histories where they do not?

I have: two disks in the same Linux software RAID 1 or 5 on the
same IDE channel. Terrible performance. The solution, however, is
not to change the disks but to give each one its own IDE channel,
since IDE switchover within one channel is slow.

Arno
 
Couldn't bullshit its way out of a wet paper bag...

You're right Rod - I don't bullshit. That's your specialty.

Rod. I'm concerned about you. I think you should really see someone
about your stuttering problem - unless, of course, you simply think
writing something twice makes you sound like half the troll you are -
in which case, you should really lay off the pipe.
 
All three of those have software RAID built in and in many cases it
outperforms hardware RAID.

Why would software RAID outperform hardware RAID?

Gerhard
 