Enterprise versus "consumer" grade drives

miso said:
[...]
So is there any advantage to buying a Seagate Constellation versus a
Barracuda? Or Ultrastar versus Deskstar?
No. Unless you have a hard limit to only use one drive. "Enterprise"
drives are a bit better in reliability, but RAID1 is so massively
better that enterprise drives are laughable. The only place they
pay off (somewhat) is if drive replacement is expensive, e.g. because
somebody has to drive to the datacenter. There, even a small
improvement in reliability can justify a larger price increase.

[...]
Given that the OS will be on SSD and the magnetic media is on RAID 0,
would it still make sense to go with enterprise grade drives, presuming
they are more reliable than the consumer grade?

If you use RAID0, your data is basically doomed anyways, unless
you have good backup. If you have good backup, no need to go for
enterprise drives. RAID0 is basically always a bad choice except
for cache and buffer applications that need high throughput, such
as video-capture and editing. But RAID0 should never be used as
actual longer-term storage.
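The arithmetic behind that is simple enough to sketch. A back-of-envelope comparison, assuming independent drive failures and a made-up 5% annual failure rate per drive (real rates vary by model and year):

```python
# Back-of-envelope comparison, assuming independent drive failures
# and an illustrative 5% annual failure rate per drive (assumed).
afr = 0.05

# RAID0 (striping): the array is lost if ANY drive fails.
p_raid0 = 1 - (1 - afr) ** 2

# RAID1 (mirroring): the array is lost only if BOTH drives fail.
# This ignores the rebuild window, so it understates the real risk.
p_raid1 = afr ** 2

print(f"RAID0 annual loss risk: {p_raid0:.2%}")  # 9.75%
print(f"RAID1 annual loss risk: {p_raid1:.2%}")  # 0.25%
```

Striping multiplies your exposure; mirroring squares it down.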

Arno
I meant mirror, i.e. raid 1, for the hard drives.

Ah, that is something different.
I'll consider RAID5. I've done that before. Not with the greatest
results though. When the mobo died, I was able to put the RAID 10 array
in another PC and all the data was still there. The RAID5 array couldn't
be recovered. I had a pretty good backup.

You should test RAID recovery when you design that array,
i.e. before you put data on it. Software RAID is better than
hardware RAID is better than FAKE RAID.
Raid 5 at least buys you something in the way of more storage with just
as good security, well provided the mobo doesn't get Chinese cap disease.

That should be long over now. This was an isolated incident of
industrial espionage and basically all caps from that time
should now be dead.

Arno

Tell the Samsung TV owners that Chinese bad caps are a thing of the past.

I rebuilt a raid array once. Very nerve-racking. I don't know if things
have changed, but back then you "delete" the bad drive and then the
controller on the mobo somehow rebuilds the data on that drive. Delete
is a nasty word, so I got a usb drive and backed up the unhealthy RAID
array just in case.

I think ultimately if I am going to be a cyber pack rat, I need to go
NAS. Maybe a Drobo.
 
Arno said:
You should test RAID recovery when you design that array,
i.e. before you put data on it. Software RAID is better than
hardware RAID is better than FAKE RAID.

What's fake RAID?

That should be long over now. This was an isolated incident of
industrial espionage and basically all caps from that time
should now be dead.

I had a mobo get cap disease a couple of months ago, and it was used
constantly. Another was late last year.
 

I assume the "reality" of RAID goes like this:
1) Real raid uses a RAID controller that plugs into a card slot (the
faster the better). The controller does all the hard work, removing the
processing burden from the host PC.
2) Fake raid uses some chip on the mobo, in addition to software that
the CPU needs to run all the time.
3) Software raid is self-explanatory.

I've only done fake raid. The raid cards cost more than the mobo, and it
is tough to justify the expense with the fake raid on the mobo already.
However, in theory, when you get into these situations where the mobo
fails and you have a real raid controller card, you can plug that card
into another PC and it will be able to read all your drives. [As
interface slots have migrated over the years, this might not be possible
in all cases. That is, the old controller needs to work in the new PC.]

When you installed a fake raid, you might have had to take a step where
you inserted 3rd party drivers during the installation phase. I know I
did this in win2kpro, but not in win7 pro. Opensuse does a good job of
being equipped with the fake raid drivers.
 
miso said:
I haven't built a PC with hard drives in a few years and I have to say
I am amazed at the hard drive market. [Note I have bought a few USB
drives, so I am referring to internal drives here.] First of all, it
seems everyone bought everyone else. Samsung went to Seagate. Hitachi
went to WD. Fujitsu went to Toshiba, which I presume is waiting to go
elsewhere. Well now I can see why the hard drive market never fell
back to the pre-Thailand-flood prices. There are three, no make that
2.5, suppliers.

I always had the best luck with Seagate. So I do the usual market
survey (translation: read Newegg reviews) and it seems Seagate now
sucks. Also the 5 year warranty is 3 years, and that is on a good day.
Seagate has some drive with 1 year warranties. OK, so check out WD.
Hmmh, they seem to suck now too.

So is there any advantage to buying a Seagate Constellation versus a
Barracuda? Or Ultrastar versus Deskstar?

Have you noticed some vendors selling new drives without warranties?
When did that start happening?

FWIW, the system I plan on building will use intel SSD for the OS.
I've done two systems with intel SSD and no headaches, well other than
having to pay top dollar for the SSD. [I had a Corsair SSD arrive DOA.
That is my only non-intel experience.] I plan on getting two large
hard drives (normal, not SSD) and running RAID0. [Raid can be a pain
if the controller dies. Raid 0 may be inefficient, but at least the
drives are readable without RAID. I had a mobo fail that had a RAID 10
and a Raid 5 array on it. I got the RAID 10 going on another PC, but
the RAID 5 just refused to load. I had to go to the backup.]

Given that the OS will be on SSD and the magnetic media is on RAID 0,
would it still make sense to go with enterprise grade drives,
presuming they are more reliable than the consumer grade?

Incidentally, I noticed WD now has a 4Tbyte drive whose description is
similar to the Hitachi 4Tbyte. I'm leaning towards using 3Tbyte since
they are substantially cheaper, though Fry's occasionally discounts the
Hitachi 4Tbyte drives.


This article in Tom's Hardware argues that Serial-Attached SCSI (SAS)
HDDs are better than SATA, and that Enterprise-class
quality is worthwhile in Enterprise environments for several reasons:

http://www.tomshardware.com/us/sponsored/Seagate-Enterprise-Class-Hard-Drive-158


*TimDaniels*

That was good reading, though it says it is a sponsored paper, hence the
heavy Seagate emphasis.

A couple of sections bothered me.

Under Reliability:
"These added safeguards are sufficient to give enterprise drives an
order of magnitude greater data protection. Whereas a Barracuda desktop
drive will experience an unrecoverable read error once in every 10E14
bits read, a Constellation ES nearline drive will experience one such
error in every 10E15 bits. In a three-drive RAID, this improvement would
drop the chance of an unrecoverable read error from 12% down to under
2%. More volumes in the RAID and/or the use of a more fault-tolerant
RAID type, minimize the risk further."

I don't follow how they get 12% and 2%, and over what time period?
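For what it's worth, numbers in that ballpark fall out if you assume the 12% is the chance of hitting at least one unrecoverable read error (URE) while reading roughly 1.6 TB, e.g. the two surviving 800 GB drives during a rebuild. The article doesn't state its assumptions, so this is only a guess:

```python
def p_ure(bits_read, bits_per_error):
    """Chance of at least one unrecoverable read error,
    treating each bit as an independent trial."""
    return 1.0 - (1.0 - 1.0 / bits_per_error) ** bits_read

# Assumed workload: reading two 800 GB drives during a rebuild.
bits = 2 * 800e9 * 8  # ~1.28e13 bits

print(f"desktop  (1 in 10^14): {p_ure(bits, 1e14):.1%}")  # 12.0%
print(f"nearline (1 in 10^15): {p_ure(bits, 1e15):.1%}")  # 1.3%
```

Under those assumptions you land on about 12% and a bit over 1%, which matches the article's "12% down to under 2%".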

This section notes that the MTBF for consumer drives is specified at
40 deg C, while enterprise drives use 60 deg C.

I'm not so sure Seagate should hint that enterprise firmware is less
buggy than consumer firmware. It might be true, but that sends a bad
message. In any event, all software has bugs.
 
Beats me! Not being an engineering paper, the math logic
may have been edited out for brevity.




I think that the only safe conclusion may be that Enterprise HDDs
are designed *and priced* for environments that are different from
Consumer environments, and that the manufacturers want to
maintain that differentiation in the minds of buyers. If you don't
run your desktop HDD in an Enterprise environment (i.e. up 24/7
with lots of random non-serial reads/writes alongside many other
similarly-loaded HDDs), why pay for it - especially when you can
protect yourself from data loss via redundancy (RAID or frequent
backups)?

*TimDaniels*

Seagate does claim the enterprise grade drive will last longer, which is
something to consider. However, mirror raid is really simple. Not
efficient, but simple. You still need backups in the event the computer
pees all over your data.

Keeping the OSs on a SSD and data on the hard drive does make it easier
to migrate to new drives. When the ghosting programs couldn't handle
mirror raid, I just synced the mirrors then copied one drive to an
external drive. That seemed to work without loss of data, so I don't
think a RAID mirror drive is too different from a non-RAID drive. Now
that the ghosting programs understand fake RAID, this is less critical.
 
miso said:
On 10/14/2012 8:34 AM, Arno wrote:
Tell the Samsung TV owners that Chinese bad caps are a thing of the past.

That sounds very much like a problem of using the wrong caps, rather
than caps that did not perform to spec. That is a different problem.
I rebuilt a raid array once. Very nerve-racking. I don't know if things
have changed, but back then you "delete" the bad drive and then the
controller on the mobo somehow rebuilds the data on that drive.

FAKE RAID. Do not use it. One problem is that everything is
so opaque. Many people have made a slight mistake
and killed all their data with it. A good controller
will tell you before killing anything.
Delete is a nasty word, so I got a usb drive and backed up the
unhealthy RAID array just in case.

Very sensible.

Arno
 
What's fake RAID?

BIOS RAID pretending to be hardware RAID. Google will tell
you about it.
I had a mobo get cap disease a couple of months ago, and it was used
constantly. Another was late last year.

How old was it? Also note that mainboards only have a design
lifetime of something like 5 years. If the caps pop after that,
it's not the mainboard's fault. If you spring for industrial
quality, things are different, but a lot more expensive.
Even "polymer" caps may only give you 5 years if the board
is run hot.

Arno
 
Timothy Daniels said:
This article in Tom's Hardware argues that Serial-Attached SCSI (SAS)
HDDs are better than SATA, and that Enterprise-class
quality is worthwhile in Enterprise environments for several reasons:

*TimDaniels*

In Enterprise environments, yes. You have to take into account
that every replacement and every downtime is pretty expensive
in an Enterprise environment. Hot disk replacement without any
other intervention can already be more expensive just for the
time of the people doing it than the drive being replaced.

The home-user situation is a lot different.

Arno
 
Here's the link at the bottom of that article to the same one but with a
better picture:

http://arstechnica.com/gadgets/2012...fixing-my-out-of-warranty-tvs-click-of-death/

The pic shows bulging capacitors. The botched espionage job to steal
capacitor formula was back around 1999 and the bad caps show up in a few
years if not sooner. Of course, if the electronics are never powered up
or are turned on very little then the problem takes longer to manifest.
The articles above are dated in 2012. There are other reasons for
pregnant and leaking caps, like (as you said) using the wrong capacity,
voltage rating (using an underrated cap), poor design (not related to
the espionage incident but, for example, lack of proper outgassing
relief), and age that causes dry out (the TVs might be just "a few years
old" but the components in them could be a lot older).

http://en.wikipedia.org/wiki/Capacitor_plague

The article is way too short and undetailed to know what Samsung claims
is the cause of the bulging caps. I doubt the 1999-2000 supply of bad caps
from the botched espionage attempt are still around. That episode of
bad caps was also limited to certain suppliers which resulted in
specific brands getting affected. If Samsung had any of those, I doubt
it took them more than a couple months if not a lot quicker to flush out
their component bins of those bad caps. The article is talking about an
LCD TV that was manufactured a decade later. It is more likely now the
bulging caps are due to bad component design (of the cap itself) or
improper cap used in the circuit, like the cap's breakdown voltage is
too close to the working voltage range of the circuit or something else
in the circuit is causing too large a voltage swing.

Bad designs do happen, or the wrong components get used, or purchase
orders get screwed up and what gets delivered isn't what was ordered.

http://en.wikipedia.org/wiki/Capacitor_plague#Electrolytic_capacitor_failures_after_2007

Another cause mentioned in that article is high ripple current. If a
regulator blows then the voltage swing could be outside the breakdown
voltage rating of the capacitor but then that might be considered poor
design, too, with manufacturers trying to save tenths of pennies over
volume production to remain competitive.

http://www.tpub.com/neets/book7/27j.htm

Bad design or wrong components are part of the history and future of
electronics. The cited article does not provide proof that some
2000-era caps due to botched espionage stored somewhere unused for over
a decade managed to get into those TVs.
 
Timothy Daniels said:
"


If an electrolytic capacitor on a motherboard or power supply fails,
what is the easiest way to find out its specifications so as to buy a
suitable replacement?

Read that off the one that died.
Do you need to have access to the circuit diagram?

Nope.
 
Timothy said:
"Rod Speed" replied:


I'm glad to hear that it's still that easy. I seem to recall
seeing a circuit board a few years ago that had components with only
part nos. printed on them - no real specs.

Capacitors always have capacitance and voltage printed on them, but for this
application you also need low Equivalent Series Resistance which is not
printed on them. High ESR is the reason they failed in the first place.
You won't find suitable caps at Radio Shack, but low-ESR types are easy to
find from the big parts retailers.

The hardest part is desoldering the lead-free solder.
 
Some of my fake raid boards had chips from Silicon Image, so there can
be hardware associated with fake raid.

I think if you are going to dual boot, fake raid is better than software
raid. At least it is less work since both windows and in my case
opensuse see the raid set up in the bios.
 
Capacitors always have capacitance and voltage printed on them, but for this
application you also need low Equivalent Series Resistance which is not
printed on them. High ESR is the reason they failed in the first place.
You won't find suitable caps at Radio Shack, but low-ESR types are easy to
find from the big parts retailers.

The hardest part is desoldering the lead-free solder.
Actually low ESR is what causes the high current in the cap, so you
could say low ESR causes the cap to fail. When they fail, the ESR will
be high. The problem is switching power supplies, which can really whack
a cap. Going to more phases in the switching supply helps, as well as
soft start.
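The heating side of that argument is easy to put in numbers. A sketch with made-up component values: the power dissipated inside the cap is roughly I_ripple² × ESR, so once ESR drifts up, the cap cooks itself:

```python
def cap_heating_watts(ripple_amps, esr_ohms):
    """Approximate self-heating from ripple current: P = I^2 * R."""
    return ripple_amps ** 2 * esr_ohms

ripple = 2.0  # amps of ripple current; an assumed, illustrative value

# A healthy low-ESR cap versus one whose ESR has degraded.
print(f"healthy (0.02 ohm): {cap_heating_watts(ripple, 0.02):.2f} W")  # 0.08 W
print(f"degraded (0.30 ohm): {cap_heating_watts(ripple, 0.30):.2f} W")  # 1.20 W
```

A 15x jump in dissipation inside a small sealed can is why failure tends to run away once it starts.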

Oscon or Nichicon caps are fine for replacement, but at some point, it
makes sense to just scrap the system. I put the components in other
systems, upped the ram in some older mobos, etc. Now if a TV or stereo
has cap failure, then it probably makes sense to replace the caps.
 
That should be long over now. This was an isolated incident of
industrial espionage and basically all caps from that time
should now be dead.

Unfortunately that isolated incident hasn't been the main cause of
bad capacitors, which continued to be manufactured by most Chinese
and Taiwanese companies and Japan's Toshin Kogyo (TK). Even Nippon
Chemicon's/United Chemicon's KZG and KZJ models are considered bad.

I have problems believing the industrial espionage story because it
says China's Luminous Town company hired a chemist from Japan's
Rubycon capacitor co. to reproduce Rubycon's electrolytes and that
he did it correctly, then his assistants jumped ship and made faulty
copies of his good copies, and that's what caused the capacitor plague
of the early 2000s. So why did Chinese and Taiwanese companies
continue to produce bad caps even long after the supply of faulty
electrolyte was used up? Even Luminous Town made bad caps, despite
them supposedly using only the good Rubycon knockoff electrolyte, and
I've replaced many of their LTec brand caps in motherboards, some as
new as 2007 (back in 2010).
 
If an electrolytic capacitor on a motherboard or power supply
fails, what is the easiest way to find out its specifications so
as to buy a suitable replacement? Do you need to have access to
the circuit diagram?

The diagram is unlikely to be available or indicate anything about
the capacitors except their capacitance and voltage ratings; for
extra details you'll need the parts list, which is also unlikely to
be available. You probably have to read the markings off the original
caps and check the spec sheets at their manufacturers, or check the
FAQs and forums at BadCaps.net. And while lower ESR is generally
better, don't go too low when the cap is in a circuit with a coil in
it because that can cause really bad oscillations. In addition to
capacitance and voltage, you need to check the ripple current rating,
ESR, and diameter -- some power supplies can be so crowded inside that
if you use a cap just 2mm fatter, it may not fit and may have to be
mounted up to 1" above its original position, and then you have to
worry about fastening it securely and about proper electrical
insulation to prevent shorts and contact with hot parts or the high
voltage side of the circuit (some heatsinks are connected to high
voltage, but you don't want them touching the plastic insulation
even if they're at zero volts).
 
I assume the "reality" of RAID goes like this:
1) Real raid uses a RAID controller that plugs into a card slot (the
faster the better). The controller does all the hard work, removing the
processing burden from the host PC.

It's not "real" raid, it's "hardware raid". A hardware raid card
handles the raid on the controller card. This has some advantages, such
as more efficient use of the system's IO bandwidth and the possibility
of using a battery backup.

It also has some disadvantages, such as the lack of flexibility (limited
raid modes, no mixing of modes with the same disks, no usb disks for
extra safety during replacements, etc.), the disks are tied to the
particular hardware (if the board dies, you need an identical
replacement to be sure of recovery of your array), and management is
often poor (such as having to reboot to the raid bios, or using hideous
management software).

Typically, an OS can see the hardware raid as a single big drive without
extra software or drivers, which is nice. But you normally need extra
software for management of the raid.

And the hardware raid may or may not be faster than software, depending
on the type of system and the type of usage. The main cpu in a modern
system is far faster at calculating parities than the cpu in a raid
controller card, for example, and software solutions can often give
faster raid layouts (Linux raid10,f2 on two harddisks will outperform
anything a hardware raid can do with two disks in either raid0 or raid1).

And of course, hardware raid cards cost a lot of money - especially with
a battery backup (and they are pretty pointless without a battery). You
also often have to pay extra for "advanced" features such as raid6.
2) Fake raid uses some chip on the mobo, in addition to software that
the CPU needs to run all the time.

Fake raid doesn't actually use any motherboard chips - it is basically
a very limited software raid implemented in the bios so that
you can configure it or do recovery from the bios setup screens, and the
OS can boot from it. Beyond that, it requires drivers in the OS to
support it (as the OS does not use the bios), and it's all run in software.

Fake raid has all the disadvantages of software raid, and all the
disadvantages of a really cheapo hardware raid (inflexible, tied to the
one system, etc.).

But if you are using an OS that doesn't have proper support for software
raid (i.e., Windows, whose software raid is very limited), then it can
be a convenient and easy-to-use system.
3) Software raid is self-explanatory.

Software raid means the OS handles it. This means that the raid is only
as safe as the rest of the system - you can't get the advantage of
battery backed caches. And unclean shutdowns (crashes, power cuts,
etc.) can lead to time-consuming checks and re-syncs, depending on your
setup.

You also use more IO bandwidth - if you are writing two copies of
everything to raid1, your cpu has to write everything twice, rather than
letting a hardware raid card do the duplication. And of course the main
cpu has to do all the calculations, but that is usually a small burden
on modern cpus.

In return, you get an array that will work on any copy of the same OS on
any hardware, and that can be hugely more flexible than hardware
solutions (assuming you are using an OS with good software raid
support). You can mix and match raid types to suit particular
requirements, you can change things easily from within the system. With
Linux (and probably other *nixs, but I haven't tried with them) you can
freely mix different disks of different types and sizes within arrays,
and you can re-shape and re-arrange your arrays while running.

As an example, this means you can temporarily add an external USB disk
to a raid5 array and re-sync it to a lopsided raid6 with all parity on
the USB disk. Then you can re-arrange the raid5 disks (perhaps swapping
them out for bigger disks or replacing old ones before they fail) step
by step, without ever losing your redundancy. Once everything is
finished, you can remove the USB disk and go back to raid5.

And of course all your tools are integrated into the OS.
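The USB-disk trick maps onto a handful of mdadm commands. A hedged sketch only: the device names, partition, and backup-file path are made-up examples, and --grow behavior varies by mdadm version, so rehearse on scratch disks before trusting it with data:

```shell
# Add the USB disk as a fourth member, then reshape raid5 -> raid6,
# so all new parity lands on the USB disk.
mdadm /dev/md0 --add /dev/sdx1
mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
      --backup-file=/root/md0-reshape.bak

# The three original raid5 members can now be replaced one at a time
# (fail, remove, add the new disk, wait for re-sync) without ever
# dropping below single-disk redundancy.

# When the swaps are done, reshape back to raid5 and retire the USB disk.
mdadm --grow /dev/md0 --level=5 --raid-devices=3 \
      --backup-file=/root/md0-reshape.bak
mdadm /dev/md0 --remove /dev/sdx1
```

The --backup-file keeps a critical section of the reshape recoverable if the machine dies mid-operation.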
I've only done fake raid. The raid cards cost more than the mobo, and it
is tough to justify the expense with the fake raid on the mobo already.
However, in theory, when you get into these situations where the mobo
fails and you have a real raid controller card, you can plug that card
into another PC and it will be able to read all your drives. [As
interface slots have migrated over the years, this might not be possible
in all cases. That is, the old controller needs to work in the new PC.]

When you installed a fake raid, you might have had to take a step where
you inserted 3rd party drivers during the installation phase. I know I
did this in win2kpro, but not in win7 pro. Opensuse does a good job of
being equipped with the fake raid drivers.

Don't use fakeraid with Linux - mdadm software raid is better in every
way (except perhaps ease of setup if the distro you are using does not
support it in its installer).
Maybe I should rethink what I am trying to achieve here. It might make
sense for me to set up a NAS for pack rat purposes, then use less
storage in the desktop.

I looked up software raid and it doesn't look particularly painful if
you set up freenas.

I assume should the mobo fail, you can set up another freenas system and
read the old drives. That gets around the fake raid issue.

I gather freenas sets up an apache server for the web interface. I
noticed supermicro makes some atom mobos that are tweaked for server use.

I've built a D525 system before. Very easy. They run cool and the lack
of all the power saving horseplay (moving clock frequencies and core
voltages) makes the D525 a very reliable system.

I never used BSD, but presumably freenas is turnkey. This gets the
storage drives away from the heat sources (high speed CPU and graphic
card).
 
Timothy Daniels said:
"

If an electrolytic capacitor on a motherboard or power supply fails,
what is the easiest way to find out its specifications so as to buy a
suitable replacement? Do you need to have access to the circuit
diagram?

Not at all. The capacitor will usually carry a manufacturer name,
a "type" and a voltage, capacitance and temperature rating. With the
type and manufacturer, you look into the datasheet. Typically
these will be standard, low-ESR or very-low-ESR capacitors.
The ESR class must match. Temperature must be same or better. Capacitance
must match. Voltage must be same or higher. You may also want
to look at diameter and distance between the connecting wires.

See, easy ;-)===)

Arno
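That checklist is mechanical enough to restate as code. A sketch only; the field names are made up for illustration:

```python
def suitable_replacement(old, new):
    """Arno's rules: capacitance and ESR class must match exactly;
    voltage and temperature rating must be the same or higher."""
    return (new["uF"] == old["uF"]
            and new["esr_class"] == old["esr_class"]
            and new["volts"] >= old["volts"]
            and new["temp_c"] >= old["temp_c"])

old = {"uF": 1000, "volts": 6.3, "temp_c": 105, "esr_class": "low"}

# Higher voltage rating is fine; lower capacitance is not.
ok = suitable_replacement(
    old, {"uF": 1000, "volts": 16, "temp_c": 105, "esr_class": "low"})
bad = suitable_replacement(
    old, {"uF": 470, "volts": 16, "temp_c": 105, "esr_class": "low"})
print(ok, bad)  # True False
```

Diameter and lead spacing still have to be eyeballed against the board, as Arno notes.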
 
The diagram is unlikely to be available or indicate anything about
the capacitors except their capacitance and voltage ratings; for
extra details you'll need the parts list, which is also unlikely to
be available. You probably have to read the markings off the original
caps and check the spec sheets at their manufacturers, or check the
FAQs and forums at BadCaps.net. And while lower ESR is generally
better, don't go too low when the cap is in a circuit with a coil in
it because that can cause really bad oscillations. In addition to
capacitance and voltage, you need to check the ripple current rating,
ESR, and diameter -- some power supplies can be so crowded inside that
if you use a cap just 2mm fatter, it may not fit and may have to be
mounted up to 1" above its original position, and then you have to
worry about fastening it securely and about proper electrical
insulation to prevent shorts and contact with hot parts or the high
voltage side of the circuit (some heatsinks are connected to high
voltage, but you don't want them touching the plastic insulation
even if they're at zero volts).

If you connect the new capacitors with 1 inch leads, you might as well just
leave them out, regardless of securing it and insulating the leads.
 
Timothy said:
Ummm, yeah. Time is Money, ka-ching! Do people offer "cap kits"
for popular boards or types of circuit that frequently have cap
failure?

Indeed they do have kits for various models.
 