The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0.
Come on.
So you agree with a 15k scsi drive recommendation but disagree with a
15k scsi drive recommendation?
Quite amusing really
OK then, but there was no mention of RAID0. Why would we
bother to contrast anything with RAID0?
Come on.
Ronny Mandal said
"So you are saying that two IDE in e.g. RAID 0 will outperform
the SCSI disk in speed, besides storage etc?"
and you mentioned RAID0 (see above) in your answer to him. It's a
valid and real part of the previous discussion thread (from 2 weeks ago).
Certainly RAID0 is part of the category of inexpensive raid brought
up in the initial post by Joris Dobbelsteen, so it SHOULD be
discussed in ALL sub-branches.
But that means very little without insider info about the
cause... it could simply be that the SCSI line is producing
a lot of defective drives.
Quality control is usually relaxed because of the tradeoff between
profitability and defect rates.
It's good to know you have been reading most of my postings. It makes
me feel good to see use of phrases like "insider info" with regard to
this subject. Only that doesn't really make it YOUR argument or mean
a manipulation of words is an argument.
... and a huge difference in total good units you may have
the chance to buy, too.
yes, and that is offset by the huge customer base and unit population
spread across many more resellers...
Most ppl tend to buy large lots of SCSI drives?
scsi is very often bought for multi-drive servers- the average
according to Adaptec is usually around 4 per channel. scsi has also
been used in disk arrays for some time. Many companies/enterprises
buy many servers with multiple arrays, and often many workstations with
scsi drives as well. It's usually uncommon for consumers or small
businesses (who tend to buy small amounts of storage) to even consider
scsi.
That's not the whole poop though. Even when buying a single disk your
statistical relationship to the entire population of either is
different.
I admit the complexity of this comparison makes it somewhat fuzzy.
Even if you reject this and say scsi drives are of identical build
quality or you have equal chances of getting a good scsi or ATA drive-
it doesn't alter OUR suggestion which endorses the scsi drive. It
also doesn't successfully indict my reliability point as it has
already been satisfied with relative MTBF.
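The relative-MTBF point above can be illustrated with some back-of-the-envelope arithmetic. This is a hypothetical sketch: the MTBF figures below are assumed, typical-of-the-era values for illustration, not quotes from any datasheet, and the function name is my own.

```python
# Hedged illustration: converting an MTBF rating into a rough
# annualized failure rate (AFR). MTBF figures here are ASSUMED
# class-typical values, not from any specific drive's datasheet.

HOURS_PER_YEAR = 24 * 365  # 8760, assuming the drive runs continuously

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Approximate AFR as hours-per-year / MTBF (valid while AFR is small)."""
    return HOURS_PER_YEAR / mtbf_hours

scsi_mtbf = 1_200_000   # assumed enterprise-SCSI-class rating, hours
ata_mtbf = 500_000      # assumed desktop-ATA-class rating, hours

print(f"SCSI AFR: {annualized_failure_rate(scsi_mtbf):.2%}")  # ~0.73%
print(f"ATA  AFR: {annualized_failure_rate(ata_mtbf):.2%}")   # ~1.75%
```

The point stands on the ratio, not the absolute numbers: whatever the true ratings, a higher MTBF translates directly into a lower expected annual failure rate.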
I suggest that any significant data store is tested before
being deployed, with the actual parts to be used. Further
that NO data store on a RAID controller be kept without an
alternate backup method.
Come on.
Of course. That has never been in contention. It's also not exactly
news.
But taking this comment, pulled out of thin air, further - backup
applies to multiple data categories on EVERY kind of storage volume.
It's not much of a raid suggestion anyhow.
so you're going to make a point of telling someone to back up his
raid0 volume if it only holds /tmp/ or paging data?
You backup data not storage or "data store".
I disagree that they "tend to implement new features more
conservatively", a couple days ago you listed many features
added less conservatively.
Come on.
I didn't say that. Implementing more advanced features (which is what
I assume you are referring to) is different from implementing features
more or less conservatively; they are implementing advanced features
in a more conservative fashion.
There is no logical conflict because certain advanced features aren't
put in ata drives ONLY because they want to differentiate the product
lines/product classes.
I disagree with that assessment. In one sentence you write
"more parts and tools to learn and use" but then come back
with "never have to think about DMA mode". You can't have
it both ways, it most certainly is more to think about.
Come on.
Read it again. Remember it is a 1 or 2 scsi drive vs ata raid
comparison. (as it always has been)
I suggest that anyone who can't understand DMA mode on ATA
should not be making any kind of data storage decisions,
instead buying a pre-configured system and not touching
whichever storage solution it might contain.
There isn't very much to understand about DMA (for the end-user) it's
a matter of familiarity/learning. If they never touch it then how are
they supposed to learn? How are they supposed to get problems fixed
when/if they arise and they have only phone support? Is this all some
kind of secret club?
Come on.
That has nothing to do with it. The point is there are more things to
look at & think of with ATA raid over a single scsi drive and that
makes it less simple. I'm not claiming any of these by themselves are
overwhelming. Put them together, though, and there is a _difference_
in overall simplicity of the different systems. Furthermore this
simplicity point is one of many items used to substantiate and
elaborate on a recommendation you agree with. It's unreasonable to
now claim one aspect of one of the many points makes or breaks the
overall recommendation & argument.
So you're trying to compare a single non-RAID drive to a
RAIDed config now?
Come on.
That always was the case. We both recommended the same thing and BOTH
compared it to ATA RAID earlier.
This smear attempt of yours is becoming very transparent. If the
thread confuses you so, why bother posting?
SCSI, including the Cheetah, does not
eliminate management software updates or config.
Come on.
What "management software" does a single Cheetah use on a vanilla hba?
What backup of the controller config is needed on ATA beyond
SCSI?
Come on.
It's smart to backup a raid controller's config (if you can - or
perhaps even if you have to take it off the drives). There's no
reason or ability to do that with a vanilla scsi hba.
yes but again, this is not an argument FOR SCSI Cheetahs,
simply to avoid RAID0. Granted that was part of the context
of the reply, but it didnt end there, you tried to extend
the argument further.
Come on.
All this is in response to Joris' post:
"Besides this these disks are way to expensive and you get much
better performance and several times the storage space by
spending that money on a RAID array.
Why you need a Cheetah 15k disk?"
So we both later made an identical recommendation (the 15k cheetah) in
comparison to ATA raid. In fact YOU recommended a single cheetah
when compared to ATA RAID0!
Did I really _extend_ the argument _further_, or simply elaborate
/provide an explanation/details on the benefits which affect not only
performance but also user/operator productivity (which is WHY ppl are
concerned with performance in the first place).
So when you said:
"The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0."
That was more worthwhile because you made no attempt to elaborate on
the attributes that would be helpful to the OP and WHY it is a better
fit for him?
Come on.
Except that you're ignoring a large issue... the drive IS
storage.
Come on.
That doesn't even make any sense.
You can avoid RAID0, which I agree with, but can't
just claim the Cheetah uses less power without considering
that it a) has lower capacity
Come on.
The OP was considering a single 15K cheetah, NOT, say, 250 gigs of
storage, for example.
b) costs a lot more per GB.
Come on.
Has nothing to do with electrical power.
Also $/GB is overly simplistic - it is not the only variable in TCO or
ROI for storage.
c) it's performance advantage drops the further it's filled
relative to one much larger ATA drive at same or lower
price-point, perhaps even at less than 50% of the cost.
Come on.
Has nothing to do with electrical power.
Also not true. These drops are case by case, not by interface.
Look at the Seagate Cheetah 36ES for example, which drops extremely
little across the entire disk.
To compare raw thruput at a similar price point you are talking about
antiquated scsi with less dense platters vs modern ata with very dense
platters. That's too unfair to even be serious. It also isn't very
serious because the comparison is based on an overly simplistic view
of both performance and valuation.
Thank you, laughing is good for us.
Yeah. I'm still laughing.
<sigh>
OK, getting less funny...
Do you not even understand the aforementioned PCI
bottleneck? Southbridge integral (or dedicated bus) is
essential for utmost performance on the now-aged PC 33/32
bus. Do you assume people won't even use the PCI bus for
anything but their SCSI array? Seems unlikely, the array
can't even begin to be competitive unless it's consuming
most of the bus throughput, making anything from sound to nic
to modem malfunction in use, or else performance drops.
If you look at REAL STR numbers, REAL bus numbers, REAL overhead
numbers, and REAL usage patterns you will understand my point.
Remember the comparison is for 1 or 2 plain scsi 15k on a vanilla hba
vs. some kind of ATA RAID. Stop creating your own comparisons NOW
which are different from what the thread has been about ALL ALONG -
including the time you also recommended the Cheetah over ata RAID0 and
everyone put this to bed 2 weeks ago.
If the comparison you are making NOW is germane, or there is such a
HUGE difference, you should have put the SATA as YOUR primary
recommendation instead of the SCSI and challenged my recommendation
honestly.
How transparent your "argument" is...
This I agree with, latency reduction is a very desirable
thing for many uses... but not very useful for others.
For general purpose "workstation" performance from reduced latency
(15K) and load balancing (2x 15K) it is _extremely_ important.
The bandwidth associated with RAID0 on a dedicated bus is only
necessary for a handful of special tasks. That's not what the OP is
looking for/needs.
The OP primarily wants "Fast access to files, short response times,
fast copying - just some luxury issues."
That's why you endorsed the 15K scsi like I did as the primary/best
recommendation. Furthermore you called 2x SATA Raptors a
"cost-effective compromise" not best. not necessary.
Never wrote "always been", we're talking about choices
today.
That's why I said "I'd also be careful if"
We're not really talking about chipset choices today - at least that's
only something you pulled out of thin air and threw into the thread 2
weeks after the fact to attempt to confuse the discussion.
I'm clarifying how your assumptions are wrong or exaggerated and how
you are interjecting irrelevant comparisons.
What modern chipset puts integrated ATA on PCI bus?
What 2 year old chipset does?
No you don't have to look far back.
That's not the point though; since you were overstating the advantage
of "southbridge-integral SATA" I warned you against other
similar/related false notions.
No, if you could compare them you'd see that a SCSI PCI card
will never exceed around 128MB/s, while southbridge ATA
RAIDs may easily exceed that... throw a couple WD Raptors
in a box and presto, it's faster and cheaper.
Come on.
If you could compare them you'd see there isn't much of a difference
when you look at overhead.
So you are _always_ moving _files_ at max _raw_ thruput through _all_
parts of the disk with RAID0 type usage even with basic disks? What
about REAL usage patterns?
And what about the greater overhead of SATA? And the inefficiencies of
some controllers (even though they are point to point), esp relative to
scsi, which still has real potential with complex multi-disk access
(the 2 plain scsi drive scenario). Do you really think that some
marginal theoretical maximal bandwidth issue is going to be a huge
drawback against the multitasking responsiveness of reduced latency,
esp with a 2-drive 15K load-balancing storage approach? Do you
really think that a single 15K scsi, or one 15k scsi used at a time, is
going to saturate the bus? I already specified "if the pci bus isn't
doing much else", which is likely if there is no pci video or other pci
storage or 'exotic' pci devices. You act like it would be crippled if
the full theoretical maximal potential isn't reached - and it just
doesn't work that way.
The OP wanted "Fast access to files, short response times, fast
copying - just some luxury issues."
The OP claims to be using a "workstation" which might very well imply
having a faster or multiple PCI busses anyway. You're only guessing a
single 32/33 pci is relevant.
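The bus-saturation argument above comes down to simple arithmetic. The sketch below is a hedged illustration: the overhead fraction and the sustained-transfer-rate (STR) figures are assumptions for the sake of the example, not measurements of any particular drive.

```python
# Hedged sketch of the 32-bit/33MHz PCI bandwidth arithmetic.
# The ~20% overhead figure and the STR numbers are ASSUMED for
# illustration, not taken from benchmarks of any specific hardware.

def pci_theoretical_mb_s(bus_width_bits: int = 32, clock_mhz: float = 33.0) -> float:
    """Theoretical peak of a PCI bus in MB/s: (width/8) bytes per clock."""
    return (bus_width_bits / 8) * clock_mhz

peak = pci_theoretical_mb_s()   # 132 MB/s theoretical for 32-bit/33MHz
usable = peak * 0.80            # assume roughly 20% lost to protocol overhead

# Assumed outer-track sustained transfer rates (MB/s), for the sketch:
str_15k_scsi = 60.0             # one 15K SCSI drive
str_two_ata_raid0 = 2 * 55.0    # two ATA drives striped (RAID0)

print(f"peak {peak:.0f} MB/s, usable ~{usable:.0f} MB/s")
print("single 15K saturates the bus:", str_15k_scsi > usable)       # False
print("2x ATA RAID0 can approach it:", str_two_ata_raid0 > usable)  # True
```

Under these assumptions, a single 15K drive sits well below the usable bus bandwidth, while only a striped pair running flat-out gets near the ceiling - which is the scenario the thread argues rarely matters for real workstation usage patterns.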
Keep in mind
that either way I would only recommend a further backup
strategy, data should not be only on (any) drives used
regularly in the system.
Of course. Everybody does. So?
I don't believe there is enough evidence to conclude
anything near this, seems more like an urban legend.
I suggest not powering down the drives at all, until their
scheduled replacement.
Well that's not _necessarily_ a bad or wrong suggestion- but it
usually isn't practical, esp on a "workstation", to never spin down
for the entire disk service life (typically 3-5 years) or system life
(typically 3 years). Given the total number of times modern drives
are safe to spin up it makes no sense to be _totally_ afraid of it.
If you ARE totally afraid there may be something to be said for
bringing a latent problem to a head when it is convenient and handling
a warranty replacement when you can afford to as opposed to allowing a
random occurrence - which always follows Murphy's Law. You should
investigate this more if you don't believe this (admittedly ancient)
"best practice."
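The "safe number of spin-ups" point above can be checked with quick arithmetic. The cycle rating and usage pattern below are assumptions for illustration, not figures from any particular drive's specification.

```python
# Hedged sketch: lifetime start/stop cycles vs. a drive's cycle rating.
# Both the rating and the usage pattern are ASSUMED values.

rated_start_stop_cycles = 50_000   # assumed desktop-class rating
service_life_years = 5             # typical disk service life per the thread
spin_downs_per_day = 4             # assumed fairly aggressive power management

lifetime_cycles = spin_downs_per_day * 365 * service_life_years
print(lifetime_cycles)                                              # 7300
print("within rating:", lifetime_cycles < rated_start_stop_cycles)  # True
```

Even with several spin-downs a day over a full service life, the cycle count stays an order of magnitude under a typical rating - which is why being _totally_ afraid of spin-downs makes little sense.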
You're getting desperate. These objections are your ego talking and
not your head.
You didn't see any problem with my post for 2 weeks until you started
getting snotty in another thread. This thread has been dead for so
long it took me a while to even notice your "objections." I thought
we already settled all this silliness?
I thought your objections & snottiness were only due to my alleged
"lack of specificity" or "details" (like the other thread where we
locked horns). I first elaborated on my recommendation here (which you
agreed with) and now I have twice supported my elaborating details
with "specifics". If you had disagreed with my recommendation or
reasoning it would have appeared more genuine to raise such issues
then.
Your "criticism" is just silly, arbitrary, & confused.