Gareth Church said:
I'm really unsure how you came to this conclusion.
Yes, I don't know that either, with my table conveniently snipped.
You just have no shame, do you, Church, playing dumb.
My memory isn't good enough to remember STRs for a number of drives
from yesteryear. It is good enough to remember that when I compared
top-end SCSI and top-end IDE drives
Typical how you again fail to name them, so that we can't check up on
you on that.
in the past the SCSI drives were almost always faster.
"Almost always". Or in other words 'slightly faster'.
Traditionally, new
features and improvements were made to SCSI first (because that was where
the money was to be made). If they worked, they were later added to IDE.
We are not talking about features, we are talking about that
stupid claim of yours: "SCSI has traditionally been 'a lot' faster".
And I forgot: recalibrates not interrupting transfers.
We're not talking about data reliability here, we are talking about performance.
Whether or not getting that performance affects reliability is not at issue.
So what? They were not any faster, period.
Do you have a point or are you arguing for argument's sake here, Church?
Since you don't come up with evidence to the contrary, I again have to
conclude that you once more came up with a false argument:
"SCSI has traditionally been 'a lot' faster".
By taking comments out of their context you are giving an unclear picture.
SCSI has been traditionally faster than IDE.
Nope, you obviously just changed your position, again.
We have now gone from 'a lot faster' to just 'faster'.
Which is in keeping with my findings from the 1997 results, where SCSI
has the same or very slightly better STRs than IDE; nothing to brag about.
If anything, SCSI is slightly faster currently in the latest 15k rpm drives.
The Maxtor Plus 8 comes very close and that is already an older IDE drive.
When I said that, we were talking about single
drives of the past (hence the word 'traditionally').
Wordgames. Correction: stupid wordgames.
That statement still stands true and I stick by it.
Fool you.
The snippet above follows on from a discussion about RAID.
In that I said that now IDE RAID is available, you can get a better
sustained transfer rate using IDE drives (by adding more drives),
and still save money over SCSI. That statement, too, still stands true
and I stick by that.
Irrelevant to your claim that "SCSI has traditionally been 'a lot' faster".
You've lost me I'm afraid.
Of course you do. And don't we all know why.
What exactly did you say about SCSI, and how did I imply you were 'cheating'?
You can take quotes out of context all you like and pretend I was referring
to something else, but it doesn't change reality. I said "why not get a bit
more performance" in regards to sustained transfer rate, after talking about
RAID. Never did I suggest IDE had better access times.
Exactly. So you used the word performance totally out of context.
STR is worthless with small IO. RAID is worthless with small IO.
To talk of performance in that context falls just short of criminality.
I wasn't talking about access times.
Bingo. You should have.
Bingo again. The whole 'missing' point.
I said you can get better performance from IDE RAID (over SCSI RAID,
by using more IDE drives). I then very clearly qualified my statements by
saying "that's for throughput".
Suggesting something was to come next, which never came.
Performance is not STR, STR is only a small part of performance.
I never mentioned access time in that passage.
Shame on you.
You were the one that mentioned that.
Sorry, but reducing latency still makes sweet FA difference to overall
system speed. A system is made up of many components. The hard drive
subsystem is only one.
And it holds up almost everything.
It isn't used all the time - the CPU and main memory are much more important.
Which is useless when it's idling.
Improving one aspect (latency) of one subsystem (the hard drive subsystem)
just won't make any difference to overall system performance.
I just said it did.
I see a page listing results for a number of drives using a general-purpose
hard drive benchmarking tool.
Yes, that is so so bad, isn't it. It verges on the criminal, don't you think?
Stop that posturing, Church!
I must have missed the bit where they took two
drives (one IDE, one SCSI) that were as similar as possible (same
manufacturer, size, number of platters etc) and attempted to take as
many other factors out of the equation so they could test how much of a
difference improved access time makes to the hard drive sub-system.
Uhuh. And where can I order those 2 drives?
That is plain stupid, Church! The fact that they are NOT the same size
and number of platters is exactly why SCSI has the better access times in
the first place. Smaller but faster-rotating platters, with lower track and
areal density, are what make SCSI equally fast to IDE but with vastly
superior access times.
Of course. But a 10% improvement in the hard drive subsystem does not equate
to a 10% overall system improvement.
Bingo. Indeed, it does not. And no Church, I don't do back-flips.
Of course. And if the hard drive was being accessed 100% of the time
(and if you ignore drive buffers), this would have the direct overall system
performance you wish existed. But drives aren't used 100% of the time. They
are used fairly rarely really, when compared to the CPU and main memory.
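The point being argued here, that speeding up one subsystem by some percentage does not speed up the whole system by that percentage, is just Amdahl's law. A minimal sketch (the 10% disk-busy figure below is a hypothetical number for illustration, not from either poster):

```python
# Amdahl's law: the overall speedup from improving one subsystem is
# limited by the fraction of total time spent in that subsystem.
def overall_speedup(fraction, subsystem_speedup):
    """fraction: share of total run time spent in the improved subsystem."""
    return 1 / ((1 - fraction) + fraction / subsystem_speedup)

# Hypothetical: the disk is busy 10% of the time and gets 2x faster.
# The system as a whole only gets ~5% faster.
print(overall_speedup(0.10, 2.0))  # ~1.053
```

The same formula also supports the counter-argument: if the workload is disk-bound (fraction near 1), the drive improvement shows through almost fully.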
The main time you hit the hard drive is when launching an application.
Sure, improved access time can help a little here, maybe even quite a bit if the
application is small.
Huh? You won't notice anything when something executes in less than 1 or 2 seconds.
It's on the longer loads that you are going to notice a 1 or 2 second difference.
But if you are loading up a decent sized app (Word, IE, Outlook, Photoshop,
Dreamweaver etc) the difference access time makes quickly approaches zero.
Large apps use lots of libraries, resulting in many, many IOs that are executed
in parallel. It's obviously here that access time will make a difference.
Pointing out errors in grammar, spelling etc. in others' posts is a sure sign
that you have an otherwise weak argument.
Hey, if that makes you happy after all your non-arguments and hearsay,
go for it.
You can add the word 'secondly' to the start of the quoted
sentence if it makes you happy. So firstly it makes no noticeable
difference, and secondly it is a lot more expensive.
Nope, you did nothing of the kind.
Of course I did.
You have just spent your last post arguing the benefits of access time.
Let's look at one of the components that influences access time - rotational
latency. Do you really think rotational latency has an effect only on access
time, and no effect whatsoever on sustained transfer rate? Of course not.
Somewhere in that gibber will be a point that you are making.
Only God knows what it is.
Since you are being so pedantic: Church is my last name, hence that would be
Mr Church.
I never call a troll 'mister', sorry.
If you were to use my first name, simply saying Gareth would be fine.
And by the way, having an opinion that differs to yours doesn't make
me a troll.
It does when you turn that into a negative comment such as
"You just like to argue, don't you?"
Or was that a compliment, Church? No? Didn't think so.
In fact, the majority of the world differs from your opinion
Ah, and therefor I must be wrong and you must be right.
What was it about that VHS vs Beta or MAC vs PC debate again, Church?
"It's the sort of thing people say when they have a strong opinion, but
can't come up with any actual reason why they feel the way they do"?
(that SCSI should be used in the desktop environment if there will be a lot
of multitasking, because the vast majority of the world in that situation
use IDE).
It takes two to argue.
I certainly am arguing, because I disagree with the
advice you are giving the OP.
I never give advice, Church. I present arguments on which to make decisions.
I think it would be a waste of money for him to go with SCSI.
I don't think he would notice any difference in the
performance of his system at all, and I think it is also a bad idea from the
point of view of future-proofing. As SCSI becomes less and less common, the
hardware will become more expensive. If he already has SCSI in place in his
system, each time he upgrades he will need to decide whether to pay the
extra to get another SCSI drive, or if it is worth switching over to the far
more ubiquitous and cheaper IDE.
Also, it may be convenient for you to suggest all the arguing is from me,
I didn't, Church.
and that you are completely innocent, but I'm afraid it's also hypocritical.
Suggesting that I did, yes, that certainly is, Church.
I do not understand why you are advocating SCSI drives for use in a
desktop machine. I am asking if it is only for the better access time,
or if you actually have a good reason to hold your point of view.
Is the 'No?' a question, or are you answering mine
Oh, was yours a question then?
(just with an erroneous question mark)?
Which one, yours or mine?
Sorry, but that is no misconception.
Yes it is, as I have proven with several drives from 1997.
I don't have data on old drives, so will use modern ones:
That proves 'FA' about "But SCSI has traditionally been 'a lot' faster".
Seagate Cheetah 15K.3 - STR: 49-75 megs/sec
Seagate Barracuda 7200.7 - STR: 32-58 megs/sec
And your point is? Except debunking your own statement
about IDE having caught-up so much, compared to SCSI?
Seagate Cheetah 15K.3 - access time: 4.7ms
Seagate Barracuda 7200.7 - access time: ~12 ms
A ~150% difference.
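The percentages being thrown around work out like this (drive figures as quoted above; the STR comparison uses the midpoints of the quoted ranges, which is my simplification):

```python
def pct_difference(base, other):
    """How much larger `other` is than `base`, as a percentage."""
    return (other - base) / base * 100

# Access times quoted above: Cheetah 15K.3 at 4.7 ms, Barracuda 7200.7 at ~12 ms.
print(round(pct_difference(4.7, 12.0)))   # ~155, i.e. the "~150%" figure

# STR range midpoints: Cheetah (49+75)/2 = 62, Barracuda (32+58)/2 = 45 MB/s.
print(round(pct_difference(45.0, 62.0)))  # ~38
```

So the access-time gap is several times larger, in relative terms, than the STR gap between the same two drives.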
What was that about that "slightly better access time" again, Church?
In your opinion 25% is "a lot" but 150% is slightly better.
You are making a lot of sense, Church, as usual.
Nope, problem is you have no point, and you have not been able to disprove
anything I've said. You've got your opinion, and you're going to stick to
it - no matter what the reality is.
Ah, and you're not. Right.
Like it or not this thread is about SCSI vs IDE, and which one is better in
the OPs situation. Talking about how CPUs etc have improved over time is not
relevant.
Even talking about how much drives have changed over time is irrelevant.
I'm glad you agree, ignoring your constant posturing to the contrary, that that
certainly was a stupid statement of yours.
What matters is which is best for this person in this situation.
If you can't see that is a consistent point of view, then that is your problem.
It's not my problem, Church.
Your blindness to reality is staggering. What exactly are your reasons for
thinking that SCSI is better for the OP?
What, you suddenly can't find my quotes anymore? Talking about blindness.
It has been my position all along that SCSI is not appropriate for the OP.
That was not the question. The question was if there formerly -'were'- reasons
to use SCSI in a desktop machine.
You have been the one advocating SCSI. Does this mean that you have now done
a back-flip and realised that SCSI isn't appropriate?
Is that a question, Church? In that case : No, I don't do back-flips.
BTW, the line "there never were" was yours, not mine. So you are actually
asking "so you agree?" to yourself.
No Church, it doesn't work that way. You didn't 'argue' with what I said there
so I asked whether you agreed (or not). It was you who 'suggested' that SCSI
might have been appropriate for a desktop machine, once, but not anymore.
What two? CAD software and hardware? Yes, professional CAD software is
usually run on high-end workstations.
Thank you. You were just arguing for argument's sake then.
And I said that there has been a shift from SCSI to IDE in these workstations.
No Church, you didn't.
Is that a question, Church?
Well, that wasn't obvious.
Yes, it *was* obvious since I mentioned it in the next comment for explanation.
It's also obvious, as that has always been IDE's best known bottleneck.
The interface has had many improvements. The connectors are different
(much smaller), the power connector has changed (and now includes 3.3v),
it has a faster theoretical limit, the limit to cable length has been increased,
and yes SATA uses just one channel per device.
Gee, didn't I mention that? You must have missed it then, eh?
If you think it is obvious what you were referring to, then clearly
you have a complete inability to see things from other people's point of
view, because it was anything but obvious.
Blablabla, whine, whine, whine, whatever. No wonder this post has grown so big.
Stop the posturing, Church.
If you have a point, no I don't.
Yeah, that's what I thought.
Indeed, not that it's a problem.
Yes Church, that is a problem with any shared bus: You have to
share it. Like when you share a cookie you each get half a cookie?
It's actually a very simple concept.
Like I said, you obviously don't have a clue.
Clueless, indeed.
When you copy from a 60MB/s drive to another one on the same channel
you need (2*60) + 10% overhead = 132 MB/s of bandwidth to copy at full speed.
An ATA66 channel will have them transfer at only half their speed.
An ATA100 channel will have them transfer at approx. 45MB/s each.
In the real world you rarely are trying to read/write to multiple drives at
the same time.
Of course you do. And you certainly do on RAID.
Oh, what a coincidence, didn't you advise RAID?
Having multiple applications open at once does not equal
having multiple applications accessing drives at once. About the only time
when you will access two devices at once is when you are doing something
like copying a CD to your hard drive. This operation is obviously limited
in speed to the CD, so it doesn't matter that the hard drive is pulled back
to the CD drive's speed.
Another example may be if you are encoding video in the background while
doing something else in the foreground. But video encoding takes a lot of
CPU, so you are much more likely to notice the speed dip because of that
than the hard drive speed (if you even happen to use the hard drive in the
foreground app, which you probably aren't).
Your ability to reduce problems to mere nothingness is truly jaw-dropping,
Church.
Is there an imposed limit on SATA?
Yes.
I would imagine it would be how many controllers you could
fit on a PCI card, and how many spare PCI slots you have.
It's not as bad as that.
More like how many drives can be attached to a port multiplier,
but then it stops. You can't 'multiply' a port multiplier.
That is going to allow enough drives for all but the absolutely most
essential server, which wouldn't have considered IDE anyway.
A hosting server sure would consider IDE for storage and need lots
of them.
You just said they could be the same drive.
Sigh! Yes Church. Do I have to spell it out for you, word for word, letter
for letter or will you be able to grasp it on your own, if you try harder?