BTX Technology/Native Command Queuing

  • Thread starter: AWriteny
Repeat after me: "the interface doesn't matter". Interfaces are
cheap, defying physics isn't.

For the last time: that's not the point, so stop trying to make it seem
like it. Admit you had a misunderstanding and move on (step into
manhood already!)
You're talking garbage. If you got your 3Gb/s *interface*, like you asked,
you'd still have your hand, umm... Let's say you were asking the wrong
question and were too pig-headed to take some rather friendly advice.
IT'S NOT THE INTERFACE, DUMMY.

No I'm not. I started by stating something like: "NCQ is overrated because
it won't equate to more performance for the common user. The thing to wait
for is 3Gb/s SATA because then you'll get double the throughput". Again,
that probably will be true. You say no. We'll just have to wait and see, won't
we.

The more you keep going on about "interface", the dumber you look. You pull
one little word out of a sentence that you recognize and then try to mold
everything to your own thought patterns. (That's what a child does, btw).
Try to see the big picture (or at least stick your head out of the box
occasionally).
You're the one that's flying in the face of *facts*. ...which you are
clearly devoid of. Frankly, others here are laughing at you.
Misconceptions are one thing, but outright refusal to listen to the facts
is another. Pig-headed is a term that comes to mind.

I've told you many times and you still don't get it. What can I say? I give
up. I tried to get you out of your technical box, but you won't budge. Power
to ya. Go for it. Personally I don't care. I was just trying to help you understand
something that wasn't nuts-and-bolts and black-or-white. I've met a lot of
people "like you". I don't mind a bit, as it takes all kinds to get the job done
and when there's something that is very well structured and defined and
technical in nature, you may be the one to do it best. I'm not calling it an
affliction or dissing the way your mind works, so don't worry about it.
You've clearly not been listening. I've told you nothing but the truth

You've mostly attacked personally and continuously and almost exclusively.
THAT brings your credibility down to zero for me.

Your "truth" was mostly off-topic and irrelevant. You tried to drag the conversation
down to a low technical level where the original thoughts would get lost.
You may care that 1.5Gb/s = 75MB/s in reality, whereas I only care that
3Gb/s = 150MB/s in reality. Get it? (Yet??) And I bet it'll be there when the
drives are available too. Wanna bet? (rhetorical).
(well the age thing is an old .chips inside joke), yet you continue to
challenge the ABSOLUTE TRUTH as if you somehow were a system design
engineer.

LOL, I am (I still do a little of that)! It's not a question of truth. It's a
question of comprehension. I don't care that the bus isn't saturated.
All I care is that I get double the throughput because that equates to
the value I was looking for. (Somehow, I get the idea that you'll never
understand. Maybe when you're older, knock on wood).
Go fer it! Your loss. I'm the one who was trying to teach you something.
You clearly are too proud of your ignorance to learn. So be it.

No you can't force feed me something I didn't order. It would be nice if
you could just say "this is how it is" and it would turn into that, but that's
childish. No one requested information from you and why anyone would
is beyond my comprehension. I see you're getting more and more hurt
because you did not understand and now you want to "escape" that
by creating a place where you are "safe". All I can say is that there is
nothing to be afraid of and no shame in misunderstanding. And "paradigm"
has caused many to be blinded to obviousness, so don't worry about it.
It's OK and nothing to be ashamed of. Really. No one is going to take
away your decoder ring.

Perhaps with that info you can be less reactionary and more cognitive
towards someone in the future. Please don't try to project your own
internal conflicts on others. And do consider other areas of knowledge,
capability and skill (social interaction, comprehension, communication
for example) as worthy of your time too, not just technical specialization
and detail. I've given you some good advice and I do charge for similar
counseling professionally, so take the freebie and run with it. You'll be
surprised how just a little change can make a big difference.

Good luck! (I really mean that. I hope you will transition to adulthood with
continually decreasing turmoil). I can't give you anymore of my time though
now.

(Last hint: you have to accept your problems before you can solve them).

AJ
 
keith said:
RusH said:
Well the implied (by the drive vendors) info is that there is a
speed up to be had in a future generation of SATA. No one is
picking nits (I'm not). Faster drives (more throughput) will be
welcomed with open arms (was the point). My guess is that SATA-I
is close to the stated 1.5Gb/s spec

[cut]

my dear, this thread is funnier and funnier
Do you realize that the FASTEST SATA drive available to this day (WD
Raptor 740) is actually a PATA drive with PATA2SATA glue chip onboard
? Yes, it is a PATA drive.

I'm forward thinking to when a consumer box could actually
self-heal in a short time. Currently it may take 20 mins (probably
less) for the as-manufactured configuration to be reinstalled. But
then there's the onsite/user config and data also (more work to be
done in this area). The goal of course being a user-friendly or
hands off approach to fixing mucked-up systems. Doesn't apply to
techies like yourself.

hmmm, but WHY recover ? I see no reason (besides a stupid user who
should get no computer access in the first place)

Of course hard drives never fail. And viruses (viri?) never muck up systems.
And... <your favorite disaster here> never happens.

Certainly not often enough for WinBlows installation time to be of any
importance.

Important to me, not important to you. Still missing the point huh. Is English
your second or third language?
BTW, you're stuttering.

You wish, youngun.

AJ
 
daytripper said:
fyi, you've waded in well above your head at this point...

Let us just start by pointing the gentleman to US Patent 4,486,739 by
Franaszek et al.


Kai
 
Kai Harrekilde-Petersen said:
Keith is hinting at the fact that he (and I) are seeing
double-postings from you.

You mean you wish it was that easy to "get out" of a situation
that you see as relevant when indeed it has no relevance whatsoever.
Get a grip. You don't bubble up by being a technological bore or
annoyance and Keith exhibited that aspect with aplomb. He needs
to change in order to advance. Plain and simple. If someone wants
a schematic, he'll consult the manual (as I did). A human technical
manual is a waste of life.

AJ
 
AJ said:
You mean you wish it was that easy to "get out" of a situation
that you see as relevant when indeed it has no relevance whatsoever.
Get a grip. You don't bubble up by being a technological bore or
annoyance and Keith exhibited that aspect with aplomb. He needs
to change in order to advance. Plain and simple. If someone wants
a schematic, he'll consult the manual (as I did). A human technical
manual is a waste of life.

AJ, get up from that computer, walk outside, and chill out.

I was merely telling you the reason why keith wrote "you're
stuttering". AFAICT it was not meant that you stutter as a person,
merely that we're seeing double-postings from you. Double-postings
can happen for a lot of reasons, and they're a bore. There was no
personal attack implied in my email, regardless of how you chose to
receive it.


Kai
 
Let me be a tad more kind than 'tripper; Nope. The drives were
interleaved for the opposite problem. They were faster than the
controller (CPU, in fact). The CPU couldn't decide where to go next
before "next" came around. ...thus interleave some crap in between so we
have time to thimk.

I remember that, and that the solution was for the controller to slurp
a whole track at once and then feed the PC at the speed it can manage.
The fact is that the bus is *SO* much faster that there isn't any "wait".
Even if the track/sector is hit without error, the data can be dumped
off the drive into main memory faster than the platter dumps it to its
electronics.

OK, that makes sense; I wasn't too sure how the relative speeds
balanced out (given that much of the waiting for HD is likely to be
no-data-flow waiting for the mechanics).
Try again. If the interface is 5-10x faster than the head, what's the
bottleneck? Hint: it's *not* the interface.

OK; then if the interface is that much faster, perhaps the idea is to
buffer stuff off the HD and send it through in fewer transactions?

I was wondering if faster UIDE modes would serve to reduce the
capacity of on-HD RAM, by offloading the HD unit quicker. While HD
data transfer is generally slow overall, it may be a mixture of fast
data squirts interspersed with no data flow at all. If this were the
case, then I could see that as a reason to buffer on the HD.

If this is not the case, then the reverse may apply - the need to
buffer writes to the HD so that the PC can shove through bigger wads
of stuff at a time, and then get on with other stuff while the HD
dumps from on-board RAM to platters as fast as it can. This would be
a bit like the way an application can be done with printing, even if
printing takes ages to commit to paper.

That would also fit with a shutdown problem pattern that required a
fix; where the OS had done writing to HD and would initiate an ATX
off, before the HD had flushed from on-board RAM to platters.
Head position is irrelevant if the buffer is empty as soon as the current
track/sector is read. The buffer is still waiting for the next request.

Yes, I take your point.
Remember, the interface is *faster* than the head/platter!

That was what I wasn't too sure about. I was thinking that peak rates
could be high enough to be a problem, but prolly not, now that I think
about it a bit more :-)


-------------------- ----- ---- --- -- - - - -
Trsut me, I won't make a mistake!
 
my dear, this thread is funnier and funnier

Yes, there's more heat than light :-)

Perhaps it's not about the interface speeding up access to or from
disk platters, or even HD units. Perhaps it's about completing DMA
transfers faster, to reduce even the slight processor overhead?

Else unless some breakthrough is in the wings, or the UIDE interface
is to be broadened for more device types, I can't see why it would
matter. Maybe it matters only with the big RAID stuff, and because
the hardware has to be developed for that, they figure they might as
well buzzword it into consumer-land as a value-add as well.


-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)
 
I remember that, and that the solution was for the controller to slurp
a whole track at once and then feed the PC at the speed it can manage.

Sure, however memory was expensive at the time.
OK, that makes sense; I wasn't too sure how the relative speeds
balanced out (given that much of the waiting for HD is likely to be
no-data-flow waiting for the mechanics).

While the mechanics are waiting, so is any electronics. The electronics
are just waiting faster. ;-)
OK; then if the interface is that much faster, perhaps the idea is to
buffer stuff off the HD and send it through in fewer transactions?

That is indeed done. It's called read-ahead. You read sectors that
aren't requested in hopes that you might be able to use some.
Sometimes it even works. ;-) However the OS can do the same, since
the weak link is the speed of the magnetics (and main memory is free).

There is the possibility of buffering writes too (write data to the
buffer at interface speeds, then go away and let the mechanics catch
up), but there are some serious data integrity issues here.
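The read-ahead idea is easy to sketch. Here's a toy simulation (the cache size, prefetch depth, and request stream are all made up; real firmware is far more adaptive):

```python
# Toy model of drive read-ahead: on a miss, fetch the requested sector
# plus the next few, hoping the host asks for them before they're evicted.
from collections import OrderedDict

def simulate(requests, readahead=4, cache_size=32):
    cache = OrderedDict()              # sector -> True, oldest first
    hits = misses = 0
    for sector in requests:
        if sector in cache:
            hits += 1                  # served at interface speed
        else:
            misses += 1                # must wait on the mechanics
            for s in range(sector, sector + readahead):
                cache[s] = True
            while len(cache) > cache_size:
                cache.popitem(last=False)   # evict the oldest prefetch
    return hits, misses

# Sequential reads benefit: a prefetch depth of 4 turns 3 of every 4
# requests into buffer hits.
print(simulate(range(100), readahead=4))    # (75, 25)
```

With a random request stream the hit rate collapses, which is the "sometimes it even works" above.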
I was wondering if faster UIDE modes would serve to reduce the
capacity of on-HD RAM, by offloading the HD unit quicker. While HD
data transfer is generally slow overall, it may be a mixture of fast
data squirts interspersed with no data flow at all. If this were the
case, then I could see that as a reason to buffer on the HD.

I don't follow... If the magnetics are faster than the interface,
buffering certainly helps here. The fact is that the magnetics are
considerably slower than the interface, so buffers help (read-ahead and
a little on writes), but not to a great degree. Since 8MB of memory is
essentially free these days... Though I do see cheaper drives with
2MB. Intel had some great drive performance testing tools (iPeak
Storage, IIRC) some years back. This would be a neat experiment for
them.
If this is not the case, then the reverse may apply - the need to
buffer writes to the HD so that the PC can shove through bigger wads
of stuff at a time, and then get on with other stuff while the HD
dumps from on-board RAM to platters as fast as it can. This would be
a bit like the way an application can be done with printing, even if
printing takes ages to commit to paper.

Except that we haven't lost anything but paper, ink, and time if the
printer jams. If the system barfs before data has been committed to
magnets... Write buffering is dangerous, though it is done (the
primary reason Maxtor drives were faster than Seagates and IBMs about
five years back). There are usually hidden "switches" in firmware to
change the behavior of these things. Some customers wouldn't touch
write-buffering, others don't care.
That would also fit with a shutdown problem pattern that required a
fix; where the OS had done writing to HD and would initiate an ATX
off, before the HD had flushed from on-board RAM to platters.

Exactly. ...or someone kicks the plug out of the wall. The OS thinks
the data is committed, the disk goes "huh?". The user goes
aw, $#|+! ...not good.
Yes, I take your point.


That was what I wasn't too sure about. I was thinking that peak rates
could be high enough to be a problem, but prolly not, now that I think
about it a bit more :-)

Look at the specifications of various drives on the manufacturer's web
sites. You'll find that drives are getting faster, but not as fast as
interfaces are. Interfaces overtook the heads somewhere back about
DMA4 or ATA2 (16MB/s), IIRC.
 
It wouldn't be a far stretch of the imagination to think that when 3Gb/s shows up
on a drive box that the technology inside will have changed to where the 1.5Gb/s
rate has been eclipsed. Therefore I'll still keep an eye out for drives with that
specification, as it may (and I think probably will) be an indicator of increased
performance.

My friends like to say I've got a pretty wild imagination and crazy
dreams. But even stretching my imagination, I don't see how a drive is
going to change to eclipse the 1.5Gb mark by the time 3Gb/s shows up.

Especially after looking around for information. Looking at the info,
it seems that the interface speed or improvements in the interface
speed/specification has little to do with any actual performance
improvement. Thus getting a new interface speed doesn't say anything
about any improvement on the inside.

I mean, the first thing that came to mind about some technology inside
that might have changed by the time 3Gb/s shows up is areal density.

Sometime in 2002, IBM predicted 400GB drives at 100Gb/sqinch by 2003
on the strength of their AFC breakthrough; we only got those in the middle
of 2004. At that point, the first AFC drives had 25.7Gb/sqinch, so areal
density growth has been about 100% a year since: 25.7 in 2002, ~50 in
2003, and ~100 in 2004.

Later that year, Seagate also introduced the Barracuda 7200.7 which
according to Storage Review (I know they aren't manufacturers but they
are the only ones I know with a consistent comparison of drives over
the years) clocked in at 56.2MB/s based on the WB99 Begin benchmark.
Currently, the fastest Seagate ATA drive is still the SATA Barracuda
7200.7 at 64.7MB/s, with the fastest of any ATA drive being the WD Raptor
at 71.8MB/s. The transfer rate has at best improved by some 28% over
about 2 years, or 14% a year.

So areal density did not have a linear relationship with the transfer
rate. It is at best a 1:6 thing.
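The arithmetic behind those 28%/14% figures, using the Storage Review numbers quoted above (14% is the simple average; compounding gives a touch less):

```python
start, end, years = 56.2, 71.8, 2      # MB/s: Barracuda 7200.7 -> WD Raptor
total = end / start - 1                # total improvement over the period
simple = total / years                 # simple per-year average
compound = (end / start) ** (1 / years) - 1   # compounded per-year rate
print(f"total {total:.0%}, simple {simple:.0%}/yr, compound {compound:.0%}/yr")
# total 28%, simple 14%/yr, compound 13%/yr
```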

When is the 3Gb/s interface supposed to show up? The Serial ATA
consortium said SATA would be introduced at 150MB/s and roadmapped up
to 600MB/s in 10 years. That's, erm, 45MB/s every year on average. Or
rather 150, 300, 600 every 3 years or so, since they said it's planned to
support 10 years of projected demand.

Either way, that puts us right smack around the time 3Gb/s is expected
or perhaps overdue since Seagate unveiled the 1st drive in Aug 2000...
:ppPPP

Let's say that internal drive performance is all about areal density.
So if we get density doubled by the end of the year, we will only get
that 14% now. Or let's assume the entire 28% increase was due to one
breakthrough in the middle, and assume that now, in 2005, we're right on
the edge of one.

This could be due to perpendicular recording. Last month, Toshiba
unveiled the first commercial perpendicular recording drive giving a
33% increase over 100Gb/sqinch. But 33% areal density increase only
gives us 5% performance.

Assume it's new tech, cumulative with whatever they've done to double
areal density in the past, and also that they make great improvements in
the next 12 months. That's into 2006... say 100% on top of the usual
100%. With 200% increase in areal density by 2006, that only gives us
about 33% performance gain.

We would be looking at 96MB/s (from 72MB/s) and certainly expecting to
see 3Gbps interface. So wouldn't it indicate that improvements in
specification has rather little to do with what performance we can
expect?

Especially since U320 SCSI drives certainly didn't give anywhere close
to 100% improvement over U160 drives. Even more so given that I'm
really using very optimistic estimates IMO.

In fact looking at the Storage Review results, albeit only 1 of them,
it appears that the winners are all 15K drives and 10K drives. Only a
handful of the latest 7.2K drives manages to crawl past what appear to be 1st
generation 10K U160 drives. The only ATA drive mixing it up there with
the SCSI drives also happen to be a 10K IIANW. This seems to indicate
that spindle speed is more likely to give better performance than any
improvement in interface specification. So the breakthrough you should
be eyeing isn't the 3Gbps interface but a drive at 18K RPM or
something like that.

Also, looking at the S-ATA documents, a major section appears to be
dedicated to something called a SEBM or concentrator. Basically I
think it means a storage controller that you stick a couple of drives
to, but appears to the computer as a single Serial ATA device.

Now if we stick a pair of WD Raptors into a RAID 0 controller, voila,
we've hit the 150MBps limit. Which you already somewhat alluded to
earlier. So why not take the obvious and conclude that the
1.5Gbps/3Gbps/6Gbps spec was designed for this purpose stated in the
specification rather than some unrealistic breakthrough in single
drive technology?

Lastly, the specifications and other SATA docs kept referring to the
Serial ATA BUS. It seems like they are looking at the bus, and all
specifications are to be the characteristics of the bus, without
relation to connected devices apart from the connectors/logic required
to work. Wouldn't it be logical to think that when the SATA bus speeds up
to 3Gbps, the drive interface would be required to work at 3Gbps to
match the bus instead of thinking that some major breakthrough in
singular drive speed is going to happen???

So why do you say that it's quite possible (since it's not a long
stretch of the imagination) that we will have drives hitting 150MB/s
when 3Gbps interface arrives?

Is there some potential breakthrough in disk tech that gives 600%
increase in areal density, ignoring laws of diminishing returns and
maintain the 1:6 ratio, giving us 100% performance increase in
mainstream desktop drives within next 12 to 18 months?

12 to 18 months because 3Gbps is like expected last year or maybe the
Serial-ATA group are missing road maps like everybody does and 3
stages over 10 years since 2000 actually means 3 stages over 20 years,
which still puts 2005~2006 as the point 3Gbps comes out.

Or do you know/think that they are really really going to be late with
the 3Gbps interfaces? :pppPp


--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
The little lost angel said:
So areal density did not have a linear relationship with
the transfer rate. It is at best a 1:6 thing.

Agreed. To say that xfer is linear with density is
to assume constant trackwidth. It is not.

Width and bitspacing are controlled by fundamentally different
factors (positioning vs discrimination) but both improve over time.

As a simplification, both could improve together the same amount.
Then 2x areal density would give 41% increase in transfer.
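That 41% is just the square-root relationship: if track pitch and bit pitch improve by the same factor, only the along-track half of an areal-density gain shows up in transfer rate. A quick check:

```python
import math

def transfer_gain(areal_factor):
    """Transfer-rate gain if linear and track density improve equally,
    so linear density scales as the square root of areal density."""
    return math.sqrt(areal_factor) - 1

print(f"{transfer_gain(2):.0%}")   # 41%
print(f"{transfer_gain(4):.0%}")   # 100%: it takes 4x areal density to double STR
```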

72 MByte/s xfer is pretty fast-- at least 580 MHz, which is very
fast for an 8cm flexwire (hopefully twisted pair). Unless they've
gone to multiple parallel heads or level encodings (modems).
spindle speed is more likely to give better performance
than any improvement in interface specification.

Fully agreed.
and conclude that the 1.5Gbps/3Gbps/6Gbps spec was designed

This is gonna require another Northbridge port,
I guess PCI-E, unless you want to use the AGP :)
Plain PCI graphics cards still sell.

-- Robert
 
Let us just start by pointing the gentleman to US Patent 4,486,739 by
Franaszek et al.

One of the goodies! For those who don't have ready access to a patent
database:

Title: Byte oriented DC balanced (0,4) 8B/10B partitioned
block transmission code

Published: 1984-12-04
Filed: 1982-06-30

Inventor: Franaszek, Peter A.; Katonah, NY
Widmer, Albert X.; Katonah, NY

Abstract: A binary DC balanced code and an encoder circuit for
effecting same is described, which translates an 8 bit
byte of information into 10 binary digits for transmission over
electromagnetic or optical transmission lines subject to timing
and low frequency constraints. The significance of this code is
that it combines a low circuit count for implementation with
excellent performance near the theoretical limits, when measured
with the commonly accepted criteria. The 8B/10B coder is
partitioned into a 5B/6B plus a 3B/4B coder. The input code
points are assigned to the output code points so the number of
bit changes required for translation is minimized and can be
grouped into a few classes.
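The DC-balance idea at the core of that patent can be illustrated with a toy chooser. This is not the real 5B/6B + 3B/4B tables, just the running-disparity mechanism: when a codeword would push the running disparity further from zero, transmit its complement instead.

```python
def disparity(word, width=10):
    """Number of 1 bits minus number of 0 bits in a codeword."""
    return 2 * bin(word).count("1") - width

def dc_balance(codewords, width=10):
    """Toy encoder: emit each codeword or its bitwise complement,
    whichever keeps the running disparity near zero."""
    rd, out = -1, []                    # running disparity starts negative
    mask = (1 << width) - 1
    for w in codewords:
        d = disparity(w, width)
        if d != 0 and (d > 0) == (rd > 0):
            w ^= mask                   # complement flips the disparity sign
            d = -d
        out.append(w)
        rd += d
    return out, rd

# Six heavy codewords in a row: unbalanced they'd drive the line to
# rd = 23; balanced, the running disparity stays bounded.
out, rd = dc_balance([0b1111111000] * 6)
print(rd)   # -1
```

The real 8B/10B code restricts codewords to disparity 0 or +/-2 and makes the choice per sub-block, but the goal of a bounded running disparity is the same.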
 
For the last time: that's not the point, so stop trying to make it seem
like it. Admit you had a misunderstanding and move on (step into
manhood already!)

You are a nitwit! That *was* your point. You're determined to shift the
goalposts to save face somehow though.

You'll never learn if you don't listen to the teachers.
 
My friends like to say I've got a pretty wild imagination and crazy
dreams. But even stretching my imagination, I don't see how a drive is
going to change to eclipse the 1.5Gb mark by the time 3Gb/s shows up.

They think you may have a wild imagination, but I see yours as being quite
tempered by reality. Unlike...
Especially after looking around for information. Looking at the info, it
seems that the interface speed or improvements in the interface
speed/specification has little to do with any actual performance
improvement. Thus getting a new interface speed doesn't say anything
about any improvement on the inside.
Exactly.

I mean, the first thing that came to mind about some technology inside
that might have changed by the time 3Gbs shows up is areal density.

Actually the thing that limits STR is the head and sense amplifiers. They
could pack more magnets on the platter, but would then have to slow the
platter down to be able to keep the bandwidth of the head/amps within
physical limits. The STR is limited by the head electronics, not by areal
density. If you doubled the linear density you'd have to cut the RPM in
half to compensate. That's a non-starter in the marketing departments.
Sometime in 2002, IBM predicted 400GB with 100Gb/sqinch drives by 2003
with their AFC breakthrough; we only got those in the middle of 2004. At
that point, the first AFC drives had 25.7Gb/sqinch. So areal density
growth was about 100% a year since. 25.7 2002, ~50 2003, and ~100 2004.

The little lost angel said:
Later that year, Seagate also introduced the Barracuda 7200.7 which
according to Storage Review (I know they aren't manufacturers but they
are the only ones I know with a consistent comparison of drives over the
years) clocked in at 56.2MB/s based on the WB99 Begin benchmark.

When I was studying disk drives waybackwhen, they weren't much better than
Tom's technically, but at least they tried.
Currently, the fastest Seagate ATA drive is still the SATA Barracuda
7200.7 at 64.7MB/s, with the fastest any ATA drive being the WD Raptor
at 71.8MB/s. The transfer rate has at best improved by some 28% over
about 2 years or 14% a year.

So areal density did not have a linear relationship with the transfer
rate. It is at best a 1:6 thing.

Look at the track density and RPM vs. STR. That's what I was trying to
suggest for AJ's "homework". This will give you the bandwidth of the head,
which has been the limiting factor for a while. That is, it's a linear
issue, not a quadratic (areal) one.

<snip>
 
Agreed. To say that xfer is linear with density is
to assume constant trackwidth. It is not.

Width and bitspacing are controlled by fundamentally different
factors (positioning vs discrimination) but both improve over time.

Not discrimination, rather head/amplifier bandwidth. Slow the platter
down and the bits can be made smaller. It gets uninteresting fast though.

<snip>
 
AJ, get up from that computer, walk outside, and chill out.

I was merely telling you the reason why keith wrote "you're
stuttering". AFAICT it was not meant that you stutter as a person,
merely that we're seeing double-postings from you. Double-postings
can happen for a lot of reasons, and they're a bore. There was no
personal attack implied in my email, regardless of how you chose to
receive it.

Of course. That's why I put the BTW in front of the comment. BTW
indicates an off-hand comment, not to be taken as an integral part of the
discussion.

AJ has a real communications problem. I suppose it's just best to plonk
the idjit. ...though I rarely plonk anyone.
 
Of course. That's why I put the BTW in front of the comment. BTW
indicates an off-hand comment, not to be taken as an integral part of the
discussion.

AJ has a real communications problem. I suppose it's just best to plonk
the idjit.

Well, it couldn't hurt.
...though I rarely plonk anyone.

Aw, what a rip-off!

/daytripper ;-)
 
It's not that easy. The limit depends instead on signal/noise ratio and head
settle time requirement. For a given technology state of art, there is lower
limit on single bit area. For example, for high performance drives track
pitch can be traded with linear density, to reduce head settle time and
increase STR. At least, the datasheets of the drives of the past suggest it.
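That trade can be put in symbols. For a fixed single-bit area (the technology limit), bit length = bit area / track pitch, and STR scales as RPM / bit length, so a fatter track pitch buys shorter bits and a higher STR. A back-of-envelope model in made-up units, ignoring settle time and everything else:

```python
def str_relative(bit_area, track_pitch, rpm):
    """Sustained transfer rate in arbitrary units: linear density
    (1 / bit_length) times rotation rate, with bit area held fixed."""
    bit_length = bit_area / track_pitch   # wider tracks -> shorter bits
    return rpm / bit_length

base = str_relative(1.0, 1.0, 7200)
fat = str_relative(1.0, 1.5, 7200)   # 50% wider track pitch, same bit area
print(round(fat / base, 3))          # 1.5: higher STR, at a cost in capacity
```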
 
It's not that easy. The limit depends instead on signal/noise ratio and head
settle time requirement. For a given technology state of art, there is lower
limit on single bit area. For example, for high performance drives track
pitch can be traded with linear density, to reduce head settle time and
increase STR. At least, the datasheets of the drives of the past suggest it.

Of course the S/N ratio is part of the capacity of any
information channel. Shannon tells us that the max theoretical data rate
is bandwidth * log2(1 + S/N).

A colleague who has designed many generations of drive electronics (and
holds several patents in the area) tells me the real limiting factor is the
head/amplifier bandwidth, which can often be seen in the similar STRs
between 7200 and 5400RPM drives. Though I've noticed the 5400RPM drives
aren't quite as swift anymore. I've been meaning to investigate why.
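Shannon's formula from above, plugged with illustrative numbers (the 600 MHz bandwidth and 20 dB SNR here are made up for the sketch, not real read-channel figures):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """C = B * log2(1 + S/N): the maximum error-free data rate of a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)              # 20 dB -> a power ratio of 100
c = shannon_capacity(600e6, snr)   # hypothetical 600 MHz read channel
print(f"{c / 1e9:.2f} Gb/s")       # 3.99 Gb/s
```

The head/amplifier bandwidth and the achievable S/N together cap the channel, which is the colleague's point.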
 