John H.
http://www.geek.com/news/geeknews/2003Dec/bch20031212023056.htm
An interesting user comment:
At some level... it really is amazing (4:59pm EST Fri Dec 12 2003)
[Reminiscing, pardons...]
I mean - it was only back in the 1930's for history's sake, that the
first steel-wire recorders were developed, and in the 1940's came "tape"
recorders, which sported flaky films of ground-up 'burned' hematite (a
modestly magnetic iron ore rock) on a nitrocellulose substrate. For the
longest time, the ground-rock film was king... the first true spinning
digital disks used it, and it even made it as far as the 30 megabyte
5.25 inch "Seagate" drives. But those lil' particles had to go.
Too noisy, not enough energy stored per unit area, way too large, and
damned difficult to set down in smooth layers. The industry transitioned
to metal-particle slurries, which sufficed for a while, then gradually
went to directly "plated" aluminum platters. The plating was actually a
high voltage sputtering operation, and the media had to be passivated
with dilute nitric then sulfuric acids and a number of interesting
"washes" of ammonia and hexabutyl hexanoate. Anyway...
The issue though - for the longest time - was that the "heads" were made
from itty bitty "U's" of a magnetic flux core that was literally wrapped
with several turns (!!! can you imagine) of fine wire, to constitute the
read/write head. The 'gap' was horizontal, so the smaller the gap, the
less the write-field would leak out, making it harder and harder to
impinge 'bits' on the underlying spinning media.
Then researchers discovered some really amazing compounds that had the
property of changing their resistance in response to fairly modest
changes in local magnetic field. These made for extremely sensitive READ
heads, which could successfully read bits far smaller than the old "U"
heads. But how to write these darn bits? Well... then the idea of a
'monopolar' head took over: a single tiny pole, instead of a pair, and
relying on Gauss's law for magnetism (no magnetic monopoles, so the net
magnetic flux through any closed surface is zero). So, the other
pole could be big, and essentially virtual. Bits were recorded
vertically, and things immediately got much better.
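The flux-conservation argument being leaned on here is Gauss's law for magnetism: there are no magnetic monopoles, so the net magnetic flux through any closed surface is zero:

```latex
\oint_{S} \mathbf{B} \cdot d\mathbf{A} = 0
```

Flux driven down through the tiny write pole has to return somewhere; spread the return path over a physically large pole and its field dilutes below the media's switching threshold, so only the small pole actually writes bits.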
These magnetoresistive heads and monopole bit-writers propelled disk
capacity (which had more or less begun to stagnate at the 1.2 and 2.4
gigabyte 5.25 inch, full-height monstrosities) rapidly to new and
much lower price points. Today GMR ("giant magneto-resistive")
heads, and nanometer-scale write-heads, are powering the 70 gigabit per
sq. inch densities now cheaply available.
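As a rough back-of-envelope, that areal density translates into per-platter capacity. The platter geometry below is purely an assumption for illustration (a 3.5-inch drive with a usable recording band between an inner and outer radius I picked myself), not a figure from any datasheet:

```python
import math

AREAL_DENSITY_GBIT_PER_SQIN = 70  # the density figure quoted above

# Hypothetical 3.5-inch platter: usable band between an assumed
# inner radius of 0.6 in and outer radius of 1.8 in.
r_outer, r_inner = 1.8, 0.6
band_area = math.pi * (r_outer**2 - r_inner**2)  # sq. inches per side

gbits = AREAL_DENSITY_GBIT_PER_SQIN * band_area * 2  # two recording sides
gbytes = gbits / 8
print(f"{band_area:.1f} sq in per side -> about {gbytes:.0f} GB per platter")
```

On those assumed numbers a single platter lands in the 150+ GB range, which is why densities at this level made triple-digit-gigabyte drives cheap.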
The concept of nanopatterning (which may well prove to be an absolutely
excellent use for the 'nanoprinting' impression technology spoken of
elsewhere) will make sure magnetic domains are at least well formed,
uniform in size, uniform in energy-capacity, and uniform in their
resistance to change (data loss). The superparamagnetic limit is
sidestepped quite nicely when each domain is physically separated from
each other domain by a gap. Hard for neighboring magnetic fields to flip
OUR field, if we're each an island unto ourselves.
So, drive capacities will rise to the 100-gigaBYTE class, and maybe 2-3
times beyond. As another poster on a different geek.com forum pointed
out, what the hell good are all those terabytes, if it takes months to
perform a defrag?
In essence, I/O is falling ever farther behind processing speed.
Literally: in 1981 a 5 inch hard drive delivered 2 megabytes per second,
while the CPUs of the day (8086) could only slog around 10 megabytes per
second on their memory busses themselves. Today, the
highest-of-the-highest speed hard drives can sustain 70 megabytes per
second of throughput, but CPUs are able to slush around 3500 megabytes
or more per second on their busses.
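The widening of that gap can be put in plain numbers, using only the figures quoted above:

```python
# Figures quoted in the comment above (approximate, era-specific).
disk_1981, cpu_1981 = 2, 10      # MB/s: early 5" drive vs. 8086 memory bus
disk_2003, cpu_2003 = 70, 3500   # MB/s: fast drive vs. modern memory bus

gap_1981 = cpu_1981 / disk_1981  # CPU bus outran the disk 5x
gap_2003 = cpu_2003 / disk_2003  # CPU bus now outruns the disk 50x
print(f"1981 gap: {gap_1981:.0f}x, 2003 gap: {gap_2003:.0f}x, "
      f"widened {gap_2003 / gap_1981:.0f}-fold")
```

So the disk-to-CPU shortfall has grown roughly tenfold in two decades, which is the whole argument for rethinking the drive's internals.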
The change is going to have to be a bit expensive, and pretty heavily
leveraged off of what microelectronics CAN do: pipeline everything. I
see the heads being redone to have 16 or 32 (or 36 for ECC) complete
read islands, and write islands. I see the supporting electronics able
to read these and cross-correlate the signals into a bit stream that is
20 to 30 times the throughput of today's drives. I see 64 megabyte (or
more!) on-drive caches, to further speed up operations, and smart
statistical "look-ahead" circuitry to pre-position and pre-read data for
delivery to the CPU. I also see the need for substantially faster BUS
interfaces - 1 GByte/sec at a minimum, and more like 4
GByte/sec/64-bit/500MHz to really keep the data flowing.
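The "20 to 30 times" figure checks out arithmetically if you assume each read island sustains roughly today's single-head rate, minus some overhead for the cross-correlation step. Both the per-island rate carried over from above and the 0.9 efficiency factor are my own assumptions:

```python
single_head_mb_s = 70   # today's sustained single-head rate, from above
data_islands = 32       # of the 36 islands, 4 assumed spent on ECC
overhead = 0.9          # hypothetical efficiency after cross-correlation

aggregate_mb_s = single_head_mb_s * data_islands * overhead
speedup = aggregate_mb_s / single_head_mb_s
print(f"{aggregate_mb_s:.0f} MB/s aggregate, about {speedup:.0f}x today")
```

That lands around 2 GB/s off a single head assembly, which is exactly why a 1 GByte/sec bus would be the floor rather than the goal.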
Then... and maybe then... will the hard drives be fast enough for us to
have nearly "instant booting", and so on. Data needs to get both ON and
OFF those disks as fast as possible.
- by GoatGuy