Why does HD defrag work when it operates on virtual/logical blocks rather than physical ones?


Anthony Paul

Hello everyone!

I completely understand the logic behind defragmentation, but I simply
don't understand how it could work given the hard drive's internal block
mapping. If the hard drive assigns logical blocks 1, 2, and 3 to
physical blocks 20, 132, and 88 respectively, then what good is it if
the defragger orders it by logical contiguity when it's the physical
that counts? I've searched Usenet but couldn't find a straight answer;
any gurus up to the challenge?

Cheers!

Anthony
 
Previously Anthony Paul said:
Hello everyone!
I completely understand the logic behind defragmentation, but I simply
don't understand how it could work given the hard drive's internal block
mapping. If the hard drive assigns logical blocks 1, 2, and 3 to
physical blocks 20, 132, and 88 respectively, then what good is it if
the defragger orders it by logical contiguity when it's the physical
that counts? I've searched Usenet but couldn't find a straight answer;
any gurus up to the challenge?

Very simple: the mapping between the outside block address and the
physical block placement on the disk is almost 1:1. Other mappings
are only used for defect replacement, and that affects only an
insignificantly small number of blocks.

Arno
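
To make that concrete, here is a minimal Python sketch (all block
numbers invented): the logical-to-physical map is the identity
function except for a handful of reallocated defects, so logical
contiguity is physical contiguity almost everywhere.

# Sketch: logical-to-physical mapping on a modern drive. The map is
# the identity except for a few reallocated defects (numbers invented).
REMAPPED = {1040: 999990, 72311: 999991}  # bad LBA -> spare sector

def physical_block(lba):
    # Identity for the vast majority of blocks.
    return REMAPPED.get(lba, lba)

for lba in (1038, 1039, 1040, 1041):
    print(lba, "->", physical_block(lba))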
 
Hello Arno, thank you for replying!

Really? Hmmm... so is the mapping itself just for defect control?
And does anyone have any links to a site that explains
how this works at a low level? It's just out of curiosity.

Thanks!

Anthony
 
Anthony said:
Hello Arno, thank you for replying!

Really? Hmmm... so is the mapping itself just for defect control?
And does anyone have any links to a site that explains
how this works at a low level? It's just out of curiosity.

Thanks!

Anthony

HD designers use several strategies for sector replacement, and they
generally treat their choices as proprietary information (trade secrets).

One common scheme is the sliding-sector, in which tracks have hidden sectors
at the "end" of the track. If a defect-free track holds logical sectors
I to K in physical sectors I to K and the HD decides that physical sector J is
bad (I < J < K), then after allocating a hidden sector, logical sectors I to J-1
will be in physical sectors I to J-1, and logical sectors J to K will be in
physical sectors J+1 to K+1. Yes, Virginia, it is a tad more complicated
than that; OK, maybe a *lot* more complicated.
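
A rough Python sketch of that slipping scheme, assuming a single
hidden spare at the end of the track (geometry and numbers invented):

# Sketch of sector slipping within one track: when physical sector J
# goes bad, logical sectors J..K slide up one slot into the hidden
# spare, so the track stays sequentially readable with no extra seek.
def slip_track(track_map, bad_physical):
    # track_map[p] holds the logical sector stored in physical slot p;
    # the last slot is a hidden spare, marked None.
    assert track_map[-1] is None, "no spare left on this track"
    return (track_map[:bad_physical]        # logical I..J-1 stay put
            + [None]                        # physical J skipped as bad
            + track_map[bad_physical:-1])   # logical J..K -> J+1..K+1

track = list(range(9)) + [None]   # logical sectors 0..8 plus one spare
print(slip_track(track, 4))       # [0, 1, 2, 3, None, 4, 5, 6, 7, 8]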
 
Bob Willard said:
HD designers use several strategies for sector replacement, and they
generally treat their choices as proprietary information (trade secrets).
One common scheme is

Well, it used to be on IBM drives. Not so anymore.
the sliding-sector,

Or nested reassign.
in which tracks have hidden sectors at the "end" of the track.

Or at the end of a cylinder.
If a defect-free track holds logical sectors I to K in physical sectors
I to K and the HD decides that physical sector J is bad (I < J < K), then
after allocating a hidden sector, logical sectors I to J-1 will be in
physical sectors I to J-1, and logical sectors J to K will be in physical
sectors J+1 to K+1.

I think this might have been on SCSI drives, I'm not so sure it was
used on IDE. On IDE a sector might just have been replaced with a
sector further up the track without slipping all the other sectors.
On SCSI you were able to choose which mode to use.
Yes, Virginia, it is a tad more complicated than that;

No, not really. You made it overly complicated.
I stays where it is; (the contents of) J and all sectors behind it move up
one place in the track or cylinder, and their block numbers are readjusted.
While this prevents a seek to a replacement sector and a seek back when
reading I to K sequentially, the replacement action itself is rather
time-consuming when sectors have to be copied further up in a secure way.

It's also an expensive use of spares: a drive may run out of spares
merely because a single track or cylinder exhausts its local spares.
It also has the problem of extra built-in latency, because the spare
sectors take up room on every track or cylinder even when unused.
That may be why this was abandoned in favour of spares in the last
cylinder(s) of the drive. At least IBM/Hitachi provided a way to
(low-level) reformat (resequence) the drive so that all logical block
numbers, as they appear physically on the drive, are renumbered in
sequential order again, similar to the description in the paragraph
above but without the copying. Which is similar to what happens before
drives leave the factory.
 
Previously Anthony Paul said:
Hello Arno, thank you for replying!
Really? Hmmm... so is the mapping itself just for defect control?
And does anyone have any links to a site that explains
how this works at a low level? It's just out of curiosity.

No official reference, but basically the disk stores a bitmap
(loaded into drive RAM when running) which gives the good/defect
status of each sector. The bitmap allows very fast access. For the
defect mapping it depends on where the spare sectors are. A typical
solution is to have localized spares. One possibility is that the disk
has a table for the localized reallocation, giving the original sector
number for each spare. Finding the right one would then need a linear
search through that table. If a spare pool has, say, 1000 sectors
in it, this is still an operation that needs < 1 ms or so and
is not an issue. Optimizations include keeping the mapping tables
sorted (allowing O(log n) binary search) or hashed.

Arno
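
A toy Python version of that lookup, assuming a local spare pool and
a remap table of (original sector, spare sector) pairs; the sorted and
hashed variants show the optimizations mentioned above (sector numbers
invented):

from bisect import bisect_left

# Remap table, kept sorted by original sector number (values invented).
remap_table = [(1040, 900000), (2133, 900001), (7777, 900002)]

def lookup_linear(sector):
    # O(n) scan; fine for a pool of ~1000 entries (well under 1 ms).
    for original, spare in remap_table:
        if original == sector:
            return spare
    return sector  # not remapped: identity

def lookup_sorted(sector):
    # O(log n) binary search over the sorted original sector numbers.
    keys = [orig for orig, _ in remap_table]
    i = bisect_left(keys, sector)
    if i < len(keys) and keys[i] == sector:
        return remap_table[i][1]
    return sector

remap_hash = dict(remap_table)  # hashed variant: O(1) average lookup

for s in (1039, 1040, 2133):
    assert lookup_linear(s) == lookup_sorted(s) == remap_hash.get(s, s)
    print(s, "->", lookup_linear(s))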
 
Previously Bob Willard said:
Anthony Paul wrote:

HD designers use several strategies for sector replacement, and they
generally treat their choices as proprietary information (trade secrets).
One common scheme is the sliding-sector, in which tracks have hidden sectors
at the "end" of the track. If a defect-free track holds logical sectors
I to K in physical sectors I to K and the HD decides that physical sector J is
bad (I < J < K), then after allocating a hidden sector, logical sectors I to J-1
will be in physical sectors I to J-1, and logical sectors J to K will be in
physical sectors J+1 to K+1. Yes, Virginia, it is a tad more complicated
than that; OK, maybe a *lot* more complicated.

That would be sort of the pool idea, with the pool restricted
to one track. Easier to implement, but more wasteful.

Arno
 
Anthony Paul said:
Really? Hmmm... so is the mapping itself just for defect control?

No, it also maps logical blocks to heads/cylinders/sectors.
And with modern drives the number of sectors per track
varies in bands across the drive, so the original CHS
spec isn't even possible with modern hard drives.
And does anyone have any links to a site that explains
how this works at a low level? It's just out of curiosity.

You don't need one; it's too obvious to need that.
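
For reference, the classic fixed-geometry LBA-to-CHS arithmetic looks
like the Python sketch below; zoned recording, where outer tracks
carry more sectors than inner ones, breaks exactly the fixed
sectors-per-track assumption it relies on (geometry invented):

HEADS = 16
SECTORS_PER_TRACK = 63  # fixed -- the assumption zoned recording breaks

def lba_to_chs(lba):
    cylinder, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
    head, sector0 = divmod(rem, SECTORS_PER_TRACK)
    return cylinder, head, sector0 + 1  # CHS sector numbers are 1-based

print(lba_to_chs(0))     # (0, 0, 1)
print(lba_to_chs(1008))  # (1, 0, 1)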
 
Anthony Paul said:
I completely understand the logic behind defragmentation

No you don't.
but I simply don't understand how it could work given the hard drive's
internal block mapping. If the hard drive assigns logical blocks 1, 2,
and 3 to physical blocks 20, 132, and 88 respectively,

Drives don't do that.
then what good is it if the defragger orders it by
logical contiguity when it's the physical that counts?

In fact logical and physical are the same with the exception of
the reallocated bads, and there aren't enough of those to matter.

And defrags are also about ensuring that particular files are
in a single contiguous set of blocks, rather than fragmented
(a sketch of that follows this post).
I've searched Usenet but couldn't find a straight answer;

Which should have told you that there is a fundamental
problem with your idea about how hard drives work.
any gurus up to the challenge?

There is no 'challenge'.
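
On the contiguity point, a small Python sketch: given the list of
logical blocks a file occupies, count the separate runs (fragments)
it spans; a defragmenter aims to get this down to 1 (block numbers
invented):

def count_fragments(blocks):
    # A new fragment starts wherever block numbers stop being
    # consecutive.
    if not blocks:
        return 0
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

print(count_fragments([20, 21, 22, 88, 89, 132]))  # 3 fragments
print(count_fragments(list(range(20, 26))))        # 1: fully contiguous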
 
Thanks to all for replying!

Rod said:
No you don't.

Hmmm... I thought the whole purpose of defragmentation was to maintain
file contiguity so that the read/write head could read the sectors in a
sequential fashion rather than having to jump around. However, since
you've pointed out that I don't understand it at all, perhaps you can
show me which part of my definition needs correcting.
Drives don't do that.

Yes, I realize that now... you've been kind enough to point out the
fallacies, perhaps you would like to devote a modicum of that effort
towards enlightenment as well.
In fact logical and physical are the same with the exception of
the reallocated bads, and there aren't enough of those to matter.

Yes, this is a surprise since I've been told otherwise in the past but
it makes sense to me.
And defrags are also about ensuring that particular files are
in a single contiguous set of blocks, rather than fragmented.

How is this different from my understanding of the logic behind
defragmentation?
Which should have told you that there is a fundamental
problem with your idea about how hard drives work.


There is no 'challenge'.

I'm puzzled as to why you felt the inherent need to reply in this
fashion but to each his own; and for the record, it was precisely
because I knew that there was something fundamentally wrong that I
posted the question in the first place. Perhaps if more effort went
into educating people rather than castigating them for their lack of
knowledge... no no, what am I saying, that would be too much to ask
for.

Thanks to all for clarifying the matter!

Cheers!

Anthony
 
Anthony Paul said:
Rod Speed wrote
Hmmm... I thought the whole purpose of defragmentation was to
maintain file contiguity so that the read/write head could read the
sectors in a sequential fashion rather than having to jump around.
Yes.

However, since you've pointed out that I don't understand it at all,
perhaps you can show me which part of my definition needs correcting.

I pointed that out already, in the next para.
Yes, I realize that now... you've been kind enough to point
out the fallacies, perhaps you would like to devote a
modicum of that effort towards enlightenment as well.

Did that too. With some it's more profitable to try it with a stone, though.
Yes, this is a surprise since I've been told otherwise in the past

Then that was just plain wrong.
but it makes sense to me.

There's no point in the logical order differing from the
physical, with the exception of reallocated bad sectors.
How is this different from my understanding of the logic behind defragmentation?

See above, at your para with the block numbers in it.
I'm puzzled as to why you felt the inherent need to reply in this fashion

I pointed out the fallacies in your original. That's just another of those.
but to each his own; and for the record, it was precisely
because I knew that there was something fundamentally
wrong that I posted the question in the first place.
Duh.

Perhaps if more effort went into educating people

I did that.
rather than castigating them for their lack of knowledge...

I didn't do that.
no no, what am I saying, that would be too much to ask for.

Pathetic, really.
 
Previously Anthony Paul said:
Thanks to all for replying!
Hmmm... I thought the whole purpose of defragmentation was to maintain
file contiguity so that the read/write head could read the sectors in a
sequential fashion rather than having to jump around. However, since
you've pointed out that I don't understand it at all, perhaps you can
show me which part of my definition needs correcting.

Hehe, Rod shooting himself in the foot. Of course that is exactly
the point of defragmentation.

Arno
 
Arno said:
Hehe, Rod shooting himself in the foot.
Nope.

Of course that is exactly the point of defragmentation.

Never said it wasn't.

Pity about the next bit, which shows that he doesn't understand it at all.
 
I believe it's quite obvious to everyone that you're the one that's
confused here. The statement that I don't understand the logic behind
defragmentation is incorrect, as I've already stated my definition
which everyone thus far seems to agree on, including the offending party
(for lack of a better term). Instead, what I did NOT understand was the
internal mapping scheme that hard drives use. The two are quite
distinct and knowledge of one does not require knowledge of the other.
Had certain individuals not been so bent on trying to inflate their own
egos at the expense of others they would have realized the error in
their argument.

Since this thread no longer serves any purpose other than to continue
feeding trolls that love to hide behind the blanket of security that
internet anonymity affords them, I consider it closed. Thanks to all
for the input, and please, don't feed the trolls!

Anthony
 
Anthony Paul said:
I believe it's quite obvious to everyone
that you're the one that's confused here.

How odd that every single individual who chose to comment on your
original rubbed YOUR nose in YOUR misunderstanding of how hard
drives work with regard to logical and physical block ordering.
The statement that I don't understand the
logic behind defragmentation is incorrect,
Nope.

as I've already stated my definition

Pity that wasn't the problem with your understanding, stupid.
which everyone thus far seems to agree on, including the offending
party (for lack of a better term). Instead, what I did NOT understand
was the internal mapping scheme that hard drives use.

Which is what I said, stupid.
The two are quite distinct

No one ever said otherwise.
and knowledge of one does not require knowledge of the other.

Pathetic, really.
Had certain individuals not been so bent on trying
to inflate their own egos at the expense of others
they would have realized the error in their argument.

Never ever could bullshit your way out of a wet paper bag.
Since this thread no longer serves any purpose other
than to continue feeding trolls that love to hide behind the
blanket of security that internet anonymity affords them,

Never ever could bullshit your way out of a wet paper bag.
I consider it closed.

You have always been, and always will be, completely and utterly
irrelevant. What you might or might not claim to consider, in spades.
Thanks to all for the input, and please, don't feed the trolls!

Of course you never ever do anything like that yourself, eh?

Pathetic excuse for a bullshit artist.
 
Anthony said:
Thanks to all for replying!
[...]
Thanks to all for clarifying the matter!

Don't worry about anything from Rood Speed.
 
I'm not; I smelled a troll from the start, so after a couple of his
abrasive comments I decided to check his message history. I wasn't
surprised to find that he's made quite a name for himself in the
community, with the majority of his posts containing the very same
expletives he used in mine. The antidote to people like him is to
ignore them; they just want attention, even if it's negative.

Cheers!

Anthony
 