Skybuck Flying
Hello,
Here is an idea which might have some value for some people:
The file system of, for example, Microsoft Windows could use free space to store redundant copies of data.
That space is not being used for anything else anyway, so it might as well help with recovering from bad sectors.
The space would still remain available as free space, but behind the scenes it would also hold redundant data.
In the event of a bad sector, the file system could then recover from it more easily.
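To make the idea a bit more concrete, here is a minimal Python sketch of how such a "free space mirror" might work at the block level. Everything in it (the class name, the in-memory block map, the simulated bad-sector flag) is invented purely for illustration; as far as I know, no real file system, NTFS included, actually works like this.

class FreeSpaceMirrorStore:
    """Toy in-memory block store that hides duplicates of written blocks
    in blocks that are still reported as free."""

    def __init__(self, total_blocks):
        self.blocks = {}                      # block number -> data
        self.free = set(range(total_blocks))  # blocks not holding real data
        self.mirror_of = {}                   # real block -> hidden mirror block

    def write(self, block_no, data):
        self.free.discard(block_no)           # the real block is now in use
        self.blocks[block_no] = data
        self.mirror_of.pop(block_no, None)    # drop any stale mirror
        # Secretly duplicate the data into a block that is still free
        # (and not already serving as someone else's mirror).
        candidates = self.free - set(self.mirror_of.values())
        spare = next(iter(candidates), None)
        if spare is not None:
            self.blocks[spare] = data
            self.mirror_of[block_no] = spare  # spare stays in the free set

    def allocate(self):
        # Free space remains genuinely available: when a real allocation
        # claims a block, any hidden mirror living there is silently given up.
        block_no = self.free.pop()
        for real, mirror in list(self.mirror_of.items()):
            if mirror == block_no:
                del self.mirror_of[real]
        return block_no

    def read(self, block_no, sector_is_bad=False):
        # sector_is_bad simulates a failed read of the real block.
        if not sector_is_bad:
            return self.blocks[block_no]
        mirror = self.mirror_of.get(block_no)
        if mirror is not None:
            return self.blocks[mirror]        # recovered from the hidden copy
        raise IOError("block %d is unreadable and has no mirror" % block_no)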
Perhaps this idea is not needed for hard disks and the current file system, since that combination already seems pretty stable; perhaps it is even done already?
But I don't think so.
However, for newer technologies like SSDs, which might be more error prone, it could be an interesting idea to design a new file system, or to modify the existing one, so that it takes more advantage of free space for extra redundancy...
I have read about SSDs spreading data across their chips to prevent the same sections from wearing out quickly, but do they also use free space for extra redundancy? (I don't think so, but I could be wrong.)
So, in case this is a new idea that could have some value, I thought I'd mention it.
Of course, users can also do this manually by making multiple copies of important folders and files, for example with a small script like the one below.
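As a made-up example (the path and the number of copies are just placeholders), a tiny Python script could keep a few extra copies of a folder:

import shutil

SOURCE = r"C:\ImportantData"   # example folder, not a real recommendation
EXTRA_COPIES = 3               # how many duplicates the user wants

for i in range(1, EXTRA_COPIES + 1):
    # Creates or refreshes one extra copy per pass (needs Python 3.8+ for dirs_exist_ok).
    shutil.copytree(SOURCE, f"{SOURCE}_copy{i}", dirs_exist_ok=True)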
Perhaps the file system could also be extended with an "importance tag".
Users could then "tag" certain folders as "highly important".
The more important the folder is, the more redundancy it would get.
The system itself could have an importance of 2, so that it can survive a single bad sector.
Small folders with "super high importance" could even receive a redundancy of 4, 10, or maybe even 100.
(Each level of redundancy means one copy, so a redundancy of 100 would mean 100 copies in total: 1 real copy and 99 copies in free space.)
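As an illustration of the bookkeeping only, here is a small Python sketch of how an importance tag could translate into a number of copies; the folder paths, names and numbers are made-up examples.

DEFAULT_REDUNDANCY = 1   # ordinary data: only the real copy
SYSTEM_REDUNDANCY = 2    # enough to survive a single bad sector

# Hypothetical per-folder importance tags.
importance_tags = {
    r"C:\Windows": SYSTEM_REDUNDANCY,
    r"C:\Users\Skybuck\Thesis": 10,   # "super high importance"
}

def copies_for(path):
    # Total copies to keep: 1 real copy plus hidden copies in free space.
    level = max((lvl for folder, lvl in importance_tags.items()
                 if path.startswith(folder)),
                default=DEFAULT_REDUNDANCY)
    return {"real_copies": 1, "free_space_copies": level - 1}

print(copies_for(r"C:\Users\Skybuck\Thesis\chapter1.doc"))
# prints {'real_copies': 1, 'free_space_copies': 9}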
Bye,
Skybuck.