SpinRite vs dd_rescue, dd_rhelp, or ddrescue

  • Thread starter: geletine
  • Start date

geletine

Has anybody used these? They are written in C as opposed to ASM, which
would probably make them portable. As far as I know, SpinRite is only
written for x86 chips.

thanks
 
Previously geletine said:
Has anybody used these? They are written in C as opposed to ASM, which
would probably make them portable. As far as I know, SpinRite is only
written for x86 chips.

I use dd_rescue for all kinds of stuff. I have used it
to recover data from media with defective sectors several times.
It works insofar as it does get all the data in files
or partitions that the drive is still willing to give.
It also has the nice option of not writing anything to the
target if the source sector was not read successfully. That
allows combining the results from several reads.

Pretty versatile and powerful, but you need to understand what you are
doing. A typical Unix utility in that regard.
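The merge-by-repeated-copies idea described above can be illustrated with plain dd. The file names are made up for the demo and 'src' stands in for the failing disk; with real media you would run dd_rescue against the device instead (check `dd_rescue -h` for the exact flags of your version):

```shell
# Plain-dd sketch of merging several partial read passes into one image.
set -e
printf 'AAAABBBBCCCC' > /tmp/demo_src      # 12 bytes of "disk" content

# Pass 1: pretend the middle 4-byte block was unreadable, so only the
# good blocks get written; conv=notrunc keeps earlier data intact.
dd if=/tmp/demo_src of=/tmp/demo_img bs=4 count=1 status=none
dd if=/tmp/demo_src of=/tmp/demo_img bs=4 skip=2 seek=2 conv=notrunc status=none

# Pass 2: this time the middle block reads fine; writing it in place
# completes the image without touching the data already recovered.
dd if=/tmp/demo_src of=/tmp/demo_img bs=4 skip=1 seek=1 count=1 conv=notrunc status=none

cmp -s /tmp/demo_src /tmp/demo_img && echo "image complete"
```

The key point is pass 1 never writes anything at the bad offset, so pass 2 can fill it in without risking the data already saved.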

AFAIK dd_rescue needs glibc. With that it is already pretty portable
to many different platforms. Still, it should be easy to port to any
other Unix-like OS and may be portable to the giant island of
incompatibility (Windows) too.

Arno
 
Arno said:
Pretty versatile and powerful, but you need to understand what you are
doing. A typical Unix utility in that regard.

I am a Unix user myself, so it should not be a problem.
AFAIK dd_rescue needs glibc. With that it is already pretty portable
to many different platforms. Still, it should be easy to port to any
other Unix-like OS and may be portable to the giant island of
incompatibility (Windows) too.

It should compile under MinGW or Cygwin in that case.
 
Arno Wagner said:
I use dd_rescue for all kinds of stuff. I have used it
to recover data from media with defective sectors several times.
It works insofar as it does get all the data in files
or partitions that the drive is still willing to give.
It also has the nice option of not writing anything to the
target if the source sector was not read successfully.
That allows combining the results from several reads.

Actually, that makes it rather more difficult.
 
'F'Nut said:
Actually, that makes it rather more difficult.

Another redundant and irrelevant comment, adding nothing to the thread and
helping absolutely no one, by F'Nut. Thank you F'Nut.
 
Previously Joep said:
Another redundant and irrelevant comment, adding nothing to the
thread and helping absolutely no one, by F'Nut. Thank you F'Nut.

Actually it is not redundant, since it is completely wrong and
misses the point.

Arno
 
Actually it is not redundant,

At least you got that right.
since it is completely wrong

Nope, it's rather obvious.

You can only replace wrong parts with good parts if you know that the
individual parts are in the right place (in the right position) *in the file*
AND which one of the parts is obviously wrong, which obviously you won't
know if you leave out 'bad' parts. And obviously it's completely silly to
COMPARE different files when you already know that all of them contain
good data but are of different content and length. You may be able
to decide which parts are more complete than other parts in the separate
files and cross-update them accordingly but you will never know which
one is complete because you won't know what's missing.

With fake but easily recognizable 'bad' data that will not be a problem
and makes cross-updating the individual files a breeze.
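The marker-based scheme described above can be sketched in a few lines of shell. The marker string, block size, and file names are all invented for the demo: unreadable blocks were filled with a recognizable pattern, and the merge prefers whichever copy is not marked.

```shell
# Two partial images of the same 12-byte source; unreadable blocks were
# filled with the recognizable marker 'BAD!' instead of being skipped.
set -e
bs=4
printf 'AAAABAD!CCCC' > /tmp/copy1    # block 1 failed in this pass
printf 'BAD!BBBBCCCC' > /tmp/copy2    # block 0 failed in this pass

# Merge block by block: take copy1's block unless it carries the marker,
# in which case fall back to copy2's block.
blocks=$(( $(stat -c %s /tmp/copy1) / bs ))
: > /tmp/merged
i=0
while [ "$i" -lt "$blocks" ]; do
    b=$(dd if=/tmp/copy1 bs="$bs" skip="$i" count=1 status=none)
    [ "$b" = 'BAD!' ] && b=$(dd if=/tmp/copy2 bs="$bs" skip="$i" count=1 status=none)
    printf '%s' "$b" >> /tmp/merged
    i=$((i + 1))
done
cat /tmp/merged    # AAAABBBBCCCC
```

The trade-off both sides are arguing about: the marker makes bad blocks visible to any tool, but a marker that happens to occur in the real data would be mistaken for a bad block.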

Zvi Netiv has a tool to cross-replace sectors between files but it is
completely useless with files that have missing parts (actual or fake).
and misses the point.

Indeed you did, 'as always'.
 
At least you got that right.
Nope, it's rather obvious.
You can only replace wrong parts with good parts if you know that
the individual parts are in the right place (in the right position)
*in the file* AND which one of the parts is obviously wrong, which
obviously you won't know if you leave out 'bad' parts. And obviously
it's completely silly to COMPARE different files when you already
know that all of them contain good data but are of different
content and length. You may be able to decide which parts are more
complete than other parts in the separate files and cross-update
them accordingly but you will never know which one is complete
because you won't know what's missing.

As I said, you miss the point. The offsets for defective input sectors
will be there in the target file, but no data will be in there. On
reads, that results in zeros. Alternatively you can have dd_rescue
write "hard" zeros in the first copy, but that does nothing except
allocate more disk space. Obviously you have never heard that you can
leave parts of a file unwritten. Or you have no idea how data in
a file is addressed. Hint: Look up "sparse file" some time.
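For readers who have not met sparse files, the "hole" behaviour described above can be seen with standard tools. The file name is made up for the demo, and the allocation figure assumes a filesystem with sparse-file support:

```shell
# Create a file whose middle is never written: seek far past the end
# and write a little data there. Reading the gap returns zeros, but
# no disk blocks are allocated for it on sparse-capable filesystems.
set -e
printf 'good' > /tmp/sparse_demo                     # bytes 0-3 written
printf 'tail' | dd of=/tmp/sparse_demo bs=1 seek=1048576 conv=notrunc status=none

ls -l /tmp/sparse_demo   # apparent size: 1048580 bytes
du -k /tmp/sparse_demo   # allocated size: a few KB at most, not a megabyte
```

This is exactly what a recovery target with skipped bad sectors looks like: the offsets exist, reads of the gap return zeros, but nothing was ever written there.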
With fake but easily recognizable 'bad' data that will not be a problem
and makes cross-updating the individual files a breeze.

Wrong. When bad data is not written to the target, you can just copy
the source several times to the same target, retaining all the good data
already written. If something "obviously bad" is written, it may
erase good data written in previous copy passes.

Arno
 
As I said, you miss the point.

And gladly so, child.
I would consider myself seriously under the weather if I ever found myself
understanding your clueless rantings.
The offsets for defective input sectors will be there in the target file, but
no data will be in there.
On reads, that results in zeros.
Alternatively you can have dd_rescue write "hard" zeros in the first copy,
but that does nothing except allocate more disk space.
Obviously you have never heard that you can leave parts of a file unwritten.
Or you have no idea how data in a file is addressed.
Hint: Look up "sparse file" some time.

Then you should have mentioned that in your previous explanation, child.
That's not obvious to any non-Unix zealot.
It also presupposes a system that supports that type of file.
I'll bet Zvi Netiv's utility (from the part that you cleverly snipped)
has never heard of it.

Right with the methodology that works with user intervention, child.
Where you replace bad parts in one file with good parts from another.
When bad data is not written to the target, you can just copy the source
several times to the same target, retaining all the good data already written.

And still have holes in them that read as zeroes without knowing whether
they are 'holes' or actual data of zeroes on a file system (or application)
that doesn't understand sparse files.
If something "obviously bad" is written, that may
erase good data written in previous copy passes.

Sure, in your methodology that you failed to explain, child. Not in his.

In that methodology the failing data is easily recognizable provided
the false data inserted was made easily recognizable.

In your example the false data are zeroes (whether on-disk or not)
that may either represent a 'hole' or be (f)actual data of zeroes.

Did you ever hear of: so clever that it's stupid?
 
Oscar said:
And gladly so, child.
I would consider myself seriously under the weather if I ever found myself
understanding your clueless rantings.

Then you should have mentioned that in your previous explanation, child.
That's not obvious to any non-Unix zealot.
It also presupposes a system that supports that type of file.
I'll bet Zvi Netiv's utility (from the part that you cleverly snipped)
has never heard of it.

Right with the methodology that works with user intervention, child.
Where you replace bad parts in one file with good parts from another.

And still have holes in them that read as zeroes without knowing whether
they are 'holes' or actual data of zeroes on a file system (or application)
that doesn't understand sparse files.

Sure, in your methodology that you failed to explain, child. Not in his.

In that methodology the failing data is easily recognizable provided
the false data inserted was made easily recognizable.

In your example the false data are zeroes (whether on-disk or not)
that may either represent a 'hole' or be (f)actual data of zeroes.

Did you ever hear of: so clever that it's stupid?

Hello, Oscar/Folkert:

So, now, you're impersonating one of Rod Speed's sock puppets (and
mimicking his writing style), in a sadistic attempt to bedevil Arno
Wagner? Plus, you virtually announce your intentions, by scribbling
"(e-mail address removed)" as a phoney return address?

I'm starting to suspect that you hold Arno's intelligence in sheer
contempt. <G>


Cordially,
John Turco <[email protected]>
 
John Turco said:
Hello, Oscar/Folkert:

So, now, you're impersonating one of Rod Speed's sock puppets (and
mimicking his writing style), in a sadistic attempt to bedevil Arno
Wagner? Plus, you virtually announce your intentions, by scribbling
"(e-mail address removed)" as a phoney return address?

I'm starting to suspect that you hold Arno's intelligence in sheer
contempt. <G>

Just what you get when he escapes from his jailers for a while.
 
John Turco said:
Hello, Oscar/Folkert:
So, now, you're impersonating one of Rod Speed's sock puppets

Me? My my John, what do you take me for, I would never do that.
What a preposterous thought.
(and mimicking his writing style), in a sadistic attempt to bedevil
Arno Wagner?

Huh, who's he? Sum rocket scientist or sumfin?
Plus, you virtually announce your intentions, by scribbling
"(e-mail address removed)" as a phoney return address?
I'm starting to suspect that you hold Arno's intelligence in sheer contempt. <G>

Wot intelligence.
 
Run out of arguments?

Nope, as the part that you conveniently snipped clearly suggests otherwise.
Seems to be the case.

Liar bullshitter. You better have your 'seems' machinery looked at, child.
Well, talking down to me

Which of course you obviously did not start with in the part that you
carefully snipped:

" Obviously you have never heard that you can leave parts of a file unwritten. "
" Or you have no idea how data in a file is addressed. "
is a poor and rather transparent substitute.

Which you are ever so happy to use yourself as a convenient excuse to not have to
bullshit yourself out of your predicament, child.
 