As I said, you miss the point.
And gladly so, child.
I would consider myself seriously under the weather if I ever found myself
understanding your clueless rantings.
The offsets for defective input sectors will exist in the target file, but
no data will actually be written there.
On reads, that results in zeros.
Alternatively you can have dd_rescue write "hard" zeros in the first copy,
but that does nothing except allocate more disk space.
Obviously you have never heard that you can leave parts of a file unwritten.
Or you have no idea how data in a file is addressed.
Hint: Look up "sparse file" some time.
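For the record, a minimal Python sketch of what a sparse file is (the
scratch filename is made up for illustration): seeking past the end and
writing leaves an unallocated "hole" that reads back as zeros, even though
no disk space was spent on it.

```python
import os

path = "/tmp/sparse_demo.bin"  # hypothetical scratch file

with open(path, "wb") as f:
    f.seek(1024 * 1024)   # seek 1 MiB forward without writing anything
    f.write(b"tail")      # only this 4-byte write allocates real data

st = os.stat(path)
# The logical size counts the hole; the allocated blocks usually do not
# (allocation is file-system dependent, hence "usually").
print("logical size:", st.st_size)             # 1048580
print("allocated bytes:", st.st_blocks * 512)  # typically far smaller

with open(path, "rb") as f:
    head = f.read(16)
print(head == b"\0" * 16)  # the hole reads back as zeros -> True
```

Whether the hole actually stays unallocated depends on the file system;
the zero-on-read behavior is what every reader sees regardless.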
Then you should have mentioned that in your previous explanation, child.
That's not obviously clear to any non-unix zealot.
It also presupposes a system that supports that type of file.
I'll bet Zvi Netiv's utility (from the part that you cleverly snipped)
has never heard of it.
Right with the methodology that works with user intervention, child.
Where you replace bad parts in one file with good parts from another.
When bad data is not written to the target, you can simply copy the source
several times to the same target, retaining all the good data already written.
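That multi-pass idea can be sketched in Python (a toy model, not dd_rescue
itself; the flaky_read function simulates a source with a transient read
error): each pass writes only the blocks it could read, so later passes fill
in earlier gaps without clobbering good data.

```python
import io

BLOCK = 4  # toy block size

def copy_pass(read_block, target, nblocks):
    """One recovery pass: write only blocks that read successfully."""
    recovered = 0
    for i in range(nblocks):
        data = read_block(i)
        if data is None:          # read error: skip, leave target untouched
            continue
        target.seek(i * BLOCK)
        target.write(data)
        recovered += 1
    return recovered

# Simulated source: block 1 fails on the first attempt only.
source = [b"AAAA", b"BBBB", b"CCCC"]
fail_once = {1}

def flaky_read(i):
    if i in fail_once:
        fail_once.discard(i)  # the next attempt will succeed
        return None
    return source[i]

target = io.BytesIO(b"\0" * 12)
copy_pass(flaky_read, target, 3)   # pass 1: block 1 missing
copy_pass(flaky_read, target, 3)   # pass 2: fills the gap
print(target.getvalue())           # b'AAAABBBBCCCC'
```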
And still be left with holes that read as zeroes, with no way of knowing
whether they are 'holes' or actual runs of zero data, on a file system
(or application) that doesn't understand sparse files.
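On Linux the two cases can sometimes be told apart with SEEK_HOLE (exposed
in Python as os.SEEK_HOLE where the platform supports it). A sketch, with a
hypothetical scratch file: a file of real, on-disk zeros reads exactly like
a hole, but SEEK_HOLE reports it as data. File systems without hole
tracking report the whole file as data, so the answer degrades gracefully.

```python
import os

def first_hole(path):
    """Offset of the first hole in the file, or the file size if there
    is none before EOF (the end of file counts as a virtual hole)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_HOLE)
    finally:
        os.close(fd)

# Write real, on-disk zeros: they read the same as a hole would...
zpath = "/tmp/zeros_demo.bin"  # hypothetical scratch file
with open(zpath, "wb") as f:
    f.write(b"\0" * 8192)

# ...but SEEK_HOLE sees them as data, so the first hole is at EOF.
print(first_hole(zpath))  # 8192
```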
If something "obviously bad" is written, that may
erase good data written in previous copy passes.
Sure, in your methodology that you failed to explain, child. Not in his.
In that methodology the failing regions are easy to recognize, provided
the filler data inserted in their place was itself made easy to recognize.
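That replacement methodology can be sketched as: fill unreadable blocks with
a loud, unlikely marker instead of zeros, so a later pass (or the user) can
find exactly which regions still need repair. The marker bytes here are an
arbitrary choice for illustration.

```python
BLOCK = 8
MARKER = b"<BADBLK>"  # arbitrary sentinel, exactly one block long
assert len(MARKER) == BLOCK

def fill_bad(blocks):
    """Replace unreadable blocks (None) with the sentinel."""
    return [b if b is not None else MARKER for b in blocks]

def find_bad(data):
    """Byte offsets of blocks still holding the sentinel, i.e. still
    awaiting repair from a later copy pass."""
    return [i for i in range(0, len(data), BLOCK)
            if data[i:i + BLOCK] == MARKER]

image = b"".join(fill_bad([b"GOODDATA", None, b"MOREDATA"]))
print(find_bad(image))  # [8] -> the block at offset 8 is still bad
```

The trade-off versus a sparse target is that the marker blocks really do
occupy disk space, but they survive file systems and tools that know
nothing about holes.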
In your example the false data are zeroes (whether on-disk or not)
that may either represent a 'hole' or be (f)actual data of zeroes.
Did you ever hear of: so clever that it's stupid?