I think if they get a problem that would take that level of effort,
they just give up instead.
10'000 EUR/USD as an upper limit is certainly realistic. We recently
asked for a quote for a disk with a specific problem, and it was
something like 2000 EUR/USD.
Regardless of this, they have to read the data from the drive using a
disk head at enormous speed. Even if the disk comes right up to the
Shannon limit when getting data out of the read signal in normal operation,
if they switch over to some much slower recovery process involving
microprobes reading a few bits per minute instead of disk heads
reading megabits per second, they can possibly get a much better S/N
ratio and therefore get more data out.
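Just to put a rough number on that intuition (my own back-of-the-envelope
figures, nothing from a data sheet): thermal noise power scales with the
measurement bandwidth, so going from tens of megabits per second down to
a few bits per minute narrows the bandwidth by a factor of roughly 10^8
and could, at least in principle, buy a corresponding S/N gain:

    import math

    # Assumed, illustrative rates: normal read channel vs. microprobe.
    normal_rate = 50e6       # bits/s, order of magnitude for a read channel
    probe_rate  = 5 / 60.0   # bits/s, "a few bits per minute"

    # Noise power is proportional to bandwidth (N = N0 * B), so narrowing
    # the bandwidth by a factor k improves S/N by about 10*log10(k) dB.
    k = normal_rate / probe_rate
    print("bandwidth reduction: %.1e x" % k)
    print("approx. S/N gain:    %.0f dB" % (10 * math.log10(k)))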
That is not the way I read the (admittedly limited) information about
the current surface materials. The impression I got is that if you
make the bits smaller, then neighbouring ones start to cancel each
other out in a short time, i.e. spontaneous bit-flips become
likely. The perpendicular recording stuff is all about making the bits
larger (in 3D), while making their 2D surface footprint smaller. Your
argument is, however, certainly valid for tape and older HDD
technologies.
There is another possibility too, which is you write data to some
sector, it reads back with low-level errors that are corrected by the
drive ECC, and the drive notices the errors and decides to mark that
physical sector as bad and relocate the data to a spare sector. The
user application never gets wrong data or notices that anything
unusual happened. Thereafter, no erasure or overwriting in normal
operation ever touches the original sector, and the data is always
recoverable from that sector.
Agreed. That is a possibility and if your data is sensitive enough
that this is a problem, then you need physical destruction. As I
was just arguing that one overwrite is likely enough for a modern
disk, this does not contradict what I said, since reallocated
sectors will not be overwritten any better with multiple overwrites.
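As a side note, whether a particular drive has already remapped sectors
can be read from SMART attribute 5 (Reallocated_Sector_Ct), e.g. with
smartctl from smartmontools. A minimal sketch (the device path and the
simple column parsing are my assumptions; usually needs root):

    import subprocess

    def reallocated_sector_count(device="/dev/sda"):
        # 'smartctl -A' prints the SMART attribute table; ID 5 is
        # Reallocated_Sector_Ct and its raw value (column 10) is the count.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 10 and fields[0] == "5":
                return int(fields[9])
        return None   # attribute not reported (e.g. many USB bridges)

    print(reallocated_sector_count())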
In order to quantify the risk, you have to ask yourself
the following questions:
- Do I have data that fits into one sector that an attacker
would still find valuable and could identify?
- How large is the probability the data is in a reallocated sector?
For the second, an estimation like the following could be used:
A disk has (e.g.) 80GB. Assume sectors are reallocated at random and
assume no more than 1000 (e.g.) are reallocated in a disk's lifetime.
Now for each "single-sector secret" you get a probability of
roughly 1/150'000 that it is in a defective sector. Multiply by
the value of the secret and how often a secret gets rewritten
in a way that changes its place on disk (which usually means being
written to a new file, but could be worse with a filesystem that
has data-journalling). Add this up for all secrets on the disk.
If the resulting number exceeds the disk value, do physical
destruction.
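The same estimate, written out as a small calculation (the disk and
reallocation numbers are the example figures from above; the secret
values, rewrite counts and disk value are placeholders you would have
to fill in yourself):

    # Example figures: 80GB disk, 512-byte sectors, at most 1000
    # reallocated sectors over the disk's lifetime.
    sectors = 80e9 / 512          # ~1.5e8 sectors
    p_bad   = 1000 / sectors      # ~1/150'000 per sector-sized secret

    # Placeholder secrets: (value to an attacker, number of times the
    # secret was rewritten to a new on-disk location).
    secrets = [(10000.0, 3), (500.0, 20)]

    expected_loss = sum(value * rewrites * p_bad
                        for value, rewrites in secrets)
    disk_value = 50.0             # what the disk itself is worth

    print("p(secret in a reallocated sector): %.2e" % p_bad)
    print("expected loss: %.2f" % expected_loss)
    if expected_loss > disk_value:
        print("-> physical destruction")
    else:
        print("-> one overwrite is probably enough")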
Yeah, it is speculative but not groundless. I think that
commercial recovery services don't even try stuff at that level.
There is evidence that they did try, but failed. Some company has a
4-year-old whitepaper about their upcoming universal disk surface
reader. (Sorry, I forgot the reference.)
Some very well funded government agencies certainly will have tried
and continue to try with each new recording technology. Whether they
actually succeed is a different question. What is sure is that it
will be expensive and not available to most companies, and usually
not to law enforcement, without the fact becoming public knowledge
relatively fast. So protecting against this is relevant for
state secrets (or the like), but not for almost all private or
commercial data. And if recovery is slow and expensive, it will
stay that way.
Seagate announced some drives a while back with built-in encryption.
I don't know if they're actually selling them now though.
Don't know. But I think I recently read in c't magazine about such a
technology already being broken. It might have been something
different, though.
Arno