Robin said:
I've got a 2T USB3 external HDD (a LaCie, which are supposed to be
excellent quality) that I want to turn into two 1T volumes.
I used PerfectDisk to move all of the files to the front of the volume
(I see now that I needn't have done that) and it's running. And running. And running.
I've never shrunk a volume before. Can I expect it to run more or less
as long as a defrag on that size of volume?
For a Windows 7 shrink, the maximum shrink is 50% (as long as no
other tool has been fooling with the NTFS metadata). There is some
metadata the OS doesn't know how to move, which PerfectDisk does know
how to move. So PerfectDisk is the "intervener" that enables
the Windows 7 built-in shrink to get below the 50% mark. Note
that PerfectDisk doesn't do such a thorough job that you reach the
desired size in one shot. The metadata is moved, but it isn't packed
snugly against the start of the volume.
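If you're curious where the OS itself thinks the limit is before
committing to anything, diskpart can report the maximum amount it is
willing to reclaim. A quick check (this assumes the 2TB partition
shows up as volume 3 in "list volume" - substitute the right number):

    diskpart
    list volume
    select volume 3
    shrink querymax

The figure it reports depends on where the unmovable metadata
currently sits on the volume.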
You don't need PerfectDisk to go from a 2TB partition to a 1TB partition.
The OS capability should be able to do that by itself.
If you didn't defrag first, then even the OS tool will need
to move the data (using the internal defrag API to safely
move data around). The OS can move data clusters around,
but certain metadata cannot be moved by it.
In terms of moving the data clusters around, you pay the
same time price whether the OS does it or PerfectDisk does it.
The difference is an extra few seconds while PerfectDisk moves
the metadata the OS doesn't know how to move.
So if data was physically spread all over the 2TB partition,
*something* has to move the clusters below the 1TB mark.
You can do it in advance with PerfectDisk, in which case
the Windows 7 shrink should take no time at all. If you don't
use PerfectDisk and just use Windows 7, it'll need the same
amount of time to move the data clusters out of the way.
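For reference, the built-in shrink can be driven from Disk Management
(right-click the volume, "Shrink Volume...") or from the command line
with diskpart. A rough sketch of the diskpart route - again assuming
the 2TB partition is volume 3, and remembering that sizes are in MB:

    diskpart
    select volume 3
    shrink desired=1000000

Once the shrink finishes, the freed space shows up as unallocated,
and you can create the second ~1TB volume in it from Disk Management.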
Depending on the kind of content, the fastest way to
do this kind of operation is to copy the files off the 2TB drive,
delete the emptied partition, make the new partitions, and copy
the files back. (I use Robocopy for this.) Robocopy is
included on the newer OSes, while on older OSes you
download it. Note - the /mir (mirror) switch is dangerous, because
it deletes anything on the destination that isn't on the source.
Triple-check your command syntax! You've been warned.
robocopy Y:\ F:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_to_f.log
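If you do go the copy-off-and-repartition route, then once the copy
off the LaCie has been verified, laying down the two 1TB partitions
only takes a moment. Roughly, in diskpart - the disk number and drive
letters here are made up, so triple-check against "list disk",
because "clean" erases the partition table of whatever disk is selected:

    diskpart
    list disk
    select disk 2
    clean
    create partition primary size=1000000
    format fs=ntfs quick label=LaCie1
    assign letter=Y
    create partition primary
    format fs=ntfs quick label=LaCie2
    assign letter=Z

Then a second robocopy run, with source and destination swapped,
puts the files back.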
If it is actually the OS C: partition,
then you'll need a second OS to do it. For example,
until mid-January you could use the Win8 Preview as your
second OS and do the necessary maintenance. (On a desktop,
you put Windows 8 on a separate disk, and unplug the Windows 7
disk until it's done. On a laptop, offline data movement with a
second OS is much harder to achieve. In that case, you
unscrew the hard drive from the laptop, install it in your
desktop PC, and do your operations there.)
So in answer to your question, yes, defragmenting a
2TB volume is going to take a while. If it were just a
defrag and not a shrink, you could stop the defragmenter at
any time; there should be a Stop option. On the shrink,
I don't know how you safely stop that. It may be
inherently safe (due to using the defragmenter APIs designed
into the OS), but I'm not turning off the PC in
the middle of one of those to find out.
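If you want a rough idea of how much work is ahead before committing
hours to it, the command-line defragmenter can do an analysis-only
pass (Y: is just a stand-in for whatever letter the LaCie has):

    defrag Y: /A /U /V

/A analyzes without moving anything, /U prints progress, and /V gives
a verbose report.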
The reason the performance in megabytes/sec is so low on
the defragmenter API is that the transfers are supposed to be
done in a "power safe" way. If the defragmenter API is
being used, the setup is supposed to survive a power
outage. The transfer size is relatively tiny, and the disk
heads fly around a lot, dropping performance to the 1MB/sec
to 3MB/sec range. Even if the drive has a disk cache, the
commands issued may insist the data not be cached. So for the
most part, the design is brain dead and about as slow as
it could possibly be.
One of the older OSes had a registry option where you could
claim your storage device was "power safe". Say, for example,
you were on a super-UPS where the disk would have power
for an entire day. You could set the registry entry and
tell the OS that the power cannot possibly fail. The idea
was that a "more generous" transfer size could be
used, and the API would run faster. The info for doing this
is hard to find on purpose, and if you look for it, I doubt
you'll find the bits and pieces necessary. I don't really think
they want people using that (because guys like me would
flip that bit on a non-protected PC and just gamble on it).
So this idea is purely of historical interest.
Paul