Norm said:
"David W. Hodgins" wrote
Yesterday, I was at the Sandisk site to download their SSD toolkit (TRIM).
On their website, they said the same thing as above. However, they do not
write the system software. I'll grant that defragging an SSD drive too often
is wasteful. However, benchmark tools tell a different story. On the
one hand, you have the theoretical claim that an SSD has no
"seek time" because it is non-mechanical. On the other hand, you have
benchmark tests that suggest (to the naive) that an SSD has a nonzero seek
time. The issue is that the OS software is written for a block-structured
device. Call it the seek time of the software. Even on an SSD, benchmark
tools say that the seek time of the software is reduced after a defrag.
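The "seek time of the software" can be probed directly. Here is a minimal sketch (the file name, block size, and read count are arbitrary choices of mine, not from any toolkit) that times sequential versus randomized 4 KiB reads on a scratch file; with the file sitting in the page cache, whatever per-read cost shows up is OS and filesystem overhead, not mechanics.

```python
# Sketch: per-read latency for sequential vs random 4 KiB reads.
# All names and sizes here are arbitrary illustration choices.
import os
import random
import tempfile
import time

BLOCK = 4096   # 4 KiB, a typical filesystem block size
COUNT = 2000   # ~8 MiB scratch file

def time_reads(path, offsets):
    """Return average seconds per 4 KiB read at the given offsets."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return (time.perf_counter() - start) / len(offsets)

path = os.path.join(tempfile.gettempdir(), "seek_probe.bin")
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK) * COUNT)

seq = [i * BLOCK for i in range(COUNT)]
rnd = seq[:]
random.shuffle(rnd)

t_seq = time_reads(path, seq)
t_rnd = time_reads(path, rnd)
print(f"sequential: {t_seq * 1e6:.1f} us/read")
print(f"random:     {t_rnd * 1e6:.1f} us/read")
os.remove(path)
```

Even on a RAM-backed cache, neither number is zero, which is the point: part of the "seek time" a benchmark reports lives in the software stack.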
I also have a Super Talent SSD whose manufacturer proclaims that TRIM
is handled automatically. I think that claim is oversell. However, in
Win7 it is possible to move the location of the pagefile and the ReadyBoost
cache. I move these files from time to time and it seems to
improve the perceived "lag time".
Here is another stupid fact. Google Android only incorporated support for
TRIM in Android 4.3, the next-to-last update. That means
there may be as many as a billion Android devices out there that do not
have TRIM support. So after a year of use, an Android device can become
frustratingly slow.
The problem of SSD lag time is also a worry in the iPad world. You need the
latest and greatest iOS and supporting hardware and a fat wallet.
I've noticed something similar when using a RAMDisk.
On the one hand, a software RAMDisk has a very high sustained
transfer rate. I can get a 4GB/sec bandwidth rating.
The fun begins when you deal with 60,000 small files and
attempt to do some things. The OS almost gives the impression
there is an IOP limit present for some reason. If I saw the
CPU being pegged, then I'd be satisfied the OS was doing
all that it could - it would be saturated. But instead,
I can see it do things at a certain speed with
CPU cycles left over, and the operations I'm doing don't
complete as fast as I would expect.
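That per-file overhead is easy to demonstrate. The sketch below (file count and sizes are arbitrary choices of mine) writes the same total number of bytes twice: once as one large file, once as thousands of small files. The gap between the two timings is the fixed cost the OS charges per open/write/close, i.e. per IOP, independent of the medium's bandwidth.

```python
# Sketch: one big write vs. many small files of the same total size.
# File count and sizes are arbitrary illustration choices.
import os
import shutil
import tempfile
import time

SMALL = 4096        # 4 KiB per small file
NFILES = 5000       # ~20 MiB total either way
payload = os.urandom(SMALL)
root = tempfile.mkdtemp(prefix="iop_probe_")

# One large file: a single open/close, many buffered writes.
start = time.perf_counter()
with open(os.path.join(root, "big.bin"), "wb") as f:
    for _ in range(NFILES):
        f.write(payload)
big_secs = time.perf_counter() - start

# Many small files: every one pays the fixed open/write/close cost.
start = time.perf_counter()
for i in range(NFILES):
    with open(os.path.join(root, f"s{i:05d}.bin"), "wb") as f:
        f.write(payload)
small_secs = time.perf_counter() - start

print(f"one big file:      {big_secs:.3f} s")
print(f"{NFILES} small files: {small_secs:.3f} s")
shutil.rmtree(root)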
So while the seek time of a SATA SSD might be 25 µs due to
Flash readout time, it's just possible the desktop OS
adds more time per IOP than we'd like. And a fragmented
file starts to cost us something.
*******
And the best way to defrag a device, when a decent amount
of fragmentation is present, is to copy the files off,
reformat (quick type), copy the files back, and do fixboot C:
or equivalent to fix up the partition boot code if present.
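The copy-off/copy-back idea can be sketched in miniature. The demo below uses throwaway temp directories to stand in for the volume and the spare drive (all paths and file names are placeholders I made up); on a real partition the "reformat" step would be an actual quick format, followed by fixboot C: or equivalent, none of which Python can do for you.

```python
# Sketch of copy-off / "reformat" / copy-back, on throwaway temp dirs.
# Directory and file names are placeholders, not a real procedure.
import os
import shutil
import tempfile

volume = tempfile.mkdtemp(prefix="volume_")    # stands in for C:
staging = tempfile.mkdtemp(prefix="staging_")  # stands in for a spare drive

# Populate the fake volume with a couple of files.
for name in ("a.txt", "b.txt"):
    with open(os.path.join(volume, name), "w") as f:
        f.write("contents of " + name)

# 1. Copy everything off to staging.
backup = os.path.join(staging, "backup")
shutil.copytree(volume, backup)

# 2. "Reformat": wipe the volume and recreate it empty.
shutil.rmtree(volume)
os.mkdir(volume)

# 3. Copy everything back; a fresh filesystem lays files out contiguously.
for name in os.listdir(backup):
    shutil.copy2(os.path.join(backup, name), os.path.join(volume, name))

restored = sorted(os.listdir(volume))
print("restored:", restored)
shutil.rmtree(staging)
shutil.rmtree(volume)
```

The reason this beats an in-place defragmenter on a badly fragmented volume is that every file is written exactly once, contiguously, instead of being shuffled block by block.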
That's what I do for my WinXP machine on occasion. The fact
that this takes less time than defragmentation suggests a lower
number of writes for the same amount of benefit. If there were only
a tiny amount of fragmentation on the partition, the
defragmenter might finish sooner. If the partition is a mess,
the copy method wins. My defragmentation attempts were
taking more than eight hours, while the copy method took about
half an hour to forty minutes.
I can do that stuff on my desktop, because I have more than
one OS, and more than one hard drive. So it's relatively
easy to pick tools and situations for the job of copying off C:.
I don't know how to do such a procedure safely for
any OS like Vista or later. I'd probably break something
on those. I don't know whether Robocopy would get everything.
Paul