J. P. Gilliver (John) said:
> That is even more irritating than people who just say "google it".
Sometimes, USB flash drives give non-reproducible errors.
So if you told me you had integrity-checked a USB key,
I'd have to discount the effectiveness of such a test: it
doesn't guarantee that the next time you use the key, it
won't have a problem.
And if the integrity checker does a lot of writes, all it's
doing is wearing out the flash blocks. Modern MLC flash is
rated for as few as 3K write cycles, and that limits the
cost-effectiveness of a write/read-verify style of test. If
you left one running overnight, it might wear out the flash.
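(To put rough numbers on it, with figures that are only
illustrative: writing an 8GB stick end-to-end at 10MB/sec takes
about 13 minutes, so an eight-hour run is roughly 36 full passes.
That's only around 1% of a 3K-cycle budget if the wear is spread
evenly, but a tester that hammers a small region, or a cheap
controller with poor wear leveling, concentrates those writes
on far fewer blocks.)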
It's easy to cook up your own test case. First, acquire a checksum
tool, such as md5sum, sha1sum, or fciv. Checksum a large file
on your hard drive. Copy the file to the flash. Run the checksum
tool again, this time reading from the flash. The file you choose
should be so large that it can't fit in system memory, in the
system file cache. So if I had a computer with 4GB of RAM, 3.1GB
of it free, I'd use a file larger than 3.1GB for the test. If the
USB key has an LED, verify the LED is active during the read-verify
step. (If the file were read from the file cache in system memory,
you wouldn't see much, if any, access to the USB key.)
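As a sketch, with example names only (assume md5sum is installed,
the big file is C:\bigfile.bin, and the key mounts as J:):

md5sum C:\bigfile.bin        <-- note the sum
copy C:\bigfile.bin J:\
md5sum J:\bigfile.bin        <-- should match the first sum

If the two sums differ, the key mangled the data on the write
or on the read.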
If you want large, random files, you can use dd. You would first
write the file to your hard drive and checksum it, before copying
the file to the USB key.
http://www.chrysocome.net/dd
dd if=/dev/random of=J:\testfile.bin bs=65536 count=65536
That would create a 4GB file as J:\testfile.bin, and the data
content would be (pseudo-)random. If J: happened to be FAT32,
you'd change the parameters to bs=65536 count=65535 to stay
under the 4GB file-size limit.
When someone writes a dedicated test program, they likely
have the option of turning off the system file cache, rather
than defeating it in the crude way I'm suggesting.
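(For what it's worth, GNU dd on Linux can ask for that itself.
Assuming the test file were mounted at /mnt/usb/testfile.bin,
something like

dd if=/mnt/usb/testfile.bin iflag=direct bs=65536 | md5sum

would open the file with O_DIRECT, bypassing the page cache, so
the read-verify really comes from the flash. The path is just an
example, and the Windows port of dd may not support iflag.)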
The Microsoft "fsutil" utility can also create large files.
If the target file system is FAT32, in fact fsutil is fine
for that purpose. But if you use "fsutil" to do a createnew on
an NTFS partition, the file is "sparse" and very little writing
is done to the file system. And then you're stuck with the
conundrum of whether making copies of the newly created file
preserves the sparseness or not. So while "fsutil" is a valid
option in some cases, I just don't bother with it any more.
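(For reference, the syntax takes a size in bytes; 4294901760
here is 4GB minus 64KB, small enough to fit on FAT32:

fsutil file createnew J:\testfile.bin 4294901760

On FAT32 all that space actually has to be written out, since
FAT32 can't do sparse files; on NTFS, see the caveat above.)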
I would have preferred that Microsoft weren't nearly so clever
as to do it that way. (On a Sun system, the mkfile utility
creates fully written files by default, and it was my old
favorite on SunOS/Solaris systems. The "dd" utility is what
I use now.)
http://en.wikipedia.org/wiki/Sparse_file
Paul