Rod Speed
Mr. Grinch said: All other performance optimizations aside (OS, application), the
biggest difference will be seen only with certain types of applications.
Something that does sequential reads/writes of a large file, for example,
will see no benefit. The file will never fit in the cache memory, and in a
sequential operation there is no reason to ever re-read or re-write the
same portion of the file; you just keep going to the end. This would be
like a backup, restore, or disk-image operation. The bottleneck is still
the physical drive platters, so a 2 MB vs 8 MB cache will show no difference.
On the other hand, if you have a situation where the same sectors
of the disk are being re-read and re-written constantly, the 8 MB
MIGHT be faster. I say might because if the data is less than 2 MB,
then both caches will perform the same. But if it's significantly
bigger, then you'll see the 8 MB version pull ahead. Examples that
would involve constant access to the same sectors would be a
file system that constantly accesses the FAT or equivalent,
That would normally be handled by OS-level caching.
for example, a database,
That level of caching would normally be done by the database system itself.
or a swap file / page file.
And it would be a ****ed OS that did the swap file/page file that way.
The thing to keep in mind is that we don't all run the same apps,
so some people will see a benefit and others won't.
Or it may well be that hardly any real-world work
will actually see any significant benefit at all.
Some people could see a benefit, but only if they've
configured the app to use that drive (i.e., swap file).
No modern OS is that poorly implemented.
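
As a rough illustration of the sequential-versus-repeated-access argument above,
here is a minimal sketch that feeds two synthetic access patterns through an LRU
cache sized at 2 MB and at 8 MB and compares hit rates. The 4 KiB block size, the
700 MB file, the ~5 MB "hot set", and the LRU policy are all assumptions chosen
for illustration; real drive firmware uses segmented caches, read-ahead, and
write buffering, so this is not a model of any actual drive.

    from collections import OrderedDict
    import random

    BLOCK = 4096  # assumed 4 KiB block size for the simulation

    def hit_rate(cache_bytes, accesses):
        # Simple LRU cache of block numbers; returns the fraction of
        # accesses that were already in the cache.
        capacity = cache_bytes // BLOCK
        cache = OrderedDict()
        hits = 0
        total = 0
        for blk in accesses:
            total += 1
            if blk in cache:
                hits += 1
                cache.move_to_end(blk)         # mark as most recently used
            else:
                cache[blk] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict least recently used
        return hits / total

    # Pattern 1: one sequential pass over a 700 MB file -- every block is
    # touched exactly once, so no read cache of any size can get a hit.
    sequential = range(700 * 1024 * 1024 // BLOCK)

    # Pattern 2: repeated random access to a ~5 MB "hot set" (think FAT-like
    # metadata) -- too big for a 2 MB cache, small enough for an 8 MB one.
    hot_blocks = 5 * 1024 * 1024 // BLOCK
    random.seed(0)
    repeated = [random.randrange(hot_blocks) for _ in range(200_000)]

    for size_mb in (2, 8):
        size = size_mb * 1024 * 1024
        print(f"{size_mb} MB cache: "
              f"sequential {hit_rate(size, sequential):.1%} hits, "
              f"hot set {hit_rate(size, repeated):.1%} hits")

Running it should show essentially 0% hits for the sequential scan with either
cache size, roughly 40% hits on the hot set with 2 MB, and close to 100% with
8 MB, which is the shape of the argument above.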