Stacey said:
Here's what may have been going on:
"The use of cache improves performance of any hard disk, by reducing the
number of physical accesses to the disk on repeated reads and allowing data
to stream from the disk uninterrupted when the bus is busy."
So if the bus is busy (capture or FireWire card) the cache can keep feeding
info to the drive. I do know that this "bus busy" issue is why VIA chipset
boards have problems with video capture.
The explanation I gave was referring to write-caching. The above description
is for read-caching (where the hard drive says, "ah, he's just read this
sector, there's a fair chance he'll want to read the next sector, so I'll
copy that into the buffer while I'm here"). I suppose there could be
circumstances where this may help with capturing, but I'm having a hard time
thinking up any. In any case, you would be much better off making sure no
process is going to interrupt the capturing process (especially if that
process wants to use the hard disk).
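To make the read-ahead idea above concrete, here is a toy sketch of it in Python. Everything here (the class, the buffer size, the sector "data") is invented for illustration; real drive firmware is far more sophisticated.

```python
# Toy model of drive read-ahead caching: on a cache miss, speculatively
# copy the *next* sector into the buffer while the head is already there.

class ReadAheadCache:
    def __init__(self, capacity=4):
        self.capacity = capacity        # sectors the buffer can hold
        self.buffer = {}                # sector number -> cached data
        self.hits = 0                   # reads served without a physical access

    def _platter_read(self, sector):
        return f"data-{sector}"         # stand-in for touching the platters

    def read(self, sector):
        if sector in self.buffer:
            self.hits += 1              # the earlier read-ahead paid off
            return self.buffer[sector]
        data = self._platter_read(sector)
        # "There's a fair chance he'll want the next sector" -- fetch it
        # into the buffer while we're here.
        if len(self.buffer) < self.capacity:
            self.buffer[sector + 1] = self._platter_read(sector + 1)
        return data

drive = ReadAheadCache()
for s in range(4):                      # a sequential read, as when streaming
    drive.read(s)
# sectors 1 and 3 are served straight from the buffer
```

On a purely sequential read like this, every other request is a buffer hit; in a real drive the speculative fetch is nearly free because the head is passing over the next sector anyway.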
As far as turning off everything, there are some things you can't turn off
that may try to access the drive (even just reading from it) without you
knowing it. Having a larger cache helps keep systems that aren't configured
perfectly from having problems.
I certainly hope not. Sure, there are lots of Windows processes that can't
be terminated. Most of these run in the background, and either a) only do
work when the CPU is idling or b) do work when you actually request it.
Also (I'm trying to get this straight) aren't cached writes "fifo" (first in
first out)
Most explanations of this issue tend to talk about drives using the FIFO
strategy, because it is the easiest to explain/understand. Early drives used
FIFO, but I don't believe any manufacturer does any more. Much more advanced
algorithms have since been developed, which seem to offer a better hit ratio.
Three more caching strategies are listed here:
http://www.pcguide.com/ref/hdd/op/cacheCircuitry-c.html. That site is old
now (by at least a few years), so I wouldn't be surprised if there are new
strategies, or at least more revised versions of these.
It's an interesting point though. Lots of people look at 8-meg drive caches
and think the drive must have good performance, but a drive using a dumb
caching algorithm won't perform as well as a drive using a smarter one.
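As a toy illustration of why the replacement strategy matters, here is a sketch comparing FIFO with an LRU-style (least-recently-used) policy, one of the smarter strategies that site describes. The access pattern and cache size are invented; the point is only that the same buffer gets a different hit ratio depending on the algorithm.

```python
# FIFO evicts the oldest *arrival*; LRU evicts the least recently *used*.
# With a "hot" block that keeps being re-read, LRU keeps it, FIFO throws
# it out.
from collections import OrderedDict, deque

def fifo_hits(accesses, capacity):
    cache, order, hits = set(), deque(), 0
    for x in accesses:
        if x in cache:
            hits += 1                         # FIFO ignores recency on a hit
        else:
            if len(cache) == capacity:
                cache.discard(order.popleft())  # evict oldest arrival
            cache.add(x)
            order.append(x)
    return hits

def lru_hits(accesses, capacity):
    cache, hits = OrderedDict(), 0
    for x in accesses:
        if x in cache:
            hits += 1
            cache.move_to_end(x)              # refresh recency
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)     # evict least recently used
            cache[x] = True
    return hits

# One frequently re-read block ("A") mixed with one-off reads, cache of 3.
pattern = ["A", "B", "C", "A", "D", "A", "E", "A"]
```

Running both on that pattern, FIFO scores 2 hits while LRU scores 3: same cache size, different hit ratio, which is exactly why a big cache with a dumb algorithm can lose to a smaller, smarter one.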
even when streaming to the drive and not just loaded when first
writing, and once they fill up just stop doing anything? I wonder if the
logic on some drives does act that way while others stream the data through
the cache when writing? I can see why they wouldn't bother on reads, and
maybe I'm confusing what was going on with that system that was fixed by
going to a larger-cache drive.
You lost me there! If you are suggesting that the cache is used for both
reads and writes, then you are correct. In fact, they actually leave writes
cached, in case they are needed again soon.
I think you are also asking about what drives do when they fill up. The way
I talked (wrote) about this was probably misleading. As far as I can gather,
the drive doesn't stop using the buffer when it fills up. It will still
place new writes in the buffer, but before it can do that there has to be
room in it. If the buffer is full of data that is waiting to be written to
the hard drive, then nothing in it can be discarded. The new write has to
wait until some of the buffered data has been written to the platters,
freeing up room. If you followed that, you will see that the buffer is still
used, but in order to use it we must first write something to the platters,
removing any advantage the buffer offered.
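The behaviour I'm describing can be modelled as a bounded queue. This is a made-up sketch (slot count and block numbers are invented), but it shows how a full buffer forces a platter write before each new write can be accepted:

```python
# Toy model of a full write buffer: new writes still go through the
# buffer, but only after the oldest buffered block is flushed to the
# platters to make room.
from collections import deque

BUFFER_SLOTS = 4

buffer = deque()    # blocks waiting to reach the platters
flushed = []        # blocks already written to the platters

def flush_one():
    """Write the oldest buffered block to the platters, freeing a slot."""
    flushed.append(buffer.popleft())

def cached_write(block):
    if len(buffer) == BUFFER_SLOTS:
        flush_one()                 # the new write stalls on this flush
    buffer.append(block)

for blk in range(6):                # stream 6 blocks through a 4-slot buffer
    cached_write(blk)
```

The first four writes land in the buffer instantly; writes 5 and 6 each have to wait for a platter write first, which is exactly the lost advantage described above.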
I still think it's a good idea given the small difference in price.
As do I. I hope I haven't come across as being too negative. I believe
people place far too much emphasis on this figure, but certainly don't think
it's a bad thing. A drive with an 8-meg buffer can only perform as well as or
better than one with a 2-meg buffer (all else being equal).
Gareth