Performance hit from using Write Through caching vs Write Back

  • Thread starter: dg

Does anybody have any input on this subject? I am learning so much about
RAID lately, so much that I am getting a headache. My RAID card has no
battery backup so I am not going to use Write Back caching for a RAID5
array. So, how bad does that hurt performance? I know what the difference
is technically, but I don't know how that equates to real world use. ANY
input appreciated, but fewer insults are better!

Thanks!
--Dan
 
dg said:
Does anybody have any input on this subject? I am learning so much about
RAID lately, so much that I am getting a headache. My RAID card has no
battery backup so I am not going to use Write Back caching for a RAID5
array.

So get a low cost UPS for the whole system and enable write-back caching.
So, how bad does that hurt performance?

You'll notice it.
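
To put a rough number on "you'll notice it": the sketch below is only a host-side analogy (buffered writes versus an fsync after every write, which approximates write-back versus write-through semantics). A controller's onboard cache behaves differently, but the shape of the gap is similar. The file names are made up.

```python
# Host-side analogy only: buffered writes (write-back-ish) vs. forcing every
# write to disk with fsync (write-through-ish). Absolute numbers will differ
# from a hardware RAID cache, but the gap is of the same flavor.
import os
import time

def timed_writes(path, sync_each, count=200, block=64 * 1024):
    buf = os.urandom(block)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())   # don't return until the data is on disk
    os.remove(path)
    return time.time() - start

print("buffered (write-back-ish):   %.2f s" % timed_writes("wb_test.bin", False))
print("fsync'd (write-through-ish): %.2f s" % timed_writes("wt_test.bin", True))
```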
 
I've got a UPS, but I started thinking about power supply failure and such.
I could go with a new case with redundant PS's, but then again I could just
get the 6 port card with battery backup. Hmmmm...

--Dan
 
dg said:
I've got a UPS, but I started thinking about power supply failure and such.
I could go with a new case with redundant PS's, but then again I could just
get the 6 port card with battery backup. Hmmmm...

Dan, trust me and take his advice on the UPS since that is the most
intelligent thing he said so far.

Rita
 
dg said:
I've got a UPS, but I started thinking about power supply failure and such.
I could go with a new case with redundant PS's, but then again I could just
get the 6 port card with battery backup. Hmmmm...

So how certain do you have to be that no byte ever gets lost? An overall
re-thinking of what the actual requirements are seems needed. What if that
array or controller dies? What exactly are you trying to protect from?
 
Rita Ä Berkowitz said:
Dan, trust me and take his advice on the UPS since that is the most
intelligent thing he said so far.

From under which rock did this newbie Berkowitz appear?
 
dg said:
Does anybody have any input on this subject? I am learning so much about
RAID lately, so much that I am getting a headache. My RAID card has no
battery backup so I am not going to use Write Back caching for a RAID5
array. So, how bad does that hurt performance?
I know what the difference is technically, but I
don't know how that equates to real world use.

So you own it for snobbery then? You don't actually (real world) use it.
 
Yeah, maybe I should. I have to stop myself from asking too many "what if..."
questions. There is always a potential for something to screw up and cause
me to lose my data, no matter how well I have things configured. I should
add that this particular array will be used for data only; no apps will run
from this drive. Do you all still think write-back caching is the way to
go? I do know that if I had to do this all over again I would definitely go
with the 6 port card WITH battery backup.


THANKS!
--Dan
 
My intention was to say that I know what the difference is between the two
caching methods, but I have never seen a side-by-side comparison of two
arrays using different caching methods.

--Dan
 
Previously dg said:
Does anybody have any input on this subject? I am learning so much about
RAID lately, so much that I am getting a headache. My RAID card has no
battery backup so I am not going to use Write Back caching for a RAID5
array. So, how bad does that hurt performance? I know what the difference
is technically, but I don't know how that equates to real world use. ANY
input appreciated, but fewer insults are better!

Your HDDs have write-buffering, and so does your OS. All will
lose data when the power fails. A modern server OS may buffer
data for up to several minutes before flushing it to disk.

The one thing you need to make sure of is that your filesystem
can live with that. There are some journalling filesystems that
have trouble when writes are reordered and then the power fails.

The solution is not to do write-through on the main filesystem, but
to put the journal on a small disk (e.g. 50MB) with write-through.
If you do write-through on the main disk(s) you will get a massive
performance loss.

Arno
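
A minimal sketch of the split Arno describes, assuming made-up paths: the journal record is forced out synchronously to a small dedicated volume before the buffered data write on the big array is acknowledged. Real filesystems do this internally (ext3, for instance, can keep its journal on an external device); this is just the general write-ahead pattern, not any particular filesystem's implementation.

```python
# Sketch of the write-ahead pattern: sync the journal record to a small
# "write-through" volume first, then let the data write stay buffered on the
# big write-back array. Both paths are hypothetical.
import os

JOURNAL_PATH = "/mnt/small_disk/journal.log"   # small disk, every write synced
DATA_PATH = "/mnt/raid5/data.bin"              # big array, writes left buffered

def journaled_write(offset, payload):
    # 1. Log the intent and force it to stable storage (write-through semantics).
    with open(JOURNAL_PATH, "ab") as j:
        j.write(b"WRITE %d %d\n" % (offset, len(payload)))
        j.flush()
        os.fsync(j.fileno())
    # 2. The data write itself stays buffered (write-back semantics); after a
    #    crash the journal says which writes may not have completed.
    #    Assumes DATA_PATH already exists.
    with open(DATA_PATH, "r+b") as d:
        d.seek(offset)
        d.write(payload)
```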
 
Arno said:
Your HDDs have write-buffering, and so does your OS. All will
lose data when the power fails. A modern server OS may buffer
data for up to several minutes before flushing it to disk.

Nonsense. The most UNIX did was 30 secs. Windows NT is a few secs.

The one thing you need to make sure of is that your filesystem
can live with that. There are some journalling filesystems that
have trouble when writes are reordered and then the power fails.

That's why any decent OS sets the FUA flag for logfile writes.
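
As a side note on the flush-interval numbers being argued about here (Linux-specific, so not directly applicable to the XP box in question): the kernel exposes how long dirty data may sit in RAM before being written back, and the defaults are in the tens of seconds, not minutes.

```python
# Linux-only aside: how long dirty (unwritten) data may sit in the page cache.
# dirty_expire_centisecs defaults to 3000 (30 s), which is roughly the
# "30 secs" figure above; dirty_writeback_centisecs is the flusher interval.
def read_centisecs(name):
    with open("/proc/sys/vm/" + name) as f:
        return int(f.read()) / 100.0   # centiseconds -> seconds

for knob in ("dirty_expire_centisecs", "dirty_writeback_centisecs"):
    print("%s: %.1f s" % (knob, read_centisecs(knob)))
```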
 
I just did some reading about journaling file systems. To be honest, I was
not familiar with how journaling file systems work before reading your
post and doing some Google searches. I have more questions now, and I
really hope you can give me a couple of tips.

After reading your post, I am under the impression that we want to enable
write-back caching on the RAID5 array, which will be an NTFS partition, yet
put the NTFS journal on a second disk somewhere else using write-through
caching. I am thinking that this is a pretty good guarantee that I will have
a good journal to compare my NTFS disk against should a power failure occur.
This sounds good, but how do I implement such a plan? And once I have
implemented the plan, if the journal disk dies, is that a big problem for my
array, or can I just specify another journal location? My machine is running
XP Pro with all current patches. First things first: where is the default
journal location and how do I change it? I could ask more questions now, but
I think this is a good starting point.

THANKS!
--Dan
 
What, didn't you boot her out when you crept under yours?

Come on, guy(s), fight nice. I didn't think that sharing one corncob could
lead to so much contention between personalities. If it would help, I'll
send over a few more fresh turpentine-soaked corncobs.



Rita
 
Last night I came to the realization that write-back caching will most
likely make an EXTREMELY noticeable difference in performance. The reason
being, 64MB of cache is a lot of cache! I figure any write of less than
64MB will be nearly instant, as it is considered written as soon as it
makes it into the cache RAM. Now I am looking at my motherboard with
built-in RAID in a whole different way. The motherboard RAID has no cache
as far as I know, so it can't perform nearly the same as a dedicated RAID
card with onboard cache. A really big deal if you ask me!

Thanks!
--Dan
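
A toy model of that intuition, with invented numbers (the cache size and disk throughput below are assumptions, not measurements): bursts that fit in the cache return almost immediately, while anything that would overflow it waits on the disks.

```python
# Toy model only: writes that fit in the controller cache complete "instantly";
# once the cache would overflow, the caller waits for a simulated flush.
import time

class ToyWriteBackCache:
    def __init__(self, capacity_mb=64, disk_mb_per_sec=40):
        self.capacity = capacity_mb
        self.used = 0.0
        self.disk_rate = disk_mb_per_sec

    def write(self, size_mb):
        if self.used + size_mb > self.capacity:
            time.sleep(self.used / self.disk_rate)   # pretend to flush to disks
            self.used = 0.0
        self.used += size_mb                         # absorbed into cache RAM

cache = ToyWriteBackCache()
for burst in (32, 50):
    start = time.time()
    cache.write(burst)
    print("%dMB burst took %.3f s" % (burst, time.time() - start))
```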
 
It is not as simple as that. RAM is used as a read cache too, to cut down on
read-XOR-write cycles. If you write sequentially, as soon as a full stripe is
accumulated the XOR is computed and the whole lot written out (a few hundred KB).
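
For concreteness, here is what that XOR looks like on a hypothetical 4-disk layout, one stripe: a full-stripe write computes parity from the new data alone, while updating a single block first requires reading the old block and old parity back, which is the read-XOR-write penalty.

```python
# Parity math for one RAID5 stripe (hypothetical 4-disk layout: 3 data + 1 parity).
def xor_blocks(*blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Full-stripe write: parity comes straight from the new data, no reads needed.
d0, d1, d2 = b"\x01" * 4, b"\x02" * 4, b"\x04" * 4
parity = xor_blocks(d0, d1, d2)

# Partial-stripe update of d1: old d1 and old parity must be read back first,
# then new_parity = old_parity XOR old_d1 XOR new_d1 (the read-XOR-write cycle).
new_d1 = b"\x08" * 4
new_parity = xor_blocks(parity, d1, new_d1)
assert new_parity == xor_blocks(d0, new_d1, d2)
```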
 