Best Practice for Dealing with double inaccuracies

Thread starter: JBeerhalter
I understand why doubles are not entirely accurate (i.e. if I store the
number 0.10 in a double it might actually have the value
0.100000000001); I was just curious if someone could direct me toward
best practices for dealing with them.

I could clearly eschew double in favor of decimal, or just constantly
round everything, but these seem to involve a lot of overhead that I'd
rather not deal with.
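The trade-off between the two options can be sketched in Python, using its `decimal.Decimal` as a stand-in for .NET's base-10 `decimal` type (the library differs, but the idea is the same):

```python
from decimal import Decimal

# Ten additions of the binary double 0.1 drift away from 1.0:
as_double = sum(0.1 for _ in range(10))
print(as_double)  # 0.9999999999999999

# A base-10 decimal type (constructed from strings, never from floats)
# represents 0.1 exactly, so the sum is exact -- at the cost of slower,
# software-implemented arithmetic:
as_decimal = sum(Decimal("0.1") for _ in range(10))
print(as_decimal)  # 1.0
```

This is the usual rule of thumb: decimal for money and other base-10 quantities where exactness matters, double for scientific or geometric work where speed matters and small relative error is acceptable.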

-JB
 
Why do you think it matters? If you are doing something that would be
defeated by this "inaccuracy", don't do it.
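In practice, "don't do it" mostly means: never compare doubles for exact equality; compare within a tolerance instead. A minimal Python sketch (the `1e-9` epsilon is an arbitrary choice for this example, not a universal constant):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)              # False: exact equality is defeated by rounding
print(math.isclose(a, b))  # True: relative tolerance (default rel_tol=1e-09)
print(abs(a - b) < 1e-9)   # True: explicit absolute epsilon
```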
 
Use your favorite web search tool to find the article "What every computer
scientist should know about floating point arithmetic". There are various
copies of it all over the place. It'll tell you more than you wanted to
know (but not more than you need to know!)

-cd
 