Guest
I have a section of code that adds some double values and gives an incorrect
result. This occurs with data that isn't really what I would call high
precision. An example is the following code snippet:
---------------
double a = 2.7;
double b = 2.7;
double c = 0.001;
double result = 0;
result = result + a;
result = result + b;
result = result + c;
------------------
After running this code, the value of the result variable in the QuickWatch
(or Watch) window is 5.4010000000000007. How is this possible?
If I do the following:
System.Diagnostics.Debug.WriteLine("Value of result = " + result);
it comes out correctly as 5.401. But I need to be able to compare it to
another variable, and that comparison comes out wrong.
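For example, an exact equality check against 5.401 (just an illustrative comparison value, not the variable from my real code) comes out false even though both sides print as 5.401:
---------------
double a = 2.7;
double b = 2.7;
double c = 0.001;
double result = 0;
result = result + a;
result = result + b;
result = result + c;

// The accumulated value is the double closest to 5.4010000000000007, while the
// literal 5.401 is stored as a slightly smaller double (about 5.4009999999999998),
// so exact equality fails even though both values print as 5.401.
bool equal = (result == 5.401);
System.Diagnostics.Debug.WriteLine("equal = " + equal);   // prints "equal = False"
---------------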
Any ideas what is going on here?