double discrepancy between PDA and PC

ataha
I have a DLL written in C# for a PDA application that uses doubles to
do mathematical operations (mean, slope, noise) on an array of readings
and produces electrolyte and gas readings from a blood sample. I also
have a PC version of the PDA application that uses the very same DLL
to calculate the results.

The problem is that the PDA application and the PC application are
producing different results.

I understand the inherent problems in using doubles to do arithmetic,
but shouldn't the errors be the same on the PDA and the PC, since both
are running managed code? Isn't the DLL in intermediate language, and
thus shouldn't the results be exactly the same?

I tried switching the doubles to decimals and I'm still seeing errors.
Is anyone aware of what would create such a discrepancy?

thanks
Taha
 
Yes, it's IL, but IL is compiled to native instructions that execute on
a particular CPU with a particular FP precision.
Most x86 CPUs have coprocessors that traditionally use 80-bit FP numbers
internally, even when you work with 32-bit FP.
On the other hand, most CPUs in devices have no FP unit at all, so FP is
emulated in software. Which means your results will be different.
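To see how the same source can produce different numbers on hardware that carries a different intermediate precision, here is a small sketch (in Python rather than C#, purely illustrative): it accumulates the same readings twice, once at full double precision and once with every intermediate result rounded to 32 bits, the way a narrower or emulated FP unit might behave.

```python
import struct

def to_f32(x):
    # Round a double to the nearest 32-bit float, mimicking hardware
    # that keeps a different precision in intermediate results.
    return struct.unpack('f', struct.pack('f', x))[0]

readings = [0.1, 0.2, 0.3] * 100  # none of these are exact in binary FP

# Accumulate at full double precision.
sum_double = 0.0
for r in readings:
    sum_double += r

# Accumulate with every intermediate rounded to 32 bits.
sum_narrow = 0.0
for r in readings:
    sum_narrow = to_f32(sum_narrow + to_f32(r))

# Same code, same loop, different precision model: the sums disagree.
print(sum_double, sum_narrow, sum_double == sum_narrow)
```

Both sums land near the exact answer of 60, but they differ in the low digits, which is exactly the kind of discrepancy that then gets amplified by later calculations.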



By the way, I wonder what kind of precision the initial data has. It's
quite likely that precision is limited by the sensors to a few percent
at best. Which means there's probably no need for doubles in these
calculations; a 32-bit float would offer precision way higher than the
initial data you're using.
That might speed up calculations on devices a lot.


--
Best regards,


Ilya

This posting is provided "AS IS" with no warranties, and confers no rights.

*** Want to find answers instantly? Here's how... ***

1. Go to
http://groups-beta.google.com/group/microsoft.public.dotnet.framework.compactframework?hl=en
2. Type your question in the text box near "Search this group" button.
3. Hit "Search this group" button.
4. Read answer(s).
 
Ilya said:
By the way, I wonder what kind of precision the initial data has. It's
quite likely that precision is limited by the sensors to a few percent
at best. Which means there's probably no need for doubles in these
calculations; a 32-bit float would offer precision way higher than the
initial data you're using.
That might speed up calculations on devices a lot.

Well, calculation speed isn't a huge concern on the PDA. It calculates
once, at the very end, in a background thread, and it takes 2-3 seconds
to get all the results and write the XML file to the file store. The
user can live with such performance.

The device that takes the readings on the blood is measuring voltages
and currents and receives the data in 3 bytes, which is converted to a
float by the processor onboard the device. The device uses bluetooth to
transmit the floats to a PDA. Once the PDA has all the readings, it
goes ahead with the calculations which include calculating a mean,
slope, noise, and second derivative. It takes the values and does
additional operations on them: multiply, divide, raise 10 to the power
of x... and gets a result.

Do your observations on floating-point numbers and mathematical
coprocessors also apply to the decimal type? I switched the code that
calculates the mean, slope, noise and second derivative to all-decimal
calculation (I cast the floats to decimals before I use them), and while
the mean, slope and noise are identical, the second derivative, which
requires more calculation, is always different.

Taha
 
Decimal uses a software implementation, which is supposed to be the same
on all platforms.

It's designed primarily for financial calculations (+, -, * and,
sometimes, /), not for scientific ones.



You've mentioned using 10^x; that's probably done with Pow(), which
takes doubles.

Generally, whatever you use from the Math class uses doubles, and that
might generate the difference.
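A sketch of the kind of loss involved (Python's decimal and math modules standing in for System.Decimal and Math.Pow; the digits are illustrative): casting a decimal to a double for the power call, then back, silently discards everything beyond what a double can carry.

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 28  # roughly System.Decimal's 28-29 digit range

x = Decimal('0.1234567890123456789')  # more digits than a double holds

# Casting to double for the power call and back to decimal keeps only
# the ~15-17 significant digits a double can represent.
via_double = Decimal(math.pow(10.0, float(x)))

# Staying in decimal arithmetic keeps the full working precision.
in_decimal = Decimal(10) ** x

print(via_double)
print(in_decimal)
```

The two results agree to roughly double precision and then diverge, so every decimal-to-double-to-decimal round trip is a place where the platforms can start to disagree.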



By the way, how big is the difference? It's not outrageous, is it?

I would guess you're probably using pH-meter-like equipment, is that
right?

What's the certified accuracy of that device?


--
Best regards,

Ilya

 
Ilya said:
You've mentioned using 10^x; that's probably done with Pow(), which
takes doubles.
Generally, whatever you use from the Math class uses doubles, and that
might generate the difference.

Yes, that's true. And that may be causing additional distortion, but
before I call Pow() at all, I am calculating the second derivative of
the values. This is a purely mathematical computation with additions,
divisions and multiplications, and the difference starts here. The
difference creeps in at about 5 decimal places in the second
derivative, and then gets exaggerated through additional calculations
(Pow, etc.). By the time I'm done, the difference could be something
like 169.5 vs. 170.5 for the sensor that has the most additional
calculations, although it's usually around 0.3. A difference of 1 is
pretty big. For the other sensors that don't require a lot of
additional calculations, the result is almost always the same after it
gets rounded.

When I use Pow(), I cast to a double and back to a decimal again.
I would guess you're probably using pH-meter-like equipment, is that
right?
What's the certified accuracy of that device?

Not exactly. There are biological sensors (pins) in a smart card, and
the card reader takes electrical measurements off the pins with respect
to a reference pin, sometimes of voltage, sometimes of current,
depending on the sensor, and those are the values I use to do my
calculation. I don't know how well I'm explaining this; I just write
the PDA software :). Here's one of our website pages that might explain
it better: http://www.epocal.com/biosensors.htm
 
Even decimals are processed by the CPU at some point. I'm pretty sure
+, - and * would yield identical results for decimals.

However, / might be different. Decimal is processed in pieces; it
eventually comes down to the 64/32-bit "div" instruction on x86, which
is not available at all on ARM.

The standard math library that emulates it on ARM might differ slightly
from the x86 div instruction in, say, rounding.
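A quick illustration of why division is the suspect operation (using Python's decimal module as a stand-in for System.Decimal): addition and multiplication of decimals can be exact, but division almost never is, so its result depends on how much precision the implementation keeps and how it rounds the last digit.

```python
from decimal import Decimal, getcontext

a = Decimal('1')
b = Decimal('3')

# +, - and * on decimals are exact whenever the result fits the
# working precision, so every platform agrees on them.
exact_sum = a + b        # exactly 4
exact_prod = a * b       # exactly 3

# Division usually cannot be exact, so the answer depends on the
# precision kept and the rounding applied to the last digit.
getcontext().prec = 10
q10 = a / b
getcontext().prec = 28
q28 = a / b
print(q10)   # 0.3333333333
print(q28)   # 0.3333333333333333333333333333
```

Two implementations that chop the division into different chunk sizes, or round the final digit differently, will agree on the sums and products yet disagree in the last places of every quotient.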



Can you pinpoint which operation produces the difference first? Is this
difference smaller for decimals or doubles?


--
Best regards,

Ilya

 
(Pow, etc.). By the time I'm done, the difference could be something
like 169.5 vs. 170.5 for the sensor that has the most additional
calculations, although it's usually around 0.3. A difference of 1 is
pretty big. For the other sensors that don't require a lot of additional

A difference of one is still smaller than 1% error ;-)
 
Lloyd said:
A difference of one is still smaller than 1% error ;-)

That's true. But it's pretty big for a medical reading.

To answer Ilya's question, though: when I used doubles, the errors crept
in right off the bat, when I was summing up x and y in order to
calculate the mean.

With decimals the sums are the same. The mean, slope and noise are the
same (these all require divisions). The second derivative is different.
The second derivative, though, requires that I calculate the slope at
each of the 20 data values (going back 2 values and forward 2 values to
do so), then substitute the slopes for the original values and
calculate the slope of that, so there is much more calculation going
on. The second derivative in decimal format starts to differ at about 5
or 6 decimal places, then gets exaggerated through other calculations.

Taha
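For concreteness, here is one plausible reading of the procedure Taha describes, sketched in Python; the real DLL's windowing and slope formula may differ, and the 5-point least-squares slope is an assumption.

```python
def slope(xs, ys):
    # Least-squares slope of y against x over a small window; an
    # assumption about what "calculating the slope" means here.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def second_derivative(xs, ys):
    # First pass: slope in a 5-point window (2 back, 2 forward) around
    # each interior point, as described in the post.
    centers, first = [], []
    for i in range(2, len(xs) - 2):
        centers.append(xs[i])
        first.append(slope(xs[i-2:i+3], ys[i-2:i+3]))
    # Second pass: the slope of those slopes approximates y''.
    return slope(centers, first)

xs = list(range(20))
ys = [0.5 * x * x for x in xs]   # y = x^2 / 2, so y'' = 1 everywhere
print(second_derivative(xs, ys))
```

Each data point feeds two nested slope fits, so every value passes through many more additions, multiplications and divisions than the plain mean or noise do, and each division is another spot where the last digit can round differently between platforms.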
 
Hi Taha,

Have you considered scaling your numbers to integers before performing
the calculations?

For example, if your input scale is 5 (98.12345), multiply that number
by 10^5 and cast it to an integer (or long integer). Perform your
calculations. The integer-based operations, like addition and
multiplication, will be perfectly accurate. You'll only experience
rounding when you get to the Pow method; of course, you could always
write your own version that works with long integers.

After your calculations are done, you would scale the result back via
division. I would guess that this method would produce a much smaller
variance.

However, if you're still not seeing close results, you could always use
strings for input and display without converting to actual
double/decimal at all. By that, I mean read the input via a string and
manually move the decimal point before converting to an integer. To
scale back from an integer, convert to string and reposition the decimal
point before displaying results. Naturally, this won't work if you need
to store the results as a number in a store.
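A minimal sketch of the scaled-integer idea in Python (the scale of 5 digits and the helper names are illustrative, not from the original code): the input is parsed through the string, so binary floating point never touches the value.

```python
SCALE = 10 ** 5  # five decimal places of input, as in the example

def to_fixed(s):
    # Parse "98.12345" into the integer 9812345 via the string, never
    # touching binary floating point. Assumes at most 5 fraction digits.
    whole, _, frac = s.partition('.')
    sign = -1 if whole.startswith('-') else 1
    frac = (frac + '00000')[:5]
    return sign * (abs(int(whole)) * SCALE + int(frac))

def fixed_to_str(v):
    # Reposition the decimal point for display, again via strings.
    sign = '-' if v < 0 else ''
    v = abs(v)
    return f"{sign}{v // SCALE}.{v % SCALE:05d}"

a = to_fixed("98.12345")
b = to_fixed("1.00055")

# Addition (and subtraction) are exact in scaled-integer form.
print(fixed_to_str(a + b))           # 99.12400

# Multiplication doubles the scale, so one factor's scale is divided
# back out; the rounding rule (here: truncation) is now explicit.
print(fixed_to_str(a * b // SCALE))  # 98.17741
```

Because every operation is integer arithmetic, the PDA and the PC are guaranteed to agree bit-for-bit; the only rounding decisions are the ones written explicitly into the code.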

I hope this helps some, or at least sparks other ideas.

Regards,
Matt
 