I'm willing to bet that academic-grade math software, which is used in
colleges and many commercial settings (e.g., Mathematica), can provide
better accuracy than Excel does. Some software uses a larger data
width that provides more accuracy farther to the right of the decimal
point. Also, some software can perform some mathematical operations
symbolically rather than numerically.
But even in the real world, rounding to some degree is necessary. No
matter how far out you carry the calculations, you'll never get a
computationally exact answer to the addition 1/3 + 1/3 + 1/3 = 1.
No "academic grade" math software that doesn't use symbolic
manipulation will give you an answer of exactly 1. Take it out to
1,000,000 decimal places and you still won't get exactly 1.
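You can see the effect directly in VBA (a minimal sketch; the Sub name
is just for illustration):

Sub ShowRounding()
    ' An IEEE Double can only approximate 1/3; VBA displays about
    ' 15 significant digits of that approximation.
    Debug.Print 1 / 3                 ' 0.333333333333333
    ' The same limitation applies to decimal fractions with no
    ' exact binary form: the comparison below prints False.
    Debug.Print 0.1 + 0.2 = 0.3
End Sub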
Professional computer programmers know the limitations of
computational arithmetic and write code to accommodate those
limitations. For example, when testing equality of floating point
numbers, code is not generally written as
If (X - Y) = 0 Then
    ' do something
End If
Instead, code is written as
If Abs(X - Y) <= Epsilon Then
    ' do something
End If
where Epsilon is some small value, such as 0.00000001, scaled to the
compiler's representation of floating point numbers. Some programming
languages have such a constant built in as a native element of the
language (C, for example, defines DBL_EPSILON in float.h); in other
languages you declare it yourself.
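In VBA, which has no built-in epsilon constant, the declaration and
test might look like this (a sketch; the name NearlyEqual and the
tolerance value are illustrative):

Const Epsilon As Double = 0.00000001

Function NearlyEqual(ByVal X As Double, ByVal Y As Double) As Boolean
    ' Treat X and Y as equal when they differ by no more than
    ' the chosen tolerance.
    NearlyEqual = Abs(X - Y) <= Epsilon
End Function

A caller then writes If NearlyEqual(X, Y) Then rather than comparing
the two values with = directly.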
Greater accuracy in computed floating point numbers comes at a cost
in performance. It takes more operations to calculate a more accurate
representation of a quantity. At some point, the software must settle
for an approximation. Whether that approximation comes at about 7
digits in Single Precision floating point, at about 15 digits in
Double Precision floating point, or at 1,000,000 digits in some
hypothetical software, there will necessarily be some rounding. As
long as you are limited to a finite number of decimal places, rounding
is inevitable.
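Those digit counts are easy to see in VBA (a quick sketch):

Sub ShowPrecision()
    Dim s As Single, d As Double
    s = 1 / 3    ' Single: about 7 significant digits
    d = 1 / 3    ' Double: about 15 significant digits
    Debug.Print s    ' 0.3333333
    Debug.Print d    ' 0.333333333333333
End Sub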
The designers of the software and programming languages take into
consideration the real-world needs of the end users, the applications
built with the code and compiler, and the performance of the hardware
to decide how accurate the representation of a quantity needs to be.
For nearly all purposes, 15 digits of precision is adequate.
In order to make software and data consistent and sharable among
different systems, applications, and platforms, some standardized
format must be adhered to. For most software, that standard is the
8-byte Double Precision floating point format published by IEEE
(IEEE 754). Is it perfect? No. Is it the best possible standard? No.
But it is what nearly all software uses. Without some standard, you
couldn't share data between different programs. Would you leave it up
to the user to instruct the software to use 128 bits rather than 64?
And then assume that all users of the same data know to use 128 rather
than 64? That can't happen in the real world.
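For reference, those 8 bytes break down into 1 sign bit, 11 exponent
bits, and 52 significand bits. A quick VBA check of the size (Len
returns the byte count of a non-string variable):

Sub ShowDoubleSize()
    ' An IEEE 754 Double occupies 64 bits: 1 sign bit, 11 exponent
    ' bits, and 52 significand bits (plus an implied leading 1 bit).
    Dim d As Double
    Debug.Print Len(d)    ' prints 8 (bytes)
End Sub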
Software that uses the IEEE standard isn't "defective". The
limitations are known, or should be known, by users and developers.
One could argue that the documentation is deficient in not making
clear the limitations of the software, but as long as the standard is
followed, the software does what it is designed to do.
Cordially,
Chip Pearson
Microsoft Most Valuable Professional
Excel Product Group, 1998 - 2009
Pearson Software Consulting, LLC
www.cpearson.com
(email on web site)