So, while base-10 systems like BCD and DPD are very useful in calculators, your assertion that "ieee754 fp is crap" has no factual basis.
I did say, 'where precision calculations were concerned'
Precision or accuracy has nothing to do with whether the numbers are represented using base-10 or base-2 floating-point formats; I've shown that above.
With base-10, conversion to and from decimal text form is exact, cheap, and easy. That is all; precision and accuracy are not involved.
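A small Python sketch of exactly this point, using the standard decimal module: the same decimal string is stored exactly in a base-10 format, but only approximately in binary64, because 0.1 has no finite base-2 expansion.

```python
from decimal import Decimal

# The decimal string "0.1" is held exactly by a base-10 representation:
print(Decimal("0.1"))   # prints 0.1

# But 0.1 has no exact base-2 representation; converting the nearest
# binary64 double back to decimal exposes the approximation:
print(Decimal(0.1))     # prints 0.1000000000000000055511151231257827021181583404541015625
```

Note that the value itself is not "less precise" in binary; it is simply a different (exactly representable) number close to 0.1, which is why the conversion step is where the cost and the confusion live.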
If you have sufficient binary precision, and use the binary approximations, you get the same results with base-2 as you do with base-10, provided you track the number of significant digits yourself. We have even had discussions here on EEVBlog on the various methods of doing this efficiently. The key point is that as long as your precision – the number of significant digits your base type can describe exactly – is sufficient, and you track the number of digits, you can produce the exact same results using a base-2 internal representation as with a base-10 internal representation.
It is the resource use in decimal conversion – and to a lesser degree, tracking the number of decimal digits needed for correct base-2 representation – that makes the difference between base-10 and base-2 representations. "Precision" or "accuracy" has nothing to do with it.
When I want to do precise math, I use algebraic representation (symbolic math), and, for example, rational numbers to represent decimal numbers: say, 0.7531 = 7531/10000. This is common in symbolic algebra toolkits. For any rational number, the numerator is an integer, and the denominator a positive integer. Irrational numbers are approximated using a sufficiently precise rational number. Transcendental irrational constants like e and π can be approximated to any desired precision, or treated specially. No base-10 representation is involved at all, except that when parsing, the decimal form will initially have a power-of-ten denominator.
(I have toyed with the idea of representing arbitrary-precision real numbers as products of powers of primes, but I'm afraid the math involved in implementing efficient addition and subtraction for them is beyond my math-fu. Multiplication and division are trivial with them; it's the addition and subtraction that are extremely hard with that representation.)
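To make the trade-off concrete, here is a hypothetical sketch of that representation (the names and dict encoding are mine): a positive rational is a map from prime to integer exponent, with negative exponents forming the denominator. Multiplication is just exponent addition – whereas addition would force re-expansion to numerator/denominator form, which is exactly the hard part.

```python
# 72 = 2^3 * 3^2 and 1/6 = 2^-1 * 3^-1, encoded as prime -> exponent maps.
def multiply(a: dict, b: dict) -> dict:
    # Multiplying the numbers is merely adding the exponents per prime.
    result = dict(a)
    for prime, exp in b.items():
        result[prime] = result.get(prime, 0) + exp
        if result[prime] == 0:
            del result[prime]   # drop primes that cancel out
    return result

seventy_two = {2: 3, 3: 2}
one_sixth = {2: -1, 3: -1}
print(multiply(seventy_two, one_sixth))   # {2: 2, 3: 1}, i.e. 12
```

Division is the same operation with negated exponents; there is no comparably cheap trick for addition.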
I do use BCD, DPD, or related base-10 representations myself on microcontrollers when dealing with values that get displayed in decimal form, when I only need basic arithmetic operations (× ÷ + -) on them. Again, it is not about precision, but about the effort needed for the conversion. (On a low-power 32-bit MCU, say a Cortex-M0, I might even use an intermediate form, where two 14-bit units, each describing four decimal digits (0000-9999), are stored in the 28 least significant bits of a 32-bit word. This means that I can implement the basic arithmetic using only 32-bit operations (no 32×32=64 multiplication or 64/32=32 division needed), and with a divide-by-10000-and-remainder 32-bit operation, I can implement arbitrary-precision arithmetic, with precision limited only by available memory.)
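A sketch of the arithmetic side of that scheme (the function names are mine, and the limbs are shown as a Python list rather than packed into 32-bit words): each limb is an integer 0..9999, i.e. four decimal digits, stored least significant first. Addition needs nothing wider than 32-bit arithmetic, the carry step is the divide-by-10000-and-remainder operation, and each limb prints directly as four decimal digits – no expensive binary-to-decimal conversion.

```python
# Arbitrary-precision addition on base-10000 limbs, least significant first.
def add_base10000(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, limb = divmod(s, 10000)   # the divide-by-10000-and-remainder step
        result.append(limb)
    if carry:
        result.append(carry)
    return result

def to_decimal(limbs):
    # Most significant limb first; lower limbs zero-padded to four digits.
    return str(limbs[-1]) + "".join(f"{d:04d}" for d in reversed(limbs[:-1]))

# 12345678 + 87654329 = 100000007
print(to_decimal(add_base10000([5678, 1234], [4329, 8765])))   # "100000007"
```

On the MCU the two limbs per 32-bit word would be extracted with shifts and masks; the carry logic is identical.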
@ Nominal Animal: You're very patient =)
I remember when I myself first learned about numerical precision (in representations) and accuracy (of real-world approximations of non-polynomial functions like trigonometric functions, logarithms, antilogarithms, and exponentials), and how inefficient and complicated I erroneously thought the binary representations were.
I don't blame anyone for initially misunderstanding this stuff, but I do want to clear such misconceptions.
It is good for students and everyone else to understand the limits of resolution – even nature has one. Everyone should adopt the mindset that exact results only exist in theory.
I like to think that while numbers are exact, the things we measure using numbers are not. The approximations do not apply to the numerical values, but to the measurements or estimates we use the numbers to represent. (Statistics, and concepts like standard deviation or confidence intervals, have a very similar brain-twist.)
You could argue that counting things is exact. Quantum physics says otherwise, because reality itself isn't exact in that way. (Even though quantum fields and such may be exact, the observables, the quantities we can measure, are not. It appears that's just how reality works.)
To illustrate, consider the task of obtaining a decimal representation of e/π ≃ 0.865. How do you calculate the result? To an arbitrary precision?
This is a good example, because both are irrational, transcendental numbers that (as far as we know) are not related to each other, and it is not obvious (even if it is linear) how (lack of) precision in the numerator or denominator will affect the precision in the result; range arithmetic would be useful here.
The precision of the result won't be affected by the base or format of numeric representation you use; it is all about the method(s) you use for the calculation.
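As a sketch of one such method (exact rationals via Python's fractions module; the term-count heuristic is my own rough choice, not a tight bound): bound e and π by rationals accurate well beyond the wanted digit count, then divide. e comes from its factorial series (the tail after n terms is below 2/(n+1)!), and π from Machin's formula, where the alternating arctan series' error is bounded by its first omitted term.

```python
from fractions import Fraction

def e_approx(terms):
    # e = sum_{k>=0} 1/k!, computed exactly as a rational.
    s, term = Fraction(0), Fraction(1)
    for k in range(terms):
        s += term
        term /= k + 1
    return s

def arctan_inv(n, terms):
    # arctan(1/n) = sum_{k>=0} (-1)^k / ((2k+1) * n^(2k+1)), exactly.
    s = Fraction(0)
    for k in range(terms):
        s += Fraction((-1) ** k, (2 * k + 1) * n ** (2 * k + 1))
    return s

def e_over_pi(digits):
    terms = digits + 10                       # crude but sufficient term count
    e = e_approx(terms)
    pi = 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)  # Machin
    q = e / pi
    # Truncate to the wanted decimals by scaling and integer division;
    # since e/pi < 1, the result is just "0." plus the scaled digits.
    scaled = (q.numerator * 10 ** digits) // q.denominator
    return "0." + str(scaled).zfill(digits)

print(e_over_pi(10))   # 0.8652559794
```

The base of the internal representation never appears; only the series truncation (the method) limits the precision, which is the whole point above.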