Section: Research Program
Floating-point arithmetic
Properties of floating-point arithmetic
Thanks to the IEEE 754-2008 standard for floating-point arithmetic, we now have an accurate definition of floating-point formats and operations. The behavior of a sequence of operations becomes at least partially predictable, so we can build algorithms and proofs that rely on these specifications. Some of these algorithms are new; others have been known for years, but only for radix-2 systems. Moreover, their proofs are not free of flaws: some algorithms fail, for instance, when subnormal numbers appear. We wish to give rigorous proofs, including the exact domain of validity of the corresponding algorithms, and, where possible, to extend these algorithms and proofs to the new formats specified by the recent standard (decimal formats, large-precision formats).
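As an illustration of such a validity-domain restriction, consider the well-known error-free transformation of a product: with a correctly rounded fma, as mandated by IEEE 754-2008, e = fma(a, b, -a*b) is the exact rounding error of the multiplication, but only as long as the product does not fall into the subnormal range, where the error may no longer be representable. A minimal C sketch (the chosen constants are illustrative):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Normal range: e is the exact rounding error of p = RN(a*b). */
        double a = 1.0 + 0x1p-30, b = 1.0 + 0x1p-27;
        double p = a * b;
        double e = fma(a, b, -p);     /* exact: a*b == p + e */
        printf("p  = %.17g   e  = %g\n", p, e);

        /* Subnormal range: the true error 2^-1126 lies below the smallest
           subnormal 2^-1074, so it underflows to 0 and the transformation
           silently loses the error, even though p2 != a2*b2 exactly. */
        double a2 = 0x1.0000000000001p-537, b2 = 0x1p-537;
        double p2 = a2 * b2;          /* rounds to the subnormal 2^-1074 */
        double e2 = fma(a2, b2, -p2); /* 0, not the true error */
        printf("p2 = %g   e2 = %g\n", p2, e2);
        return 0;
    }

A rigorous proof of such an algorithm must therefore state explicitly the input domain on which no underflow can occur.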
Error-free transformations and compensated algorithms
To achieve a prescribed accuracy for the result of a given computation, it is often necessary to carry out the intermediate operations at a precision higher than the highest one available in hardware. On superscalar processors, an efficient solution is to compute, at runtime, the errors due to critical floating-point operations, in order to compensate for them later. Such compensated algorithms have been studied for the summation of floating-point numbers.
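As a concrete example, here is a minimal C sketch of this idea for summation, assuming round-to-nearest binary64 arithmetic: Knuth's TwoSum error-free transformation recovers the rounding error of each addition, and the compensated sum, in the style of the Sum2 algorithm of Ogita, Rump, and Oishi, adds the accumulated errors back at the end.

    #include <stdio.h>

    /* TwoSum (Knuth): s + e == a + b exactly, with s = RN(a + b).
       Valid for any two binary floating-point numbers, including
       subnormals, since the rounding error of an addition is always
       representable. */
    static void two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        double bp = *s - a;               /* part of b absorbed into s */
        *e = (a - (*s - bp)) + (b - bp);  /* exact rounding error */
    }

    /* Compensated summation: accumulate the rounding error of every
       addition and compensate once at the end. The result is about as
       accurate as if computed in twice the working precision. */
    static double sum2(const double *x, int n) {
        double s = 0.0, c = 0.0, e;
        for (int i = 0; i < n; i++) {
            two_sum(s, x[i], &s, &e);
            c += e;                       /* accumulated errors */
        }
        return s + c;                     /* compensated result */
    }

    int main(void) {
        /* An ill-conditioned sum: naive summation loses the tiny terms. */
        double x[] = { 1e16, 1.0, -1e16, 1.0 };
        double naive = 0.0;
        for (int i = 0; i < 4; i++) naive += x[i];
        printf("naive: %.17g\ncompensated: %.17g\n", naive, sum2(x, 4));
        return 0;
    }

On this example the naive loop returns 1 while the compensated sum returns the exact value 2, since the term 1.0 absorbed by 1e16 is recovered by TwoSum.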