Section: New Results

Floating-Point and Numerical Programs

Computer Arithmetic and Formal Proofs: Verifying Floating-point Algorithms with the Coq System

S. Boldo and G. Melquiond published a book that provides a comprehensive view of how to formally specify and verify tricky floating-point algorithms with the Coq proof assistant. It describes the Flocq formalization of floating-point arithmetic and some methods for automating proofs. It then presents the specification and verification of various algorithms, from error-free transformations to a numerical scheme for a partial differential equation. The examples cover not only mathematical algorithms but also C programs and issues related to compilation [32].

Automating the Verification of Floating-Point Programs.

The level of proof success and proof automation depends heavily on the way floating-point operations are interpreted in the logic supported by back-end provers. C. Fumex, C. Marché, and Y. Moy addressed this challenge by combining multiple techniques to separately prove different parts of the desired properties. They use abstract interpretation to compute numerical bounds of expressions, and multiple automated provers relying on different strategies for representing floating-point computations. One of these strategies is based on the native support for floating-point arithmetic recently added to the SMT-LIB standard. The approach is implemented in the Why3 environment and its front-end SPARK 2014, and validated experimentally on several examples originating from industrial use of SPARK 2014 [37], [21].

Round-off Error Analysis of Explicit One-Step Numerical Integration Methods.

S. Boldo, A. Chapoutot, and F. Faissole provided bounds on the round-off errors of explicit one-step numerical integration methods, such as Runge-Kutta methods. They developed a fine-grained analysis that takes advantage of the linear stability of the scheme, a mathematical property that guarantees the scheme is well-behaved [14].
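As an illustration (not the authors' code), here is a minimal Heun integrator, an explicit two-stage Runge-Kutta method, applied to the linear test equation y' = −y. Each step performs a handful of rounded operations, so the accumulated round-off grows with the step count; linear stability is what keeps that growth benign, which is the property the analysis exploits.

```c
#include <assert.h>
#include <math.h>

/* Heun's method (explicit two-stage Runge-Kutta) for y' = -y,
   starting from y0 and taking n steps of size h.  The round-off
   contributed by the few operations per step accumulates over the
   n iterations, on top of the O(h^2) truncation error. */
static double heun_decay(double y0, double h, int n) {
    double y = y0;
    for (int i = 0; i < n; i++) {
        double k1 = -y;               /* slope at the current point   */
        double k2 = -(y + h * k1);    /* slope at the Euler predictor */
        y += 0.5 * h * (k1 + k2);     /* average the two slopes       */
    }
    return y;
}
```

Integrating from y(0) = 1 over [0, 1] with h = 0.01 gives a result within the O(h²) truncation error of exp(−1); the round-off contribution is orders of magnitude smaller here, but it is the part such an analysis must bound rigorously.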

Robustness of 2Sum and Fast2Sum.

S. Boldo, S. Graillat, and J.-M. Muller worked on the 2Sum and Fast2Sum algorithms, which are important building blocks in numerical computing: they are used, implicitly or explicitly, in many compensated algorithms and for manipulating floating-point expansions. They showed that these algorithms are much more robust than usually believed: the returned result makes sense even when the rounding function is not round-to-nearest, and they are almost immune to overflow [10].
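For reference, these are the classic formulations (Dekker's Fast2Sum and Knuth's 2Sum): in round-to-nearest and barring overflow, each returns the rounded sum together with the exact rounding error, so s + t equals a + b exactly.

```c
#include <assert.h>

/* Fast2Sum (Dekker): requires |a| >= |b| (or an equivalent exponent
   condition).  Returns s = fl(a + b); *t receives the exact error
   a + b - s, assuming round-to-nearest and no overflow. */
static double fast_two_sum(double a, double b, double *t) {
    double s = a + b;
    double z = s - a;
    *t = b - z;
    return s;
}

/* 2Sum (Knuth): same guarantee, with no precondition on the relative
   magnitudes of a and b, at the cost of three more operations. */
static double two_sum(double a, double b, double *t) {
    double s  = a + b;
    double ap = s - b;      /* reconstruct the part of a that survived */
    double bp = s - ap;     /* reconstruct the part of b that survived */
    *t = (a - ap) + (b - bp);
    return s;
}
```

With a = 1.0 and b = 1e-16, the sum rounds to 1.0 and the entire second operand survives in the error term, so s + t reconstructs a + b exactly. The paper's point is that a meaningful result also comes out under directed roundings, where the textbook proofs no longer apply.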

Formal Verification of a Floating-Point Expansion Renormalization Algorithm.

Many numerical problems require a higher computing precision than the one offered by standard floating-point formats. A common way of extending the precision is to use floating-point expansions. S. Boldo, M. Joldes, J.-M. Muller, and V. Popescu formally proved the renormalization algorithm, a basic building block for computing with floating-point expansions, which "compresses" an expansion [15].
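A floating-point expansion represents a high-precision value as an unevaluated sum of machine numbers. The sketch below is an illustrative error-propagation pass in the style of the VecSum loops found in the expansion literature, not the verified algorithm of the paper: it chains Fast2Sum steps over the array, pushing each rounding error into the next slot, and preserves the exact sum of the terms provided the magnitude ordering assumed by Fast2Sum holds.

```c
#include <assert.h>

/* One error-propagation pass over an expansion x[0..n-1], assumed
   ordered by decreasing magnitude.  Each Fast2Sum step folds the
   running sum into the next (larger) term and stores the exact
   rounding error; full renormalization would chain several such
   passes to restore non-overlapping components. */
static void vecsum_pass(double *x, int n) {
    double s = x[n - 1];
    for (int i = n - 2; i >= 0; i--) {
        double hi = x[i] + s;   /* Fast2Sum: x[i] dominates s */
        double z  = hi - x[i];
        double lo = s - z;      /* exact error of the addition */
        x[i + 1] = lo;          /* slot i+1 was already consumed */
        s = hi;
    }
    x[0] = s;
}
```

On an already well-separated input such as {1.0, 1e-17, 1e-34}, every addition is exact in its leading term and the pass leaves the expansion unchanged; on overlapping input such as {1.0, 1.0}, it compacts the value into {2.0, 0.0} while preserving the sum.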