Section: Research Program
Proof of Numerical Programs
Permanent researchers: S. Boldo, C. Marché, G. Melquiond

Linked with objective 1 (Deductive Program Verification), the methodology for proving numerical C programs has been presented by S. Boldo in her habilitation thesis [52] and in an invited talk [53]. An application is the formal verification of a numerical analysis program: S. Boldo, J.-C. Filliâtre, and G. Melquiond, with F. Clément and P. Weis (POMDAPI team, Inria Paris – Rocquencourt) and M. Mayero (LIPN), completed the formal proof of the second-order centered finite-difference scheme for the one-dimensional acoustic wave equation [55], [4].
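To fix ideas, the scheme in question is the classical three-point stencil for the wave equation. The following Python sketch is purely illustrative (the verified artifact is a C program with a Coq proof, not this code); the function name `wave_1d` and the particular setup (unit wave speed, Dirichlet boundaries, CFL number 1, first step taken by a Taylor expansion) are choices made here for the example.

```python
import math

def wave_1d(n=50, steps=50):
    """Second-order centered finite-difference scheme for the 1-D wave
    equation u_tt = c^2 u_xx on [0, 1], with c = 1, u = 0 at both ends,
    and initial data u(x, 0) = sin(pi x), u_t(x, 0) = 0."""
    dx = 1.0 / n
    dt = dx                      # CFL number c*dt/dx = 1 (stability limit)
    x = [i * dx for i in range(n + 1)]
    prev = [math.sin(math.pi * xi) for xi in x]      # u at t = 0
    # First time step from a Taylor expansion, using u_t(x, 0) = 0:
    curr = [0.0] * (n + 1)
    for i in range(1, n):
        curr[i] = prev[i] + 0.5 * (dt / dx) ** 2 * (
            prev[i + 1] - 2 * prev[i] + prev[i - 1])
    # Leapfrog time stepping with the centered second difference in space:
    for _ in range(steps - 1):
        nxt = [0.0] * (n + 1)
        for i in range(1, n):
            nxt[i] = (2 * curr[i] - prev[i]
                      + (dt / dx) ** 2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return x, curr, steps * dt
```

The formal proof cited above bounds both the method error of this scheme and the round-off error of its floating-point implementation; the sketch only shows the scheme itself.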

Several challenging floating-point algorithms have been studied and proved. One is an algorithm by Kahan for computing the area of a triangle: S. Boldo proved an improved error bound and investigated its behavior in case of underflow [51]. Another concerns quaternions: they should have a norm of 1, but due to round-off errors, this norm drifts over time. C. Marché determined a bound on this drift and formally proved it correct [9]. P. Roux formally verified an algorithm for checking that a matrix is positive semidefinite [115]. The challenge here is that testing semidefiniteness involves computations on algebraic numbers, yet it needs to be implemented using only approximate floating-point operations.
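For reference, Kahan's algorithm can be sketched in a few lines of Python (this is the standard published formula, not the verified code from [51]; the name `kahan_area` is ours):

```python
import math

def kahan_area(a, b, c):
    """Kahan's formula for the area of a triangle with sides a, b, c.
    Unlike a naive use of Heron's formula, it stays accurate even for
    needle-like triangles, provided the sides are sorted so that
    a >= b >= c and the parentheses are kept exactly as written."""
    a, b, c = sorted((a, b, c), reverse=True)
    if c - (a - b) < 0:
        raise ValueError("not the sides of a valid triangle")
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))
```

The parenthesization is the whole point: reassociating any of the sums destroys the error bound, which is why such algorithms are natural targets for formal proof.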

Because of compiler optimizations (or bugs), the floating-point semantics of a program might change once compiled, thus invalidating any property proved on the source code. We have investigated two ways to circumvent this issue, depending on whether the compiler is a black box. When it is, T. Nguyen has proposed to analyze the assembly code it generates and to verify that it is correct [112]. When it is not, S. Boldo and G. Melquiond (in collaboration with J.-H. Jourdan and X. Leroy) have added support for floating-point arithmetic to the CompCert compiler and formally proved that none of the transformations the compiler applies modifies the floating-point semantics of the program [58], [57].

Linked with objectives 2 (Automated Reasoning) and 3 (Formalization and Certification of Languages, Tools and Systems), G. Melquiond has implemented an efficient Coq library for floating-point arithmetic and proved its correctness in terms of operations on real numbers [107]. It serves as a basis for an interval arithmetic on which Taylor models have been formalized. É. Martin-Dorel and G. Melquiond have integrated these models into CoqInterval [15]. This Coq library is dedicated to automatically proving the approximation properties that occur when formally verifying the implementation of mathematical libraries (libm).
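The principle underlying such an interval library can be conveyed by a toy Python model (CoqInterval itself is a Coq formalization and works quite differently; the names `iadd` and `imul` are invented for this sketch, and directed rounding is emulated by widening each computed bound outward by one ulp with `math.nextafter`, available since Python 3.9):

```python
import math

def iadd(x, y):
    """Add intervals [x[0], x[1]] and [y[0], y[1]]. Each bound is
    widened outward by one ulp, so the exact sum of any reals taken
    in the input intervals lies in the returned interval."""
    return (math.nextafter(x[0] + y[0], -math.inf),
            math.nextafter(x[1] + y[1], math.inf))

def imul(x, y):
    """Multiply two intervals, again widening outward to absorb
    the rounding of each floating-point product."""
    products = [a * b for a in x for b in y]
    return (math.nextafter(min(products), -math.inf),
            math.nextafter(max(products), math.inf))
```

An enclosure computed this way is trustworthy even though every operation rounds; the point of CoqInterval is to obtain such enclosures together with a machine-checked proof of their soundness.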

Double rounding occurs when the target precision of a floating-point computation is narrower than the working precision. In some situations, this phenomenon incurs a loss of accuracy. P. Roux has formally characterized when it is innocuous for basic arithmetic operations [115]. É. Martin-Dorel and G. Melquiond (in collaboration with J.-M. Muller) have formally studied how it impacts algorithms used for error-free transformations [105]. These works were based on the Flocq formalization of floating-point arithmetic for Coq.
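Both notions can be demonstrated with ordinary Python doubles (a sketch only; Flocq states these results over an abstract formalization, not over machine floats, and the helper names below are ours). `two_sum` is Knuth's error-free transformation for addition, and `to_single` rounds a binary64 value to binary32, which makes the double-rounding anomaly observable:

```python
import struct

def two_sum(a, b):
    """Knuth's 2Sum: an error-free transformation returning (s, t)
    with s = fl(a + b) and s + t = a + b exactly (barring overflow)."""
    s = a + b
    a1 = s - b
    b1 = s - a1
    t = (a - a1) + (b - b1)
    return s, t

def to_single(x):
    """Round a binary64 value to the nearest IEEE 754 binary32 value
    (round to nearest, ties to even), via a float32 round trip."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Double rounding in action: the exact sum 1 + 2**-24 + 2**-54 rounds
# directly to the binary32 value 1 + 2**-23, but rounding it to binary64
# first yields 1 + 2**-24, an exact tie that then rounds to 1.0.
```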

By combining multi-precision arithmetic, interval arithmetic, and massively parallel computations, G. Melquiond (in collaboration with G. Nowak and P. Zimmermann) has computed enough digits of the Masser–Gramain constant to invalidate a 30-year-old conjecture about its closed form [108].
Project-team Positioning
This objective deals with both formal verification and floating-point arithmetic, which is quite an uncommon combination; therefore our competitors and peers are few. We can only cite the work of J. Duracz and M. Konečný (Aston University, Birmingham, UK).
The Inria team AriC (Grenoble – Rhône-Alpes) is closer to our research interests, but they lack manpower on the formal proof side; we have numerous collaborations with them. The Inria team Caramel (Nancy – Grand Est) also shares some research interests with us, though fewer; again, they do not work on the formal aspects of verification; we have some occasional collaborations with them.
There are many formalization efforts from chip manufacturers, such as AMD (using the ACL2 proof assistant) and Intel (using the Forte proof assistant), but the algorithms they consider are quite different from the ones we study. The work on floating-point arithmetic by J. Harrison at Intel using HOL Light is very close to our research interests, but it seems to have been discontinued.
A few deductive program verification teams are willing to extend their tools toward floating-point programs. This includes the KeY project and SPARK. We have an ongoing collaboration with the latter, in the context of the ProofInUse project.
Deductive verification is not the only way to prove programs. Abstract interpretation is widely used, and several teams are interested in floating-point arithmetic. This includes the Inria team Antique (Paris – Rocquencourt) and a CEA List team, who have respectively developed the Astrée and Fluctuat tools. This approach targets a different class of numerical algorithms than the ones we are interested in.
Other people, especially from the SMT community (cf. objective 2), are also interested in automatically proving formulas about floating-point numbers, notably at Oxford University. They mainly focus on pure floating-point arithmetic, though, and do not consider floating-point numbers as approximations of real numbers.
Finally, it can be noted that numerous teams work on the verification of numerical programs, but under the assumption that computations are performed over real numbers rather than floating-point ones. This is out of the scope of this objective.