The general objective of the Toccata project is to promote formal specification and computer-assisted proof in the development of software that requires high assurance of safety and correctness with respect to its intended behavior. Such safety-critical software appears in many application domains like transportation (e.g., aviation, aerospace, railway, and increasingly cars), communication (e.g., the Internet, smartphones), health devices, etc. The number of tasks performed by software is growing quickly, together with the number of lines of code involved. Given the need for high assurance about the functional behavior of such applications, developing automated (i.e., computer-assisted) methods and techniques that guarantee safety has become a major challenge. In the past and at present, the most widely used approach to checking the safety of software is to apply heavy test campaigns, which account for a large part of the cost of software development. Yet tests cannot ensure that all bugs are caught, and the remaining bugs may have catastrophic consequences (e.g., the Heartbleed bug in the OpenSSL library, discovered in 2014).

Generally speaking, software verification approaches pursue three goals: (1) verification should be sound, in the sense that no bug should be missed; (2) verification should produce no false alarms, or as few as possible; (3) it should be as automatic as possible. Reaching all three goals at the same time is a challenge. A large class of approaches emphasizes goals (2) and (3): testing, run-time verification, symbolic execution, model checking, etc. Static analysis, such as abstract interpretation, emphasizes goals (1) and (3). Deductive verification emphasizes (1) and (2). The Toccata project is mainly interested in exploring the deductive verification approach, although we also consider the others in some cases.

In the past decade, significant progress has been made in the domain of deductive program verification, as illustrated by success stories of applying these techniques to industrial-scale software. For example, the Atelier B system was used to develop part of the embedded software of the Paris metro line 14 41 and other railway-related systems; a formally proved C compiler was developed using the Coq proof assistant 69; the L4.verified project developed a formally verified micro-kernel with high security guarantees, using analysis tools on top of the Isabelle/HOL proof assistant 68. A bug in the JDK implementation of TimSort was discovered using the KeY environment 64, and a fixed version was proved sound. Another sign of recent progress is the emergence of deductive verification competitions (e.g., VerifyThis 44). Finally, a recent trend in the industrial practice of developing critical software is to require more and more guarantees of safety; e.g., the new DO-178C standard for developing avionics software adds to the former DO-178B the use of formal models and formal methods. It also emphasizes the need for certification of the analysis tools involved in the process.

There are two main families of approaches for deductive
verification. Methods in the first family build on top of mathematical
proof assistants (e.g., Coq, Isabelle) in which both the model and the
program are encoded; the proof that the program meets its
specification is typically conducted in an interactive way using the
underlying proof construction engine. Methods from the second family
proceed by the design of standalone tools taking as input a program in
a particular programming language (e.g., C, Java) specified with a
dedicated annotation language (e.g., ACSL 35,
JML 51) and automatically producing a set of
mathematical formulas (the verification conditions) which are
typically proved using automatic provers (e.g., Z3 71,
Alt-Ergo 54, CVC4 34).

The first family of approaches usually offers a higher level of assurance than the second, but also demands more work to perform the proofs (because of their interactive nature), which makes them harder for industry to adopt. Moreover, they generally do not allow one to directly analyze a program written in a mainstream programming language like Java or C. The second family of approaches has benefited in the past years from the tremendous progress made in SAT and SMT solving techniques, allowing more impact on industrial practice, but it suffers from a lower level of trust: potential errors may appear in all parts of the proof chain (the model of the input programming language, the VC generator, the back-end automatic prover), compromising the guarantee offered. Moreover, while these approaches are applied to mainstream languages, they usually support only a subset of their features.

One of our original skills is the ability to conduct proofs using automatic provers and proof assistants at the same time, depending on the difficulty of the program, and specifically on the difficulty of each particular verification condition. We thus believe that we are in a good position to propose a bridge between the two families of approaches to deductive verification presented above. Establishing this bridge is one of the goals of the Toccata project: we want to provide methods and tools for deductive program verification that offer both a high degree of proof automation and a high guarantee of validity. Indeed, one axis of research of Toccata is the development of languages, methods, and tools that are themselves formally proved correct. Recent advances in the foundations of deductive verification include various aspects such as reasoning efficiently on bitvector programs 60 or providing counterexamples when a proof does not succeed 55.

A specifically challenging aspect of deductive verification methods is how one deals with memory mutation in general, an issue that appears in various forms, such as reasoning on mutable data structures or on concurrent programs, with the common denominator of tracking memory changes on shared data. The ability to track aliasing is also key to specifying programs and conducting proofs using the advanced notion of ghost code 58, a notion that can be pushed very far, as demonstrated by our work on ghost monitors 53, 4.

In industrial applications, numerical calculations are very common (e.g., control software in transportation). Typically they involve floating-point numbers. Some of the members of Toccata have an internationally recognized expertise on deductive program verification involving floating-point computations. Our past work includes a new approach for proving behavioral properties of numerical C programs using Frama-C/Jessie 33, various examples of applications of that approach 49, the use of the Gappa solver for proving numerical algorithms 57, and an approach that takes architectures and compilers into account when dealing with floating-point programs 50, 73. We contributed to the CompCert verified compiler, regarding the support for floating-point operations 48. We also contributed to the Handbook of Floating-Point Arithmetic 72. A representative case study is the analysis and the proof of both the method error and the rounding error of a numerical analysis program solving the one-dimensional acoustic wave equation 46, 45. We published a reference book on the verification of floating-point algorithms with Coq 2. Our experience led us to the conclusion that verification of numerical programs can benefit a lot from combining automatic and interactive theorem proving 47, 49, 61. Verification of numerical programs is another main axis of Toccata.
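As a minimal illustration of the kind of subtlety these verification efforts address (an illustrative sketch, not taken from the cited works): decimal constants like 0.1 have no exact binary floating-point representation, so even a short accumulation deviates from the ideal result, albeit by a tightly bounded amount. Proving such bounds is precisely what tools like Gappa automate.

```rust
fn main() {
    // 0.1 is not exactly representable in binary floating point,
    // so repeated additions accumulate a small rounding error.
    let mut sum = 0.0_f64;
    for _ in 0..10 {
        sum += 0.1;
    }
    // The computed sum is not exactly 1.0 ...
    assert_ne!(sum, 1.0);
    // ... but the rounding error is tightly bounded.
    assert!((sum - 1.0).abs() < 1e-15);
    println!("sum = {sum:.17}");
}
```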

Deductive program verification methods are built upon theorem provers to decide whether a proof obligation extracted from a program is a valid mathematical proposition; hence working on deductive verification requires a certain amount of work on the design of theorem provers. We are involved in particular in the Alt-Ergo SMT solver, for which we designed an original approach for reasoning on arithmetic facts 5, 10; the Gappa tool, dedicated to reasoning on rounding errors in floating-point computations 70; and the interval tactic to reason about real approximations 8. Proof by reflection is also a powerful approach for advanced reasoning about programs 9.

In the past years, we have been more and more involved in the development of significantly large case studies and applications, such as the verification of matrix multiplication algorithms 3, the design of verified OCaml libraries 52, the realization of a platform for the verification of shell scripts 39, 1, or the correct-by-construction design of an efficient library for arbitrary-precision arithmetic 9.

Our scientific programme detailed below is structured into four axes:

Let us conclude with more general considerations about our agenda for the current four-year period 2019-2023: we want to keep on

Permanent researchers: S. Conchon, J.-C. Filliâtre, A. Guéneau, C. Marché, G. Melquiond, A. Paskevich

This axis covers the central theme of the team: deductive verification, from the point of view of its foundations but also our will to spread its use in software development. The general motto we want to defend is “deductive verification for the masses”. A non-exhaustive list of subjects we want to address is as follows.

A significant part of the work achieved in this axis is related to the Why3 toolbox and its ecosystem, displayed in Figure 1. The boxes on a red background correspond to the tools we develop in the Toccata team. A recent representative publication is a report on the abstraction and genericity features of Why3 6, 59.

Permanent researchers: J.-C. Filliâtre, A. Guéneau, C. Marché, G. Melquiond, A. Paskevich

This axis specifically concerns techniques for reasoning on programs where aliasing is the central issue. It covers methods based on type-based alias analysis and related memory models, on specific program logics such as separation logics, and on extended model checking. It concerns applications to the analysis of C or C++ code, of Ada code involving pointers, but also of concurrent programs in general. The main topics planned are:

Permanent researchers: S. Boldo, C. Marché, G. Melquiond

We of course want to keep this axis, which is a major originality of Toccata. The main topics are:

Permanent researchers: S. Boldo, S. Conchon, J.-C. Filliâtre, A. Guéneau, C. Marché, G. Melquiond, A. Paskevich

This axis covers applications in general. The applications we currently have in mind are:

The application domains we target involve safety-critical software, that is, domains where a high level of guarantee on the functional behavior of the software is required. Currently, our industrial collaborations and impact mainly belong to the domain of transportation: aerospace, aviation, railway, automotive.

Generally speaking, we believe that our increasing industrial impact is a representative success for our general goal of spreading deductive verification methods to a larger audience, and we are firmly committed to continuing such actions in the future.

Through the creation of the ProofInUse joint lab in 2014 with the AdaCore company, we have a growing impact on the community of industrial development of safety-critical applications written in Ada. See the web page for an overview of AdaCore's customer projects, in particular those involving the use of the SPARK Pro tool set. This impact involves both the use of Why3 for generating VCs on Ada source code, and the use of Alt-Ergo for proving those VCs.

The impact of ProofInUse can also be measured in terms of job creation: the first two ProofInUse engineers, D. Hauzar and C. Fumex, initially employed on temporary Inria positions, have since been hired on permanent positions at AdaCore. It is also interesting to notice that this effort allowed AdaCore to acquire new customers; in particular, the domains of application of deductive formal verification went beyond the historical domain of aerospace, into automotive, cyber-security, and health (artificial heart).

Impactful results were produced in the context of the CoLiS project for the formal analysis of Debian packages. A first important step was version 2 of the design of the CoLiS language, done by B. Becker, C. Marché, and other co-authors 40, which includes a modified formal syntax, an extended formal semantics, and the design of concrete and symbolic interpreters. Those interpreters are specified and implemented in Why3, proved correct (following the initial approach for the concrete interpreter published in 2018 65 and an approach for symbolic interpretation 39), and finally extracted to OCaml code.

To make the extracted code effective, it must be linked with a library that implements a solver for feature constraints 67, and with a library that formally specifies the behavior of basic UNIX utilities. The latter library is documented in detail in a research report 66.

A third result is a large verification campaign running the CoLiS toolbox on all the packages of the current Debian distribution. The results of this campaign were reported in another article 38 that was submitted to the TACAS conference in 2020 and finally presented in the 2021 edition. The most visible side effect of this experiment is the discovery of bugs: more than 150 bug reports have been filed against various Debian packages.

The current continuation of the ProofInUse joint lab takes the form of a ProofInUse Consortium, with a perimeter extended beyond Ada applications, in continued collaboration with the AdaCore company. We are collaborating with the TrustInSoft company on the verification of C and C++ code, including in the recent past the use of Why3 to design verified and reusable C libraries (CIFRE PhD thesis). We also collaborate with Mitsubishi Electric R&D Centre Europe in Rennes on a specific usage of Why3 for verifying embedded devices (logic controllers). The recent best paper award at the FMICS conference 7, 42 is a result of this last collaboration. These three collaborations share similar interests in the consortium, in particular via similar usages of Why3, notably for the generation of counterexamples when proofs fail. Plenary meetings are organized to encourage discussions and collaborations. These meetings are open to other friends of the Consortium, such as CEA-List, which develops the Frama-C environment (itself a user of Why3), and OCamlPro, which maintains the Alt-Ergo solver.

Our research activities make use of standard computers for developing software and formal proofs. We have no use for specific large-scale computing resources. However, we make use of external services for continuous integration. A continuous integration methodology for mature software like Why3 is indeed mandatory for ensuring a safe software engineering process for maintenance and evolution. We make the necessary efforts to keep the energy consumption of this continuous integration process as low as possible.

Ensuring the reproducibility of proofs in formal verification is essential. It is thus mandatory to replay such proofs regularly, to make sure that changes in our software do not lose existing proofs. For example, we need to make sure that the case studies in formal verification that we present in our gallery are reproducible. We also make the necessary efforts to keep the energy consumption of replaying proofs low, by doing it only when necessary.

As widely accepted nowadays, the major sources of environmental impact of research are travel to international conferences by plane and the renewal of electronic devices. The number of trips we made in 2022 remained very low with respect to previous years, because of the Covid pandemic and the fact that many conferences now offered online participation. We intend to continue limiting the environmental impact of our travel. Concerning the renewal of electronic devices, mainly laptops and monitors, we have always been careful to keep them usable for as long as possible.

Our research results aim at improving the quality of software, in particular in mission-critical contexts. As such, making software safer is likely to reduce the need for maintenance operations and thus to reduce energy costs.

Our efforts mostly target the safety of the functional behavior of software, but we also increasingly consider the verification of time and memory consumption. Reducing those would naturally induce a reduction in energy consumption.

Our research never involves any processing of personal data; consequently, we have no concerns about preserving individual privacy, nor with respect to the RGPD (Règlement Général sur la Protection des Données).

In the past years, increasingly more Computer Science topics have been introduced into the high school curricula. This evolution has resulted in an increasing need for teachers highly skilled in that domain. In 2021, the French ministry of education decided to create a new discipline in the concours de l'agrégation, which is the most selective competition for recruiting high school teachers. To prepare the first round of this recruiting competition, which took place in 2022, Sylvie Boldo was appointed president of the competition committee. Notably, she was the only full-time researcher to chair an agrégation committee this year. She will continue to chair the recruiting committee in 2023.

The team wishes to thank the Inria Evaluation Committee (Commission d'Évaluation) for its outstanding efforts, in 2022 as in previous years, in defending the interests of the research community, keeping us thoroughly informed about topics relevant to the scientific life, and upholding the moral and intellectual values we are collectively proud of and which define our institute.

CoqInterval is a library for the proof assistant Coq.

It provides several tactics for proving theorems on enclosures of real-valued expressions. The proofs are performed by an interval kernel which relies on a computable formalization of floating-point arithmetic in Coq.

The Marelle team developed a formalization of rigorous polynomial approximation using Taylor models in Coq. In 2014, this library was included in CoqInterval.

The Flocq library for the Coq proof assistant is a comprehensive formalization of floating-point arithmetic: core definitions, axiomatic and computational rounding operations, high-level properties. It provides a framework for developers to formally verify numerical applications.

Flocq is currently used by the CompCert verified compiler to support floating-point computations.

Coq version 8.14 integrates many usability improvements, as well as an important change in the core language. The main changes include:

- The internal representation of match has changed to a more space-efficient and cleaner structure, allowing the fix of a completeness issue with cumulative inductive types in the type-checker. The internal representation is now closer to the user-level view of match, where the argument context of branches and the inductive binders "in" and "as" do not carry type annotations.

- A new "coqnative" binary performs separate native compilation of libraries, starting from a .vo file. It is supported by coq_makefile.

- Improvements to typeclasses and canonical structure resolution, allowing more terms to be considered as classes or keys.

- More control over notation declarations and support for primitive types in string and number notations.

- Removal of deprecated tactics, notably omega, which has been replaced by a greatly improved lia, along with many bug fixes.

- New Ltac2 APIs for interaction with Ltac1, manipulation of inductive types and printing.

- Many changes and additions to the standard library in the numbers, vectors, and lists libraries. A new signed primitive integer library Sint63 is available in addition to the unsigned Uint63 library.

Creusot is a tool for deductive verification of Rust code. It allows you to annotate your code with specifications, invariants and assertions and then verify them formally and automatically, proving, mathematically, that your code satisfies your specifications.

Creusot works by translating Rust code to WhyML, the verification and specification language of Why3. Users can then leverage the full power of Why3 to (semi)-automatically discharge the verification conditions.

Continuation-passing style allows us to devise an extremely economical abstract syntax for a generic algorithmic language. This syntax is flexible enough to naturally express conditionals, loops, (higher-order) function calls, and exception handling. It is type-agnostic and state-agnostic, which means that we can combine it with a wide range of type and effect systems.

In this work 31, Paskevich shows how programs written in the continuation-passing style can be augmented in a natural way with specification annotations, ghost code, and side-effect discipline. He defines the rules of verification condition generation for this syntax, and shows that the resulting formulas are nearly identical to what traditional approaches, like the weakest precondition calculus, produce for the equivalent algorithmic constructions. This amounts to a minimalistic yet versatile abstract syntax for annotated programs for which one can compute verification conditions without sacrificing their size, legibility, and amenability to automated proof, compared to more traditional methods. This makes it an excellent candidate for internal code representation in program verification tools.
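To give a flavor of continuation-passing style itself (a hypothetical illustration in Rust, not the abstract syntax of the cited paper), here is a factorial where no call returns a value; instead, each result is handed to a continuation `k`, so control flow is expressed uniformly through function calls:

```rust
// Factorial in continuation-passing style: instead of returning
// a result, each call passes it to a continuation `k`. Both the
// conditional branch and the recursive call funnel their outcome
// through continuations.
fn fact_cps(n: u64, k: Box<dyn FnOnce(u64)>) {
    if n == 0 {
        k(1)
    } else {
        // Build a new continuation that multiplies the recursive
        // result by `n` before passing it on.
        fact_cps(n - 1, Box::new(move |r| k(n * r)))
    }
}

fn main() {
    // The "caller" supplies the final continuation.
    fact_cps(5, Box::new(|r| assert_eq!(r, 120)));
}
```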

Rust is a fairly recent programming language for system programming, bringing static guarantees of memory safety through a strong ownership policy. This feature opens promising advances for the deductive verification of Rust code. The project underlying the PhD thesis of X. Denis, supervised by J.-H. Jourdan and C. Marché, is to propose techniques for the verification of Rust programs, using a translation to a purely functional language. The challenge of this translation is the handling of mutable borrows: pointers which control aliasing in a region of memory. To overcome this, we use a technique inspired by prophecy variables to predict the final values of borrows 56. The underlying foundations for the safety of this approach were published by X. Denis and J.-H. Jourdan, in a collaborative work with U. Saarbrücken 19.
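In plain Rust terms, the situation that prophecy variables model can be sketched as follows (an illustrative example, not Creusot's actual specification syntax): to reason about a mutable borrow in a purely functional setting, one must predict the final value the borrow will hold when it expires, since that value is what the original owner observes afterwards.

```rust
// A mutable borrow temporarily transfers write access. A
// prophecy-based translation reasons about the *final* value the
// borrow holds when it expires, because that value flows back to
// the owner.
fn add_one(x: &mut u32) {
    *x += 1;
}

fn main() {
    let mut v = 41;
    let b = &mut v; // `v` is inaccessible while `b` is live
    add_one(b);     // mutation happens through the borrow
    // Once the borrow ends, `v` reflects the borrow's final value.
    assert_eq!(v, 42);
}
```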

This method is implemented in a new standalone tool called Creusot 17. The specification language of Creusot features the notion of prophecy mentioned above, which is central for specifying the behavior of programs performing memory mutation. Prophecies also permit efficient automated reasoning when verifying such programs. Moreover, Rust provides advanced abstraction features based on a notion of traits, extensively used in the standard library and in user code. The support for traits is another main feature of Creusot, because it is at the heart of its approach, in particular for providing complex abstractions of the functional behavior of programs 17.

An important step to further the applicability of Creusot to a wide variety of Rust code is to support iterators, which are ubiquitous and in fact idiomatic in Rust programming (for example, every for loop is internally desugared into an iterator). X. Denis and J.-H. Jourdan proposed a new approach to simplify the specifications of Rust code in the presence of iterators, and to make the proofs more automatic 28.
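The desugaring mentioned above can be sketched as follows (an approximation of what the compiler does, for illustration): a `for` loop is equivalent to repeatedly calling `next()` on an iterator until it yields `None`, which is why verifying loops in Rust reduces to reasoning about iterators.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // The idiomatic for loop...
    let mut sum_for = 0;
    for x in v.iter() {
        sum_for += x;
    }

    // ...is roughly desugared into explicit `next()` calls:
    let mut sum_desugared = 0;
    let mut it = v.iter();
    while let Some(x) = it.next() {
        sum_desugared += x;
    }

    assert_eq!(sum_for, 6);
    assert_eq!(sum_for, sum_desugared);
}
```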

A capability machine is a type of CPU allowing fine-grained
privilege separation using capabilities, machine words that
represent certain kinds of authority.
We present a mathematical model and accompanying proof methods that
can be used for formal verification of functional correctness of
programs running on a capability machine, even when they invoke and
are invoked by unknown (and possibly malicious) code.
We use a program logic called Cerise for reasoning about known code,
and an associated logical relation, for reasoning about unknown
code. The logical relation formally captures the capability safety
guarantees provided by the capability machine. The Cerise program
logic, logical relation, and all the examples considered in the
paper have been mechanized using the Iris program logic framework in
the Coq proof assistant.
We have submitted to the Journal of the ACM an in-depth, pedagogical introduction to this methodology 29. We start from minimal examples and work our way up to new results involving sophisticated object-capability patterns, demonstrating that the methodology scales to such reasoning.

In subsequent work, we show that this approach enables the formal
verification of full-system security properties under multiple
attacker models: different security objectives of the full system
can be verified under a different choice of trust boundary
(i.e. under a different attacker model)
20. The proposed verification approach
is modular, and is robust: code outside the trust boundary for
a given security objective can be arbitrary, unverified
attacker-provided code. We instantiate the approach concretely by
extending an existing capability machine model with support for
memory-mapped I/O and we obtain full system, machine-verified
security properties about external effect traces while limiting the
manual verification effort to a small trusted computing base
relevant for the specific property under study.

Integration, just as much as differentiation, is a fundamental
calculus tool that is widely used in many scientific domains.
Formalizing the mathematical concept of integration and the
associated results in a formal proof assistant helps in providing
the highest confidence on the correctness of numerical programs
involving the use of integration, directly or indirectly. By its
capability to extend the (Riemann) integral to a wide class of
irregular functions, and to functions defined on more general spaces
than the real line, the Lebesgue integral is perfectly suited for
use in mathematical fields such as probability theory, numerical
mathematics, and real analysis. In this article, we present the Coq formalization of the Lebesgue integral.

Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, allowing one to switch the order of integration. This requires the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, and the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions 27, 16.
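In standard notation, Tonelli's theorem states that for a nonnegative measurable function $f$ on a product measure space $(X \times Y, \mu \otimes \nu)$, the double integral equals either iterated integral:

```latex
\int_{X \times Y} f \, d(\mu \otimes \nu)
  = \int_X \left( \int_Y f(x, y) \, d\nu(y) \right) d\mu(x)
  = \int_Y \left( \int_X f(x, y) \, d\mu(x) \right) d\nu(y)
```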

The Bochner integral is a generalization of the Lebesgue integral, for functions taking their values in a Banach space. Therefore, both its mathematical definition and its formalization in the Coq proof assistant are more challenging 26.

In behavioural specifications of imperative languages, postconditions may
refer to the prestate of the function, usually with an old operator.
Therefore, code performing runtime verification has to record prestate values
required to evaluate the postconditions, typically by copying part of the
memory state, which causes severe verification overhead, both in memory and
CPU time.

In this work 18, Filliâtre and Pascutto consider the problem of efficiently capturing prestates in the context of Ortac, a runtime assertion checking tool for OCaml. Their contribution is a postcondition transformation that reduces the subset of the prestate to copy. They formalize this transformation, and they prove that it is sound and improves the performance of the instrumented programs. They illustrate the benefits of this approach with a maze generator. Benchmarks show that unoptimized instrumentation is not practicable, while their transformation restores performance similar to that of the program without any runtime checks.
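The general idea of copying only the part of the prestate needed by the postcondition can be sketched in Rust (a hypothetical illustration of the technique, not Ortac's actual OCaml transformation): instead of cloning the whole data structure before the call, only the values the postcondition mentions are snapshotted.

```rust
// Runtime checking of a postcondition that refers to the prestate:
// only the values actually needed by the postcondition (here, the
// vector's length) are copied before the operation, instead of
// cloning the entire vector.
fn push_checked(v: &mut Vec<u32>, x: u32) {
    let old_len = v.len(); // minimal prestate snapshot
    v.push(x);
    // Postcondition: length increased by one, last element is `x`.
    assert_eq!(v.len(), old_len + 1);
    assert_eq!(*v.last().unwrap(), x);
}

fn main() {
    let mut v = vec![1, 2];
    push_checked(&mut v, 3);
    assert_eq!(v, vec![1, 2, 3]);
}
```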

Several results were produced in the context of the CoLiS project for the formal analysis of Debian packages. A new language was designed by B. Becker, C. Marché and other co-authors 40, including a formal syntax, a formal semantics, and the design of concrete and symbolic interpreters. Those interpreters are specified and implemented in Why3, proved correct (following initial approaches for the concrete interpreter 65 and symbolic interpretation 39), and finally extracted to OCaml code.

To make the extracted code effective, it must be linked with a library that implements a solver for feature constraints 67, and with a library that formally specifies the behavior of basic UNIX utilities. The latter library was documented in detail in a research report 66.

An important result is a large verification campaign running the CoLiS toolbox on all the packages of the Debian distribution. Preliminary results of this campaign were reported in an article 38 presented at the TACAS conference in 2021. The most visible side effect of this experiment is the discovery of bugs: more than 150 bug reports have been filed against various Debian packages. A journal paper reporting updated experimental results, using an improved implementation of the platform and the new Debian stable distribution, was published in 2022 11.

We have bilateral contracts which are closely related to a joint effort called the ProofInUse consortium. The objective of ProofInUse is to provide verification tools, based on mathematical proof, to industry users. These tools are aimed at replacing or complementing the existing test activities, whilst reducing costs.

This consortium is a follow-up of the former LabCom ProofInUse between Toccata and the SME AdaCore, funded by the ANR programme “Laboratoires communs”, from April 2014 to March 2017.

This collaboration is a joint effort of the Inria project-team Toccata and the AdaCore company which provides development tools for the Ada programming language. It is funded by a 5-year bilateral contract from Jan 2019 to Dec 2023.

The SME AdaCore is a software publisher specializing in providing software development tools for critical systems. A previous successful collaboration between Toccata and AdaCore enabled Why3 technology to be put into the heart of the AdaCore-developed SPARK technology.

The objective of ProofInUse-AdaCore is to significantly increase the capabilities and performance of the SPARK/Ada verification environment proposed by AdaCore. It aims at integrating verification techniques at the state of the art of academic research, via the generic environment Why3 for deductive program verification developed by Toccata.

This bilateral contract is part of the ProofInUse effort. This collaboration joins the efforts of the Inria project-team Toccata and the company Mitsubishi Electric R&D Centre Europe (MERCE) in Rennes. It is funded by a bilateral contract of 3 years and 6 months, from Nov 2019 to April 2023.

MERCE has strong and recognized skills in the field of formal methods. In the industrial context of the Mitsubishi Electric Group, MERCE has acquired knowledge of the specific needs of the development processes and meets the needs of the group in different areas of application by providing automatic verification and demonstration tools adapted to the problems encountered.

The objective of ProofInUse-MERCE is to significantly improve ongoing MERCE tools for the verification of Programmable Logic Controllers and for the verification of numerical C code.

This bilateral contract is part of the ProofInUse effort. This collaboration joins the efforts of the Inria project-team Toccata and the company TrustInSoft in Paris. It is funded by a bilateral contract of 24 months, from Dec 2020 to Nov 2022.

TrustInSoft is an SME that offers TIS-Analyzer, an environment for analyzing safety and security properties of source code written in the C and C++ languages. A version of TIS-Analyzer is available online under the name TaaS (TrustInSoft as a Service).

The objective of ProofInUse-TrustInSoft is to integrate deductive verification into the TIS-Analyzer platform, in the form of a new plug-in called J-cube. One specific interest resides in the generation of counterexamples to help the user in case of proof failure.

Toccata and the company TrustInSoft set up a research action in the context of the national “plan de relance”. It is funded for 24 months, from January 2022 to December 2023. The funding covers the secondment of R. Rieu-Helft, who spends 80% of his time as an invited researcher in Toccata.

The objective of this action is to extend the ProofInUse-TrustInSoft collaboration along two axes: a refinement of the J-cube memory model incorporating a static separation analysis, and support for the C++ language.

Clément Pascutto started a CIFRE PhD in June 2020, under the supervision of Jean-Christophe Filliâtre (at Toccata) and Thomas Gazagnaire (at Tarides). The subject of the PhD is the dynamic and deductive verification of OCaml programs and its application to distributed data structures.

Léo Andrès started a CIFRE PhD in October 2021, under the supervision of Jean-Christophe Filliâtre (at Toccata) and Pierre Chambart and Vincent Laviron (at OCamlPro). The subject of the PhD is the design, formalization, and implementation of a garbage collector for WebAssembly.

EMC2 project on cordis.europa.eu

Molecular simulation has become an instrumental tool in chemistry, condensed matter physics, molecular biology, materials science, and nanosciences. It should enable the de novo design of, e.g., new drugs or materials, provided that the efficiency of the underlying software is accelerated by several orders of magnitude.

The ambition of the EMC2 project is to achieve scientific breakthroughs in this field by gathering the expertise of a multidisciplinary community at the interfaces of four disciplines: mathematics, chemistry, physics, and computer science. It is motivated by a twofold observation: i) building upon our collaborative work, we have recently been able to gain efficiency factors of up to 3 orders of magnitude for polarizable molecular dynamics in solution of multi-million atom systems; but this is not enough, since ii) even larger or more complex systems of major practical interest (such as solvated biosystems or molecules with strongly correlated electrons) are currently mostly intractable within reasonable computation time. The only way to further improve the efficiency of the solvers, while preserving accuracy, is to develop physically and chemically sound models and mathematically certified, numerically efficient algorithms, and to implement them in a robust and scalable way on various architectures (from standard academic or industrial clusters to emerging heterogeneous and exascale architectures).

EMC2 has no equivalent in the world: nowhere else is there such a critical mass of interdisciplinary researchers, already collaborating and with the required track records, to address this challenge. Under the leadership of the 4 PIs, supported by highly recognized teams from three major institutions in the Paris area, EMC2 will develop disruptive methodological approaches and publicly available simulation tools, and apply them to challenging molecular systems. The project will considerably strengthen the local teams and their synergy, enabling decisive progress in the field.

Using computers to formulate conjectures and consolidate proof steps pervades all fields of mathematics, even the most abstract. Most computer proofs are produced by symbolic computations, using computer algebra systems. However, these systems suffer from severe, intrinsic flaws, which make the verification of their computations challenging. The FRESCO project aims to shed light on whether computer algebra could be both reliable and fast. Researchers will disrupt the architecture of proof assistants, which serve as the best tools for representing mathematics in silico, enriching their programming features while preserving compatibility with their logical foundations. They will also design novel mathematical software featuring a high-level, performance-oriented programming environment for writing efficient code, to boost computational mathematics.

The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming more and more important. Such proofs require various levels of numerical safety, from fast and stable computations to formal proofs of the computations. However, the necessary tools and routines are usually ad hoc, sometimes unavailable, or even nonexistent. From a complementary perspective, numerical safety is also critical for complex guidance and control algorithms, in the context of increased satellite autonomy. We plan to design a whole set of theorems, algorithms, and software developments that will allow one to study a computational problem at all (or any) of the desired levels of numerical rigor. Key developments include fast and certified spectral methods and polynomial arithmetic, with subsequent formal verification. There will be a strong feedback loop between the development of our tools and the applications that motivate it.

The project led by École Normale Supérieure de Lyon (LIP) has started in February 2021 and lasts for 4 years. Partners: Inria (teams Aric, Galinette, Lfant, Marelle, Toccata), École Polytechnique (LIX), Sorbonne Université (LIP6), Université Sorbonne Paris Nord (LIPN), CNRS (LAAS).

A specification language extends a programming language by allowing code and specifications to be written in a single document. Examples include SPARK, JML, and ACSL, which extend Ada, Java, and C, respectively, with syntax for specifications.

By offering a specification language to programmers, one
encourages them to document, test, and verify their code as they
write it, not as a separate step that is too easily postponed.
From a technical point of view, the presence of specifications makes
it possible to test or verify each module independently and is the
key to scalability.
From a pragmatic point of view, embedding specifications in the code
allows them to be automatically distributed (via a package
management system) to every programmer; this is the key to
practical adoption.

The GOSPEL project proposes to develop Gospel,
a specification language that extends the programming language
OCaml; to develop an ecosystem of tools based on Gospel; and to
demonstrate and validate these tools via several case studies.
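As an illustration of the kind of contract a specification language attaches to code, here is a small sketch of an OCaml interface annotated in the style of Gospel (the function and its clauses are illustrative examples, not taken from one of the project's case studies):

```ocaml
(* euclid.mli: an OCaml interface annotated with a Gospel-style
   contract. The specification lives in a special (*@ ... *)
   comment, so the file remains a valid OCaml interface that the
   compiler accepts unchanged. *)

val euclidean_division : int -> int -> int * int
(*@ q, r = euclidean_division x y
    requires y > 0
    ensures  x = q * y + r
    ensures  0 <= r < y *)
```

Because the specification is an ordinary comment as far as the OCaml compiler is concerned, the module compiles as usual, while Gospel-aware tools can type-check the contract, turn it into runtime assertions for testing, or hand it to a deductive verifier.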

The project led by Inria Paris has started in October 2022 and lasts for 4 years. Partners: Inria Paris (team Cambium), Université Paris-Saclay (LMF), Tarides, Nomadic Labs.

The SecureVal project aims to design new tools, benefiting from new digital technologies, to verify the absence of hardware and software vulnerabilities and to carry out the required compliance proofs.

The project, led by CEA-List, started in 2022 and lasts for 5 years.