Team Ecuador studies Algorithmic Differentiation (AD) of computer programs, blending AD theory with its application to Scientific Computing.

We aim to produce AD code that can compete with hand-written sensitivity and adjoint programs used in the industry. We implement our algorithms into the tool Tapenade, one of the most popular AD tools at present.

Our research directions:

Algorithmic Differentiation (AD) differentiates programs. The input of AD is
a source program $P$ that, given an argument $X \in \mathbb{R}^n$, computes a
result $Y = F(X) \in \mathbb{R}^m$. AD generates a new source program that,
given $X$, computes some derivative of $F$.

Any execution of $P$ amounts to executing a sequence of instructions, identified
with a composition of elementary functions
$$ f = f_p \circ f_{p-1} \circ \cdots \circ f_1 , $$
where each $f_k$ is the function implemented by the $k$-th instruction. Writing
$X_0 = X$ for the input values and $X_k = f_k(X_{k-1})$ for the values of all
variables after instruction $k$, the chain rule gives the Jacobian
$$ f'(X) = f'_p(X_{p-1}) \times f'_{p-1}(X_{p-2}) \times \cdots \times f'_1(X_0) , $$
which can be mechanically written as a sequence of instructions.

The above computation of the full Jacobian $f'(X)$ is in general too expensive,
so AD focuses on two cheaper projections of it.

Sensitivities, defined for a given direction $\dot X$ in the input space, are
$$ \dot Y = f'(X) \times \dot X = f'_p(X_{p-1}) \times \cdots \times f'_1(X_0) \times \dot X . $$
This expression is easily computed from right to left, interleaved with the original
program instructions. This is the tangent mode of AD.

Adjoints, defined after transposition ($f'^{*}$) for a given weighting $\bar Y$ of the outputs, are
$$ \bar X = f'^{*}(X) \times \bar Y = f'^{*}_1(X_0) \times \cdots \times f'^{*}_p(X_{p-1}) \times \bar Y . $$
This expression is most efficiently computed from right to left,
because matrix $\times$ vector products are much cheaper than matrix $\times$ matrix
products. This is the adjoint mode of AD, most effective for
optimization, data assimilation [28],
adjoint problems [22], or inverse problems.

Adjoint AD builds a very efficient program [24],
which computes the gradient in a time independent from the number of parameters $n$. In contrast, the tangent mode
would require running the tangent differentiated program $n$ times to obtain the same gradient.

However, the adjoint mode needs the intermediate values $X_k$ in the inverse of their computation order. If the
original program overwrites a part of $X_k$, the differentiated program must restore $X_k$ before it is used by
$f'^{*}_{k+1}(X_k)$. Making these values available in reverse order, by storage or recomputation, at a reasonable
memory and runtime cost is the central challenge of adjoint AD.
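The two modes can be illustrated on a tiny hand-differentiated example (a toy function, not Tapenade output): the tangent code interleaves derivative statements with the primal, while the adjoint code runs a forward sweep that records needed intermediates, then a backward sweep in reverse instruction order.

```python
import math

def primal(x1, x2):
    v = x1 * x2
    return math.sin(v) + x1

def tangent(x1, x2, x1d, x2d):
    # Tangent mode: propagate a direction (x1d, x2d) forward,
    # interleaved with the original instructions.
    v = x1 * x2
    vd = x1d * x2 + x1 * x2d
    y = math.sin(v) + x1
    yd = math.cos(v) * vd + x1d
    return y, yd

def adjoint(x1, x2, yb):
    # Adjoint mode: forward sweep records the intermediate v,
    # backward sweep propagates the output weight yb in reverse order.
    v = x1 * x2            # forward sweep (v is kept for the backward sweep)
    x1b = yb               # adjoint of: y = sin(v) + x1
    vb = math.cos(v) * yb
    x1b += x2 * vb         # adjoint of: v = x1 * x2
    x2b = x1 * vb
    return x1b, x2b
```

One adjoint run returns the whole gradient, whereas tangent mode needs one run per input direction.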

Another research issue is to make the AD model cope with the constant evolution of modern language constructs. From the old days of Fortran77, novelties include pointers and dynamic allocation, modularity, structured data types, objects, vectorial notation and parallel programming. We keep developing our models and tools to handle these new constructs.

The most obvious example of a program transformation tool is certainly a compiler. Other examples are program translators, which go from one language or formalism to another, or optimizers, which transform a program to make it run better. AD is just one such transformation. These tools share the technological basis that lets them implement the sophisticated analyses [15] required. In particular, there are common mathematical models to specify these analyses and study their properties.

An important principle is abstraction: the core of a compiler
should not bother about syntactic details of the compiled program.
The optimization and code generation phases must be independent
from the particular input programming language. This is generally achieved
using language-specific front-ends, language-independent middle-ends,
and target-specific back-ends.
In the middle-end, analysis can concentrate on the semantics
of a reduced set of constructs. This analysis operates
on an abstract representation of programs made of one
call graph, whose nodes are themselves flow graphs whose
nodes (basic blocks) contain abstract syntax trees for the individual
atomic instructions.
To each level are attached symbol tables, nested to capture scoping.
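This layered representation can be sketched as follows (a hypothetical miniature IR with illustrative class names, not Tapenade's actual data structures):

```python
class SymbolTable:
    """Symbol tables are nested: lookup climbs toward enclosing scopes."""
    def __init__(self, parent=None):
        self.parent = parent
        self.symbols = {}
    def lookup(self, name):
        if name in self.symbols:
            return self.symbols[name]
        return self.parent.lookup(name) if self.parent else None

class BasicBlock:
    """A flow-graph node: a list of ASTs, one per atomic instruction."""
    def __init__(self, trees):
        self.trees = trees
        self.successors = []   # edges of the flow graph

class Procedure:
    """A call-graph node: a flow graph plus its own symbol table."""
    def __init__(self, name, entry, table):
        self.name = name
        self.entry = entry     # entry basic block of the flow graph
        self.table = table
        self.callees = []      # edges of the call graph
```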

Static program analysis can be defined on this internal representation,
which is largely language independent. The simplest analyses on trees can be
specified with inference rules [18, 25, 16].
But many data-flow analyses are more complex, and better defined on graphs than on trees.
Since both call graphs and flow graphs may be cyclic, these global analyses will be solved iteratively.
Abstract Interpretation [19] is a theoretical framework to
study complexity and termination of these analyses.
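A minimal sketch of such an iterative solution, here a backward liveness analysis driven by a worklist until a fixed point is reached (block names and use/def sets are made up for illustration):

```python
def liveness(succ, use, defs):
    """Backward data-flow analysis: which variables are live on entry
    to each basic block. Terminates even on cyclic flow graphs because
    the IN sets grow monotonically within finite variable sets."""
    blocks = list(succ)
    live_in = {b: set() for b in blocks}
    worklist = list(blocks)
    while worklist:
        b = worklist.pop()
        # OUT(b) = union of IN(s) over successors s of b
        out = set().union(*[live_in[s] for s in succ[b]])
        # IN(b) = use(b) | (OUT(b) - def(b))
        new_in = use[b] | (out - defs[b])
        if new_in != live_in[b]:
            live_in[b] = new_in
            # predecessors of b depend on IN(b): re-examine them
            worklist.extend(p for p in blocks if b in succ[p])
    return live_in
```

The cyclic graph B1 → B2 → {B1, B3} below exercises the fixed-point iteration.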

Data flow analyses must be carefully designed to avoid or control
combinatorial explosion. At the call graph level, they can run bottom-up or top-down,
and they yield more accurate results when they take into account the different
call sites of each procedure, which is called context sensitivity.
At the flow graph level, they can run forwards or backwards, and
yield more accurate results when they take into account only the possible
execution flows resulting from possible control, which is called flow sensitivity.

Even then, data-flow analyses are limited, because they are static and thus have very
little knowledge of actual run-time values. Well before reaching the theoretical limit of
undecidability, one reaches practical limitations on how much information can be inferred
from programs that use arrays [32, 20] or pointers.
Therefore, conservative over-approximations must be made, leading to
derivative code less efficient than the ideal.

Scientific Computing provides reliable simulations
of complex systems. For example, it is possible to simulate
the steady or unsteady 3D air flow around a plane, capturing the physical phenomena
of shocks and turbulence. Next comes optimization,
one degree higher in complexity because it repeatedly simulates and
applies gradient-based optimization steps until an optimum is reached.
The next sophistication is robustness, which detects undesirable solutions that,
although possibly optimal, are very sensitive to uncertainty on design parameters or
on manufacturing tolerances. This brings second derivatives into play.
Similarly, Uncertainty Quantification can use second derivatives to evaluate how uncertainty on
the simulation inputs implies uncertainty on its outputs.
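One generic way to obtain these second derivatives is to apply first-order (tangent) differentiation twice. A sketch with nested dual numbers (a textbook construction, not Tapenade's source-transformation approach):

```python
class Dual:
    """Value plus directional derivative; supports + and * only."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        # product rule carried by the dot component
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def second_derivative(f, x):
    # Nest duals: the outer dot tracks the derivative of the inner dot.
    nested = Dual(Dual(x, 1.0), Dual(1.0, 0.0))
    return f(nested).dot.dot
```

For f(x) = x³ this yields f''(x) = 6x by pure arithmetic on nested duals.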

To obtain this gradient and possibly higher derivatives,
we advocate adjoint AD (cf. 3.1)
of the program that discretizes and solves the direct system.
This gives the exact gradient of the discrete function
computed by the program, which is quicker and more sound than differentiating
the original mathematical equations [22].
Theoretical results [21] guarantee convergence
of these derivatives when the direct program converges.
This approach is highly mechanizable. However, it requires
careful study and special developments of the AD model [26, 30]
to master possibly heavy memory usage.
Among these additional developments, we promote in particular
specialized AD models for Fixed-Point iterations [23, 17],
efficient adjoints for linear algebra operators such as solvers, and exploitation
of parallel properties of the adjoint code.
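For instance, the adjoint of a linear solve x = A⁻¹b need not differentiate the solver's internals: it only requires a solve with the transposed matrix and a rank-one update. A minimal 2×2 sketch (toy Cramer-rule solver, illustrative names):

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 system A x = b (toy solver)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def solve2_adjoint(A, x, xb):
    """Given x = solve2(A, b) and the output adjoint xb, return (Ab, bb)
    using the black-box rules bb = A^{-T} xb and Ab = -bb x^T."""
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # transpose of A
    bb = solve2(At, xb)
    Ab = [[-bb[i] * x[j] for j in range(2)] for i in range(2)]
    return Ab, bb
```

The duality test below checks ⟨xb, dx⟩ = ⟨bb, db⟩, which holds exactly here because x is linear in b.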

Algorithmic Differentiation of programs gives sensitivities or gradients, useful for instance for:

A CFD program computes the flow around a shape, starting from a number of inputs that define the shape and other parameters. On this flow one can define optimization criteria, e.g. the lift of an aircraft. To optimize a criterion by gradient descent, one needs the gradient of the criterion with respect to all inputs, and possibly additional gradients when there are constraints. Adjoint AD is the most efficient way to compute these gradients.

Inverse problems aim at estimating the value of hidden parameters from other measurable values that depend on the hidden parameters through a system of equations. For example, the hidden parameter might be the shape of the ocean floor, and the measurable values the altitude and velocities of the ocean surface. Figure 1 shows an example of an inverse problem using the glaciology code ALIF (a pure C version of ISSM [27]) and its AD-adjoint produced by Tapenade.

One particular case of inverse problems is data assimilation [28]
in weather forecasting or in oceanography.
The quality of the initial state of the simulation determines the quality of the
prediction. But this initial state is not well known; only some
measurements at arbitrary places and times are available.
A good initial state is found by solving a least-squares problem
between the measurements and a guessed initial state, which itself must satisfy the
equations of meteorology. This boils down to solving an adjoint problem,
which can be done through AD [31].
The special case of 4Dvar data assimilation is particularly challenging.
The 4th dimension in “4D” is time, as available measurements are distributed
over a given assimilation period. Therefore the least squares mechanism must be
applied to a simulation over time that follows the time evolution model.
This process gives a much better estimation of the initial state, because
both position and time of measurements are taken into account.
On the other hand, the adjoint problem involved is more complex,
because it must run (backwards) over many time steps.
This demanding application of AD justifies our efforts in
reducing the runtime and memory costs of AD adjoint codes.
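The backward-in-time adjoint sweep of 4Dvar can be sketched on a toy scalar model (the model x_{t+1} = a·x_t, observations, and coefficients are made up for illustration):

```python
def cost(x0, a, obs):
    """Least-squares misfit summed over the assimilation window."""
    x, J = x0, 0.0
    for y in obs:
        J += 0.5 * (x - y) ** 2
        x = a * x          # one step of the toy time-evolution model
    return J

def cost_adjoint(x0, a, obs):
    """Gradient of the cost w.r.t. the initial state, via one adjoint
    sweep that runs backwards over the time steps."""
    # forward sweep: store the trajectory, needed in reverse order
    traj = [x0]
    for _ in obs[:-1]:
        traj.append(a * traj[-1])
    # backward sweep: accumulate the adjoint of the initial state
    xb = 0.0
    for x, y in zip(reversed(traj), reversed(obs)):
        xb = a * xb + (x - y)
    return xb
```

The test compares the adjoint gradient to a central finite difference, which is exact here because the cost is quadratic in x0.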

Simulating a complex system often requires solving a system of Partial Differential Equations.
This can be too expensive, in particular for real-time simulations.
When one wants to simulate the reaction of this complex system to small perturbations around a fixed
set of parameters, there is an efficient approximation: just suppose that the system
is linear in a small neighborhood of the current set of parameters. The reaction of the system
is thus approximated by a simple product of the variation of the parameters with the
Jacobian matrix of the system. This Jacobian matrix can be obtained by AD.
This is especially cheap when the Jacobian matrix is sparse.
The simulation can be improved further by introducing higher-order derivatives, yielding Taylor
expansions, which can also be computed through AD.
The result is often called a reduced model.
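A minimal sketch of such a linearized reduced model, on a made-up two-parameter system with a hand-coded Jacobian standing in for the AD-produced one:

```python
def system(p):
    # toy nonlinear system (illustrative only)
    return [p[0] * p[1], p[0] + p[1] ** 2]

def jacobian(p):
    # derivatives of the toy system, as tangent AD would produce them
    return [[p[1], p[0]],
            [1.0, 2.0 * p[1]]]

def reduced_model(p0, dp):
    """Response to a small perturbation dp around p0, approximated by
    system(p0) + J(p0) @ dp instead of a full re-simulation."""
    J = jacobian(p0)
    y0 = system(p0)
    return [y0[i] + sum(J[i][j] * dp[j] for j in range(2))
            for i in range(2)]
```

The approximation error is second order in the perturbation, as the test checks.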

Some approximation errors can be expressed by an adjoint state. Mesh adaptation can benefit from this. The classical optimization step can give an optimization direction not only for the control parameters, but also for the approximation parameters, and in particular the mesh geometry. The ultimate goal is to obtain optimal control parameters up to a precision prescribed in advance.

Our research has an impact on environmental research: in Earth sciences, our gradients are used in inverse problems, to determine key properties in oceanography, glaciology, or climate models. For instance they determine basal friction coefficients of glaciers that are necessary to simulate their future evolution. Another example is to locate sources and sinks of CO2 by coupling atmospheric models and remote measurements.

Here we describe new or upgraded software.

Tapenade implements the results of our research about models and static analyses for AD. Tapenade can be downloaded and installed on most architectures. Alternatively, it can be used as a web server. Higher-order derivatives can be obtained through repeated application.

Tapenade performs sophisticated data-flow analysis, flow-sensitive and context-sensitive, on the complete source program to produce an efficient differentiated code. Analyses include Type-Checking, Read-Write analysis, and Pointer analysis. AD-specific analyses include the so-called Activity analysis, Adjoint Liveness analysis, and TBR analysis.

For applications that are parallelized for multi-core CPUs or GPUs using OpenMP, it is desirable to also compute the gradients in parallel. We extended the AD model of Tapenade (source transformation, association by address, storage on tape of intermediate values) towards correct and efficient differentiation of OpenMP parallel worksharing loops, one of the most commonly used OpenMP features, in tangent-linear and adjoint mode. This work was published in ACM TOMS [12].

The major issue raised by the adjoint mode is the transformation of variable reads into adjoint variable overwrites, or more accurately into increments. While there is no parallel conflict between two reads, two concurrent increments can cause a data race, unless they are both atomic. Classical automated detection of independence is, as always, limited. We propose to gather information about the memory access patterns of the original code, which is assumed correct and therefore free of data races, and to feed this information to a theorem prover with which we check the safety of the shared memory accesses of the adjoint code.
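The core observation can be sketched with a deliberately simple check (illustrative only; the actual approach delegates such reasoning to a theorem prover over symbolic access patterns):

```python
def adjoint_increments_are_race_free(ind):
    """ind[i] = array cell read by iteration i of the original parallel loop.

    In the adjoint, each such read becomes an increment of the adjoint
    array at the same cell. Two iterations conflict exactly when they
    touch the same cell, so an injective read pattern in the original
    loop guarantees race-free increments in the adjoint loop.
    """
    return len(set(ind)) == len(ind)
```

When the check fails, the conflicting increments must be made atomic.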

A poster was accepted for presentation at PPoPP'22. An article is in preparation.

In collaboration with ONERA, we study the extension of Tapenade to the parallel constructs of CUDA. The industrial objective is to include adjoint AD natively into the successor of ONERA's “Elsa” CFD platform. Our research objective is to extend our adjoint AD model to CUDA code. While this extension bears similarity with our work on OpenMP, several specific aspects of CUDA deserve specific treatment. For instance, the stack storage mechanism central to our adjoint AD model must be redesigned to take into account the memory limitations of GPU code sections.

This year, we delivered to ONERA a version of Tapenade that can parse and regenerate CUDA C source, and that can differentiate in tangent mode a few of the test cases provided by ONERA.

A joint article is in preparation about the general architecture of ONERA's new CFD solver, which includes a section on integrating AD into the compilation chain and on the needed adaptations of Tapenade.

Data-Flow reversal is at the heart of our model of Source-Transformation Adjoint Algorithmic Differentiation (Adjoint ST-AD). The presence of addresses, pointers, and pointer arithmetic in the target language poses challenges to Data-Flow reversal, which we can deal with in languages such as C. However, when the target language uses Garbage Collection (GC), for instance Java or Python, the notion of address on which Data-Flow reversal relies disappears. Moreover, GC is asynchronous and does not appear explicitly in the source. We studied an extension of the model of Adjoint ST-AD suitable for a language with GC. We validated this approach on a Java implementation of a simple Navier-Stokes solver. We compared performance with alternative models of Adjoint AD, such as Overloading-based AD (e.g. ADOL-C), which by design can handle GC more easily.

An article has been written and is currently under review with ACM TOMS. A research report has been published [14].

We support users with their first experiments in Algorithmic Differentiation of large codes. Two codes were concerned this year.

One application was carried out by Shreyas Gaikwad, University of Texas at Austin, a PhD student supervised by Patrick Heimbach. His goal is to produce the adjoint of glaciology codes such as SICOPOLIS and the glaciology component of the MIT GCM. Both are Fortran 90 codes that had previously been differentiated with OpenAD, the former AD tool developed by Argonne National Lab. Krishna Narayanan provided crucial help and expertise, as he had been in charge of the previous differentiation with OpenAD. Indeed, differentiation with Tapenade did not uncover bugs in the tool itself; it rather underlined interface and documentation difficulties in applying the special recommended strategies, e.g. for adjoint AD of linear solvers.

The other code this year is CTFEM, an element of the code suite developed by the CASTOR team of INRIA and University of Nice for the ITER project. CTFEM is a plasma simulation code written in Fortran 90. Together with Hervé Guillard, we applied Tapenade to produce the adjoint code of CTFEM. In addition to several Tapenade bugs, now fixed, we encountered two more interesting issues. One is that CTFEM introduces memory aliasing at a few locations. Classically, memory aliasing is adverse to adjoint AD and should be avoided. In general, the AD tool emits a warning message when potential aliasing is detected; it unfortunately failed to do so in a few particular cases. The other issue is about array notation: Fortran 90 actual parameters that are arrays are in principle passed by reference, allowing the called procedure to modify the actual parameter. This is also true for array sections, i.e. actual parameters of the form T(0:10:2). On the other hand, an array section that uses an indirection, such as T(ind(0:10:2)), appears to be passed by value, although we found no literature on the subject. Tapenade now points out this adverse situation, which can be solved by a simple local code rewrite.

The progress in highly accurate schemes for compressible flows on unstructured meshes, together with advances in massive parallelization of these schemes, makes it possible to solve problems previously out of reach. The four-year programme Norma, associating:

is supported by the French ANR and by the Russian Science Foundation. Norma is a cooperation on the extension of Computational AeroAcoustics methods to the simulation of the noise emitted by rotating machines (helicopters, future aerial taxis, unmanned aerial vehicles, wind turbines...). The tasks of INRIA in this Russian-French cooperation programme are:

Among this year's results:

Modeling turbulence is an essential aspect of CFD. The purpose of our work in hybrid RANS/LES (Reynolds-Averaged Navier-Stokes / Large Eddy Simulation) is to develop new approaches for industrial applications of LES-based analyses. In the applications targeted (aeronautics, hydraulics), the Reynolds number can be as high as several tens of millions, far too high for pure LES models. However, certain regions in the flow can be predicted better with LES than with usual statistical RANS models. These are mainly vortical separated regions, as assumed in one of the most popular hybrid models, the Detached Eddy Simulation (DES) model. Here, “hybrid” means that a blending is applied between LES and RANS. An important difference between a real-life flow and a wind tunnel or basin is that the turbulence of the flow upstream of each body is not well known.

The development of hybrid models, in particular DES, has raised in the literature the question of the domain of validity of these models. According to theory, these models should not be applied to flows involving laminar boundary layers (BL). But industrial flows are complex and often present regions of laminar BL, regions of fully developed turbulent BL, and regions of non-equilibrium vortical BL. It is then mandatory for industrial use that the new hybrid models give a reasonable prediction for all these types of flow. We concentrated on evaluating the behavior of hybrid models for laminar BL and for vortical wakes. While less predictive than pure LES on laminar BL, some hybrid models still give reasonable predictions for rather low Reynolds numbers.

During the first phase of Norma, Montpellier and Moscow are computing a series of initial test cases in order to check the consistency of the results produced by the two CFD platforms, namely Noisette for Moscow and Aironum for Montpellier.

A communication was presented in a seminar by Florian Miralles [29].

The physical problem addressed by Norma involves a computational domain made of at least two components having different rotative motions. The numerical problem of their combination gave birth to many specialized schemes, such as the so-called sliding method, the chimera method, and the immersed boundary method (IBM). In coordination with Moscow, Montpellier is introducing a novel IBM in the CFD code Aironum. The Ecuador team is studying, in cooperation with Lemma engineering (Sophia Antipolis), a novel sliding/chimera method.

After many decades of research, which we summarized in [11], approximations on unstructured meshes have become common practice in CFD. High-order approximations for compressible flows on unstructured meshes face many constraints that increase their complexity, i.e. their computational cost. This is especially clear for the largest class of approximations.
Reducing approximation errors as much as possible by changing the mesh is a particular kind of optimal control problem. We formulate it exactly this way when we look for the optimal metric of the mesh, which minimizes a user-specified functional (goal-oriented mesh adaptation). In that case, the usual methods of optimal control apply, using adjoint states that can be produced by Algorithmic Differentiation.

This year, we extended mesh adaptation methods to the simulation of rotating machines, a special case of unsteady periodic flow. We also continued our new analysis for h-p anisotropic mesh adaptation.

The team has a contract with ONERA to assist in the Algorithmic Differentiation of ONERA's new CFD platform “SoNICS”. The main objective is adjoint AD of the CUDA parts of the SoNICS source. This contract will finance the work of a development engineer for two years. The contract also includes support to the ONERA development team on advanced use of AD, e.g., fixed-point adjoints and binomial checkpointing.
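Checkpointing, mentioned here, trades recomputation for memory during the adjoint's backward sweep. A simplified sketch with uniformly spaced checkpoints (the toy step function is ours, and the uniform schedule stands in for the binomial one à la Revolve):

```python
def step(x):
    return x + 0.1 * x * x           # toy time step

def step_adjoint(x, xb):
    return (1.0 + 0.2 * x) * xb      # adjoint of one toy step

def adjoint_full_tape(x0, nsteps):
    """Reference: store every intermediate state (memory grows with nsteps)."""
    tape, x = [], x0
    for _ in range(nsteps):
        tape.append(x)
        x = step(x)
    xb = 1.0                         # seed: gradient of the final state
    for x in reversed(tape):
        xb = step_adjoint(x, xb)
    return xb

def adjoint_with_checkpoints(x0, nsteps, every):
    """Store the state only every `every` steps; recompute the rest."""
    ckpts, x = {0: x0}, x0
    for t in range(nsteps):
        x = step(x)
        if (t + 1) % every == 0:
            ckpts[t + 1] = x
    xb = 1.0
    for t in reversed(range(nsteps)):
        base = (t // every) * every
        x = ckpts[base]
        for _ in range(t - base):    # recompute the state before step t
            x = step(x)
        xb = step_adjoint(x, xb)
    return xb
```

Peak storage drops from nsteps states to roughly nsteps/every checkpoints, at the price of recomputing each segment once.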

Ecuador participates in the Joint Laboratory for Exascale Computing (JLESC) together with colleagues at Argonne National Laboratory.

We gave a short presentation at the last JLESC meeting, about research on Adjoint AD of OpenMP. This is a joint work with Jan Hueckelheim, Argonne National Lab.

Together with the MCS division of Argonne National Lab, the Ecuador team co-organized a two-day tutorial, “Automatic Differentiation as a Tool for Computational Science”, at the SIAM CSE conference, March 1-5. Laurent Hascoët gave three presentations there.

Laurent Hascoët is on the organizing committee of the EuroAD Workshops on Algorithmic Differentiation (www.autodiff.org), taking place twice a year, with exceptions. Organization rotates between RWTH Aachen, University of Jena, Humboldt University Berlin, INRIA Sophia-Antipolis, and Argonne National Lab. The 24th EuroAD workshop was organized by the Ecuador team, remotely, on December 14-16.

Laurent Hascoët was on the program committee of the “Differentiable Programming Workshop” at NeurIPS2021, December 6-14.