Section: Research Program

Algorithmic Differentiation

Participants : Laurent Hascoët, Valérie Pascual, Ala Taftaf.


Glossary
algorithmic differentiation

(AD, aka Automatic Differentiation) Transformation of a program that returns a new program computing derivatives of the initial program, i.e. some combination of the partial derivatives of the program's outputs with respect to its inputs.

adjoint

Mathematical manipulation of the Partial Differential Equations that define a problem, obtaining new differential equations that define the gradient of the original problem's solution.

checkpointing

General trade-off technique, used in adjoint AD, that accepts duplicate execution of a part of the program in exchange for the memory space that would otherwise be needed to store intermediate results.


Algorithmic Differentiation (AD) differentiates programs. The input of AD is a source program P that, given some $X \in \mathbb{R}^n$, returns some $Y = F(X) \in \mathbb{R}^m$, for a differentiable F. AD generates a new source program P' that, given X, computes some derivatives of F [6].

Any execution of P amounts to a sequence of instructions, which is identified with a composition of vector functions. Thus, if

P runs $\{I_1; I_2; \cdots I_p;\}$, then F is $f_p \circ f_{p-1} \circ \cdots \circ f_1$,   (1)

where each $f_k$ is the elementary function implemented by instruction $I_k$. AD applies the chain rule to obtain derivatives of F. Calling $X_k$ the values of all variables after instruction $I_k$, i.e. $X_0 = X$ and $X_k = f_k(X_{k-1})$, the Jacobian of F is

$F'(X) = f_p'(X_{p-1}) \cdot f_{p-1}'(X_{p-2}) \cdot \cdots \cdot f_1'(X_0)$   (2)

which can be mechanically written as a sequence of instructions $I_k'$. This can be generalized to higher-order derivatives, Taylor series, etc. Combining the $I_k'$ with the control of P yields P', and therefore this differentiation is piecewise, i.e. valid for the control flow actually taken by the given run.
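
To make equations (1) and (2) concrete, here is a minimal sketch in Python with NumPy (purely illustrative, not the output of any AD tool): a two-instruction program over the state vector (x1, x2, a, y), its elementary Jacobians, and their product as in (2).

    import numpy as np

    # Tiny program P over the state vector s = (x1, x2, a, y):
    #   I1:  a = x1 * x2
    #   I2:  y = sin(a)
    def f1(s):
        x1, x2, a, y = s
        return np.array([x1, x2, x1 * x2, y])

    def f2(s):
        x1, x2, a, y = s
        return np.array([x1, x2, a, np.sin(a)])

    def J1(s):  # elementary Jacobian f1'(s)
        x1, x2, a, y = s
        return np.array([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [ x2,  x1, 0.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

    def J2(s):  # elementary Jacobian f2'(s)
        x1, x2, a, y = s
        return np.array([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, np.cos(a), 0.0]])

    X0 = np.array([2.0, 3.0, 0.0, 0.0])   # initial state, inputs x1=2, x2=3
    X1 = f1(X0)                            # state after I1
    X2 = f2(X1)                            # state after I2

    # Equation (2): F'(X) = f2'(X1) . f1'(X0)
    F_prime = J2(X1) @ J1(X0)
    print(F_prime[3, :2])                  # [3*cos(6), 2*cos(6)] = (dy/dx1, dy/dx2)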

In practice, many applications only need cheaper projections of F'(X) such as:

  • Sensitivities, defined for a given direction $\dot{X}$ in the input space as:

    $F'(X) \cdot \dot{X} = f_p'(X_{p-1}) \cdot f_{p-1}'(X_{p-2}) \cdot \cdots \cdot f_1'(X_0) \cdot \dot{X}$.   (3)

    This expression is easily computed from right to left, interleaved with the original program instructions. This is the tangent mode of AD (see the tangent-mode sketch after this list).

  • Adjoints, defined after transposition ($F'^{*}$), for a given weighting $\overline{Y}$ of the outputs as:

    $F'^{*}(X) \cdot \overline{Y} = f_1'^{*}(X_0) \cdot f_2'^{*}(X_1) \cdot \cdots \cdot f_{p-1}'^{*}(X_{p-2}) \cdot f_p'^{*}(X_{p-1}) \cdot \overline{Y}$.   (4)

    This expression is most efficiently computed from right to left, because matrix×vector products are cheaper than matrix×matrix products. This is the adjoint mode of AD, most effective for optimization, data assimilation [32], adjoint problems [27], or inverse problems (see the adjoint sketch below).
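
As an illustration of the tangent mode on the same tiny example y = sin(x1*x2) (a Python sketch under the same assumptions as above, not the output of an AD tool), each original instruction is preceded by the instruction that propagates the directional derivative:

    import numpy as np

    # Tangent mode: the "dot" variables carry F'(X).Xdot, computed in the
    # same order as the original instructions and interleaved with them.
    def P_tangent(x1, x2, x1d, x2d):
        ad = x1d * x2 + x1 * x2d    # I1': derivative of a = x1*x2
        a = x1 * x2                 # I1
        yd = np.cos(a) * ad         # I2': derivative of y = sin(a)
        y = np.sin(a)               # I2
        return y, yd                # F(X) and F'(X).Xdot

    # Directional derivative along Xdot = (1, 0), i.e. dy/dx1 at (2, 3):
    print(P_tangent(2.0, 3.0, 1.0, 0.0))   # (sin(6), 3*cos(6))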

Adjoint AD builds a very efficient program [29]. This adjoint program computes the gradient in a time that is independent of the number of parameters n and is only a small multiple of the run-time of P. In contrast, computing the same gradient with the tangent mode would require running the tangent differentiated program n times.
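
A corresponding adjoint-mode sketch (same assumptions, same tiny example) shows why one reverse sweep returns the gradient with respect to all inputs at once:

    import numpy as np

    # Adjoint mode for y = sin(x1*x2): a forward sweep runs the original
    # instructions, then a backward sweep propagates the output weight ybar
    # through the transposed elementary Jacobians, in reverse instruction order.
    def P_adjoint(x1, x2, ybar=1.0):
        a = x1 * x2                 # forward sweep (original program)
        y = np.sin(a)
        abar = np.cos(a) * ybar     # adjoint of I2: y = sin(a)
        x1bar = x2 * abar           # adjoint of I1: a = x1*x2
        x2bar = x1 * abar
        return y, (x1bar, x2bar)    # F(X) and the full gradient, in one sweep

    print(P_adjoint(2.0, 3.0))      # (sin(6), (3*cos(6), 2*cos(6)))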

However, the $X_k$ are required in reverse order of their computation. If the original program overwrites a part of $X_k$, the differentiated program must restore $X_k$ before it is used by $f_{k+1}'^{*}(X_k)$. Therefore, the central research problem of adjoint AD is to make the $X_k$ available in reverse order at the cheapest cost, using strategies that combine storage, repeated forward computation from available previous values, or even inverted computation from available later values.
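
The sketch below (again a hypothetical Python illustration, not the strategy of any particular tool) shows the simplest of these strategies, storage: the forward sweep pushes every value about to be overwritten onto a stack, and the backward sweep pops them back in reverse order. Checkpointing, as defined in the glossary, trades part of this storage for a re-execution of the corresponding program piece.

    import numpy as np

    # "Store on overwrite" strategy for the program  x = sin(x); x = x*x
    # (so F(x) = sin(x)**2). Each overwritten value of x is pushed during
    # the forward sweep and popped, in reverse order, during the backward sweep.
    def P_adjoint_with_stack(x, ybar=1.0):
        stack = []
        stack.append(x); x = np.sin(x)    # I1 overwrites x; old x needed by cos(x)
        stack.append(x); x = x * x        # I2 overwrites x; old x needed by 2*x
        x = stack.pop()
        xbar = 2 * x * ybar               # adjoint of I2: x = x*x
        x = stack.pop()
        xbar = np.cos(x) * xbar           # adjoint of I1: x = sin(x)
        return xbar                       # d(sin(x)**2)/dx at the initial x

    print(P_adjoint_with_stack(2.0))      # 2*sin(2)*cos(2)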

Another research issue is to make the AD model cope with the constant evolution of modern language constructs. Compared with the old days of Fortran77, novelties include pointers and dynamic allocation, modularity, structured data types, objects, vector notation, and parallel programming. We keep developing our models and tools to handle these new constructs.