Section: New Results
Certified computing and computer algebra
Polynomial system solving
Polynomial system solving is a core topic of computer algebra. While the worst-case complexity of this problem is known to be hopelessly large, the practical complexity for large families of systems is much more reasonable. Progress has been made on precise complexity estimates in this area.
First, M. Bardet (U. Rouen), J.-C. Faugère (PolSys team), and B. Salvy studied the complexity of Gröbner bases computations, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. They gave a bound on the number of polynomials of each degree in a Gröbner basis computed by Faugère’s F5 algorithm in this generic case for the grevlex ordering (which is also a bound on the number of polynomials for a reduced Gröbner basis), and used it to bound the exponent of the complexity of the F5 algorithm [35].
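As background for how such estimates are used: for a regular (generic) sequence, the degrees reached during a grevlex Gröbner basis computation are controlled by the classical Macaulay bound, and the dominant cost of matrix-based algorithms in the F5 family is the reduction of matrices indexed by the monomials up to that degree. A small illustrative computation of these two quantities (the function names are ours, and this is the standard estimate, not the refined bound of [35]):

```python
from math import comb

def macaulay_bound(degrees):
    """Macaulay bound sum(d_i - 1) + 1: for a regular sequence, this
    bounds the largest degree reached in a grevlex Groebner basis
    computation."""
    return sum(d - 1 for d in degrees) + 1

def macaulay_matrix_size(nvars, degrees):
    """Number of monomials of degree at most the Macaulay bound in
    nvars variables: the dimension of the largest matrix an F5-style
    algorithm may have to reduce."""
    d = macaulay_bound(degrees)
    return comb(nvars + d, d)

# Eight generic quadrics in eight variables: degree bound 9,
# matrices of dimension up to C(17, 9) = 24310.
```

The overall complexity estimate is then a power (the matrix-multiplication exponent) of this matrix dimension, which is what makes bounding the exponent delicate.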
Next, a fundamental problem in computer science is to find all the common zeroes of a system of multivariate polynomials over a finite field. For quadratic equations over the field with two elements, M. Bardet, J.-C. Faugère, B. Salvy, and P.-J. Spaenlehauer gave algorithms whose complexity is exponentially smaller than that of exhaustive search.
Linear differential equations
Creative telescoping algorithms compute linear differential equations satisfied by multiple integrals with parameters. Together with A. Bostan and P. Lairez (SpecFun team), B. Salvy described a precise and elementary algorithmic version of the Griffiths–Dwork method for the creative telescoping of rational functions. This leads to bounds on the order and degree of the coefficients of the differential equation, and to the first complexity result which is simply exponential in the number of variables. One of the important features of the algorithm is that it does not need to compute certificates. The approach is vindicated by a prototype implementation [15].
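To make the notion concrete, here is a textbook instance of what creative telescoping produces (this example is ours, not taken from [15]): for the rational integrand 1/(1 − t·cos θ), the parameterized integral over one period satisfies the first-order differential equation (1 − t²)·I′(t) − t·I(t) = 0, which we verify numerically.

```python
import math

def I(t, n=4000):
    """Midpoint-rule value of the parameterized integral
    I(t) = integral over [0, 2*pi] of d(theta)/(1 - t*cos(theta)),
    whose closed form is 2*pi/sqrt(1 - t**2)."""
    h = 2 * math.pi / n
    return h * sum(1.0 / (1.0 - t * math.cos((k + 0.5) * h)) for k in range(n))

# A telescoper for this integrand is the operator (1 - t**2)*d/dt - t:
# check (1 - t**2)*I'(t) - t*I(t) = 0 with a central difference.
t, h = 0.5, 1e-4
dI = (I(t + h) - I(t - h)) / (2 * h)
residual = (1 - t**2) * dI - t * I(t)
```

The point of creative telescoping is to find such operators symbolically, for integrands in many variables, without the numerical quadrature used here for checking.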
In [2], B. Salvy proved with A. Bostan (SpecFun team) and K. Raschel (U. Tours) that the sequence counting the excursions in the quarter plane (walks that start and end at the origin while remaining in the quarter plane) is not holonomic for the step sets whose associated group is infinite: it satisfies no linear recurrence with polynomial coefficients.
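The sequences studied in [2] count walks confined to the quarter plane. For illustration, the number of excursions (walks from the origin back to the origin that stay in the quadrant) can be computed by a simple dynamic program over reachable positions; the simple step set used below actually yields a holonomic sequence and is chosen only to keep the example small:

```python
def excursions(length, steps=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Number of quarter-plane excursions: walks of the given length
    that start and end at (0, 0), use the given steps, and never leave
    the quadrant x >= 0, y >= 0."""
    counts = {(0, 0): 1}  # counts[(x, y)] = walks of current length ending there
    for _ in range(length):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:
                    new[nx, ny] = new.get((nx, ny), 0) + c
        counts = new
    return counts.get((0, 0), 0)
```

For this step set the values 1, 2, 10, 70, … at even lengths follow the classical product-of-Catalan-numbers formula; the non-holonomic models of [2] have no such closed form.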
With F. Johansson and M. Kauers (RISC, Linz, Austria), M. Mezzarobba presented in [23] a new algorithm for computing hyperexponential solutions of ordinary linear differential equations with polynomial coefficients. The algorithm relies on interpreting formal series solutions at the singular points as analytic functions and evaluating them numerically at some common ordinary point. The numerical data is used to determine a small number of combinations of the formal series that may give rise to hyperexponential solutions.
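A toy version of the underlying principle — evaluate truncated formal series solutions numerically at a common ordinary point, then use the numerical values to identify the combination that is hyperexponential — for the equation y″ = y, whose series basis at 0 is cosh and sinh and whose hyperexponential solutions include exp. Everything here is illustrative and much simpler than the algorithm of [23]:

```python
import math

# Truncated series solutions of y'' = y at the ordinary point 0:
# y1 = sum x^(2k)/(2k)!  (cosh),  y2 = sum x^(2k+1)/(2k+1)!  (sinh).
def y1(x, n=30):
    return sum(x**(2 * k) / math.factorial(2 * k) for k in range(n))

def y2(x, n=30):
    return sum(x**(2 * k + 1) / math.factorial(2 * k + 1) for k in range(n))

# Numerical values at the common ordinary point x0 = 1; the derivatives
# come for free since y1' = y2 and y2' = y1.
x0 = 1.0
v1, v2, e = y1(x0), y2(x0), math.exp(x0)

# Solve  a*y1 + b*y2 = exp  from values and derivatives at x0:
#   a*v1 + b*v2 = e   and   a*v2 + b*v1 = e
det = v1 * v1 - v2 * v2
a = (e * v1 - e * v2) / det
b = (e * v1 - e * v2) / det  # symmetric system: same formula here
```

The recovered combination is a = b = 1, i.e., cosh + sinh = exp; the actual algorithm does this at singular points, with certified numerics, for combinations that are not known in advance.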
Exact linear algebra
Transforming a matrix over a field to echelon form, or decomposing the matrix
as a product of simpler matrices that reveal the rank profile, is a fundamental building block of computational
exact linear algebra. For such tasks the best previously available algorithms were either rank sensitive
(i.e., of complexity expressed in terms of the exponent of matrix multiplication and the rank of the input matrix)
or in place (i.e., using essentially no more memory than what is needed for matrix multiplication).
In [6], C.-P. Jeannerod, C. Pernet, and A. Storjohann (U. Waterloo, Canada)
have proposed algorithms that are both rank sensitive and in place. These algorithms required the introduction of a matrix factorization
of the form A = CUP, with C in column echelon form, U upper triangular, and P a permutation matrix.
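For reference, here is the textbook (cubic-time, not in-place) computation that such factorizations accelerate: row reduction over the rationals, returning the rank and the pivot columns of the column rank profile. This sketch is ours and deliberately naive; the contribution of [6] is to obtain this information with fast matrix multiplication, rank sensitivity, and essentially no workspace.

```python
from fractions import Fraction

def column_rank_profile(A):
    """Row-reduce a copy of A over the rationals; return (rank, pivot
    column indices). Exact arithmetic via Fraction avoids any rounding."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for j in range(n):
        # find a pivot in column j at or below row r
        i = next((i for i in range(r, m) if M[i][j] != 0), None)
        if i is None:
            continue
        M[r], M[i] = M[i], M[r]
        for i2 in range(r + 1, m):
            f = M[i2][j] / M[r][j]
            for j2 in range(j, n):
                M[i2][j2] -= f * M[r][j2]
        pivots.append(j)
        r += 1
    return r, pivots
```

On [[1, 2, 3], [2, 4, 6], [0, 0, 1]] this returns rank 2 with pivot columns 0 and 2, the information an echelon or rank-profile-revealing decomposition encodes.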
Certified multiple-precision evaluation of the Airy Ai function
The series expansion at the origin of the Airy function Ai converges for every argument, but its direct use for positive arguments entails massive cancellation in floating-point arithmetic. M. Mezzarobba, with S. Chevillard (Apics team), showed how to bound the amount of cancellation in advance and used this bound to evaluate Ai with a certified error in multiple-precision arithmetic.
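Evaluating Ai(x) for positive x from its series at the origin is numerically delicate: in the classical decomposition Ai(x) = c₁f(x) − c₂g(x), both series grow like exp((2/3)x^{3/2}) while Ai(x) decays, so almost all digits cancel. A pure-Python sketch in double precision (our own illustration of the phenomenon, not the certified algorithm):

```python
import math

def airy_series_parts(x, nterms=80):
    """Double-precision partial sums of the two pieces of the classical
    decomposition Ai(x) = c1*f(x) - c2*g(x); returns (c1*f(x), c2*g(x))."""
    f = tf = 1.0
    g = tg = x
    for k in range(nterms):
        tf *= x**3 / ((3 * k + 2) * (3 * k + 3))
        f += tf
        tg *= x**3 / ((3 * k + 3) * (3 * k + 4))
        g += tg
    c1 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)  # Ai(0)
    c2 = 3.0 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)  # -Ai'(0)
    return c1 * f, c2 * g

p, q = airy_series_parts(8.0)
# p and q are both of order 1e5, while Ai(8) is about 5e-8: roughly 13
# digits cancel, so double precision retains only a couple of correct ones.
```

Bounding this cancellation a priori tells the evaluation algorithm how much extra working precision to use so that the final result carries a certified error bound.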
Standardization of interval arithmetic
The IEEE 1788 working group is devoted to the standardization of interval arithmetic. V. Lefèvre and N. Revol are very active in this group. This year is the last year granted by IEEE for the preparation of a draft text of the standard: 2014 will be devoted to a ballot on the whole text, first by the standardization working group and then by a group of experts appointed by IEEE. In 2013, the definitions of interval literals, of constructors, and of input and output were adopted. The work now concentrates on the remaining portions of the final text [42].
Parallel product of interval matrices
The problem considered here is the multiplication of two matrices with interval coefficients. The parallel implementations by N. Revol and Ph. Théveny [10] compute results that satisfy the inclusion property, which is the fundamental property of interval arithmetic, and offer good performance: the product of two interval matrices costs no more than 15 times the product of two floating-point matrices of the same dimensions.
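Efficient interval matrix products are typically built on the midpoint-radius representation, which reduces the interval product to a few floating-point matrix products. A minimal sketch of that enclosure formula (ours; a real implementation such as [10] must additionally bound the floating-point rounding of these products, e.g., via directed rounding):

```python
def interval_matmul(Am, Ar, Bm, Br):
    """Midpoint-radius product of interval matrices [Am +/- Ar]*[Bm +/- Br].
    Returns (Cm, Cr) such that, ignoring the rounding of this routine's own
    arithmetic, the exact product of any A in [Am +/- Ar] and B in
    [Bm +/- Br] lies in [Cm +/- Cr] (inclusion property)."""
    n, p, q = len(Am), len(Bm), len(Bm[0])
    Cm = [[sum(Am[i][k] * Bm[k][j] for k in range(p)) for j in range(q)]
          for i in range(n)]
    Cr = [[sum(abs(Am[i][k]) * Br[k][j]
               + Ar[i][k] * (abs(Bm[k][j]) + Br[k][j]) for k in range(p))
           for j in range(q)] for i in range(n)]
    return Cm, Cr
```

Each of the three sums maps to an ordinary floating-point matrix product, which is why the overhead over a plain product is a small constant factor.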
Numerical reproducibility
Numerical reproducibility is the problem of obtaining the same result when a scientific computation is run several times, either on the same machine or on different machines. In [43], the focus is on interval computations using floating-point arithmetic: N. Revol identifies implementation issues that may invalidate the inclusion property and presents several ways to preserve it. This work has also been presented at several conferences [30], [29], [31].
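The root cause of non-reproducibility is that floating-point addition is not associative, so any change in the order of operations (a different number of threads, a different reduction tree) can change the computed result. A minimal illustration:

```python
# The same three numbers summed in two different orders:
vals = [1.0, 1e16, -1e16]
s1 = sum(vals)            # 1.0 + 1e16 absorbs the 1.0, then it cancels: 0.0
s2 = sum(reversed(vals))  # the large terms cancel first, the 1.0 survives: 1.0
```

For interval computations the danger is sharper still: if such order-dependent rounding is not accounted for, a computed interval may fail to contain the exact result, violating the inclusion property.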