2023 Activity Report
Project-Team MATHEXP

RNSR: 202224256Z
  • Research center Inria Saclay Centre
  • Team name: Computer algebra, experimental mathematics, and interactions
  • Domain: Algorithmics, Programming, Software and Architecture
  • Theme: Algorithmics, Computer Algebra and Cryptology


Computer Science and Digital Science

  • A8.1. Discrete mathematics, combinatorics
  • A8.3. Geometry, Topology
  • A8.4. Computer Algebra
  • A8.5. Number theory

Other Research Topics and Application Domains

  • B9.5.2. Mathematics
  • B9.5.3. Physics

1 Team members, visitors, external collaborators

Research Scientists

  • Frédéric Chyzak [Team leader, INRIA, Senior Researcher, HDR]
  • Alin Bostan [INRIA, Senior Researcher, HDR]
  • Guy Fayolle [Inria, Emeritus]
  • Pierre Lairez [INRIA, Researcher]

Post-Doctoral Fellows

  • Ricardo Thomas Buring [INRIA, Post-Doctoral Fellow, from Sep 2023]
  • Claudia Fevola [INRIA, Post-Doctoral Fellow, from Nov 2023]
  • Catherine St-Pierre [INRIA, Post-Doctoral Fellow, from Jun 2023]

PhD Students

  • Hadrien Brochet [INRIA]
  • Alexandre Goyer [Ministère Education, from Sep 2023]
  • Alexandre Goyer [INRIA, until Aug 2023]
  • Alexandre Guillemot [INRIA, from Oct 2023]
  • Alaa Ibrahim [INRIA]
  • Pingchuan Ma [AMSS, from Jun 2023]
  • Hadrien Notarantonio [UNIV PARIS SACLAY]
  • Raphaël Pagès [UNIV BORDEAUX]
  • Eric Pichon-Pharabod [UNIV PARIS SACLAY]
  • Sergey Yurkevich [UNIV VIENNE, until Aug 2023]

Interns and Apprentices

  • Geoffrey Datchanamourtty [INRIA, Intern, from Feb 2023 until Jul 2023]
  • Alexandre Guillemot [INRIA, Intern, from Mar 2023 until Aug 2023]
  • Aleksandr Storozhenko [INRIA, Intern, from Jun 2023 until Sep 2023]

Administrative Assistant

  • Bahar Carabetta [INRIA]

Visiting Scientist

  • Catherine St-Pierre [INRIA, from Feb 2023 until Apr 2023]

External Collaborator

  • Philippe Dumas [retired]

2 Overall objectives

“Experimental mathematics” is the study of mathematical phenomena by computational means. “Computer algebra” is the art of doing effective and efficient exact mathematics on a computer. The MATHEXP team develops both themes in parallel, in order to discover and prove new mathematical results, often out of reach of classical human means. It is our strong belief that modern mathematics will benefit more and more from computer tools. We aim to provide mathematical users with appropriate algorithmic theories and implementations.

Besides the classification by mathematical and methodological axes to be presented in §3, MATHEXP's research falls into four interconnected categories, corresponding to four different ways to produce science. The raison d'être of the team is solving core questions that arise in the practice of experimental mathematics. Through the experimental mathematics approach, we aim at applications in diverse areas of mathematics and physics. Everything rests on computer algebra, in its symbolic and seminumerical aspects. Lastly, software development is a significant part of our activities, with the aim of enabling cutting-edge applications and disseminating our tools. Each of these four levels is reflected in the thematic axes of the research program.

2.1 Experimental mathematics

In science, observation and experiment play an important role in formulating hypotheses. In mathematics, this role is shadowed by the primacy of deductive proofs, which turn hypotheses into theorems, but it is no less important. The art of looking for patterns, of gathering computational evidence in support of mathematical assertions, lies at the heart of experimental mathematics, promoted by Euler, Gauss and Ramanujan. These prominent mathematicians spent much of their time doing computations in order to refine their intuitions and to explore new territories before inventing new theories. Computations led them to plausible conjectures, by an approach similar to the one used in the natural sciences. Nowadays, experimental mathematics has become a full-fledged field, with prominent promoters like Bailey and Borwein. In their words  55, experimental mathematics is “the methodology of doing mathematics that includes the use of computation for

  • gaining insight and intuition,
  • discovering new patterns and relationships,
  • using graphical displays to suggest underlying mathematical principles,
  • testing and especially falsifying conjectures,
  • exploring a possible result to see if it is worth formal proof,
  • suggesting approaches for formal proof,
  • replacing lengthy hand derivations with computer-based derivations,
  • confirming analytically derived results.”

2.2 Foundations of computer algebra

At a fundamental level, we manipulate several kinds of algebraic objects that are characteristic of computer algebra: arbitrary-precision numbers (big integers and big floating-point numbers, typically with tens of thousands of digits), polynomials, matrices, differential and recurrence operators. The first three items form the common ground of computer algebra  99. We benefit from years of research on them and from broadly used efficient software: general-purpose computer-algebra systems like Maple, Magma, Mathematica, Sage, Singular; and also special-purpose libraries like Arb, Fgb, Flint, Msolve, NTL. Current developments, whether software implementation, algorithm design or new complexity analyses, directly impact us. The fourth kind of algebraic objects, differential and recurrence operators, is more specific to our research and we concentrate our efforts on it. There, we try to understand the basic operations in terms of computational complexity. Complexity is also our guide when we recombine basic operations into elaborate algorithms. In the end, we want fast implementations of efficient algorithms.

Here are some of the typical questions we are interested in:

  • Do some of the solutions of a linear ordinary differential equation (ODE) satisfy a simpler ODE? This relates to the problem of factoring differential operators.
  • Is a given linear partial differential equation (PDE) a consequence of a set of other PDEs? This relates to the problem of computing Gröbner bases in a differential setting.
  • Given a solution f(x,y) of a system of linear PDEs, how to compute differential equations for f(x,0) or ∫_0^1 f(x,y) dy? This falls into the realm of symbolic integration questions.
  • Given a linear ODE with initial condition at 0, how to evaluate numerically the unique solution at 1 with thousands of digits of precision? This is the gist of our seminumerical methods.
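The last item can be illustrated by a minimal sketch in Python with the mpmath library; the equation y′ = y, whose unique solution with y(0) = 1 is exp, is our placeholder example, not one of the team's actual computations:

```python
from mpmath import mp, odefun, exp, mpf

mp.dps = 60  # work with 60 significant decimal digits
# Toy instance: y' = y with y(0) = 1; the unique solution is exp(x).
# odefun integrates the ODE by a high-order Taylor method at the
# current working precision.
y = odefun(lambda x, y: y, 0, 1)
value = y(1)  # numerical value of the solution at the point 1
assert abs(value - exp(1)) < mpf(10) ** (-40)
```

Rigorous variants of this Taylor-method approach, with certified error bounds and much higher performance, are what the seminumerical methods of §3.5 are about.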

2.3 Applications

Getting involved in applications is both an objective and a methodology. The applications shape the tools that we design and foster their dissemination.

Combinatorics is a longstanding application of computer algebra, and conversely, computer algebra has a deep impact on the field. The study of random walks in lattices, first motivated by statistical physics and queueing theory, features prominent examples of experimental mathematics and computer-assisted proofs. Our main collaborators in combinatorics are Mireille Bousquet-Mélou (Université de Bordeaux), Stephen Melczer (University of Waterloo) and Kilian Raschel (Université d'Angers).

Probability theory. Apart from the already mentioned interest in random walks, which is a classical topic in probability theory, and on which we have an expert, Guy Fayolle, in our group, the main applications we have in mind are to integrals arising from: 2D fluctuation theory (generalizing arc-sine laws in 1D); moments of the quadrant occupation time for the planar Brownian motion; persistence probability theory (survival functions of first passage time for real stochastic processes); volumes of structured families of polytopes also arising in polyhedral geometry and combinatorics. Our main interactions on these topics are with Gerold Alsmeyer (U. Münster), Dan Betea (KU Leuven), and Thomas Simon (U. Lille).

Number theory, and especially Diophantine approximation, is another field with a long tradition of using computer algebra tools. For example, the recently discovered sequence of integrals

∫_{4-2i}^{4+2i} ((x-4+2i)^{4n} (x-4-2i)^{4n} (x-5)^{4n} (x-6+2i)^{4n} (x-6-2i)^{4n}) / (x^{6n+1} (x-10)^{6n+1}) dx,   n ≥ 0,

whose analysis leads to the best known measure of irrationality of π, can hardly be found by hand  152. Indeed, the discovery and the proof of such a result require sophisticated tools from experimental mathematics. Our main collaborators in number theory are Boris Adamczewski (Université Lyon 1), Xavier Caruso (Université de Bordeaux), Stéphane Fischler (Université Paris Saclay), Tanguy Rivoal (Université Grenoble Alpes), Wadim Zudilin (University Nijmegen). Mahler equations are another aspect of number theory, in relation to automata theory, and appear in several of our research axes. Philippe Dumas, in our group, and Boris Adamczewski, already mentioned, have long been experts in this topic.
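In the experimental-mathematics spirit, members of such a family can be probed numerically before any theory is involved. The following sketch (our illustration, not taken from the cited work) evaluates the n = 0 member along the straight segment joining the endpoints with mpmath; partial-fraction decomposition confirms that the value is exactly −iπ/20:

```python
from mpmath import mp, quad, mpc, pi, mpf

mp.dps = 40
# n = 0: the integrand reduces to 1/(x*(x - 10)); integrate along the
# straight-line path from 4 - 2i to 4 + 2i.
integrand = lambda x: 1 / (x * (x - 10))
value = quad(integrand, [mpc(4, -2), mpc(4, 2)])
# The numerical value is recognized as -pi*i/20 = -0.15707...*i.
assert abs(value + 1j * pi / 20) < mpf(10) ** (-30)
```

Recognizing such closed forms from high-precision numerical values is a typical first step before a symbolic proof.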

In algebraic geometry, in spite of tremendous theoretical achievements, it is a challenge to apply general theories to specific examples. We focus on putting into practice transcendental methods through symbolic integration and seminumerical methods. Our main collaborators are Emre Sertöz (Max Planck Institute for Mathematics) and Duco van Straten (Gutenberg University).

In statistical physics, the Ising model, and its generalization, the Potts model, are classical in the study of phase transitions. Although the Ising model with no magnetic field is one of the most important exactly solved models in statistical mechanics (Onsager, who solved it, was awarded the Nobel Prize in 1968), its magnetic susceptibility remains an unsolved aspect of the model. In the absence of an exact closed form, the susceptibility is approached analytically, via the singularities of some multiple integrals with parameters. Experimental mathematics is a key tool in their study. Our main collaborators are Jean-Marie Maillard (SU, LPTMC) and Tony Guttmann (U. Melbourne).

In quantum mechanics, turning theories into predictions requires the computation of Feynman integrals. For example, the reference values of experiments carried out in particle accelerators are obtained in this way. The analysis of the structure of Feynman integrals benefits from tools in experimental mathematics. Our main collaborator in this field is Pierre Vanhove (CEA, IPhT).

2.4 Software

We aim to provide efficient software libraries that perform the core tasks that we need in experimental mathematics. We target especially four tasks of general interest: algebraic algorithms for manipulating systems of linear PDEs, univariate and multivariate guessing, symbolic integration, and seminumerical integration.

For several reasons, we want to stay away from a development model that is too tied to commercial computer algebra systems. Firstly, they restrict dissemination and interoperability. Secondly, they do not offer the level of control that we need to implement these foundations efficiently. Concretely, we will develop open-source libraries in C++ for the most fundamental tasks in our research area. Computer algebra systems, like Sagemath or Maple, are good at coordinating primitive algorithms, but too high-level to implement them efficiently. We seek solid software foundations that provide the primitive algorithms that we need. This is necessary to implement the new higher-level algorithms that we design, but also to reach a performance level that enables new applications. Still, we will strive to expose our libraries to the prominent computer-algebra systems, especially Maple and Sagemath, used by many colleagues.

Besides, there is a growing interest in the programming language Julia for computer algebra, as shown by the Oscar project. We already use Julia internally, and occasionally some of the libraries Oscar is built upon, and we want to promote this young ecosystem. It is very attractive to contribute to it, but on the flip side, it is too young to offer the same usability as Maple, or even Sagemath. So there is a deliberate element of risk in our intention to also make our libraries available in Julia.

3 Research program

3.1 Algebraic algorithms for multivariate systems of equations

At large, MATHEXP deals with algebraic and seminumerical methods. This part goes through the fundamental aspects of the algebraic side. As opposed to numerical analysis where numerical evaluations underlie the basic algorithms, algebraic methods manipulate functions through functional equations. Depending on the context, different kinds of functional equations are appropriate. Algebraic functions are handled through polynomial equations and the classical theory of polynomial systems. To deal with integrals, systems of linear partial differential equations (PDEs) are appropriate. In combinatorics and number theory appears the need for non-linear ordinary differential equations (ODEs). We also consider other kinds of functional equations more related to discrete structures, namely linear recurrence relations, q-analogues and Mahler equations.

The various types of functional equations raise similar questions: is a given equation a consequence of a set of other equations? What are the solutions of a certain type (polynomial, rational, power series, etc.)? What is the local behavior of the solutions? Algorithms to solve these problems support an important part of our research activity.

3.1.1 Holonomic systems of linear PDEs

One of the major data structures that we consider is systems of linear PDEs with polynomial coefficients. A system that has a finite-dimensional solution space is called holonomic, and a function that is a solution of a holonomic system is called holonomic too. The theory of holonomy is important because it allows for an algebraic theory of analysis and integration (on this aspect see also §3.2). The basic objects of holonomy theory are linear differential operators, which are a sort of quasicommutative polynomials, and ideals in rings of linear differential operators, called Weyl algebras. In this respect, holonomy theory is analogous to the theory of polynomial systems, where the basic objects are commutative polynomials and ideals in polynomial rings. Some of the important concepts, for example the concept of Gröbner basis, are also similar. Gröbner bases are a way to describe all the consequences of a set of equations.
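The quasicommutativity can be made concrete by a small Python sketch (a hypothetical helper, not part of the team's libraries) that normal-orders products of univariate differential operators, encoded as dictionaries mapping an exponent pair (i, j) to the coefficient of x^i ∂^j, using the commutation rule ∂x = x∂ + 1:

```python
from math import comb, perm
from collections import defaultdict

def mul(A, B):
    """Product in the Weyl algebra Q<x, d> with d*x = x*d + 1.
    Elements are dicts {(i, j): c} standing for sums of c * x^i * d^j."""
    C = defaultdict(int)
    for (a, b), ca in A.items():
        for (c, d), cb in B.items():
            # Normal-ordering rule:
            # d^b x^c = sum_k comb(b, k) * perm(c, k) * x^(c-k) * d^(b-k)
            for k in range(min(b, c) + 1):
                C[(a + c - k, b + d - k)] += ca * cb * comb(b, k) * perm(c, k)
    return {m: v for m, v in C.items() if v}

X, D = {(1, 0): 1}, {(0, 1): 1}
DX, XD = mul(D, X), mul(X, D)
# The commutator [d, x] = d*x - x*d is the constant operator 1.
commutator = {m: v for m in set(DX) | set(XD)
              if (v := DX.get(m, 0) - XD.get(m, 0))}
```

For instance, `mul(D, X)` returns `{(1, 1): 1, (0, 0): 1}`, i.e., x∂ + 1, and `commutator` is `{(0, 0): 1}`, the relation that distinguishes a Weyl algebra from a commutative polynomial ring.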

As much as Gröbner bases in polynomial rings are the backbone of effective commutative algebra, Gröbner bases in Weyl algebras of differential operators are the backbone of effective holonomy theory, which includes integration. In the commutative setting, a long road led from the early work of Buchberger to today's state-of-the-art polynomial-system-solving libraries 47. We will undertake a similar enterprise in the noncommutative setting of Weyl algebras. It will unlock many applications of holonomy theory.

Following the commutative case, progress in a differential context will come from an appropriate theory and efficient data structures. We will first develop a matrix approach to handle simultaneous reduction of differential operators, similar to the F4 algorithm for the polynomial case 90. The real challenge here is more practical than theoretical. It is not difficult to come up with some F4 algorithm in the differential case. But will it be efficient? From the experience of modern Gröbner engines in the commutative case, we know that an efficient implementation of simultaneous reduction requires a significant amount of low-level programming to deal with sparse matrices with a special structure. We also know that many choices, irrelevant to the mathematical theory, strongly influence the running times. The noncommutativity of differential operators adds extra complications, whose consequences are still to be understood at this level. We want to reuse, as much as possible, the specialized linear algebra libraries that have been developed in the polynomial context 66, 47, but we may have to avoid the densification of products induced by noncommutativity.

On a more theoretical aspect, one step further in the analysis is that the possible analogues of the F5 algorithm 91 are not fully explored in a differential setting. We may expect not only faster algorithms, but also new algorithms for operating on holonomic functions (Weyl closure for example, see §3.1.2). Rafael Mohr started a PhD thesis in the team on using F5 for computing equidimensional decompositions in the commutative case.

3.1.2 Desingularization of PDEs

Among the structural properties of systems of linear differential or difference equations with polynomial coefficients, the question of understanding and simplifying their singularity structure pops up regularly. Indeed, an equation or a system of equations may exhibit singularities that no solution has, which are then called apparent singularities. Desingularization is a process of simplifying a ∂-finite system by getting rid of its apparent singularities. This is done at the cost of increasing the order of equations, thus the dimension of their solution space. The univariate setting has been well studied over time, including in computer algebra for its computational aspects 35, 34. This led to the notion of order-degree curve 73, 74, 71: a given function can cancel an ODE or ORE (ordinary recurrence equation) of small order with a certain coefficient degree, and also other ODEs or OREs of higher orders, possibly with smaller coefficient degrees. In certain applications, the ODE or ORE of minimal order may be too large to be obtained by direct calculations. It appears that the total size of the equations, that is, the product of order by degree, can be more relevant to optimize the speed of algorithms. This is a phenomenon that we first observed in relation to algebraic series 58, and we want to further promote this idea of trading minimality of order for minimality of total size, with the goal of improved speed. On the other hand, apparent singularities have been defined only recently in the multivariate holonomic case 72.
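A textbook instance of an apparent singularity, chosen here for illustration and not taken from the cited works, can be checked with sympy: the operator x²∂² − 2x∂ + 2 has leading coefficient vanishing at x = 0, yet its solution space, spanned by x and x², is analytic there; the operator ∂³, of order 3 instead of 2, also annihilates both solutions while having a nonvanishing (constant) leading coefficient, illustrating the trade of order against singularities:

```python
import sympy as sp

x = sp.symbols('x')
# L(y) = x^2*y'' - 2*x*y' + 2*y: the leading coefficient x^2 vanishes
# at x = 0, but every solution (spanned by x and x^2) is analytic there,
# so the singularity at 0 is only apparent.
for sol in (x, x**2):
    assert sp.expand(x**2*sp.diff(sol, x, 2) - 2*x*sp.diff(sol, x) + 2*sol) == 0
    # A desingularized companion: y''' = 0 also annihilates sol, with
    # constant leading coefficient, at the cost of a higher order.
    assert sp.diff(sol, x, 3) == 0
```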

Our project includes developing good notions and fast heuristic methods for the desingularization of a ∂-finite system, first in the differential case, where it is expected to be easier, then in the case of recurrence operators.

Moreover, fast algorithms will be obtained for testing the separability of special functions: in a nutshell, this problem is to decide whether the solutions to a given system also satisfy linear differential or difference equations in a single variable; algorithmically, this corresponds to obtaining multiples of operators with a structure similar to the one arising in desingularization.

In the multivariate case, the operation of saturating an ideal in the Weyl algebra by factoring out (and removing) all polynomial factors on the left is known under the name of Weyl closure. This relates to desingularization, as the Weyl closure of an ideal contains all desingularized operators. Weyl closure is also a relative of the radical of an ideal in commutative algebra: given an ideal of linear differential operators, its Weyl closure is the (larger) ideal of all operators that annihilate every function solution to the initial ideal. Computing Weyl closure applies to symbolic integration, and algorithms exist to compute it 149, 148, although they are slow in practice. Weyl closure also plays an important role in applications to the theory of special functions, e.g., in the study of GKZ-systems (a.k.a. A-hypergeometric systems) 124, and in relation to the Fisher distribution and maximum likelihood estimation in statistics 38, 100. Algorithms for Weyl closure should then be obtained, building on desingularization as a subtask.

3.1.3 Well-foundedness of divide-and-conquer recurrence systems

Converting a linear Mahler equation with polynomial coefficients (see §3.3.3) into a constraint on the coefficient sequence of its series solutions results in a recurrence between coefficients indexed with rational numbers, which must be interpreted to be zero at noninteger indices. The recurrence can be replaced with a system of recurrences by cases depending on residues modulo some power of the base b. The literature also alternatively introduces recurrences with indices expressed with floor/ceiling functions, typically so for fine complexity analysis of divide-and-conquer algorithms. For sequences that can be recognized by automata (“automatic sequences”) and their generalizations (“b-regular sequences”), it is natural to consider a system of recurrences on several sequences, with a property of closure under certain operations of taking subsequences: restricting to even indices, or odd indices, or more generally indices with a given residue modulo the base b. This variety of representations calls for algorithms to convert between them, to check the consistency of a given system of recurrences, and to identify those terms of the sequence that determine all others (which are typically not just the first few terms). In continuation of  78, which developed a Gröbner-bases theory as a prerequisite for this goal, we will address these problems of conversion and well-foundedness.
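A minimal example of such a system split by residue classes, chosen here for illustration, is the binary sum-of-digits sequence, a standard 2-regular sequence: the two cases s(2m) = s(m) and s(2m+1) = s(m) + 1, together with the single value s(0) = 0, determine the whole sequence, a toy instance of the well-foundedness question of which terms determine all others:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    """Binary sum of digits: s(2m) = s(m), s(2m+1) = s(m) + 1."""
    if n == 0:
        return 0              # the only initial value needed
    return s(n // 2) + n % 2  # case split on the residue modulo b = 2

# The first terms 0,1,1,2,1,2,2,3 are fully determined by s(0) alone.
assert [s(n) for n in range(8)] == [0, 1, 1, 2, 1, 2, 2, 3]
```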

3.1.4 Software

Software development is a real challenge regarding the symbolic manipulation of linear PDEs. While symbolic integration has gained more and more recognition, its use is still reserved to experts. Providing a highly efficient software library with functionalities that come as close as possible to the actual integrals, rather than some idealized form, will foster adoption and applications. In the past, the lack of solid software foundations has been an obstacle to implementing newly developed algorithms and to disseminating our work. This was the case, for example, for our work on binomial sums 62, or the computation of volumes 118, where having to use an integration algorithm implemented in Magma was a major obstacle.

What is lacking is a complete tool chain integrating the following three layers:

  1. the computation of Gröbner bases of holonomic systems, as discussed in §3.1.1;
  2. the basic algorithms for manipulating holonomic systems, such as the desingularization discussed in §3.1.2 but also the classical aspects of symbolic integration;
  3. the algorithms relevant for applications, including all the aspects covered in §3.2.

The first layer of the toolchain will be developed in C++ for performance, but also to open the way to integration into free computer algebra systems, like Sagemath or Macaulay2. We will benefit from years of experience of the community and close colleagues in implementing Gröbner basis algorithms in the commutative case. The third layer of the toolchain should be easily accessible for the users, so at least available in Sagemath. Some of our current software development, related to the second layer, already happens in Julia (as part of R. Mohr's PhD work).

3.2 Symbolic integration with parameters

Among common operations on functions, integration is the most delicate. For example, differentiation transforms functions of a certain kind into functions of the same kind; integration does not. For this reason, integration is also expressive: it is an essential tool for defining new functions or solving equations, not to mention the ubiquitous Fourier transform and its cousins. Integration is the fundamental reason why holonomic functions are so important: integrals of holonomic functions are holonomic. Algorithms to perform this operation enable many applications, including: various kinds of coefficient extractions in combinatorics, families of parametrized integrals in mathematical physics, proofs of irrationality in number theory, and computations of moments in optimization.

Given a function F(𝐭,𝐱) of two blocks of variables 𝐭=(t_1,…,t_s) and 𝐱=(x_1,…,x_n), and an integration domain Ω(𝐭) ⊆ ℝ^n, how to compute the function

G(𝐭) = ∫_{Ω(𝐭)} F(𝐭,𝐱) d𝐱 ?

Concretely, F(𝐭,𝐱) is described by a system of linear PDEs with polynomial coefficients, Ω(𝐭) is given by polynomial inequalities, and we want a system of PDEs describing G(𝐭). Note here the presence of parameters, which makes it possible to describe the result of integration with PDEs. When there are no parameters, the result is a numerical constant. Even though we deal with them in an entirely different way (see §3.5), we still mostly rely on symbolic integration with parameters.
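A toy univariate instance, with sympy (the example F = exp(−t x²) is our illustration, not from the report): the parametrized integral G(t) = ∫_0^∞ exp(−t x²) dx satisfies the linear ODE 2t G′(t) + G(t) = 0, which is exactly the kind of output expected from symbolic integration with parameters:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
F = sp.exp(-t * x**2)
G = sp.integrate(F, (x, 0, sp.oo))  # closed form: sqrt(pi)/(2*sqrt(t))
# As a function of the parameter t, the integral satisfies a linear ODE:
assert sp.simplify(2 * t * sp.diff(G, t) + G) == 0
```

Here sympy finds the closed form directly; the algorithms discussed in this section instead produce the ODE for G(t) from the PDEs satisfied by F, without needing a closed form.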

From the algebraic and computational point of view, integration has several analogues. Discrete sums are the prominent example, but there are also q-analogues, Mahlerian functions, and some others. At large, algorithms for symbolic integration, or its analogues, perform a sort of elimination in a ring of differential operators. There are some links with elimination theory and related algorithms as developed for the study of polynomial systems of equations.

Symbolic integration is a historical focus of MATHEXP's founding members, with many significant contributions. Compared to our previous activities, we want to put more emphasis on software development. We are at a point where the theory is well understood but the lack of efficient implementations hinders many applications. Naturally, this effort will rest on the results obtained in §3.1.

3.2.1 Integrals with boundaries

The algebraic aspects of symbolic integration are best understood when the integration domain has no boundary: typically ℝ^n or a topological cycle in ℂ^n. Indeed, in this context we have the so-called telescopic relation, which states that the integral of a derivative vanishes: for example, if H(𝐭,𝐱) is rapidly decreasing, then

∫_{ℝ^n} ∂H/∂x_i d𝐱 = 0.

It gives a nice algebraic flavor to the problem of symbolic integration and reduces it to the study of the quotient space ℱ/(∂_{x_1}ℱ + ⋯ + ∂_{x_n}ℱ), where ℱ is a suitable function space containing the integrand. A large part of the algorithms developed so far focuses on this case. Yet, many applications do not fit in this idealized setting. For example, Beukers' proof of the irrationality of ζ(3)  50 uses the two integrals

∫_γ R dx dy dz   and   ∫_{[0,1]^3} R dx dy dz,   where R(t,x,y,z) = 1 / (1 - (1-xy)z - t·xyz(1-x)(1-y)(1-z)).

The first one, where the integration domain is some complex cycle γ, is well handled by current algorithms. The second is not, and this is unsatisfactory for further applications of symbolic integration in number theory. In this particular case, we may think of an algorithm that would reduce the integration on the cube to an integration without boundary and an integration on the boundary of the cube. This boundary just consists of 6 squares, which calls for a recursive procedure. Unfortunately, the integration domain touches the poles of the integrand, so operations like integrating only part of a function, integration by parts, or differentiation under the integral sign may not be meaningful for lack of integrability. It is not known how to deal with this issue automatically. For more general domains of integration, it is not even clear what kind of recursive procedure can be applied.
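The contrast between the boundary-free setting and a domain with boundary can be checked on a one-variable example (our illustration, with sympy): over ℝ the integral of a derivative of a rapidly decreasing function vanishes, while over [0, 1] boundary terms appear:

```python
import sympy as sp

x = sp.symbols('x')
H = x * sp.exp(-x**2)   # rapidly decreasing on the real line
dH = sp.diff(H, x)
# Telescopic relation: the integral of a derivative over R vanishes.
assert sp.simplify(sp.integrate(dH, (x, -sp.oo, sp.oo))) == 0
# On a domain with boundary, boundary terms H(1) - H(0) appear instead:
assert sp.simplify(sp.integrate(dH, (x, 0, 1))
                   - (H.subs(x, 1) - H.subs(x, 0))) == 0
```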

The next generation of symbolic integration algorithms must deal with integrals defined on domains with boundaries. The framework of algebraic D-modules seems to be very appropriate and already features some algorithms. But this is not the end of the story, as this line of research has not yet led to efficient implementations. We identified two courses of action to reach this goal. Firstly, existing algorithms  130, 131 put too much emphasis on computing a minimal-order equation for the integral. While this is an interesting property, other kinds of integration algorithms have successfully relaxed this condition. For example, for integrating rational functions, the state-of-the-art algorithm  117 depends on a parameter r>0. The computed equation is minimal only for r large enough, which corresponds to the degeneration rank of some spectral sequence  86. In practice, this has never been an obstacle: most of the time we obtain a minimal equation with a small value of r. For the few remaining cases, we will soon propose a generalized procedure to minimize the equation a posteriori; this will be a consequence of work on univariate guessing (see §3.4.1) that builds and expands on  63. The algorithm with small values of r, applicable in most cases, already outperforms previous ones in terms of computational complexity 61 and practical performance, being able to compute integrals that were previously out of reach. We consider it to be a special case of the general algorithm that we want to develop, and a proof of feasibility. However, the effort will be in vain without significant progress on the computation of Gröbner bases in Weyl algebras. Fortunately, and this is the second course of action, we think that the framework of algebraic D-modules enables efficient data structures modeled on recent progress in the context of polynomial systems. Progress in this direction (as explained in §3.1.1) will immediately lead to significant improvements for symbolic integration.

3.2.2 Reduction-based creative telescoping

The approach to symbolic integration based on creative telescoping is a definite expertise of the team. Although the approach is difficult to use for integrals with boundaries, it still has many appeals. In particular, it generalizes well to discrete analogues. Recently, the team has initiated the development of a new line of algorithms, called reduction-based. Despite continuing work, this line has not yet been extended to full generality 57, 109. These recent theoretical developments are not yet reflected in current software packages (only prototype implementations exist), and therefore their practical applicability, and how the algorithms compare, is not yet fully understood. Filling these gaps will be a good starting point for us, but the ultimate goal will be to formulate analogous algorithms for the difference case (summation of holonomic sequences), for the q-case, and for general mixed cases. We expect that these advances in the theory will have a great impact on various applications.
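In the simplest, univariate rational case, the reduction underlying reduction-based algorithms is Hermite reduction. A sketch with sympy (the helper `hermite_step` is hypothetical, written for illustration): given p/q^k with q squarefree and k ≥ 2, a Bézout relation s·q + t·q′ = 1 and one integration by parts rewrite the fraction as a derivative plus a fraction with a smaller power of q:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_step(p, q, k):
    """One Hermite-reduction step (q squarefree, k >= 2):
    p/q**k = diff(g, x) + r/q**(k-1); returns (g, r)."""
    s, t, _ = sp.gcdex(q, sp.diff(q, x), x)  # s*q + t*q' = 1
    b = sp.rem(p * t, q, x)                  # write p = a*q + b*q'
    a = sp.cancel((p - b * sp.diff(q, x)) / q)
    # Integration by parts on b*q'/q**k removes one power of q:
    g = -b / ((k - 1) * q**(k - 1))
    r = sp.together(a + sp.diff(b, x) / (k - 1))
    return g, r

p, q, k = sp.Integer(1), x**2 + 1, 2
g, r = hermite_step(p, q, k)
# Check: p/q**k is the derivative of g plus r/q**(k-1).
assert sp.simplify(sp.diff(g, x) + r / q**(k - 1) - p / q**k) == 0
```

Iterating this step reduces any rational integrand to one with a squarefree denominator; reduction-based creative telescoping runs such reductions with respect to the inner variables to find the telescoper.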

3.2.3 Holonomic moment methods

In applied mathematics, the method of moments provides a computational approach to several important problems involving polynomial functions and polynomial constraints: polynomial optimization, volume estimation, computation of Nash equilibria, ...  122. This method considers infinite-dimensional linear optimization problems over the space of Borel measures on some space ℝ^n. They admit finite-dimensional relaxations in terms of linear matrix inequalities where a measure μ is represented approximately by a finite number of moments ∫𝐱^α dμ.

From the holonomic point of view, the generating function of the moments of μ — or, equivalently, the characteristic function ϕ(𝐮) = ∫ exp(i𝐮·𝐱) dμ — is holonomic for a large class of measures μ (which includes all measures that appear in current applications of the method of moments). This remark already unlocks some applications where the current bottleneck is the computation of many moments: differential equations on ϕ(𝐮) reflect recurrence relations on the moments, and computing the former with symbolic integration will lead to efficient algorithms for computing the moments.

A line of research developed recently 112, 123, 120, 144, 121 focuses on reducing the size of the matrices in the linear matrix inequalities (LMI) involved in the relaxations by using pushforward measures. For example, let us consider a polynomial f ∈ ℝ[x_1,…,x_n] and the problem of computing the volume of {p ∈ [0,1]^n | f(p) ≥ 0}. The article  106 solves this problem with a linear program over Borel measures on [0,1]^n. Using the pushforward measure, the work  112 reduces it to a linear program over measures on ℝ, supposedly much easier to solve. However, this comes at the cost of computing the moments μ_k = ∫_{[0,1]^n} f^k dx_1⋯dx_n for increasingly large values of k. While this is an elementary task (it is enough to expand f^k), the number of monomials to compute is about (1/n!)(k·deg(f))^n for large k, and this becomes the bottleneck of the method. The computation of the generating series ∑_{k≥0} μ_k t^k using symbolic integration enables the computation of a linear recurrence relation, of size at most deg(f)^n, for the moments μ_k, and we can compute μ_0,…,μ_k in only O(k·deg(f)^n) arithmetic operations, or Õ(k²·deg(f)^n) bit operations. This should be a low-hanging fruit as soon as we have reasonable implementations of symbolic integration on domains with boundaries (see §3.2.1). Naturally, the constant in the big O hides the size of the ODE of which the generating function is a solution, and it may be exponential in n. But this is only a worst-case bound, and any nongeneric geometric property will tend to make this ODE smaller.
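A toy univariate instance of this speed-up (our example, with n = 1 and f = x(1−x)): the moments μ_k = ∫_0^1 f^k dx equal (k!)²/(2k+1)! and satisfy the first-order recurrence 2(2k+1)μ_k = k·μ_{k−1}, so μ_0, …, μ_k follow in O(k) operations instead of expanding f^k and integrating:

```python
import sympy as sp

x = sp.symbols('x')
f = x * (1 - x)
# Moments mu_k computed by the recurrence 2*(2k+1)*mu_k = k*mu_{k-1},
# starting from mu_0 = 1.
mus = [sp.Integer(1)]
for k in range(1, 6):
    mus.append(sp.Rational(k, 2 * (2 * k + 1)) * mus[-1])
# Cross-check against direct symbolic integration of f**k over [0,1]:
for k in range(6):
    assert sp.integrate(f**k, (x, 0, 1)) == mus[k]
```

In the setting of the text, the recurrence itself would be produced automatically from the ODE of the generating series by symbolic integration.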

One step further, we will try to interpret the whole moment method in the holonomic setting. The differential equation for Σ_{k≥0} μ_k t^k not only enables the computation of the moments μ_k, it somehow encodes all of them. Recovering numerical values, such as the volume, from this differential equation is akin to the seminumerical algorithms we know (see §3.5.1). As a next step, we will study how some optimization problems treated by the method of moments behave in this holonomic setting. We think especially of the problems of chance optimization and chance-constrained optimization 111: in the former, one maximizes the probability of success over the design parameters; in the latter, one optimizes a goal while ensuring that some probability remains low.

3.2.4 New aspects of symbolic integration

Mahlerian telescopers.

Here the aim is to determine the relations satisfied by a solution of a Mahler equation (see §3.3.3). A natural generalization is to search for relations among solutions of different Mahler equations. Our objective is to provide an algorithmic answer to this generalization, for (Laurent) power series y_i solutions of inhomogeneous first-order equations of the form y_i(z^p) + a_i(z)y_i(z) = b_i(z), with coefficients in ℚ̄(z). We will start with the easy case where all a_i's are equal to 1. Under this assumption, a theorem of Hardouin and Singer guarantees that there exists an algebraic relation with coefficients in ℚ̄(z) between y_1, …, y_n if and only if there exists a Mahlerian telescoper between the b_i(z). (This originates in Hardouin's PhD thesis and was generalized in 104.) We will work on making such an existence test algorithmic, and if possible on the calculation of such telescopers. For this, we will take inspiration from existing algorithms for calculating telescopers for other types of functional operators.

D-algebraicity and elliptic telescoping.

Random walks confined to the quarter plane are a well-studied topic, as testified by the book 95. A new algebraic approach, relying on the Galois theory of difference equations, has been introduced in 88 to determine the nature of the generating series of such walks. This approach gives access to the D-algebraicity of the generating functions, that is, to the knowledge of whether they satisfy some differential equations (linear or nonlinear). More precisely, D-algebraicity is shown to be equivalent to the fact that a certain telescopic equation, similar to the one appearing in the classical context of creative telescoping, but defined on an elliptic curve attached to the walk model, admits solutions in the function field of that curve. For the moment, the corresponding telescoping equations are solved by hand, in a quite ad hoc fashion, with case-by-case treatment. We aim at developing a systematic and automated approach for solving this kind of elliptic creative-telescoping problem. To this end, we will import and adapt our algorithmic methods from the classical case to the elliptic framework.

3.2.5 Software

Because of the dependency of the software pertaining to symbolic integration on developments on multivariate systems, our goals related to software on symbolic integration have been described in §3.1.4.

3.3 Computerized classification of functions and numbers

Classifying objects, determining their nature, is often the culmination of a mature theory. But even the best established theories can be impracticable on a concrete instance, either by a lack of effectiveness or by a computational barrier. In both cases, an algorithm is missing: we have to systematize, but also effectivize and automate efficiently. This is what we propose to do, in order to solve classification problems relating to numbers, analytical functions, and combinatorial generating series.

3.3.1 Practical tests of algebraicity and transcendence for holonomic functions

Whether one can decide if all solutions of a given linear differential equation are algebraic is an old question, addressed by Fuchs in the 1870s. Singer showed in 143 that there exists an algorithm which takes as input a linear differential equation with coefficients in ℚ[x], and decides in a finite number of steps whether or not it has a full basis of algebraic solutions. If the answer is negative, this does not automatically exclude the possibility that a particular solution is algebraic. (For instance, the linear differential equation (xy′)′ = 0 has not only the algebraic solution 1, but also the transcendental holonomic solution y(x) = log(x).) However, a recent refinement of Singer's 1979 method can be used to solve in principle Stanley's open problem 147: given a holonomic power series y(x), specified by an annihilating linear differential equation and sufficiently many initial terms of its expansion, decide if y(x) is algebraic or transcendental. Unfortunately, the corresponding algorithm is too slow in practice, because of its high computational complexity. An interesting question is to find efficient alternatives that are able to answer Stanley's question on concrete difficult examples.

An approach that always works in principle is the algorithmic guess-and-prove paradigm (see §3.4): one guesses a concrete polynomial witness, and then certifies it a posteriori. This method is robust, but it may fail on examples whose minimal polynomial is much larger than the input differential equation. For instance, in an open question by Zagier 151, the input differential equations have order 4, but the (estimated) algebraic degree of the desired solution is 155520, much too large to allow the computation of the minimal polynomial. (Note that the estimate is obtained using the seminumerical methods evoked in §3.5.)

We aim at designing various pragmatic algorithmic methods for proving algebraicity or transcendence in such difficult cases. First, the algebraic nature of the holonomic function is tested heuristically, using a mixture of numeric and p-adic methods (e.g., monodromy estimates and p-curvature computations). In cases where transcendence is conjectured, the method we will develop is an application of the minimization algorithms in §3.4.1: after finding a minimal-order ODE, an analysis of singularities is sufficient to decide transcendence, at least for interesting subclasses of inputs (e.g., a certain class of generating series of binomial sums). In cases where algebraicity is conjectured, we plan to apply computational strategies inspired by effective differential Galois theory and effective invariant theory, in particular by the recent work  42.

3.3.2 Algorithmic determination of algebraic values of E-functions

E-functions are holonomic and entire power series subject to some arithmetic conditions; they generalize the exponential function. The class contains most of the holonomic exponential generating functions in combinatorics and many special functions, such as the Airy and Bessel functions. Given an E-function f represented implicitly by a linear differential equation (and enough initial terms), the question is to determine algorithmically the algebraic numbers α such that f(α) is algebraic. A recent article by Adamczewski and Rivoal 37 proves that the problem is decidable. It relies on important works by Siegel 142, Shidlovskii 141, and Beukers 51. However, the underlying algorithm has no practical applicability. We will obtain an improved version of this algorithm, by accelerating its bottleneck, which consists in computing a linear differential operator of minimal order satisfied by f. This will take advantage of the results obtained in §3.4.1. Continuing the line of work opened in 63, the idea is now to exploit the particular structure of the differential equations satisfied by E-functions, and to use bounds produced by calculation on the considered equation rather than theoretical bounds such as “multiplicity lemmas”. Our previous improvements will make this algorithm practical. We will also address an extension of the theory that also determines cases of algebraic dependency between evaluations of E-functions 98.

3.3.3 Rational solutions of Riccati-like Mahler equations and hypertranscendence

Mahler equations are functional equations that relate a function f(z) with f(z^p), f(z^{p²}), etc., for some integer p ≥ 2. The study of Mahler equations is motivated by Mahler's work in transcendence theory, as well as by the study of automatic sequences, produced by finite automata (see §3.1.3). From a computer algebra perspective, the basic tasks concerning Mahler equations are poorly understood, compared to those for differential or recurrence equations.

Roques designed an algorithm for the computation of the Galois group of Mahler equations of order 2 139. This group reflects the algebraic relations between the solutions, so its computation is relevant in transcendence theory. Roques' algorithm relies on deciding the existence of rational solutions to some nonlinear Mahler equations that are analogues of Riccati differential equations. For this task, Roques proposes an algorithm reminiscent of Petkovšek's algorithm 134, with an exponential arithmetic complexity, as it has to iterate through all monic factors of well-identified polynomials. Building on recent progress in the linear case 77, we want to obtain a polynomial-time algorithm for this decision problem, or at least one that is not exponentially sensitive to the degree of the polynomial coefficients of the equation.

An application of this work will be a new algorithm to decide the differential transcendence of solutions of Mahler equations of order 2, following a criterion given by Dreyfus, Hardouin and Roques (see 87, 139). This would make it possible to prove new results about some classical Mahler functions and the relations between them. An example will be to reprove and extend the hypertranscendence of the solutions to the Mahler equation satisfied by the generating series of the Stern sequence  85.
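For a concrete instance with p = 2: the generating series F(z) = Σ_{n≥0} s(n+1) z^n of Stern's diatomic sequence satisfies the classical order-1 Mahler equation F(z) = (1 + z + z²) F(z²). The following toy sketch (ours, unrelated to the algorithms discussed above) verifies this identity on truncated series:

```python
def stern(N):
    """First N terms s(1), s(2), ... of Stern's diatomic sequence:
    s(0)=0, s(1)=1, s(2n)=s(n), s(2n+1)=s(n)+s(n+1)."""
    s = [0, 1]
    for n in range(2, N + 2):
        s.append(s[n // 2] if n % 2 == 0 else s[n // 2] + s[n // 2 + 1])
    return s[1:N + 1]

def check_mahler(N):
    """Verify F(z) = (1 + z + z^2) * F(z^2) modulo z^N, where F(z) = sum s(n+1) z^n."""
    f = stern(N)                  # coefficients of F(z)
    rhs = [0] * N
    for n, c in enumerate(f):     # F(z^2) contributes c at z^(2n) ...
        for shift in (0, 1, 2):   # ... and multiplying by 1 + z + z^2 shifts it
            if 2 * n + shift < N:
                rhs[2 * n + shift] += c
    return rhs == f
```

Here `check_mahler(500)` confirms the equation up to order z^500; the hypertranscendence results of 85 concern precisely the solutions of the Mahler equations attached to this series.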

3.3.4 Algorithmic determination of algebraic values of Mahler functions

We aim at studying the special values of Mahler functions, through the search for algebraic values and, more generally, for algebraic relations between values. We will resume the analysis of the algorithm in 36, to highlight its computational limitations, before optimizing its subtasks. We are thinking in particular of the rationality test, for which an algorithm was given in 44 and another of better complexity appeared recently 77, and of the search for minimal equations, for which structured linear algebra techniques should allow practical efficiency.

3.3.5 Efficient resolution of functional equations with 1 catalytic variable

In enumerative combinatorics, many classes of objects have generating functions that satisfy functional equations with “catalytic” variables, relating the complete function with the partial functions obtained by specializing the catalytic variables. For equations with a single catalytic variable, either linear or nonlinear, solutions are invariably algebraic. This is a consequence of Popescu's theorem on Artin approximation with nested conditions 137, a deep result in commutative algebra. However, the proof of this qualitative result is not constructive. Hence, to go further, towards quantitative results, different approaches are needed. Bousquet-Mélou and Jehanne proposed in 64 a method which applies in principle to any equation of the form P(F(t;x), F_1(t), …, F_k(t), t, x) = 0, where x is a (single) catalytic variable, that admits a unique solution (F, F_1, …, F_k) ∈ ℚ[x][[t]] × ℚ[[t]]^k. The method is based on a systematic constructive approach, which first derives from the functional equation a (highly structured) algebraic elimination problem over ℚ(t) with 3k unknowns and 3k polynomial equations, whose degree is linear in the degree δ of the input functional equation. The problem is already nontrivial for k = 1, but most interesting combinatorial applications require k > 1, and current methods are only able to tackle functional equations with small values of k (at most 3) and small total degree δ (at most 4). We will provide unified, systematic and robust algorithms for computing polynomial equations exhibiting the algebraicity of solutions of functional equations with one catalytic variable, building on 64. The ideal goal is to be able to exploit the geometry and symmetries of the elimination problems arising from the approach of 64. The final objective is to produce efficient implementations that can be used by combinatorialists in order to solve their functional equations with one catalytic variable in a click.

3.3.6 Classification of solutions for functional equations with 2 catalytic variables

When several catalytic variables are involved, Popescu's theorem no longer applies. The solutions are not necessarily algebraic, and even holonomy is not guaranteed, even in the linear case.

In the linear case, our main objective is to fully automate the resolution of linear equations with two catalytic variables coming from lattice-walk questions, when the walk model admits Tutte invariants and decoupling functions. A first nontrivial challenge will be to produce a new computer-assisted proof of algebraicity for the famous Gessel model, different in spirit from the first proof 60: instead of guess-and-prove, we will take inspiration from the recent “human” proofs in 65, 46 relying on Tutte invariants. There are several nontrivial subproblems, both on the mathematical and algorithmic sides. One of them is to determine whether a model admits invariants and decoupling functions, and if so, to compute them. A first step in this direction was recently taken by Buchacher, Kauers and Pogudin 69, in the simpler case when one looks for polynomials instead of rational functions.

In the nonlinear case with two catalytic variables, few results exist, and there is almost no general theory. These equations occur systematically when counting planar maps equipped with an additional structure, for instance a colouring (or a spanning tree, a self-avoiding walk, etc.). On this side, the study will be of a more prospective nature. However, we envision the resolution of several challenges. A first objective will be to test various guess-and-prove methods on Tutte's equation 150 satisfied by the generating function of properly q-colored triangulations of the sphere. Any kind of progress on it will be an important success: for instance, proving algebraicity of the solution by a computer-driven approach, even for particular values of q such as q = 2 and q = 3. A second objective will be to automate the strategy based on Tutte invariants employed by Bernardi and Bousquet-Mélou, and to solve the more general equation (Potts model on planar maps) in an automated fashion. This is interesting already for q = 2; in this case, proofs already exist in 45, but they use various ad hoc tricks. We aim at solving the conjectures in 45 for q = 3, concerning the enumeration of properly 3-colored near-cubic maps, by any combination of methods (guess-and-prove, geometry-driven elimination for structured polynomial systems, Tutte invariants).

3.3.7 Deciding integrality of a sequence

Given enough terms of a sequence, it is possible to reconstruct a linear recurrence relation of which the sequence is a solution, if there is one. For example, from the nine numbers 1, 3, 13, 63, 321, 1683, 8989, 48639 and 265729, one can reconstruct the recurrence relation (n+1)u_n − (6n+9)u_{n+1} + (n+2)u_{n+2} = 0 for the Delannoy numbers. We would also like to be able to reconstruct the closed form u_n = Σ_{k=0}^{n} C(n,k)·C(n+k,k), where C(·,·) denotes a binomial coefficient, because it reveals arithmetic information absent from the recurrence, such as the integrality of the numbers u_n. The search for a closed form can start by obtaining candidates in a heuristic way, since summation algorithms then make it possible to rigorously prove or disprove a posteriori that the reconstructed closed form is indeed correct.
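The example above can be checked mechanically. The short sketch below (ours) verifies that the closed form reproduces the nine listed terms and satisfies the reconstructed recurrence; this only checks the candidate on finitely many terms, it is not the rigorous summation-based proof:

```python
from math import comb

def delannoy(n):
    """Central Delannoy number via the closed form sum of C(n,k)*C(n+k,k)."""
    return sum(comb(n, k) * comb(n + k, k) for k in range(n + 1))

# first terms of the sequence
u = [delannoy(n) for n in range(12)]
```

Each term then satisfies (n+1)u_n − (6n+9)u_{n+1} + (n+2)u_{n+2} = 0, e.g., for n = 0: 1·1 − 9·3 + 2·13 = 0.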

3.3.8 Algorithmic resolution of Padé-approximation problems

Most of the proofs of irrationality of some constant c construct a sequence of rational numbers approximating c with a tight control on the growth of the denominator. Typically, c=F(1) for some holonomic function F, and approximations of F by rational functions may lead to rational approximations of c, by evaluating at 1. Good candidates for approximating F are the Padé approximants of F, originating in Hermite's work  107. But approximations that actually lead to interesting Diophantine results are rare gems. More recently, a general course of action has emerged 52, 146, 97 to deal with the case of multiple zeta values (MZV). It is based on the simultaneous approximation of polylogarithm functions by rational functions. We are looking to automate this approach and to extend its field of application.

We will use computer-assisted symbolic and numerical computations for the construction of a relevant Padé-approximation problem. Then, the resolution of the problem must be automated. This is fundamentally a computational problem in a holonomic setting. The natural approach here is guess-and-prove: we first guess what could be a closed-form formula for the solution by computing explicitly the solutions for some fixed values of n, then we prove that the guess indeed leads to a solution (which must be unique if the original problem is well-posed). The last step will typically use symbolic integration and Gröbner bases. Similar guess-and-prove approaches in a holonomic setting have already given several Diophantine results 152, but Padé approximation has not yet been tackled in this way.

3.3.9 Software

Our future algorithm for computing a linear differential equation of minimal order satisfied by a given holonomic function will be implemented and made available to users. This may include the application to the determination of algebraic values of E-functions. We will do the same concerning linear Mahler equations of minimal order satisfied by given Mahler functions, and concerning the determination of their algebraic values. Our work on solving equations with catalytic variables started rather recently, so it is still too early to decide the form that related software should take, but we definitely aim to provide combinatorialists with an implementation that exhibits the algebraic and/or differential equations they are after.

3.4 Guess-and-prove

Pólya theorized and popularized a “guess-and-prove” approach to mathematics in remarkable books 136, 135. It has now become an essential ingredient in experimental mathematics, and its power is greatly enhanced when used in conjunction with modern computer algebra algorithms. This paradigm is a keystone in recent spectacular applications in experimental mathematics, such as 60, 115, 116. The first half (the guessing part) is based on a “functional interpolation” phase, which consists in recovering equations starting from (truncations of) solutions. The second half (the proving part) is based on fast manipulations (e.g., resultants and factorization) of exact algebraic objects (e.g., polynomials and differential operators).

In what follows we mostly focus on the guessing phase. It is called algebraic approximation  67 or differential approximation  113, depending on the type of equations to be reconstructed. For instance, differential approximation is an operation to get an ODE likely to be satisfied by a given approximate series expansion of an unknown function. This kind of reconstruction technique has been used at least since the 1970s by physicists  102, 103, 110, under the name recurrence relation method, for investigating critical phenomena and phase transitions in statistical physics. Modern versions are based on subtle algorithms for Hermite–Padé approximants  43; efficient differential and algebraic guessing procedures are implemented in most computer algebra systems.
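As a toy illustration of the guessing phase, the following sketch (our own code; real guessers use structured Hermite–Padé solvers instead of dense linear algebra) reconstructs a constant-coefficient linear recurrence from initial terms of a sequence and then checks it against all remaining terms. Differential guessing follows the same linear-algebra pattern, with derivatives of a truncated series in place of shifts:

```python
from fractions import Fraction

def solve(a, b):
    """Solve the square system a*x = b by exact Gaussian elimination over Q.
    Returns None when the matrix is singular."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return None
        m[col], m[piv] = m[piv], m[col]
        pivval = m[col][col]
        m[col] = [x / pivval for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

def guess_recurrence(seq, order):
    """Guess u_{n+order} = c_{order-1} u_{n+order-1} + ... + c_0 u_n by solving
    a linear system on the first terms, then checking the guess on all the rest."""
    rows = [[Fraction(seq[n + i]) for i in range(order)] for n in range(order)]
    rhs = [Fraction(seq[n + order]) for n in range(order)]
    c = solve(rows, rhs)
    if c is None:
        return None
    ok = all(sum(ci * seq[n + i] for i, ci in enumerate(c)) == seq[n + order]
             for n in range(len(seq) - order))
    return c if ok else None

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

On the Fibonacci terms, `guess_recurrence(fib, 2)` returns `[1, 1]`, encoding u_{n+2} = u_n + u_{n+1}, while order 1 is correctly rejected because the candidate fitted on the first terms fails on the later ones.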

In the following subsections, we describe improvements that we will work on.

3.4.1 Univariate guessing


A first task is to optimize the search for the minimal-order ODE satisfied by a given holonomic series. Feasibility is already known from the recent work 37, but the corresponding algorithm is not efficient in practice, because it relies on pessimistic degree bounds and on pessimistic multiplicity estimates. We will design and implement a much more efficient minimization algorithm, which will combine efficient differential guessing with a dynamic computation of tight degree bounds.


“Multiplicity lemmas” are theorems concluding that an expression representing a formal power series is exactly zero under the weaker assumption that the expression is zero when truncated to some order. In general, the expression is a differential polynomial in a series, but interesting subcases are non-differential polynomials, to test algebraicity, and linear differential expressions, to test holonomicity. In good situations, multiplicity lemmas turn guessing into a proving method or even a decision algorithm. A particularly nice form of a multiplicity lemma is available for polynomial expressions 56, and a similar result exists for linear ODEs 49. We will implement such bounds as proving procedures, and we will generalize the approach to other kinds of expressions, for example expressions in divided-difference operators that appear in combinatorics, notably in map enumeration 64.
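For polynomial expressions, such a check is simple to set up. The sketch below (ours) verifies that the guessed algebraic equation x y² − y + 1 = 0 for the Catalan generating function vanishes up to a chosen truncation order; the order used here is merely illustrative, whereas a genuine multiplicity lemma supplies the bound beyond which this verification constitutes a proof:

```python
def catalan(N):
    """First N Catalan numbers, via C_n = C_{n-1} * 2(2n-1)/(n+1)."""
    c = [1]
    for n in range(1, N):
        c.append(c[-1] * 2 * (2 * n - 1) // (n + 1))
    return c

def check_algebraic(N):
    """Check that x*C(x)^2 - C(x) + 1 = O(x^N) for the truncated Catalan series."""
    c = catalan(N)
    sq = [sum(c[i] * c[n - i] for i in range(n + 1)) for n in range(N)]  # C(x)^2
    resid = [(sq[n - 1] if n >= 1 else 0) - c[n] + (1 if n == 0 else 0)
             for n in range(N)]
    return all(r == 0 for r in resid)
```

Running `check_algebraic(50)` confirms the vanishing of the first 50 coefficients of the residual.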


Generating functions appear in a variety of classes of increasing complexity, in relation to the equations they satisfy. A third subtask relates to the search for an element in a lower complexity class inside the solution set of a higher complexity class. For instance, can a linear or some other combination of non-holonomic series be holonomic? Can a linear combination of holonomic series be algebraic, or even rational? A promising ongoing result, obtained incidentally in the work on Riccati-type solutions for Mahler equations (see §3.3.3), performs a similar guessing by a suitable search for constrained Hermite–Padé approximants after computing the whole module of approximants. But the main expected impact of the approach would be for differential analogues, and we will strive to generalize the approach, taking advantage of the formal analogy between many types of linear operators.

Preparing data.

As guessing often requires first preparing a lot of data, developing fast expansion algorithms for classes of equations is also related to guessing. In this direction, we plan to design a fast algorithm for the high-order expansion of a DD-finite series (i.e., a series satisfying a linear differential equation with holonomic coefficients). The complexity of the analogous problem for a linear ODE with power series coefficients is quasi-linear in the truncation order; that for a linear ODE with polynomial coefficients is just linear. For DD-finite series, we plan to interlace the two approaches without first expanding the series coefficients of the input equation to the wanted order, so as to avoid a large constant and a logarithmic factor.
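The linear-time case comes from translating a polynomial-coefficient ODE into a recurrence on the Taylor coefficients. A minimal sketch (our toy example) for the Airy equation y″ = xy, whose coefficients satisfy (n+2)(n+1)a_{n+2} = a_{n−1}:

```python
from fractions import Fraction

def airy_series(N, a0, a1):
    """First N Taylor coefficients of a solution of y'' = x*y (assumes N >= 3).
    The ODE translates into (n+2)(n+1)*a_{n+2} = a_{n-1} for n >= 1, together
    with a_2 = 0, so each new coefficient costs O(1) arithmetic operations."""
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for n in range(1, N - 2):
        a.append(a[n - 1] / ((n + 2) * (n + 1)))
    return a[:N]
```

With a_0 = 1, a_1 = 0, this produces 1 + x³/6 + x⁶/180 + ⋯ using N coefficient-sized operations, to be contrasted with the quasi-linear cost when the ODE has arbitrary power series coefficients.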

3.4.2 Multivariate guessing

Multivariate aspects of guessing relate to activities that we plan to develop as a means of strengthening scientific collaborations with colleagues in Paris (PolSys, Sorbonne U.) and Linz (Johannes Kepler University Linz, Austria). How soon the research happens will depend on how interaction with those colleagues evolves.

Trading order for degree.

An established technique in the univariate case is known as “trading order for degree”. It is based on the observation that minimal order operators tend to have very high degree, while operators of slightly higher order often have much smaller degrees and are therefore easier to guess. A candidate for the minimal order operator is then obtained as greatest common right divisor of two guessed operators of nonminimal order. We will extend this successful technique to the multivariate case. The desired output in this case is a Gröbner basis of a zero-dimensional annihilating ideal. The coefficients of the Gröbner basis elements are high-degree polynomials, and the idea is, as in the univariate case, not to guess them directly, but to guess ideal elements of smaller total size and to compute the Gröbner basis of them. As Gröbner basis computations can be costly, the alternative operators will clearly already have to be “close” to a Gröbner basis in order for the idea to be beneficial. The questions are: what should close to a Gröbner basis mean, how close should the operators be chosen, how much degree drop can be expected then, and how do the answers to these questions depend on the monomial order?

Exploiting nested structures.

In another direction, we plan to exploit the generalized Hankel structure of the matrices that appear when the guessing of linear recurrence relations is modeled through linear algebra. Regarding relations with constant coefficients, this finds applications in polynomial system solving through the spFGLM algorithm 92, 93 for finding a lexicographic Gröbner basis. The linear system is block-Hankel with blocks sharing the same structure, and this recursive structure has the same depth as the number of variables. Yet, up to now, only one layer of the structure is handled using fast univariate polynomial arithmetic; the other ones are dealt with by noting that the matrix has a quasi-Hankel structure and using fast algorithms for this type of matrix 59. However, the displacement rank of this matrix is not small; hence, not taking the full structure of the matrix into account is suboptimal. This is related to 48 for computing linear recurrence relations with constant coefficients using polynomial arithmetic and to 128 for computing multivariate Padé approximants. Analogously, the linear system modeling the guessing of linear recurrence relations with polynomial coefficients is highly structured. It is the concatenation of matrices as above, yet these matrices are not independent, as they are all built from the same sequence. Even in the univariate case, the Beckermann–Labahn algorithm is not able to exploit this extra structure in order to be quasi-optimal in the input size. Hence, we would like to investigate how to do so.

In addition to the structure in the modeling, we want to exploit the structure of the sequences that come from applications. For instance, in the enumeration of lattice walks, the nonzero terms often lie in a cone and a lattice, and they are invariant under the action of a finite group. The goal is to take this structure into account in order to build smaller systems for the guessing, and to avoid the generation of more sequence terms than necessary.

3.4.3 Software

We will implement fast algorithms for computing Hermite–Padé approximants of various types 43. This will include modular integers, integers (via modular reconstruction), simple approximants, and simultaneous approximants. With such a fast, robust implementation at hand, we will also be able to address the guessing of algebraic differential equations (ADE), going beyond the linear case. Our use of state-of-the-art algorithms for computing approximants (including the “superfast” one) will ensure that we outperform earlier implementations such as Guess (by Hebisch and Rubey) and GuessFunc (by Pantone). We will also develop a variant of trading order for degree for the nonlinear setting. Our implementation will automate the critical selection of derivatives, powers, and coefficient degrees needed to reconstruct an ADE.
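As a dense baseline for comparison (our own sketch, not the superfast algorithm mentioned above), the following code solves a small simultaneous approximation problem over exact rationals: finding p, q of degree at most 2 with q(x)eˣ − p(x) = O(x⁵), which recovers the classical [2/2] Padé approximant of exp:

```python
from fractions import Fraction
from math import factorial

def nullvector(m):
    """One nonzero rational kernel vector of matrix m (assumes cols > rank)."""
    rows, cols = len(m), len(m[0])
    m = [row[:] for row in m]
    pivots, r = {}, 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        pivval = m[r][c]
        m[r] = [x / pivval for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                factor = m[i][c]
                m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        pivots[c] = r
        r += 1
    free = next(c for c in range(cols) if c not in pivots)
    v = [Fraction(0)] * cols
    v[free] = Fraction(1)
    for c, r_ in pivots.items():
        v[c] = -m[r_][free]
    return v

# Unknowns (p0, p1, p2, q0, q1, q2); row k is the coefficient of x^k
# in q(x)*exp(x) - p(x), required to vanish for k = 0..4.
E = [Fraction(1, factorial(k)) for k in range(5)]  # Taylor coefficients of exp
rows = []
for k in range(5):
    row = [Fraction(0)] * 6
    if k <= 2:
        row[k] = Fraction(-1)          # contribution of -p_k
    for j in range(min(k, 2) + 1):
        row[3 + j] = E[k - j]          # contribution of q_j * E_{k-j}
    rows.append(row)
v = nullvector(rows)
p, q = v[:3], v[3:]
```

Normalizing by q₀ yields p = 1 + x/2 + x²/12 and q = 1 − x/2 + x²/12, the [2/2] Padé approximant of eˣ; the structured algorithms of 43 compute such approximants far faster than this dense elimination.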

3.5 Seminumerical methods in computer algebra

The methods in this research axis deal directly with numbers but, following Knuth 114, they are properly called seminumerical because they lie on the borderline between symbolic and numeric computations. While numerical methods process numerical data and generate further numerical data, our seminumerical methods process exact data, generate high-precision numerical data and reconstruct exact data. In this perspective, the basic unit is not the IEEE-754 floating-point number, but arbitrary precision numbers, typically several thousand decimal places, sometimes more. The crux is not numerical stability, but computational complexity as the number of significant digits goes to infinity. When a number is known at such a high precision, it reveals fundamental structures: rationality, algebraicity, relations with other constants, etc. High-precision computation is a recurring useful tool in the field of experimental mathematics  40. In some situations, it enables a guess-and-prove approach. In some others, we are unable to step from “guess” to “prove” but overwhelming numerical evidence is enough to shape a conviction. A celebrated example is the experimental discovery of the BBP formula for π  41 (that was proved after its initial guessing). More recently, all the conjectures (some of which became theorems) about multiple zeta values, a hot topic in number theory and mathematical physics, start from high-precision numerical data.

3.5.1 Seminumerical algorithms for linear differential equations

We promote linear differential equations as a data structure to represent and compute with functions (see §3.1). In truth, this data structure represents functions up to finitely many constants. It determines a global behavior but misses the pointwise aspect. Seminumerical methods combine both. They are an important tool for experimental mathematics because they can give strong indications about the nature of a function in very general situations (see §3.3.1).


Alexandre Goyer and Raphaël Pagès started PhD theses on the factorization of differential operators. Factorization is a fundamental operation for solving linear differential equations, or, at least, for elucidating the nature of their solutions. Goyer considers seminumerical methods. They rely on numerical evaluations of the solutions of the differential operators to guess a factorization numerically. High precision makes it possible to reconstruct the factors exactly, and a simple multiplication certifies the computation. Pagès considers a discrete analogue of numerical evaluation: reduction modulo a prime number.

Effective analytic continuation.

The main tool for computing high-precision evaluations of functions or integrals is effective analytic continuation of solutions of linear differential equations. It is a form of numerical ODE solver, specialized for linear equations and able to carry out high precision all along the continuation path.

Numerical ODE solvers are a very classical topic in numerical analysis 70, with popular methods like Runge–Kutta or multistep methods. A much less known family of symbolic-numeric algorithms, which we could call rigorous Taylor methods, originates from works of the Chudnovskys in the 1980s and 1990s 76, 75 and was later developed by van der Hoeven 108 and Mezzarobba 125, 126. This family of algorithms only handles linear ODEs with polynomial coefficients, which is precisely the nature of the ODEs arising in the context of this document. But contrary to classical methods, they provide very strong guarantees even in difficult situations, especially rigorous error bounds and correct behavior at singular points, all very desirable features in experimental mathematics. Furthermore, they feature a quasi-optimal complexity with respect to precision, meaning that one can easily compute with thousands of digits of precision: computing twice as many digits takes roughly twice as much time. This contrasts with fixed-order methods, which cannot reach such precision. For example, to compute 10,000 digits, the classical order-four Runge–Kutta method, whose global error scales as the fourth power of the step size, would typically need a step size around 10⁻²⁵⁰⁰, that is, about 10²⁵⁰⁰ steps. This quest for precision is crucial in experimental mathematics and theoretical physics 40.

Yet, as advanced as these algorithms may be, they struggle with the huge ODEs coming from our applications. The reason is easily explained: most algorithms and implementations are designed for small operators and large precision, and focus on a quasilinear complexity with respect to precision. Our situation is quite the opposite, with large ODEs and comparatively modest precision. It may be interesting to consider quadratic-time algorithms with respect to precision, if the complexity with respect to the size of the ODE gets better. This is a genuinely blocking issue that must be addressed to enable new applications. To solve the problem, we will endeavor to provide new software that takes care to implement algorithms suited to all regimes of degrees and orders at moderate precision.

3.5.2 Period computation

Periods are numerical integrals that can be computed to high precision with symbolic-numeric integration, even though current algorithms are far from sufficient to tackle real applications in algebraic geometry beyond the case of curves. Algorithms for computing periods of curves are mature 83, 129, 127, 80, 68 and have been used, for example, for the computation of the endomorphism ring of genus 2 curves in the LMFDB 81. Algorithms in higher dimension are only emerging 89, 82, 140. Their current status does not make them suitable for many applications. Firstly, they are limited in generality. The articles 89, 82 deal with special double coverings of ℙ² or ℙ³, at low precision, while 140 deals with smooth projective hypersurfaces. In terms of efficiency, we are only able to treat some lucky quartic surfaces (and some very special quintic surfaces or cubic threefolds) for which the underlying ODEs are not too big.

With current methods, we managed to compute the periods of 180 000 quartic surfaces defined by sparse polynomials 119. This corpus of quartic surfaces was discovered by a random walk. Indeed, we are not able to compute, in a reasonable amount of time, the periods of an arbitrarily given quartic surface, so we resorted to a random walk guided by ease of computation. This severely hinders applicability. Yet, this shows the feasibility of transcendental continuation to obtain algebraic invariants that are currently unreachable by any other means.

The seminumerical algorithms that we develop open perspectives in algebraic geometry. Certain integrals of algebraic origin, called periods, encode interesting algebraic invariants. High-precision computation may unravel them where purely algebraic methods fail 119. These algebraic invariants are crucial to determine the fine structure of algebraic varieties. We aim to design algorithms that compute periods efficiently for varieties of general interest, in particular K3 surfaces, quintic surfaces, Calabi–Yau threefolds and cubic fourfolds.
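To convey the flavour of period computation in the mature one-dimensional case, here is a classical self-contained example (our own plain-Python illustration, far simpler than the cited algorithms): the real period 2∫₀¹ dx/√(1−x⁴) of the lemniscatic curve y² = 1 − x⁴ equals π/agm(1, √2) by a theorem of Gauss, and the arithmetic–geometric mean converges quadratically, so high precision is cheap.

```python
from decimal import Decimal, getcontext

def arctan_recip(n):
    """arctan(1/n) for an integer n >= 2, by its Taylor series."""
    x = Decimal(n)
    power = total = 1 / x
    k, sign = 1, -1
    eps = Decimal(10) ** -(getcontext().prec - 2)
    while power > eps:
        power /= x * x
        total += sign * power / (2 * k + 1)
        k, sign = k + 1, -sign
    return total

def lemniscate_period(d):
    """2*int_0^1 dx/sqrt(1-x^4) = pi / agm(1, sqrt(2)) (Gauss), to ~d digits."""
    getcontext().prec = d + 10
    pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)   # Machin's formula
    a, b = Decimal(1), Decimal(2).sqrt()
    for _ in range(d + 10):            # AGM: quadratic convergence
        if a == b:
            break
        a, b = (a + b) / 2, (a * b).sqrt()
    return pi / a
```

Only about log₂(d) AGM iterations are needed for d digits; the higher-dimensional algorithms discussed above replace this closed-form shortcut by numerical analytic continuation of Picard–Fuchs equations.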

3.5.3 Scattering amplitudes in quantum field theory

In quantum field theory, Feynman integrals appear when computing scattering amplitudes with perturbative methods. In practice, computing Feynman integrals is the most effective way to obtain predictions from a quantum field theory. Precise predictions require higher-order perturbative terms, leading to more complex integrals and daunting computational challenges. For example, 39 reports on the methods used, the difficulties encountered and the limitations met when performing precision calculations for teraelectronvolt collisions at the Large Hadron Collider (LHC).

As far as mathematics is concerned, Feynman integrals are periods. Although this makes the evaluation of Feynman integrals look like just a special case of symbolic-numeric integration, it would be naive to pretend that our methods apply without effort: it is clear that the computations are so challenging that only specialized methods may succeed. Current methods include sector decomposition 145 (where the integration domain is decomposed into smaller pieces on which traditional numerical integration algorithms perform well) and the use of differential equations 105 in a similar fashion to what we propose here, namely the symbolic computation of integrals with a parameter combined with numerical ODE solving. In the longer term, we expect that an efficient toolbox to deal with holonomic ideals would improve computations with Feynman integrals. It is however too early to say.

In the short term, the experimental mathematics toolbox that we want to develop may be useful to understand the geometry underlying some Feynman integrals. The typical outcome is simple analytic formulas 54, 53 allowing for fast and precise computations. In this context, identifying key algebraic invariants before engaging further mathematical thinking is crucial. For example, a key fact in the analysis of a three-loop graph in 53 is that the generic member of a certain family of K3 surfaces has Picard rank 19. Other graphs give rise to cubic fourfolds, which we cannot investigate numerically at the moment. An expected outcome of the previously exposed objectives is the computation of the periods of such varieties. This is a first step towards a more systematic development of this interface with high-energy physics.

3.5.4 Software

Solid software foundations for effective analytic continuation (see §3.5.1) will be important for the other tasks in this section. We currently use part of the package ore_algebra developed by Marc Mezzarobba, but it is a bottleneck for several algorithms. The plan for the software development (improvement of ore_algebra, or a whole new package) is not fixed yet: it depends on the nature of the algorithmic ideas that will emerge.

4 Application domains

As already expressed in §2.3, our natural application domains are:

  • Combinatorics,
  • Probability theory,
  • Number theory,
  • Algebraic geometry,
  • Statistical physics,
  • Quantum mechanics.

5 Highlights of the year

5.1 A whole French year 2023 dedicated to computer algebra

The year 2023 saw many international meetings in France, in the framework of the cycle of research schools and workshops Recent Trends in Computer Algebra 2023: 1 week in Luminy, 3 weeks in Lyon, and 6 weeks in Paris for the formal program, plus a 3-month presence at Institut Henri Poincaré while international visitors were present.

This has led to a huge involvement of the team, both because some of its members were among the organizers (Alin Bostan for the whole event and two weeks; Pierre Lairez for four weeks), and because most team members attended at least half of the 10 weeks of events.

The team has contributed almost 25 k€ to the funding of the events, thanks to the ERC project “10000 DIGITS” and to the ANR project De rerum natura. This has induced an increased activity of our assistant, Bahar Carabetta.

6 New results

Participants: Alin Bostan, Hadrien Brochet, Frédéric Chyzak, Guy Fayolle, Pierre Lairez, Rafael Mohr, Hadrien Notarantonio, Raphaël Pagès, Eric Pichon-Pharabod, Catherine St-Pierre, Sergey Yurkevich.

6.1 Effective algebraicity for solutions of systems of functional equations with one catalytic variable

In 18, Notarantonio and Yurkevich studied systems of n ≥ 1 discrete differential equations of order k ≥ 1 in one catalytic variable and provided a constructive and elementary proof of algebraicity of their solutions. This yielded effective bounds and a systematic method for computing the minimal polynomials. Their approach is a generalization of the pioneering work by Bousquet-Mélou and Jehanne (2006).

6.2 Fast algorithms for discrete differential equations

Discrete Differential Equations (DDEs) are functional equations that relate polynomially a power series F(t,u) in t with polynomial coefficients in a “catalytic” variable u and the specializations, say at u=1, of F(t,u) and of some of its partial derivatives in u. DDEs occur frequently in combinatorics, especially in map enumeration. If a DDE is of fixed-point type, then its solution F(t,u) is unique, and a general result by Popescu (1986) implies that F(t,u) is an algebraic power series. Constructive proofs of algebraicity for solutions of fixed-point type DDEs were proposed by Bousquet-Mélou and Jehanne (2006). In 2022, the team had initiated a systematic algorithmic study of such DDEs of order 1 in a collaboration with Sorbonne University. In 15, Bostan and Notarantonio, together with Safey El Din (Sorbonne U.), generalized this study to DDEs of arbitrary order. First, they proposed nontrivial extensions of algorithms based on polynomial elimination and on the guess-and-prove paradigm. Second, they designed two brand-new algorithms that exploit the special structure of the underlying polynomial systems. Last but not least, they reported on implementations that were able to solve highly challenging DDEs with a combinatorial origin.
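The fixed-point mechanism can be illustrated on a toy equation of our own with one catalytic variable and no u-derivatives (so much simpler than the equations treated in 15): F(t,u) = 1 + t·u·F(t,u)·F(t,1) determines F uniquely by t-adic iteration, each pass gaining one order in t.

```python
def dde_fixed_point(order):
    """Solve the toy fixed-point equation F(t,u) = 1 + t*u*F(t,u)*F(t,1)
    by t-adic iteration, up to and including the coefficient of t^order.
    F is stored as a list of u-polynomials: F[n][j] = [t^n u^j] F(t,u)."""
    F = [[1]] + [[0] for _ in range(order)]
    for _ in range(order):                 # each pass gains one order in t
        spec = [sum(poly) for poly in F]   # coefficients of F(t,1)
        new = [[0] * (n + 1) for n in range(order + 1)]
        new[0][0] = 1
        for a in range(order):
            for b in range(order - a):
                for j, c in enumerate(F[a]):
                    if c:
                        # the product contributes t^(a+b+1) u^(j+1) * c * spec[b]
                        new[a + b + 1][j + 1] += c * spec[b]
        F = new
    return F
```

At u = 1 the equation specializes to F = 1 + tF², so the coefficients of F(t,1) are the Catalan numbers 1, 1, 2, 5, 14, 42, …, which provides a sanity check of the iteration.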

6.3 Systems of discrete differential equations, constructive algebraicity of the solutions

In 32, Notarantonio and Yurkevich studied systems of n ≥ 1 (not necessarily linear) discrete differential equations (DDEs) of order k ≥ 1 with one catalytic variable. They provided a constructive and elementary proof of algebraicity of the solutions of such equations, generalizing the pioneering work by Bousquet-Mélou and Jehanne (2006), who settled the case n = 1. Moreover, Notarantonio and Yurkevich obtained effective bounds for the algebraicity degrees of the solutions and provided an algorithm for computing annihilating polynomials of the algebraic series. Finally, they carried out a first analysis in the direction of effectivity for solving systems of DDEs in view of practical applications.

6.4 Reflected Brownian motion in a non convex cone

Guy Fayolle, in collaboration with S. Franceschi (Télécom SudParis, Institut Polytechnique de Paris) and K. Raschel (CNRS, Université d'Angers, LAREMA), studies the stationary reflected Brownian motion in a non-convex wedge, which, compared to its convex analogue, has been much more rarely analyzed in the probabilistic literature. Two approaches are proposed for the three-quarter plane.

  1. In 94, it was proved that the stationary distribution can be found by solving a two-dimensional vector boundary value problem (BVP) on a single curve (a hyperbola) for the associated Laplace transform. The reduction to this kind of vector BVP seems to be quite new in the literature. As a matter of comparison, a single boundary condition is sufficient in the convex case. When the parameters of the model (drift, reflection angles and covariance matrix) are symmetric with respect to the bisector line of the cone, the model is reducible to a standard reflected Brownian motion in a convex cone. The authors additionally constructed a one-parameter family of distributions, which surprisingly provides for any wedge (convex or not) a particular example of a stationary distribution of a reflected Brownian motion.
  2. The main result in 9 is to show that the stationary distribution can be obtained by solving a boundary value problem of the same kind as the one encountered in the quarter plane, up to various dualities and symmetries. The idea is to start from Fourier (rather than Laplace) transforms, which allows one to obtain a functional equation for a single function of two complex variables.

6.5 A Markovian analysis of IEEE 802.11 broadcast transmission networks with buffering and back-off stages

Following up on their previous analysis 96, G. Fayolle and P. Mühlethaler analyzed in 29 the so-called back-off technique of the IEEE 802.11 protocol in broadcast mode with waiting queues. In contrast to existing models, packets arriving when a station (or node) is in back-off state are not discarded, but are stored in a buffer of infinite capacity. As in previous studies, the key point of their analysis hinges on the assumption that the time on the channel is viewed as a random succession of transmission slots, whose duration corresponds to the length of a packet, and mini-slots during which the back-off of the station is decremented. These events occur independently, with given probabilities. The state of a node is represented by a three-dimensional discrete-time Markov chain, formed by the back-off counter, the number of packets at the station, and the back-off stage. The stationary behaviour can be solved explicitly. In particular, Fayolle and Mühlethaler obtained stability (ergodicity) conditions and interpreted them in terms of maximum throughput.
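The chain analyzed in 29 is three-dimensional; as a minimal illustration of what "the stationary behaviour can be solved explicitly" means, here is a one-dimensional toy birth–death chain (our own example, with made-up parameters, not the model of the paper) whose stationary distribution follows from detailed balance and can be verified exactly with rational arithmetic.

```python
from fractions import Fraction

def stationary_birth_death(p, q, N):
    """Reflecting birth-death chain on {0,...,N}: from state i, go up with
    probability p (if i < N), down with q (if i > 0), else stay (p + q <= 1).
    Detailed balance pi[i]*p = pi[i+1]*q gives pi[i] proportional to (p/q)^i."""
    rho = Fraction(p) / Fraction(q)
    w = [rho ** i for i in range(N + 1)]
    total = sum(w)
    return [x / total for x in w]

def is_stationary(pi, p, q, N):
    """Exact check that pi * P = pi for the transition matrix P above."""
    p, q = Fraction(p), Fraction(q)
    for i in range(N + 1):
        up = p if i < N else Fraction(0)
        down = q if i > 0 else Fraction(0)
        inflow = pi[i] * (1 - up - down)   # probability of staying at i
        if i > 0:
            inflow += pi[i - 1] * p        # arrivals from below
        if i < N:
            inflow += pi[i + 1] * q        # arrivals from above
        if inflow != pi[i]:
            return False
    return True
```

The ergodicity condition of the toy model is visible in the geometric weights: the distribution concentrates near 0 exactly when p < q, a caricature of the throughput-based stability conditions obtained in the paper.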

6.6 On the representability of sequences as constant terms

A constant term sequence is a sequence of rational numbers whose n-th term is the constant term of P(x)^n Q(x), where P(x) and Q(x) are multivariate Laurent polynomials. While the generating functions of such sequences are invariably diagonals of multivariate rational functions, and hence special period functions, it is a famous open question, raised by Don Zagier, to classify those diagonals which are constant terms. In 4, Alin Bostan and Sergey Yurkevich, in collaboration with Armin Straub (University of South Alabama, USA), provided such a classification in the case of sequences satisfying linear recurrences with constant coefficients. They further considered the case of hypergeometric sequences and, for a simple illustrative family of hypergeometric sequences, classified those that are constant terms.
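A univariate toy instance of the notion (our own illustration; the classification in the paper concerns much wider classes): taking P(x) = x + 2 + 1/x = (1+x)²/x and Q = 1, the constant term of P(x)^n is the coefficient of x^n in (1+x)^(2n), i.e. the central binomial coefficient.

```python
def laurent_mul(p, q):
    """Multiply Laurent polynomials given as dicts {exponent: coefficient}."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return r

def constant_terms(P, N):
    """First N terms of the constant term sequence of P(x)^n (with Q = 1)."""
    terms, power = [], {0: 1}
    for _ in range(N):
        terms.append(power.get(0, 0))
        power = laurent_mul(power, P)
    return terms

# P(x) = x + 2 + 1/x, represented with exponents as keys:
P = {1: 1, 0: 2, -1: 1}
```

The generating function of the resulting sequence C(2n, n) is 1/√(1−4t), a diagonal of a bivariate rational function, in line with the general statement above.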

6.7 An algorithmic approach to Rupert's problem

A polyhedron 𝐏 ⊂ ℝ³ has Rupert's property if a hole can be cut into it such that a copy of 𝐏 can pass through this hole. Several works investigate this property for specific polyhedra: for example, it is known that all 5 Platonic and 9 out of the 13 Archimedean solids admit Rupert's property. A commonly believed conjecture states that every convex polyhedron is Rupert. In the paper 13, Sergey Yurkevich, in collaboration with Jakob Steininger (U. Vienna, Austria), proved that Rupert's problem is algorithmically decidable for polyhedra with algebraic coordinates. The authors also designed a probabilistic algorithm which can efficiently prove that a given polyhedron is Rupert. Using this algorithm they not only confirmed this property for the known Platonic and Archimedean solids, but also proved it for one of the remaining Archimedean polyhedra and many others. Moreover, they significantly improved on almost all known Nieuwland numbers and conjectured, based on statistical evidence, that the Rhombicosidodecahedron is in fact not Rupert.

6.8 Algebraicity of hypergeometric functions with arbitrary parameters

In 30, Sergey Yurkevich, together with Florian Fürnsinn (U. Vienna), provides a complete classification of the algebraicity of (generalized) hypergeometric functions with no restriction on the set of their parameters. Their characterization relies on the interlacing criteria of Christol (1987) and Beukers–Heckman (1989) for globally bounded and algebraic hypergeometric functions, but in a more general setting that allows arbitrary complex parameters with possibly integral differences. They also showcase the adapted criterion on a variety of examples.

6.9 Fast computation of the N-th term of a q-holonomic sequence and applications

In 1977, Strassen invented a famous baby-step/giant-step algorithm that computes the factorial N! in arithmetic complexity quasi-linear in √N. In 1988, the Chudnovsky brothers generalized Strassen's algorithm to the computation of the N-th term of any holonomic sequence in essentially the same arithmetic complexity. In 5, Alin Bostan and Sergey Yurkevich designed q-analogues of these algorithms. They first extended Strassen's algorithm to the computation of the q-factorial of N, then the Chudnovsky brothers' algorithm to the computation of the N-th term of any q-holonomic sequence. Both algorithms work in arithmetic complexity quasi-linear in √N; surprisingly, they are simpler than their analogues in the holonomic case. They provide a detailed cost analysis, in both the arithmetic and bit complexity models. Moreover, they describe various algorithmic consequences, including the acceleration of polynomial and rational solving of linear q-differential equations, and the fast evaluation of large classes of polynomials, including a family recently considered by Nogneng and Schost.
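For concreteness, here is the q-factorial itself, computed by the naive product of q-integers (a baseline of our own for definiteness, not the baby-step/giant-step algorithm of the paper, whose point is precisely to evaluate such objects much faster):

```python
def poly_mul(p, q):
    """Product of polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def q_factorial(n):
    """[n]_q! = prod_{k=1}^{n} (1 + q + ... + q^(k-1)), as a coefficient list.
    Naive quadratic-size computation; specializing q = 1 recovers n!."""
    result = [1]
    for k in range(1, n + 1):
        result = poly_mul(result, [1] * k)   # the q-integer [k]_q has k ones
    return result
```

For instance, [3]_q! = (1)(1+q)(1+q+q²) = 1 + 2q + 2q² + q³, and summing the coefficients (i.e. setting q = 1) gives 3! = 6.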

6.10 A short proof of a non-vanishing result by Conca, Krattenthaler and Watanabe

In their 2009 paper Regular sequences of symmetric polynomials, Aldo Conca, Christian Krattenthaler and Junzo Watanabe needed to prove, as an intermediate result, the fact that all terms of a family of binomial sums indexed by an integer h are non-zero, except for h=3. The proof in their paper (Appendix, pp. 190–199) performs a long and quite intricate 3-adic analysis. In 2, Alin Bostan proposes a shorter and elementary proof.

6.11 Persistence probabilities and Mallows-Riordan polynomials

Mallows-Riordan polynomials, sometimes also called inversion polynomials, form a family of polynomials with integer coefficients appearing in many counting problems in enumerative combinatorics. They are also connected with the cumulant generating function of the classical log-normal distribution in probability theory. In 1, Alin Bostan, together with his probabilist co-authors Gerold Alsmeyer (U. Münster), Kilian Raschel (CNRS, U. Angers) and Thomas Simon (U. Lille), provides a probabilistic interpretation of the Mallows-Riordan polynomials that is not only quite different from the classical connection with the log-normal distribution, but in fact also rather unexpected. More precisely, they establish exact formulae in terms of Mallows-Riordan polynomials for the persistence probabilities of a class of order-one autoregressive processes with symmetric uniform innovations. These exact formulae then lead to precise asymptotics of the corresponding persistence probabilities. The connection of the Mallows-Riordan polynomials with the volumes of certain polytopes is also discussed. Two further results provide general factorizations of AR(1) models with continuous symmetric innovations, one for negative and one for positive drift. The second factorization extends a classical universal formula of Sparre Andersen for symmetric random walks.

6.12 Continued fractions, orthogonal polynomials and Dirichlet series

Using an experimental mathematics approach, Alin Bostan together with Frédéric Chapoton (CNRS, IRMA Strasbourg) obtained in 23 new relations between the Dirichlet series for certain periodic coefficients and the moments of certain families of orthogonal polynomials. In addition to the classical hypergeometric orthogonal polynomials, of Racah type and continuous dual Hahn, a new similar family of orthogonal polynomials was discovered.

6.13 Minimization of differential equations and algebraic values of E-functions

Given a power series that is the solution of a linear differential equation with appropriate initial conditions, minimization consists in finding a non-trivial linear differential equation of minimal order having this power series as a solution. This problem exists in both homogeneous and inhomogeneous variants; it is distinct from, but related to, the classical problem of factorization of differential operators. Recently, minimization has found applications in transcendental number theory, more specifically in the computation of non-zero algebraic points where Siegel's E-functions take algebraic values. In 3, Alin Bostan, together with Bruno Salvy (Inria, ENS Lyon) and Tanguy Rivoal (CNRS, IF Grenoble), presents algorithms for these questions and discusses implementation and experiments.

6.14 A sharper multivariate Christol's theorem with applications to diagonals and Hadamard products

In 21, Alin Bostan, together with Boris Adamczewski (ICJ, Lyon) and Xavier Caruso (IMB, Bordeaux), provide a new proof of the multivariate version of Christol's theorem about algebraic power series with coefficients in finite fields, as well as of its extension to perfect ground fields of positive characteristic obtained independently by Denef and Lipshitz, Sharif and Woodcock, and Harase. Their new proof is elementary, effective, and allows for much sharper estimates. They discuss various applications of such estimates, in particular to a problem raised by Deligne concerning the algebraicity degree of reductions modulo p of diagonals of multivariate algebraic power series with integer coefficients.
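A classical manifestation of the automaticity underlying Christol's theorem (a standard illustration of ours, not a result of the article) is Lucas' theorem: binomial coefficients modulo a prime p are obtained digitwise in base p, so reductions mod p of sequences like the central binomial coefficients depend on n only through its base-p digits.

```python
from math import comb

def binom_mod_p(n, k, p):
    """C(n, k) mod p via Lucas' theorem: multiply the binomial coefficients
    of the base-p digits of n and k (zero as soon as a digit of k exceeds
    the corresponding digit of n)."""
    result = 1
    while n or k:
        result = result * comb(n % p, k % p) % p
        n, k = n // p, k // p
    return result
```

For instance, C(2n, n) mod p can be read off the base-p digits of n, which is exactly the kind of finite-state (automatic) behaviour that Christol's theorem equates with algebraicity over the prime field.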

6.15 Algebraic solutions of linear differential equations: an arithmetic approach

Given a linear differential equation with coefficients in ℚ(x), an important question is to know whether its full space of solutions consists of algebraic functions, or at least whether one of its specific solutions is algebraic. These questions are treated in 22 by Alin Bostan, together with Xavier Caruso (IMB, Bordeaux) and Julien Roques (ICJ, Lyon). After presenting motivating examples coming from various branches of mathematics, they advertise in an elementary way a beautiful local-global arithmetic approach to these questions, initiated by Grothendieck in the late sixties. This approach has deep ramifications and leads to the still unsolved Grothendieck–Katz p-curvature conjecture.

6.16 Refined product formulas for Tamari intervals

In 24, Alin Bostan and Frédéric Chyzak, together with Vincent Pilaud (CNRS & LIX, Palaiseau), provided short product formulas for the f-vectors of the canonical complexes of the Tamari lattices and of the cellular diagonals of the associahedra.

6.17 Beating binary powering for polynomial matrices

The Nth power of a polynomial matrix of fixed size and degree can be computed by binary powering as fast as multiplying two polynomials of linear degree in N. When the Fast Fourier Transform (FFT) is available, the resulting complexity is softly linear in N, i.e. linear in N with extra logarithmic factors. In 14, Alin Bostan and Sergey Yurkevich, together with Vincent Neiger (Sorbonne U.), show that it is possible to beat binary powering, with an algorithm whose complexity is purely linear in N, even in the absence of FFT. The key result making this improvement possible is that the entries of the Nth power of a polynomial matrix satisfy linear differential equations with polynomial coefficients whose orders and degrees are independent of N. Similar algorithms are proposed for two related problems: computing the Nth term of a C-finite sequence of polynomials, and modular exponentiation to the power N for bivariate polynomials.
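For reference, the classical binary-powering baseline that the paper improves on can be sketched with naive coefficient-list arithmetic (our own naive implementation; the paper's algorithm instead derives, for each entry, a differential equation whose order and degree do not grow with N):

```python
from functools import reduce

def pmul(p, q):
    """Product of polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    """Sum of polynomials given as coefficient lists."""
    r = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for j, b in enumerate(q):
        r[j] += b
    return r

def mat_mul(A, B):
    n = len(A)
    return [[reduce(padd, (pmul(A[i][k], B[k][j]) for k in range(n)))
             for j in range(n)] for i in range(n)]

def mat_pow(A, N):
    """N-th power of a square polynomial matrix by binary powering."""
    n = len(A)
    R = [[[1] if i == j else [0] for j in range(n)] for i in range(n)]
    while N:
        if N & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        N >>= 1
    return R
```

For A = [[x, 1], [1, 0]], the entries of A^N are continuant polynomials, and evaluating at x = 1 (summing coefficients) recovers Fibonacci numbers, which gives an easy correctness check.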

6.18 On the q-analogue of Pólya's theorem

In 6, Alin Bostan and Sergey Yurkevich answer a question posed by Michael Aissen in 1979 about the q-analogue of a classical theorem of George Pólya (1922) on the algebraicity of (generalized) diagonals of bivariate rational power series. In particular, they prove that the answer to Aissen’s question, in which he considers q as a variable, is negative in general. Moreover, they show that when q is a complex number, the answer is positive if and only if q is a root of unity.

6.19 Positivity certificates for linear recurrences

In 17, Alaa Ibrahim, together with Bruno Salvy (ARIC team), consider linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue. They give an algorithm that proves or disproves positivity of solutions provided the initial conditions satisfy a precisely defined genericity condition. For positive sequences, the algorithm produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence.

6.20 A signature-based algorithm for computing the nondegenerate locus of a polynomial system

Polynomial system solving arises in many application areas to model non-linear geometric properties. In such settings, polynomial systems may come with degenerate solution components which the end-user wants to exclude from the solution set. The nondegenerate locus of a polynomial system is the set of points where the codimension of the solution set matches the number of equations. Computing the nondegenerate locus is classically done through ideal-theoretic operations in commutative algebra, such as ideal saturations or equidimensional decompositions, to extract the component of maximal codimension. By exploiting the algebraic features of signature-based Gröbner basis algorithms, the authors designed an algorithm which computes a Gröbner basis of the equations describing the closure of the nondegenerate locus of a polynomial system, without first computing a Gröbner basis for the whole polynomial system.

This is a work of Pierre Lairez, together with Christian Eder, Rafael Mohr and Mohab Safey El Din 8.

6.21 Algorithms for minimal Picard-Fuchs operators of Feynman integrals

In even space-time dimensions, multi-loop Feynman integrals are integrals of rational functions in projective space. Using an algorithm that extends the Griffiths–Dwork reduction to the case of projective hypersurfaces with singularities, the authors derive Fuchsian linear differential equations, the Picard–Fuchs equations, with respect to kinematic parameters for a large class of massive multi-loop Feynman integrals. With this approach the authors obtain the differential operators for Feynman integrals at high multiplicities and high loop orders. Using recent factorisation algorithms, they give the minimal-order differential operator in most of the cases studied in the paper. Amongst their results is that the order of the Picard–Fuchs operator for the generic massive two-point (n−1)-loop sunset integral in two dimensions is 2^n − (n+1 choose ⌊(n+1)/2⌋), supporting the conjecture that the sunset Feynman integrals are relative periods of Calabi–Yau varieties of dimension n−2. The authors have checked this explicitly up to six loops. They also obtain a particular Picard–Fuchs operator of order 11 for the massive five-point tardigrade non-planar two-loop integral in four dimensions for generic mass and kinematic configurations, suggesting that it arises from a K3 surface with Picard number 11. The authors further determine Picard–Fuchs operators of two-loop graphs with various multiplicities in four dimensions, finding Fuchsian differential operators with either Liouvillian or elliptic solutions.

This is a work of Pierre Lairez, together with Pierre Vanhove, published in 2023 12.

6.22 Factoring differential operators on algebraic curves in positive characteristic

The factorisation of linear differential operators and systems in positive characteristic has been studied before, by van der Put 138 for operators and by Cluzeau 79 for systems, both making use of the p-curvature to obtain a first factorisation. Unfortunately, the p-curvature alone is not enough to completely factor operators as a product of irreducible differential operators when its characteristic polynomial is not squarefree.

In particular, up to this point, no algorithm was known to factor central operators in polynomial time.

In 132, Raphaël Pagès presented a refinement of this method to factor differential operators on algebraic curves in positive characteristic, relying on tools from algebraic geometry such as Riemann–Roch spaces and the degree-0 Picard group, which should solve this issue. Additionally, this work should allow for a better control of the size of the output of the factorisation.

R. Pagès will defend his PhD in Spring 2024 and is preparing a corresponding submission.

6.23 Reduction-based creative telescoping for definite summation of D-finite functions

Creative telescoping is an algorithmic method introduced by Zeilberger in the 1990s to compute parametrized definite sums. It proceeds by synthesizing special summands, called certificates, which have the specific property of telescoping; correspondingly, this determines a recurrence equation, called the telescoper, satisfied by the definite sum. Hadrien Brochet and Bruno Salvy (ARIC team) described a creative telescoping algorithm that computes telescopers for definite sums of D-finite sequences, as well as the associated certificates in a compact form. Their algorithm relies on a discrete analogue of the generalized Hermite reduction, or equivalently, on a generalization of the Abramov–Petkovšek reduction. They provide a Maple implementation with good timings on a variety of examples. An article was submitted this year 26.
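The telescoper/certificate vocabulary can be seen on the simplest possible example (a hand-made hypergeometric toy of ours, far from the D-finite generality of the algorithm above): for the summand f(n,k) = C(n,k) and the sum S(n) = Σ_k f(n,k), the certificate g(n,k) = C(n,k−1) telescopes, establishing the telescoper S(n+1) − 2S(n) = 0, i.e. S(n) = 2^n.

```python
from math import comb

def f(n, k):
    return comb(n, k)                         # the summand

def g(n, k):
    return comb(n, k - 1) if k >= 1 else 0    # the certificate

def check_telescoping(N):
    """Verify f(n+1,k) - 2*f(n,k) = g(n,k) - g(n,k+1) for 0 <= n < N.
    Summing over k, the right-hand side telescopes to 0, so S(n+1) = 2*S(n)."""
    for n in range(N):
        for k in range(n + 3):
            if f(n + 1, k) - 2 * f(n, k) != g(n, k) - g(n, k + 1):
                return False
    return True
```

The identity follows from Pascal's rule C(n+1,k) = C(n,k) + C(n,k−1); the algorithm of 26 finds such pairs (telescoper, certificate) automatically for general D-finite input, with the certificate kept in a compact form.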

6.24 p-adic algorithm for bivariate Gröbner bases

Catherine St-Pierre and Éric Schost (U. Waterloo) presented a p-adic algorithm to recover the lexicographic Gröbner basis 𝒢 of an ideal in ℚ[x,y] given by a generating set in ℤ[x,y]. The algorithm has a complexity that is less than cubic in the dimension of ℚ[x,y]/⟨𝒢⟩ and softly linear in the height of its coefficients. Schost and St-Pierre observed that previous results of Lazard, which use Hermite normal forms to compute Gröbner bases of ideals with two generators, can be generalised to a set of any finite number of generators. They used this result to obtain a bound on the height of the coefficients of 𝒢, and to control the probability of choosing a good prime p to build the p-adic expansion of 𝒢.
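A core subroutine in such p-adic methods is rational reconstruction: recovering a fraction from its image modulo p (or a prime power) by a half-extended Euclidean algorithm. Here is a minimal sketch of this standard textbook routine (our illustration of the general technique, not the authors' code):

```python
from math import gcd, isqrt

def rational_reconstruction(a, m):
    """Recover (n, d) with n/d = a (mod m), gcd(n, d) = 1 and d > 0,
    assuming |n| and d are at most sqrt(m/2).  The remainders r_i of the
    Euclidean algorithm on (m, a) satisfy r_i = s_i * a (mod m); the first
    remainder below the bound yields the numerator and denominator."""
    bound = isqrt(m // 2)
    r0, r1, s0, s1 = m, a % m, 0, 1
    while r1 > bound:
        qt = r0 // r1
        r0, r1, s0, s1 = r1, r0 - qt * r1, s1, s0 - qt * s1
    n, d = (r1, s1) if s1 > 0 else (-r1, -s1)
    if d == 0 or gcd(n, d) != 1 or (n - a * d) % m:
        raise ValueError("no reconstruction within the bound")
    return n, d
```

For example, the image of 22/7 modulo a large prime is a huge-looking residue, yet the routine recovers the pair (22, 7); in a p-adic Gröbner basis computation the same mechanism turns the p-adic expansion of each coefficient back into a rational number once enough precision is reached.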

A long version of this manuscript 19 has been submitted.

6.25 Algorithms in intersection theory in the plane

In her PhD thesis, Catherine St-Pierre presented an algorithm to find the local structure of intersections of plane curves. More precisely, she addressed the question of describing the scheme of the quotient ring of a bivariate zero-dimensional ideal I ⊆ 𝕂[x,y], i.e. finding the points (maximal ideals of 𝕂[x,y]/I) and describing the regular functions on those points. A natural way to address this problem is via Gröbner bases, as they reduce the problem of finding the points to a problem of factorisation, and the sheaf of rings of regular functions can be studied with those bases through the division algorithm and localisation. Let I ⊆ 𝕂[x,y] be an ideal generated by a finite subset of 𝔸[x,y], where 𝔸 ⊆ 𝕂 and 𝕂 is a field. St-Pierre and Éric Schost (U. Waterloo) presented an algorithm with quadratic convergence to find a Gröbner basis of I or of its primary component at the origin. The thesis presents an 𝔪-adic Newton iteration to lift the lexicographic Gröbner basis of any finite intersection of zero-dimensional primary components of I, if 𝔪 ⊆ 𝔸 is a good maximal ideal. It relies on a structural result about the syzygies in such a basis due to Conca and Valla 33, from which arises an explicit map between ideals in a stratum (or Gröbner cell) and points in the associated moduli space. The thesis also qualifies what makes a maximal ideal 𝔪 suitable for this filtration. When the field 𝕂 is large enough, endowed with an Archimedean or ultrametric valuation, and admits a fraction reconstruction algorithm, this result is used to give a complete 𝔪-adic algorithm to recover 𝒢, the Gröbner basis of I. Previous results of Lazard, which use Hermite normal forms to compute Gröbner bases of ideals with two generators, are generalised to a set of n generators. This result is then used to obtain a bound on the height of the coefficients of 𝒢 and to control the probability of choosing a good maximal ideal 𝔪 ⊆ 𝔸 to build the 𝔪-adic expansion of 𝒢.
Inspired by Pardue 133, a constructive proof is given to characterise a Zariski open set of GL₂(𝕂) (acting on 𝕂[x,y]) whose elements change coordinates in such a way as to ensure that the initial term ideal of a zero-dimensional ideal I becomes Borel-fixed when |𝕂| is sufficiently large. This sharpened her analysis to obtain, when 𝔸 = ℤ or 𝔸 = k[t], a complexity less than cubic in the dimension of 𝕂[x,y]/⟨𝒢⟩ and softly linear in the height of the coefficients of 𝒢. The resulting method is also adapted to extract the ⟨x,y⟩-primary component of I. She also discussed the transition towards other primary components via linear mappings, called untangling and tangling, introduced by van der Hoeven and Lecerf 84. The two maps form an isomorphism used to find points with an isomorphic local structure and, at the origin, to bind them. Together with Hyun, Melczer and Schost (U. Waterloo), she gave a slightly faster tangling algorithm and discussed new applications of these techniques. They showed how to extend these ideas to bivariate settings and gave a bound on the arithmetic complexity for certain algebras.
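The quadratic convergence at the heart of such 𝔪-adic lifting is the same phenomenon as in classical Hensel/Newton lifting of a simple root, which fits in a few lines (a much simpler cousin of the Gröbner basis lifting above, shown for illustration only):

```python
def hensel_lift(f, df, x0, p, k):
    """Lift a simple root x0 of f modulo p to a root modulo p^(2^k).
    Each Newton step x <- x - f(x)/f'(x) doubles the p-adic precision
    (quadratic convergence), provided f'(x0) is invertible modulo p."""
    x, modulus = x0 % p, p
    for _ in range(k):
        modulus *= modulus
        x = (x - f(x) * pow(df(x), -1, modulus)) % modulus
    return x, modulus
```

For example, lifting the root 3 of t² − 2 modulo 7 for five steps yields a square root of 2 modulo 7^32, i.e. 32 p-adic digits of precision from 5 Newton steps.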

The thesis work was carried out at the University of Waterloo, and the writing was completed during St-Pierre's time with MATHEXP. Catherine St-Pierre successfully defended in April 2023.

6.26 Motivic geometry of 2-loop Feynman integrals

The tardigrade graph is a two-loop Feynman graph. It describes a family of K3 surfaces, obtained as quartic hypersurfaces in ℙ³ with, generically, six A1 singularities. By means of a Lefschetz fibration of the hypersurface, Eric Pichon-Pharabod, together with Charles Doran, Andrew Harder and Pierre Vanhove, provided in 28 a description of the homology of the K3 surface that is explicit enough to perform numerical integration on. In particular, this allows computing numerical approximations of the periods of the K3 surface with rigorous bounds, with sufficient precision to recover some algebraic invariants of the generic family. The authors recovered the generic Picard rank of this family of K3 surfaces. Furthermore, they obtained an explicit embedding of the Picard lattice in the full homology lattice, in which the components of the singular fibres are identified.

6.27 Estimating major merger rates and spin parameters ab initio via the clustering of critical events

At large scales, the distribution of mass in space takes a web-like structure consisting of nodes connected by filaments, themselves connected by walls, separated by voids. This structure and its evolution in time have an impact on the zoology of galaxies one may find at a given location. In 27, Eric Pichon-Pharabod, together with Corentin Cadiou, Christophe Pichon and Dmitri Pogosyan, provided a model to predict this evolution ab initio. They applied it to recover the probability distribution function of the satellite-merger separation and the distribution of mergers with respect to peak rarity, and they analysed the typical spin brought in by mergers.

6.28 Effective homology and periods of complex projective hypersurfaces

In 31, Pierre Lairez and Eric Pichon-Pharabod, together with Pierre Vanhove (IPhT), provided an algorithm, relying on Picard-Lefschetz theory, that computes an effective description of the homology of complex projective hypersurfaces. They then used this description to compute high-precision numerical approximations of the periods of the hypersurface. This improves over existing algorithms: the new method computes the periods of a smooth quartic surface in an hour on a laptop, which could not be attained with previous methods. The general theory presented in this paper generalises to varieties other than hypersurfaces, such as elliptic fibrations, as showcased on an example coming from Feynman graphs. The algorithm comes with a SageMath implementation.
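The varieties handled by this algorithm are far beyond hand computation, but the basic principle, integrating an algebraic differential form numerically and matching the result against a closed form, can be illustrated on the smallest example: the real period of the elliptic curve y² = x³ − x equals Γ(1/4)²/√(2π). The sketch below is a didactic toy using only the Python standard library, not the authors' SageMath implementation:

```python
import math

# Real period of the elliptic curve y^2 = x^3 - x:
#   Omega = 2 * int_1^oo dx / sqrt(x^3 - x).
# The substitutions x = 1/u^2 and then u = sin(t) remove the
# singularities and give Omega = 4 * int_0^{pi/2} dt / sqrt(1 + sin(t)^2).

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n subintervals (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

omega = 4 * simpson(lambda t: 1 / math.sqrt(1 + math.sin(t)**2), 0.0, math.pi / 2)

# Closed form (lemniscatic case): Gamma(1/4)^2 / sqrt(2*pi).
closed = math.gamma(0.25)**2 / math.sqrt(2 * math.pi)

print(omega, closed)  # both values are approximately 5.2441151086
```

Recognising such closed forms from high-precision numerical values (via integer relation algorithms such as LLL or PSLQ) is precisely what makes rigorous period computation useful for recovering algebraic invariants.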

6.29 A direttissimo algorithm for equidimensional decomposition

In 16, Pierre Lairez and Rafael Mohr, together with Christian Eder (U. Kaiserslautern) and Mohab Safey El Din (Sorbonne U.), described a recursive algorithm that decomposes an algebraic set into locally closed equidimensional sets, i.e., sets whose irreducible components all have the same dimension. At the core of this algorithm, they combined ideas from the theory of triangular sets, a.k.a. regular chains, with Gröbner bases to encode and work with locally closed algebraic sets. Equipped with this encoding, their algorithm avoids projections of the algebraic sets being decomposed, as well as certain genericity assumptions frequently made when decomposing polynomial systems, such as assumptions about Noether position. It thus produces fine decompositions on more structured systems, where enforcing genericity assumptions often destroys the structure of the system at hand. Practical experiments demonstrate its efficiency compared to state-of-the-art implementations.
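The notion of equidimensional decomposition can be illustrated on a toy example with SymPy (a generic textbook computation, not the authors' signature-based implementation). The algebraic set V(xz, yz) is the union of the plane V(z), of dimension 2, and the line V(x, y), of dimension 1, so it is not equidimensional; the classical t-trick for ideal intersection, I ∩ J = (tI + (1−t)J) ∩ k[x,y,z], verifies that these two equidimensional components recombine into the original ideal:

```python
from sympy import symbols, groebner

t, x, y, z = symbols('t x y z')

# V(x*z, y*z) is the union of the plane V(z) (dimension 2)
# and the line V(x, y) (dimension 1): not equidimensional.
# Its equidimensional components correspond to the ideals <z> and
# <x, y>, whose intersection must give back <x*z, y*z>.
# Elimination trick: I ∩ J = (t*I + (1-t)*J) ∩ k[x, y, z].
gens = [t * z, (1 - t) * x, (1 - t) * y]
G = groebner(gens, t, x, y, z, order='lex')  # lex with t first eliminates t

intersection = [g for g in G.exprs if not g.has(t)]
print(intersection)  # the generators x*z and y*z
```

The algorithm of 16 goes in the opposite direction, recovering such components directly from the input system without generic auxiliary constructions.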

6.30 Integer sequences, differential operators and algebraic power series

Sergey Yurkevich successfully completed his PhD studies in July 2023. The defense of his thesis 20 took place in Vienna; the committee consisted of Andreas Čap (president), Gilles Villard and Wadim Zudilin (reviewers), and Alin Bostan, Stéphane Fischler and Herwig Hauser (examiners).

7 Partnerships and cooperations

7.1 International initiatives

7.1.1 Participation in other International Programs


Participants: Alin Bostan (PI), Frédéric Chyzak.

  • Title:
    Efficient Algorithms for Guessing, inequaLitiEs and Summation
  • Partner Institution(s):
    • JKU (Linz), Austria
    • RICAM (Linz), Austria
    • Inria (Saclay), France
    • Sorbonne Univ. (Paris), France
  • Date/Duration: March 2023 – March 2027
  • Additional info/keywords:
This is a PRCI ANR/FWF project between two computer algebra teams in France and two computer algebra teams in Austria. The Austrian PI is Manuel Kauers from JKU Linz. The goal is to work together on four axes: structured and multivariate guessing; inequalities and D-finiteness; creative telescoping; and applications in combinatorics, number theory and theoretical physics. The total funding obtained is 770,000 euros, a major part of which will be used to fund 4 PhD theses.
PhD project of Yurkevich
  • Title:
    Integer sequences, algebraic series and differential operators.
  • Partner Institution(s):
    • University of Vienna, Austria.
  • Date/Duration: September 2020 – August 2023
  • Additional info/keywords:
    The PhD thesis project of Sergey Yurkevich is a cotutelle with the University of Vienna (Austria). The supervisors are Alin Bostan on the French side and Herwig Hauser on the Austrian side. The investigation topic covers on the one hand integer sequences naturally arising in various scientific disciplines such as number theory, combinatorics and physics, and on the other hand solutions to special kinds of differential equations.

7.2 European initiatives

7.2.1 Horizon Europe

  • ERC Starting Grant “10000 DIGITS”. This project, led by Pierre Lairez, spans five years starting in April 2022. It funds three PhD theses and three 2-year postdoctoral positions. Its goal is to develop algorithms and software to compute, with high precision, integrals of geometric origin, especially periods of algebraic varieties, and to tackle applications in Diophantine approximation, quantum field theory, and optimization.
  • Postdoctoral fellowship from MathInGreaterParis. Claudia Fevola obtained a two-year postdoctoral fellowship hosted by MATHEXP and funded by the MathInGreaterParis Fellowship Programme, cofunded by the Marie Skłodowska-Curie Actions (H2020-MSCA-COFUND-2020).

7.3 National initiatives

  • De rerum natura. This project, set up by the team, was accepted this year and will be funded until 2024. It gathers over 20 experts from four fields: computer algebra; the Galois theories of linear functional equations; number theory; and combinatorics and probability. Our goal is to obtain classification algorithms for number theory and combinatorics, in particular for deciding irrationality and transcendence. (Permanent members with pm listed: Bostan, Chyzak (PI), Lairez.)
  • ∂ifference. This project, led by Olivier Bournez (LIX), started in November 2020. Its objective is a novel approach in between two worlds: discrete-oriented computations on one side and differential equations on the other. It aims at providing new insights into classical complexity theory, computability and logic through this prism, and at introducing new perspectives into algorithmic methods for solving differential equations and into computer science applications. (Permanent members with pm listed: Bostan, Chyzak.)

7.4 Regional initiatives

  • Alin Bostan is co-leader of the Amadeus (Campus France) bilateral project “Integer Sequences arising in Number Theory, Combinatorics and Physics” between France and Austria. The Austrian co-leader is Herwig Hauser (U. Vienna, Austria).

8 Dissemination

Participants: Alin Bostan, Hadrien Brochet, Frédéric Chyzak, Guy Fayolle, Alexandre Guillemot, Alaa Ibrahim, Pierre Lairez, Hadrien Notarantonio, Raphaël Pagès, Eric Pichon-Pharabod, Sergey Yurkevich.

8.1 Promoting scientific activities

8.1.1 Scientific events: organisation

General chair, scientific chair
  • Alin Bostan is part of the Scientific advisory board of the conference series Effective Methods in Algebraic Geometry (MEGA).
  • Pierre Lairez is part of the executive committee of the 2024 edition of the conference series Effective Methods in Algebraic Geometry (MEGA).
  • Since 2020, and for a period of 5 years, Alin Bostan has been a member of the steering committee of the Journées Nationales de Calcul Formel (JNCF), the annual meeting of the French computer algebra community.
  • Alin Bostan is part of the scientific committee of the conference Combinatorics and Arithmetic for Physics: Tenth Anniversary Edition held at IHES from Nov. 15–17, 2023.
  • Alin Bostan is part of the scientific committee of the GDR EFI (“Functional Equations and Interactions”) dependent on the mathematical institute (INSMI) of CNRS. The goal of this GDR is to bring together various research communities in France working on functional equations in fields of computer science and mathematics.
Member of the organizing committees

8.1.2 Scientific events: selection


8.1.3 Journal

Member of the editorial boards
Reviewer - reviewing activities

8.1.4 Invited talks

8.1.5 Scientific expertise

  • Alin Bostan was part of the hiring committee for an Assistant Professor (Maître de Conférences) position in computer science at UGA-LJK, Grenoble U., March – April, 2023.
  • Alin Bostan was part of the hiring committee for an Assistant Professor (Maître de Conférences) position in computer science at LIP6, Sorbonne U., April – May, 2023.
  • Frédéric Chyzak has been a member of a recruitment committee for a full-time Monge assistant professor in computer science at École Polytechnique.

8.1.6 Research administration

  • Alin Bostan is one of the two members of the scientific search committee of the Inria Saclay Center.
  • Frédéric Chyzak is a member of the commission of users of computer resources (CUMI) at the Inria Saclay Center and of the commission managing the mentoring program at the Inria Saclay Center.
  • Frédéric Chyzak is also a mentor in the mentoring program.
  • Since 2023, Frédéric Chyzak has been an elected member of the evaluation commission (CE) of Inria.
  • Guy Fayolle is scientific advisor and associate researcher at the Centre for Robotics (Mines Paris PSL).
  • Guy Fayolle is a member of the working group WG 7.3: Computer System Modeling of the International Federation for Information Processing (IFIP).
  • Hadrien Notarantonio is one of the 8 elected representatives of the doctoral school EDMH.

8.2 Teaching - Supervision - Juries

  • Bachelor:
    • Hadrien Brochet, Topology and multivariable calculus (MAA202), 64h, 3rd semester, Bachelor Polytechnique, France.
    • Alaa Ibrahim, Cours d'analyse, 48h, L1, Université de Lyon 1, France.
    • Alexandre Guillemot, Linear Algebra (MAA101), 64h, 1st semester, Bachelor Polytechnique, France.
    • Hadrien Notarantonio, Structure de données (LU2IN006), 64h, 2nd semester, Sorbonne Université, France.
    • Raphaël Pagès, Mathématiques générales and Outils Mathématiques, 90h, L1, Université de Bordeaux, France.
    • Eric Pichon-Pharabod, Topology and multivariable calculus (MAA202), 64h, 3rd semester, Bachelor Polytechnique, France.
    • Eric Pichon-Pharabod, Introduction to Analysis (MAA102), 64h, 1st semester, Bachelor Polytechnique, France.
  • Master:
    • Alin Bostan, Algorithmes efficaces en calcul formel, 22.5h, M2, MPRI, France.
    • Alin Bostan, Modern Algorithms for Symbolic Summation and Integration, 18h, M2, ENS Lyon, France.
    • Pierre Lairez, Algorithmes efficaces en calcul formel, 12h, M2, MPRI, France.
    • Pierre Lairez, Competitive programming (INF473A), TD, 40h, M2, École polytechnique, France.
    • Pierre Lairez, Les bases de la programmation et de l'algorithmique (INF411), TD, 40h, M1, École polytechnique, France.

8.2.1 Supervision

  • Master internships:
    • Alin Bostan co-supervised together with Bruno Salvy (Inria, ENS Lyon) the Master thesis of Louis Gaillard on the topic “Bounds for output size of algorithms in computer algebra”.
    • Alin Bostan co-supervised together with Tanguy Rivoal (Institut Fourier, Grenoble) the Master thesis of Jordan Béraud on the topic “Factorisation d'opérateurs différentiels”.
    • Pierre Lairez supervised the master thesis of Alexandre Guillemot on the topic “Certified algebraic path tracking and braid computation”.
  • PhD theses:
    • Alin Bostan co-supervised together with Herwig Hauser (U. Vienna, Austria) the PhD thesis of Sergey Yurkevich on “Integer sequences arising in number theory, combinatorics and physics” (defended on July 6, 2023).
    • Alin Bostan co-supervises together with Xavier Caruso (CNRS, IMB Bordeaux) the PhD thesis of Raphaël Pagès on “Algorithms for factoring linear differential operators in positive characteristic”.
    • Alin Bostan co-supervises together with Mohab Safey El Din (Sorbonne U.) and Bruno Salvy (Inria, ENS Lyon) the PhD thesis of Alaa Ibrahim on “Automated proofs of inequalities between special sequences and functions”.
    • Alin Bostan and Frédéric Chyzak co-supervise together with Mohab Safey El Din (Sorbonne U.) the PhD thesis of Hadrien Notarantonio on “Geometry-driven algorithms for the efficient solving of combinatorial functional equations”.
    • Frédéric Chyzak co-supervises together with Marc Mezzarobba (CNRS, LIX) the PhD thesis of Alexandre Goyer on “Symbolic-numeric algorithms in differential algebra”.
    • Frédéric Chyzak and Pierre Lairez co-supervise the PhD thesis of Hadrien Brochet on “Algorithms for D-modules”.
    • Frédéric Chyzak co-supervises together with Shaoshi Chen (CAS, Beijing) the PhD thesis of Pingchuan Ma. Pingchuan Ma was visiting France for 6 months this year.
    • Pierre Lairez co-supervises together with Pierre Vanhove (CEA, IPhT) the PhD thesis of Eric Pichon-Pharabod on “Periods in algebraic geometry: computation and application to Feynman's integrals”.
    • Pierre Lairez co-supervises together with Christian Eder (TU Kaiserslautern) and Mohab Safey El Din (Sorbonne U.) the PhD thesis of Rafael Mohr on “Equidimensional decomposition algorithms with signature bases”.
    • Pierre Lairez supervises the PhD thesis of Alexandre Guillemot on “Effective topology of complex algebraic varieties”.

8.2.2 Juries

  • Alin Bostan has served as a reviewer of the Habilitation thesis of Michael Wallner, Combinatorial analysis of directed acyclic graphs, Young tableaux, and lattice paths, U. Vienna, Austria.
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Subhayan Saha, Lower bounds and reconstruction algorithms for arithmetic circuits, ENS Lyon, Apr. 5, 2023.
  • Alin Bostan was an examiner in the Habilitation thesis defense jury of François Ollivier, Effective formal resolution of systems of algebraic differential equations, École polytechnique, May 11, 2023.
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Hippolyte Signargout, Calcul exact avec des matrices quasiséparables et des matrices polynomiales à structure de déplacement, ENS Lyon, May 15, 2023.
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Damien Vidal, Analyse algébrique des schémas cryptographiques post-quantiques, Amiens, Sept. 15, 2023.
  • Alin Bostan was an examiner in the Habilitation thesis defense jury of Jérémy Berthomieu, Contributions to polynomial system solving: Recurrences and Gröbner bases, Sorbonne U., September 21, 2023.
  • Alin Bostan has served as a reviewer of the PhD thesis of Hippolyte Signargout, Calcul exact avec des matrices quasiséparables et des matrices polynomiales à structure de déplacement, ENS Lyon, October 5, 2023. He had previously served as a reviewer in his mid-PhD examination on May 15, 2023.
  • Guy Fayolle was external examiner for the PhD defense of Vasilii Goriachkin (Critical Scaling in Particle Systems and Random Graphs), defended at Lund University (Sweden).

8.3 Scientific animation of the project-team

The team runs a monthly seminar, jointly with colleagues at Sorbonne Université, alternating between Palaiseau and Paris, and open to remote attendance in hybrid mode. The organizers for our team are Pierre Lairez and Hadrien Notarantonio. Because of the intense activity induced by the thematic semester Recent Trends in Computer Algebra 2023, we decided to reduce the activity of our seminar during the past year. We were nevertheless able to hold 9 talks, including some by international visiting colleagues.

9 Scientific production

9.1 Publications of the year

International journals

International peer-reviewed conferences

Edition (books, proceedings, special issue of a journal)

  • 19 Éric Schost and Catherine St-Pierre. p-adic algorithm for bivariate Gröbner bases. ISSAC 2023: International Symposium on Symbolic and Algebraic Computation, ACM, July 2023, 508–516.

Doctoral dissertations and habilitation theses

  • 20 Sergey Yurkevich. Integer sequences, differential operators and algebraic power series. PhD thesis, Université Paris-Saclay; Universität Wien, July 2023.

Reports & preprints

9.2 Cited publications

  • 33 A. Conca and G. Valla. Canonical Hilbert-Burch matrices for ideals of k[x,y]. Michigan Mathematical Journal 57 (2008), 157–172. URL: https://doi.org/10.1307/mmj/1220879402
  • 34 S. A. Abramov, M. A. Barkatou and M. van Hoeij. Apparent singularities of linear difference equations with polynomial coefficients. 17(2) (2006), 117–133.
  • 35 Sergei A. Abramov and Mark van Hoeij. Desingularization of linear difference operators with polynomial coefficients. ISSAC '99, conference proceedings, ACM, 1999.
  • 36 Boris Adamczewski and Colin Faverjon. Méthode de Mahler, transcendance et relations linéaires : aspects effectifs. 30(2) (2018), 557–573.
  • 37 Boris Adamczewski and Tanguy Rivoal. Exceptional values of E-functions at algebraic points. 50(4) (2018), 697–908.
  • 38 Michael F. Adamer, András C. Lőrincz, Anna-Laura Sattelberger and Bernd Sturmfels. Algebraic analysis of rotation data. 11(2) (December 2020), 189–211.
  • 39 S. Badger, J. Bendavid, V. Ciulli, A. Denner, R. Frederix, M. Grazzini, J. Huston, M. Schönherr, K. Tackmann, J. Thaler, C. Williams, J. R. Andersen, K. Becker, M. Bell, J. Bellm, E. Bothmann, R. Boughezal, J. Butterworth, S. Carrazza, M. Chiesa, L. Cieri, M. Duehrssen-Debling, G. Falmagne, S. Forte, P. Francavilla, M. Freytsis, J. Gao, P. Gras, N. Greiner, D. Grellscheid, G. Heinrich, G. Hesketh, S. Höche, L. Hofer, T.-J. Hou, A. Huss, J. Isaacson, A. Jueid, S. Kallweit, D. Kar, Z. Kassabov, V. Konstantinides, F. Krauss, S. Kuttimalai, A. Lazapoulos, P. Lenzi, Y. Li, J. M. Lindert, X. Liu, G. Luisoni, L. Lönnblad, P. Maierhöfer, D. Maître, A. C. Marini, G. Montagna, M. Moretti, P. M. Nadolsky, G. Nail, D. Napoletano, O. Nicrosini, C. Oleari, D. Pagani, C. Pandini, L. Perrozzi, F. Petriello, F. Piccinini, S. Plätzer, I. Pogrebnyak, S. Pozzorini, S. Prestel, C. Reuschle, J. Rojo, L. Russo, P. Schichtel, S. Schumann, A. Siódmok, P. Skands, D. Soper, G. Soyez, P. Sun, F. J. Tackmann, E. Takasugi, S. Uccirati, U. Utku, L. Viliani, E. Vryonidou, B. T. Wang, B. Waugh, M. A. Weber, J. Winter, K. P. Xie, C.-P. Yuan, F. Yuan, K. Zapp and M. Zaro. Les Houches 2015: Physics at TeV colliders standard model working group report. May 2016.
  • 40 David Bailey and J. M. Borwein. High-precision numerical integration: progress and challenges. 46(7) (2011), 741–754.
  • 41 David Bailey, Peter Borwein and Simon Plouffe. On the rapid computation of various polylogarithmic constants. 66(218) (1997), 903–913.
  • 42 Moulay Barkatou, Thomas Cluzeau, Lucia Di Vizio and Jacques-Arthur Weil. Computing the Lie algebra of the differential Galois group of a linear differential system. ISSAC '2016, conference proceedings, ACM, 2016, 63–70.
  • 43 Bernhard Beckermann and George Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. 15(3) (1994), 804–823.
  • 44 Jason P. Bell and Michael Coons. Transcendence tests for Mahler functions. 145(3) (2017), 1061–1070.
  • 45 Olivier Bernardi and Mireille Bousquet-Mélou. Counting colored planar maps: algebraicity results. 101(5) (2011), 315–377. URL: https://doi.org/10.1016/j.jctb.2011.02.003
  • 46 Olivier Bernardi, Mireille Bousquet-Mélou and Kilian Raschel. Counting quadrant walks via Tutte's invariant method. 1 (2021).
  • 47 Jérémy Berthomieu, Christian Eder and Mohab Safey El Din. Msolve: a library for solving polynomial systems. ISSAC '21, conference proceedings, ACM, July 2021, 51–58.
  • 48 Jérémy Berthomieu and Jean-Charles Faugère. A polynomial-division-based algorithm for computing linear recurrence relations. ISSAC '18, conference proceedings, 2018, 79–86.
  • 49 Daniel Bertrand and Frits Beukers. Équations différentielles linéaires et majorations de multiplicités. 18(1) (1985), 181–192.
  • 50 F. Beukers. A note on the irrationality of ζ(2) and ζ(3). 11(3) (1979), 268–272.
  • 51 F. Beukers. A refined version of the Siegel-Shidlovskii theorem. 163(1) (2006), 369–379. URL: https://doi.org/10.4007/annals.2006.163.369
  • 52 F. Beukers. Padé-approximations in number theory. In Padé Approximation and its Applications, Amsterdam 1980, Lecture Notes in Math. 888, Springer, 1981, 90–99.
  • 53 Spencer Bloch, Matt Kerr and Pierre Vanhove. A Feynman integral via higher normal functions. 151(12) (2015), 2329–2375.
  • 54 Spencer Bloch and Pierre Vanhove. The elliptic dilogarithm for the sunset graph. 148 (2015), 328–364.
  • 55 Jonathan Borwein and David Bailey. Mathematics by experiment: plausible reasoning in the 21st century. A K Peters, 2008, xii+377.
  • 56 A. Bostan, F. Chyzak, M. Giusti, R. Lebreton, G. Lecerf, B. Salvy and É. Schost. Algorithmes efficaces en calcul formel. CreateSpace, 2017, 686 pages.
  • 57 Alin Bostan, Frédéric Chyzak, Pierre Lairez and Bruno Salvy. Generalized Hermite reduction, creative telescoping and definite integration of D-finite functions. ISSAC '18, conference proceedings, ACM, 2018, 95–102.
  • 58 Alin Bostan, Frédéric Chyzak, Grégoire Lecerf, Bruno Salvy and Éric Schost. Differential equations for algebraic functions. ISSAC '07, conference proceedings, 2007, 25–32.
  • 59 A. Bostan, C.-P. Jeannerod, C. Mouilleron and É. Schost. On matrices with displacement structure: generalized operators and faster algorithms. 38(3) (2017), 733–775. URL: https://doi.org/10.1137/16M1062855
  • 60 A. Bostan and M. Kauers. The complete generating function for Gessel walks is algebraic. 138(9) (2010), 3063–3078. With an appendix by Mark van Hoeij.
  • 61 Alin Bostan, Pierre Lairez and Bruno Salvy. Creative telescoping for rational functions using the Griffiths–Dwork method. ISSAC '13, conference proceedings, ACM, 2013, 93–100.
  • 62 Alin Bostan, Pierre Lairez and Bruno Salvy. Multiple binomial sums. 80, part 2 (2017), 351–386. URL: https://doi.org/10.1016/j.jsc.2016.04.002
  • 63 Alin Bostan, Tanguy Rivoal and Bruno Salvy. Explicit degree bounds for right factors of linear differential operators. 53(1) (2020), 53–62.
  • 64 Mireille Bousquet-Mélou and Arnaud Jehanne. Polynomial equations with one catalytic variable, algebraic series and map enumeration. 96(5) (2006), 623–672.
  • 65 Mireille Bousquet-Mélou. Square lattice walks avoiding a quadrant. 144 (2016), 37–79.
  • 66 Brice Boyer, Christian Eder, Jean-Charles Faugère, Sylvian Lachartre and Fayssal Martani. GBLA: Gröbner basis linear algebra package. ISSAC '16, conference proceedings, ACM, 2016, 135–142.
  • 67 R. Brak and A. J. Guttmann. Algebraic approximants: a new method of series analysis. 23(24) (1990), L1331–L1337.
  • 68 Nils Bruin, Jeroen Sijsling and Alexandre Zotine. Numerical computation of endomorphism rings of Jacobians. 2(1), 13th Algorithmic Number Theory Symposium (2019), 155–171.
  • 69 Manfred Buchacher, Manuel Kauers and Gleb Pogudin. Separating variables in bivariate polynomial ideals. ISSAC '20, conference proceedings, ACM, 2020, 54–61.
  • 70 J. C. Butcher. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016, xxiii+513.
  • 71 Shaoshi Chen, Maximilian Jaroschek, Manuel Kauers and Michael F. Singer. Desingularization explains order-degree curves for Ore operators. ISSAC '13, conference proceedings, ACM, 2013, 157–164.
  • 72 Shaoshi Chen, Manuel Kauers, Ziming Li and Yi Zhang. Apparent singularities of D-finite systems. 95 (2019), 217–237.
  • 73 Shaoshi Chen and Manuel Kauers. Order-degree curves for hypergeometric creative telescoping. ISSAC '12, conference proceedings, ACM, 2012, 122–129.
  • 74 Shaoshi Chen and Manuel Kauers. Trading order for degree in creative telescoping. 47(8) (2012), 968–995.
  • 75 David V. Chudnovsky and Gregory V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory. In Computers in Mathematics (Stanford, CA, 1986), Lecture Notes in Pure and Appl. Math. 125, Dekker, 1990, 109–232.
  • 76 David V. Chudnovsky and Gregory V. Chudnovsky. Computer assisted number theory with applications. In Number theory (New York, 1984–1985), Lecture Notes in Mathematics 1240, Springer, 1987, 1–68.
  • 77 Frédéric Chyzak, Thomas Dreyfus, Philippe Dumas and Marc Mezzarobba. Computing solutions of linear Mahler equations. 87(314) (2018), 2977–3021.
  • 78 Frédéric Chyzak and Philippe Dumas. A Gröbner-basis theory for divide-and-conquer recurrences. ISSAC '20, conference proceedings, ACM, July 2020.
  • 79 Thomas Cluzeau. Factorization of differential systems in characteristic p. Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation, ACM, New York, 2003, 58–65. URL: https://doi.org/10.1145/860854.860875
  • 80 Edgar Costa, Nicolas Mascot, Jeroen Sijsling and John Voight. Rigorous computation of the endomorphism ring of a Jacobian. 88(317) (2019), 1303–1339.
  • 81 John Cremona. The L-functions and modular forms database project. 16(6) (2016), 1541–1553.
  • 82 Sławomir Cynk and Duco van Straten. Periods of rigid double octic Calabi-Yau threefolds. 123(1) (2019), 243–258.
  • 83 Bernard Deconinck and Mark van Hoeij. Computing Riemann matrices of algebraic curves. 152–153, special issue to honor Vladimir Zakharov (May 2001), 28–46.
  • 84 Joris van der Hoeven and Grégoire Lecerf. Composition modulo powers of polynomials. Proceedings of the 2017 ACM International Symposium on Symbolic and Algebraic Computation, 2017, 445–452.
  • 85 Karl Dilcher and Kenneth B. Stolarsky. A polynomial analogue to the Stern sequence. 3(1) (2007), 85–103.
  • 86 Alexandru Dimca. On the de Rham cohomology of a hypersurface complement. 113(4) (1991), 763–771.
  • 87 Thomas Dreyfus, Charlotte Hardouin and Julien Roques. Hypertranscendance of solutions of Mahler equations. 20 (2018), 2209–2238.
  • 88 Thomas Dreyfus, Charlotte Hardouin, Julien Roques and Michael F. Singer. On the nature of the generating series of walks in the quarter plane. 213(1) (2018), 139–203.
  • 89 Andreas-Stephan Elsenhans and Jörg Jahnel. Real and complex multiplication on K3 surfaces via period integration. February 2018.
  • 90 Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases (F4). 139(1–3), Effective methods in algebraic geometry (Saint-Malo, 1998) (1999), 61–88.
  • 91 Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). ISSAC '02, conference proceedings, ACM, 2002, 75–83.
  • 92 J.-Ch. Faugère and Ch. Mou. Fast algorithm for change of ordering of zero-dimensional Gröbner bases with sparse multiplication matrices. ISSAC '11, conference proceedings, 2011, 115–122.
  • 93 J.-Ch. Faugère and Ch. Mou. Sparse FGLM algorithms. 80(3) (2017), 538–569.
  • 94 Guy Fayolle, Sandro Franceschi and Kilian Raschel. On the stationary distribution of reflected Brownian motion in a non-convex wedge. Markov Processes and Related Fields 28(5) (December 2022), 32 pages.
  • 95 Guy Fayolle, Roudolf Iasnogorodski and Vadim Malyshev. Random walks in the quarter plane. Probability Theory and Stochastic Modelling 40, Springer, 2017, xvii+248. URL: https://doi.org/10.1007/978-3-319-50930-3
  • 96 Guy Fayolle and Paul Mühlethaler. A Markovian analysis of IEEE 802.11 broadcast transmission networks with buffering. Probab. Engrg. Inform. Sci. 30(3) (2016), 326–344. URL: https://doi.org/10.1017/S0269964816000036
  • 97 Stéphane Fischler and Tanguy Rivoal. Approximants de Padé et séries hypergéométriques équilibrées. 82(10) (2003), 1369–1394.
  • 98 S. Fischler and T. Rivoal. Effective algebraic independence of values of E-functions. Preprint, arXiv, 2019.
  • 99 Joachim von zur Gathen and Jürgen Gerhard. Modern computer algebra. Cambridge University Press, Cambridge, 2013, xiv+795. URL: https://doi.org/10.1017/CBO9781139856065
  • 100 Paul Görlach, Christian Lehn and Anna-Laura Sattelberger. Algebraic analysis of the hypergeometric function 1F1 of a matrix argument. 62(2) (2021), 397–427.
  • 101 D. Yu. Grigor'ev. Complexity of factoring and calculating the GCD of linear ordinary differential operators. 10(1) (1990), 7–37.
  • 102 A. J. Guttmann and G. S. Joyce. On a new method of series analysis in lattice statistics. 5(9) (1972), 81–84.
  • 103 A. J. Guttmann. On the recurrence relation method of series analysis. 8(7) (1975), 1081–1088.
  • 104 Charlotte Hardouin and Michael F. Singer. Differential Galois theory of linear difference equations. 342(2) (2008), 333–377. URL: http://dx.doi.org/10.1007/s00208-008-0238-z
  • 105 Johannes M. Henn. Lectures on differential equations for Feynman integrals. 48(15) (2015), 153001.
  • 106 Didier Henrion, Jean B. Lasserre and Carlo Savorgnan. Approximate volume and integration for basic semialgebraic sets. 51(4) (2009), 722–743.
  • 107 Charles Hermite. Sur la fonction exponentielle. 77 (1873), 18–24.
  • 108 Joris van der Hoeven. Fast evaluation of holonomic functions. 210(1) (1999), 199–215.
  • 109 Joris van der Hoeven. Constructing reductions for creative telescoping. 2020.
  • 110 D. L. Hunter and G. A. Baker Jr. Methods of series analysis III. Integral approximant methods. 19(7) (1979), 3808–3821.
  • 111 A. M. Jasour, N. S. Aybat and C. M. Lagoa. Semidefinite programming for chance constrained optimization over semialgebraic sets. 25(3) (2015), 1411–1440.
  • 112 inproceedingsA. M.Ashkan M. Jasour, A.Andreas Hofmann and B. C.Brian C. Williams. Moment-sum-of-squares approach for fast risk estimation in uncertain environments.2018 IEEE Conference on Decision and Control (CDC)2018, 2445--2451DOIback to textback to text
  • [113] M. A. H. Khan. High-order differential approximants. 149 (2), 2002, 457--468.
  • [114] Donald E. Knuth. The art of computer programming. Vol. 2: Seminumerical algorithms. Third edition. Addison-Wesley, Reading, MA, 1998.
  • [115] Christoph Koutschan, Manuel Kauers and Doron Zeilberger. Proof of George Andrews's and David Robbins's q-TSPP conjecture. 108 (6), 2011, 2196--2199.
  • [116] Christoph Koutschan and Thotsaporn Thanatipanonda. Advanced computer algebra for determinants. 17 (3), 2013, 509--523.
  • [117] Pierre Lairez. Computing periods of rational integrals. 85 (300), 2016, 1719--1752.
  • [118] Pierre Lairez, Marc Mezzarobba and Mohab Safey El Din. Computing the volume of compact semi-algebraic sets. ISSAC '19, ACM, 2019, 259--266.
  • [119] Pierre Lairez and Emre Can Sertöz. A numerical transcendental method in algebraic geometry: computation of Picard groups and related invariants. 3 (4), 2019, 559--584.
  • [120] Jean Bernard Lasserre. Connecting optimization with spectral analysis of tri-diagonal matrices. Unpublished, March 2020.
  • [121] Jean Bernard Lasserre, Victor Magron, Swann Marx and Olivier Zahm. Minimizing rational functions: a hierarchy of approximations via pushforward measures. Unpublished, December 2020.
  • [122] Jean Bernard Lasserre. Moments, positive polynomials and their applications. Vol. 1, Series on Optimization and its Applications. Imperial College Press, October 2009, xxii+361.
  • [123] Jean Bernard Lasserre. Volume of sublevel sets of homogeneous polynomials. 3 (2), 2019, 372--389.
  • [124] Laura Felicia Matusevich. Weyl closure of hypergeometric systems. 60 (2), June 2009, 147--158.
  • [125] Marc Mezzarobba. NumGfun: a package for numerical and analytic computation with D-finite functions. ISSAC '10, ACM, 2010, 139--146.
  • [126] Marc Mezzarobba. Rigorous multiple-precision evaluation of D-finite functions in SageMath. July 2016.
  • [127] Pascal Molin and Christian Neurohr. Computing period matrices and the Abel-Jacobi map of superelliptic curves. 88 (316), 2019, 847--888.
  • [128] S. Naldi and V. Neiger. A divide-and-conquer algorithm for computing Gröbner bases of syzygies in finite dimension. ISSAC '20, 2020, 380--387.
  • [129] Christian Neurohr. Efficient integration on Riemann surfaces and applications. Ph.D. Thesis, Universität Oldenburg, 2018.
  • [130] Toshinori Oaku. Algorithms for b-functions, restrictions, and algebraic local cohomology groups of D-modules. 19 (1), 1997, 61--105.
  • [131] Toshinori Oaku and Nobuki Takayama. Algorithms for D-modules: restriction, tensor product, localization, and local cohomology groups. 156 (2-3), 2001, 267--308.
  • [132] Raphaël Pagès. Factoring differential operators over algebraic curves in positive characteristic. ISSAC 2022 - International Symposium on Symbolic and Algebraic Computation, Lille, France, May 2022. URL: https://hal.science/hal-03759105
  • [133] K. Pardue. Nonstandard Borel-fixed ideals. Brandeis University, 1994.
  • [134] Marko Petkovšek. Hypergeometric solutions of linear recurrences with polynomial coefficients. 14, 1992, 243--264.
  • [135] G. Pólya. Mathematics and plausible reasoning: Induction and analogy in mathematics. Princeton U. Press, 1954, xvi+280.
  • [136] G. Pólya. How to solve it: A new aspect of mathematical method. Princeton U. Press, 1945, xxviii+253.
  • [137] Dorin Popescu. General Néron desingularization and approximation. 104, 1986, 85--115. URL: https://doi.org/10.1017/S0027763000022698
  • [138] Marius van der Put. Differential equations in characteristic p. 97 (1-2), Special issue in honour of Frans Oort, 1995, 227--251. URL: http://www.numdam.org/item?id=CM_1995__97_1-2_227_0
  • [139] Julien Roques. On the algebraic relations between Mahler functions. 370 (1), 2018, 321--355.
  • [140] Emre Can Sertöz. Computing periods of hypersurfaces. 88 (320), 2019, 2987--3022.
  • [141] A. B. Šidlovskiĭ. On transcendentality of the values of a class of entire functions satisfying linear differential equations. 105, 1955, 35--37.
  • [142] Carl L. Siegel. Über einige Anwendungen diophantischer Approximationen [On some applications of Diophantine approximations; reprint of Abhandlungen der Preußischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse 1929, Nr. 1]. Vol. 2, Quad./Monogr., Ed. Norm., Pisa, 2014, 81--138.
  • [143] Michael F. Singer. Algebraic solutions of n-th order linear differential equations. Proceedings of the Queen's Number Theory Conference, 1979 (Kingston, Ont., 1979), vol. 54, Queen's Papers in Pure and Appl. Math., Queen's Univ., Kingston, Ont., 1980, 379--420.
  • [144] Lucas Slot and Monique Laurent. Near-optimal analysis of Lasserre's univariate measure-based bounds for multivariate polynomial optimization. 2020.
  • [145] A. V. Smirnov and M. N. Tentyukov. Feynman integral evaluation by a sector decomposition approach (FIESTA). 180 (5), 2009, 735--746.
  • [146] V. N. Sorokin. A transcendence measure for π². 187 (12), 1996, 1819--1852.
  • [147] R. P. Stanley. Differentiably finite power series. 1 (2), 1980, 175--188.
  • [148] Harrison Tsai. Algorithms for associated primes, Weyl closure, and local cohomology of D-modules. Local cohomology and its applications (Guanajuato, 1999), vol. 226, Lecture Notes in Pure and Appl. Math., Dekker, 2002, 169--194.
  • [149] Harrison Tsai. Weyl closure of a linear differential operator. 29 (4-5), Special issue on Symbolic Computation in Algebra, Analysis, and Geometry (Berkeley, CA, 1998), 2000, 747--775. URL: http://dx.doi.org/10.1006/jsco.1999.0400
  • [150] W. T. Tutte. Chromatic sums for rooted planar triangulations: the cases λ=1 and λ=2. 25, 1973, 426--447. URL: https://doi.org/10.4153/CJM-1973-043-3
  • [151] Don Zagier. The arithmetic and topology of differential equations. European Congress of Mathematics, European Mathematical Society, Zürich, 2018, 717--776.
  • [152] Doron Zeilberger and Wadim Zudilin. The irrationality measure of π is at most 7.103205334137…. 9 (4), 2020, 407--419.
  1. It relies, among other costly operations, on factoring differential operators, which is known to be a highly expensive procedure, of complexity (ℓN)^(O(r^4)), where ℓ is the bitsize of the input operator, r its order, and N = exp(ℓ·2^r)^(2^r) [101]. It also relies on deciding whether a non-linear (Riccati-type) differential equation of order r-1 has an algebraic solution of degree at most M := (49r)^(r^2); this step itself relies on deciding the non-emptiness of a constructible set defined by polynomials in M variables (and of potentially huge degrees). It also relies on the famously difficult Abel problem: given an algebraic function u, decide whether y'/y = u has an algebraic solution.