2022
Activity report
Project-Team
MATHEXP
RNSR: 202224256Z
Research center
Team name:
Computer algebra, experimental mathematics, and interactions
Domain
Algorithmics, Programming, Software and Architecture
Theme
Algorithmics, Computer Algebra and Cryptology
Creation of the Project-Team: 2022 April 01

Keywords

Computer Science and Digital Science

  • A8.1. Discrete mathematics, combinatorics
  • A8.3. Geometry, Topology
  • A8.4. Computer Algebra
  • A8.5. Number theory

Other Research Topics and Application Domains

  • B9.5.2. Mathematics
  • B9.5.3. Physics

1 Team members, visitors, external collaborators

Research Scientists

  • Frédéric Chyzak [Team leader, Inria, Senior Researcher, HDR]
  • Alin Bostan [Inria, Senior Researcher, HDR]
  • Guy Fayolle [Inria, Senior Researcher, Emeritus]
  • Pierre Lairez [Inria, Researcher]

Faculty Member

  • Philippe Dumas [Ministère de l'Education Nationale, Retired]

PhD Students

  • Hadrien Brochet [Inria, from Sep 2022]
  • Alexandre Goyer [Inria]
  • Alaa Ibrahim [Inria, from Oct 2022, Lyon]
  • Rafael Mohr [Technische Universität Kaiserslautern, Kaiserslautern (Germany)]
  • Hadrien Notarantonio [Université Paris Saclay]
  • Raphaël Pagès [Université de Bordeaux, Bordeaux]
  • Eric Pichon-Pharabod [Université Paris Saclay]
  • Sergey Yurkevich [University of Vienna, Vienna (Austria)]

Administrative Assistant

  • Bahar Carabetta [Inria]

2 Overall objectives

“Experimental mathematics” is the study of mathematical phenomena by computational means. “Computer algebra” is the art of doing effective and efficient exact mathematics on a computer. The MATHEXP team develops both themes in parallel, in order to discover and prove new mathematical results, often out of reach of classical human means. It is our strong belief that modern mathematics will benefit more and more from computer tools. We aim to provide mathematical users with appropriate algorithmic theories and implementations.

Besides the classification by mathematical and methodological axes presented in §3, MATHEXP's research falls into four interconnected categories, corresponding to four different ways to produce science. The raison d'être of the team is solving core questions that arise in the practice of experimental mathematics. Through the experimental-mathematics approach, we aim at applications in diverse areas of mathematics and physics. Everything rests on computer algebra, in its symbolic and seminumerical aspects. Lastly, software development is a significant part of our activities, with the aim of enabling cutting-edge applications and disseminating our tools. Each of these four levels is reflected in the thematic axes of the research program.

2.1 Experimental mathematics

In science, observation and experiment play an important role in formulating hypotheses. In mathematics, this role is shadowed by the primacy of deductive proofs, which turn hypotheses into theorems, but it is no less important. The art of looking for patterns, of gathering computational evidence in support of mathematical assertions, lies at the heart of experimental mathematics, as promoted by Euler, Gauss and Ramanujan. These prominent mathematicians spent much of their time doing computations in order to refine their intuitions and to explore new territories before inventing new theories. Computations led them to plausible conjectures, by an approach similar to that used in the natural sciences. Nowadays, experimental mathematics has become a full-fledged field, with prominent promoters like Bailey and Borwein. In their words  45, experimental mathematics is “the methodology of doing mathematics that includes the use of computation for

  • gaining insight and intuition,
  • discovering new patterns and relationships,
  • using graphical displays to suggest underlying mathematical principles,
  • testing and especially falsifying conjectures,
  • exploring a possible result to see if it is worth formal proof,
  • suggesting approaches for formal proof,
  • replacing lengthy hand derivations with computer-based derivations,
  • confirming analytically derived results.”

2.2 Foundations of computer algebra

At a fundamental level, we manipulate several kinds of algebraic objects that are characteristic of computer algebra: arbitrary-precision numbers (big integers and big floating-point numbers, typically with tens of thousands of digits), polynomials, matrices, differential and recurrence operators. The first three items form the common ground of computer algebra  86. We benefit from years of research on them and from broadly used efficient software: general-purpose computer-algebra systems like Maple, Magma, Mathematica, Sage, Singular; and also special-purpose libraries like Arb, Fgb, Flint, Msolve, NTL. Current developments, whether software implementation, algorithm design or new complexity analyses, directly impact us. The fourth kind of algebraic object, differential and recurrence operators, is more specific to our research and we concentrate our efforts on it. There, we try to understand the basic operations in terms of computational complexity. Complexity is also our guide when we recombine basic operations into elaborate algorithms. In the end, we want fast implementations of efficient algorithms.

Here are some of the typical questions we are interested in:

  • Do some of the solutions of a linear ordinary differential equation (ODE) satisfy a simpler ODE? This relates to the problem of factoring differential operators.
  • Is a given linear partial differential equation (PDE) a consequence of a set of other PDEs? This relates to the problem of computing Gröbner bases in a differential setting.
  • Given a solution f(x,y) of a system of linear PDEs, how to compute differential equations for $f(x,0)$ or $\int_0^1 f(x,y)\,dy$? This falls into the realm of symbolic integration questions.
  • Given a linear ODE with initial condition at 0, how to evaluate numerically the unique solution at 1 with thousands of digits of precision? This is the gist of our seminumerical methods.
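
As a minimal illustration of the last question, the following sketch (plain Python with the mpmath library, not one of the libraries discussed in this report) evaluates at 1, with 50 digits, the solution of y' = y with y(0) = 1, that is, the constant e; the same mechanism scales to much higher precisions and to less trivial equations.

```python
# Minimal sketch (not MATHEXP software): high-precision evaluation at x = 1 of
# the solution of a linear ODE given with an initial condition at 0.
from mpmath import mp, odefun, exp

mp.dps = 50                       # work with 50 decimal digits
f = odefun(lambda x, y: y, 0, 1)  # y' = y, y(0) = 1, so f(x) = exp(x)

print(f(1))                       # e = 2.71828..., to the working precision
print(f(1) - exp(1))              # ~0, up to the working precision
```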

2.3 Applications

Getting involved in applications is both an objective and a methodology. The applications shape the tools that we design and foster their dissemination.

Combinatorics is a longstanding application of computer algebra, and conversely, computer algebra has a deep impact on the field. The study of random walks in lattices, first motivated by statistical physics and queueing theory, features prominent examples of experimental mathematics and computer-assisted proofs. Our main collaborators in combinatorics are Mireille Bousquet-Mélou (Université de Bordeaux), Stephen Melczer (University of Waterloo) and Kilian Raschel (Université d'Angers).

Probability theory. Apart from the already mentioned interest in random walks, which is a classical topic in probability theory, and on which we have an expert, Guy Fayolle, in our group, the main applications we have in mind are to integrals arising from: 2D fluctuation theory (generalizing arc-sine laws in 1D); moments of the quadrant occupation time for the planar Brownian motion; persistence probability theory (survival functions of first passage time for real stochastic processes); volumes of structured families of polytopes also arising in polyhedral geometry and combinatorics. Our main interactions on these topics are with Gerold Alsmeyer (U. Münster), Dan Betea (KU Leuven), and Thomas Simon (U. Lille).

Number theory, and especially Diophantine approximation, is another field with a long tradition of using computer algebra tools. For example, the recently discovered sequence of integrals

$$\int_{4-2i}^{4+2i} \frac{(x-4+2i)^{4n}\,(x-4-2i)^{4n}\,(x-5)^{4n}\,(x-6+2i)^{4n}\,(x-6-2i)^{4n}}{x^{6n+1}\,(x-10)^{6n+1}}\,dx, \qquad n \ge 0,$$

whose analysis leads to the best known measure of irrationality of π, can hardly be found by hand  138. Yet, the discovery and the proof of such a result require sophisticated tools from experimental mathematics. Our main collaborators in number theory are Boris Adamczewski (Université Lyon 1), Xavier Caruso (Université de Bordeaux), Stéphane Fischler (Université Paris Saclay), Tanguy Rivoal (Université Grenoble Alpes), Wadim Zudilin (Radboud University Nijmegen). Mahler equations are another aspect of number theory, in relation to automata theory, and appear in several of our research axes. Philippe Dumas, in our group, and Boris Adamczewski, already mentioned, have long been experts in this topic.
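
As a hint of how such integrals are explored in practice, here is a sketch of ours (plain mpmath, based on the family as displayed above) that evaluates the n = 0 member, for which the integrand reduces to 1/(x(x-10)), along the straight segment from 4-2i to 4+2i; a short antiderivative computation gives -iπ/20 for this particular value.

```python
# Minimal sketch (not MATHEXP software): numerical evaluation of the n = 0
# member of the family of integrals above; for n = 0 all numerator factors
# disappear and the integrand reduces to 1/(x*(x-10)).
from mpmath import mp, mpc, quad, pi

mp.dps = 30
a, b = mpc(4, -2), mpc(4, 2)                  # endpoints 4-2i and 4+2i

val = quad(lambda x: 1/(x*(x - 10)), [a, b])  # integral along the segment
print(val)                                    # approximately -0.157...j
print(val + 1j*pi/20)                         # ~0: the exact value is -i*pi/20
```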

In algebraic geometry, in spite of tremendous theoretical achievements, it is a challenge to apply general theories to specific examples. We focus on putting into practice transcendental methods through symbolic integration and seminumerical methods. Our main collaborators are Emre Sertöz (Max Planck Institute for Mathematics) and Duco van Straten (Gutenberg University).

In statistical physics, the Ising model and its generalization, the Potts model, are classical in the study of phase transitions. Although the Ising model with no magnetic field is one of the most important exactly solved models in statistical mechanics (its solution is due to Onsager, the 1968 Nobel laureate in Chemistry), its magnetic susceptibility remains an unsolved aspect of the model. In the absence of an exact closed form, the susceptibility is approached analytically, via the singularities of some multiple integrals with parameters. Experimental mathematics is a key tool in their study. Our main collaborators are Jean-Marie Maillard (SU, LPTMC) and Tony Guttmann (U. Melbourne).

In quantum mechanics, turning theories into predictions requires the computation of Feynman integrals. For example, the reference values of experiments carried out in particle accelerators are obtained in this way. The analysis of the structure of Feynman integrals benefits from tools in experimental mathematics. Our main collaborator in this field is Pierre Vanhove (CEA, IPhT).

2.4 Software

We aim to provide efficient software libraries that perform the core tasks that we need in experimental mathematics. We target especially four tasks of general interest: algebraic algorithms for manipulating systems of linear PDEs, univariate and multivariate guessing, symbolic integration, and seminumerical integration.

For several reasons, we want to stay away from a development model that is too tied to commercial computer algebra systems. Firstly, they restrict dissemination and interoperability. Secondly, they do not offer the level of control that we need to implement these foundations efficiently. Concretely, we will develop open-source libraries in C++ for the most fundamental tasks in our research area. Computer algebra systems, like Sagemath or Maple, are good at coordinating primitive algorithms, but too high-level to implement them efficiently. We seek solid software foundations that provide the primitive algorithms that we need. This is necessary to implement the new higher-level algorithms that we design, but also to reach a performance level that enables new applications. Still, we will strive to expose our libraries to the prominent computer-algebra systems, especially Maple and Sagemath, used by many colleagues.

Besides, there is a growing interest in the programming language Julia for computer algebra, as shown by the Oscar project. We already use Julia internally, and occasionally some of the libraries Oscar is built upon, and we want to promote this young ecosystem. It is very attractive to contribute to it but, on the flip side, it is too young to offer the same usability as Maple, or even Sagemath. So there is an assumed element of risk in our intention to also make our libraries available in Julia.

3 Research program

3.1 Algebraic algorithms for multivariate systems of equations

Broadly speaking, MATHEXP deals with algebraic and seminumerical methods. This part goes through the fundamental aspects of the algebraic side. As opposed to numerical analysis, where numerical evaluations underlie the basic algorithms, algebraic methods manipulate functions through functional equations. Depending on the context, different kinds of functional equations are appropriate. Algebraic functions are handled through polynomial equations and the classical theory of polynomial systems. To deal with integrals, systems of linear partial differential equations (PDEs) are appropriate. Combinatorics and number theory bring in the need for non-linear ordinary differential equations (ODEs). We also consider other kinds of functional equations more related to discrete structures, namely linear recurrence relations, q-analogues and Mahler equations.

The various types of functional equations raise similar questions: is a given equation a consequence of a set of other equations? What are the solutions of a certain type (polynomial, rational, power series, etc.)? What is the local behavior of the solutions? Algorithms to solve these problems support an important part of our research activity.

3.1.1 Holonomic systems of linear PDEs

One of the major data structures that we consider is the system of linear PDEs with polynomial coefficients. A system that has a finite-dimensional solution space is called holonomic, and a function that is a solution of a holonomic system is called holonomic too. The theory of holonomy is important because it allows for an algebraic theory of analysis and integration (on this aspect see also §3.2). The basic objects of holonomy theory are linear differential operators, which are a sort of quasicommutative polynomial, and ideals in rings of linear differential operators, called Weyl algebras. In this respect, holonomy theory is analogous to the theory of polynomial systems, where the basic objects are commutative polynomials and ideals in polynomial rings. Some of the important concepts, for example the concept of Gröbner basis, are also similar. Gröbner bases are a way to describe all the consequences of a set of equations.
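
To make the analogy concrete, here is a toy sketch of ours (plain Python, not one of the libraries discussed below) of the quasicommutative product in the first Weyl algebra: an operator is stored as a dictionary mapping (i, j) to the coefficient of $x^i \partial^j$, and products are computed from the commutation rule $\partial x = x\partial + 1$, in the general form $\partial^j x^i = \sum_k \binom{j}{k}\, i(i-1)\cdots(i-k+1)\, x^{i-k}\partial^{j-k}$.

```python
# Toy model of the first Weyl algebra (illustration only).
# An operator is a dict {(i, j): coeff} representing  sum coeff * x^i * d^j,
# where d stands for d/dx.
from math import comb

def falling(c, k):
    """Falling factorial c*(c-1)*...*(c-k+1)."""
    out = 1
    for t in range(k):
        out *= c - t
    return out

def weyl_mul(A, B):
    """Product in the Weyl algebra, using d^j x^i = sum_k C(j,k)*(i)_k * x^(i-k) * d^(j-k)."""
    out = {}
    for (i1, j1), c1 in A.items():
        for (i2, j2), c2 in B.items():
            # (x^i1 d^j1) * (x^i2 d^j2): commute d^j1 past x^i2
            for k in range(min(j1, i2) + 1):
                key = (i1 + i2 - k, j1 + j2 - k)
                out[key] = out.get(key, 0) + c1 * c2 * comb(j1, k) * falling(i2, k)
    return {m: c for m, c in out.items() if c != 0}

x = {(1, 0): 1}   # the operator "multiplication by x"
d = {(0, 1): 1}   # the operator d/dx

print(weyl_mul(d, x))   # {(1, 1): 1, (0, 0): 1}, i.e. d*x = x*d + 1
print(weyl_mul(x, d))   # {(1, 1): 1},            i.e. x*d
```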

As much as Gröbner bases in polynomial rings are the backbone of effective commutative algebra, Gröbner bases in Weyl algebras of differential operators are the backbone of effective holonomy theory, which includes integration. In the commutative setting, a long road led from the early work of Buchberger to today's state-of-the-art polynomial-system-solving libraries 37. We will undertake a similar enterprise in the noncommutative setting of Weyl algebras. It will unlock many applications of holonomy theory.

Following the commutative case, progress in a differential context will come from an appropriate theory and efficient data structures. We will first develop a matrix approach to handle simultaneous reduction of differential operators, as the F4 algorithm does in the polynomial case 79. The real challenge here is more practical than theoretical. It is not difficult to come up with some F4 algorithm in the differential case. But will it be efficient? From the experience of modern Gröbner engines in the commutative case, we know that efficient implementation of simultaneous reduction requires a significant amount of low-level programming to deal with sparse matrices with a special structure. We also know that many choices, irrelevant to the mathematical theory, strongly influence the running times. The noncommutativity of differential operators adds extra complications, whose consequences are still to be understood at this level. We want to reuse, as much as possible, the specialized linear algebra libraries that have been developed in the polynomial context 56, 37, but we may have to circumvent the densification of products induced by noncommutativity.

On a more theoretical level, a further step in the analysis is that the possible analogues of the F5 algorithm 80 are not fully explored in a differential setting. We may expect not only faster algorithms, but also new algorithms for operating on holonomic functions (Weyl closure for example, see §3.1.2). Rafael Mohr started a PhD thesis in the team on using F5 for computing equidimensional decompositions in the commutative case.

3.1.2 Desingularization of PDEs

Among the structural properties of systems of linear differential or difference equations with polynomial coefficients, the question of understanding and simplifying their singularity structure pops up regularly. Indeed, an equation or a system of equations may exhibit singularities that no solution has, which are then called apparent singularities. Desingularization is a process of simplifying a D-finite system by getting rid of its apparent singularities. This is done at the cost of increasing the order of equations, thus the dimension of their solution space. The univariate setting has been well studied over time, including in computer algebra for its computational aspects 25, 24. This led to the notion of order-degree curve 63, 64, 61: a given function can satisfy an ODE or ORE (ordinary recurrence equation) of small order with a certain coefficient degree, and also other ODEs or OREs of higher order, possibly with smaller coefficient degrees. In certain applications, the ODE or ORE of minimal order may be too large to be obtained by direct calculations. It appears that the total size of the equations, that is, the product of order by degree, can be more relevant to optimize the speed of algorithms. This is a phenomenon that we first observed in relation to algebraic series 48, and we want to further promote this idea of trading minimality of order for minimality of total size, with the goal of improved speed. On the other hand, apparent singularities have been defined only recently in the multivariate holonomic case 62.
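
A textbook-sized example of the phenomenon (our own illustration, not taken from the cited works): the second-order equation below has leading coefficient x, hence a singular point at the origin, yet its solution space is spanned by $e^x$ and $1+x$, which are both analytic there, so the singularity is apparent; both solutions also satisfy the third-order equation on the right, whose leading coefficient is 1, so desingularization trades the apparent singularity for an increase of the order from 2 to 3.

```latex
% Toy example: an apparent singularity at x = 0 and its removal.
% The solution space of the left-hand equation is spanned by e^x and 1+x.
\[
  x\,y'' - (1+x)\,y' + y = 0
  \qquad\rightsquigarrow\qquad
  y''' - y'' = 0 .
\]
```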

Our project includes developing good notions and fast heuristic methods for the desingularization of a D-finite system, first in the differential case, where it is expected to be easier, then in the case of recurrence operators.

Moreover, fast algorithms will be obtained for testing the separability of special functions: in a nutshell, this problem is to decide whether the solutions to a given system also satisfy linear differential or difference equations in a single variable, and algorithmically this corresponds to obtaining multiples of operators with a structure similar to the one arising in desingularization.

In the multivariate case, the operation of saturating an ideal in the Weyl algebra by factoring out (and removing) all polynomial factors on the left is known under the name of Weyl closure. This relates to desingularization, as the Weyl closure of an ideal contains all desingularized operators. Weyl closure is also a relative of the radical of an ideal in commutative algebra: given an ideal of linear differential operators, its Weyl closure is the (larger) ideal of all operators that annihilate any function solution to the initial ideal. Computing Weyl closure applies to symbolic integration, and algorithms exist to compute it 134, 133, although they are slow in practice. Weyl closure also plays an important role in applications to the theory of special functions, e.g., in the study of GKZ-systems (a.k.a. A-hypergeometric systems) 111, and in relation to the Fisher distribution and maximum likelihood estimation in statistics 28, 87. Algorithms for Weyl closure should then be obtained by building on desingularization as a subtask.

3.1.3 Well-foundedness of divide-and-conquer recurrence systems

Converting a linear Mahler equation with polynomial coefficients (see §3.3.3) into a constraint on the coefficient sequence of its series solutions results in a recurrence between coefficients indexed by rational numbers, which must be interpreted to be zero at noninteger indices. The recurrence can be replaced with a system of recurrences by cases, depending on residues modulo some power of the base b. The literature also alternatively introduces recurrences with indices expressed with floor/ceiling functions, typically so for the fine complexity analysis of divide-and-conquer algorithms. For sequences that can be recognized by automata (“automatic sequences”) and their generalizations (“b-regular sequences”), it is natural to consider a system of recurrences on several sequences, with a property of closure under certain operations of taking subsequences: restricting to even indices, or odd indices, or more generally indices with a given residue modulo the base b. This variety of representations calls for algorithms to convert between them, to check the consistency of a given system of recurrences, and to identify those terms of the sequence that determine all others (which are typically not just the first few terms). Continuing 68, which developed a theory of Gröbner bases as a prerequisite for this goal, we will address these problems of conversion and well-foundedness.
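
As a concrete toy case (our own illustration, in plain Python), take b = 2 and the binary sum-of-digits sequence s(n): it is determined by the system s(0) = 0, s(2n) = s(n), s(2n+1) = s(n) + 1, and its generating series $S(z)=\sum_n s(n)z^n$ satisfies the Mahler-type equation $S(z) = (1+z)\,S(z^2) + z/(1-z^2)$; the sketch below checks the two descriptions against each other on truncated series.

```python
# Illustration (not MATHEXP software): the binary sum-of-digits sequence s(n),
# given by the divide-and-conquer system s(0)=0, s(2n)=s(n), s(2n+1)=s(n)+1,
# versus the Mahler-type equation S(z) = (1+z)*S(z^2) + z/(1-z^2) satisfied by
# its generating series S(z) = sum_n s(n) z^n.
N = 64
s = [0] * N
for n in range(1, N):
    s[n] = s[n // 2] + (n % 2)      # the recurrence system, split by residue mod 2

rhs = [0] * N                       # coefficients of (1+z)*S(z^2) + z/(1-z^2)
for n in range(N):
    if 2 * n < N:
        rhs[2 * n] += s[n]          # contribution of S(z^2)
    if 2 * n + 1 < N:
        rhs[2 * n + 1] += s[n]      # contribution of z*S(z^2)
for k in range(1, N, 2):
    rhs[k] += 1                     # z/(1-z^2) = z + z^3 + z^5 + ...

print(s == rhs)                     # True: the two descriptions agree
```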

3.1.4 Software

Software development regarding the symbolic manipulation of linear PDEs is a real challenge. While symbolic integration has gained more and more recognition, its practice is still reserved for experts. Providing a highly efficient software library with functionalities that come as close as possible to the actual integrals, rather than some idealized form, will foster adoption and applications. In the past, the lack of solid software foundations has been an obstacle to implementing newly developed algorithms and to disseminating our work. This was the case, for example, for our work on binomial sums 52, or the computation of volumes 105, where having to use an integration algorithm implemented in Magma was a major obstacle.

What is lacking is a complete tool chain integrating the following three layers:

  1. the computation of Gröbner bases of holonomic systems, as discussed in §3.1.1;
  2. the basic algorithms for manipulating holonomic systems, such as the desingularization discussed in §3.1.2 but also the classical aspects of symbolic integration;
  3. the algorithms relevant for applications, including all the aspects covered in §3.2.

The first layer of the toolchain will be developed in C++, for performance but also to open the way to an integration into free computer algebra systems, like Sagemath or Macaulay2. We will benefit from years of experience of the community and close colleagues in implementing Gröbner basis algorithms in the commutative case. The third layer of the toolchain should be easily accessible to users, so at least available in Sagemath. Some of our current software development, related to the second layer, already happens in Julia (as part of R. Mohr's PhD work).

3.2 Symbolic integration with parameters

Among common operations on functions, integration is the most delicate. For example, differentiation transforms functions of a certain kind into functions of the same kind; integration does not. For this reason, integration is also expressive: it is an essential tool for defining new functions or solving equations, not to mention the ubiquitous Fourier transform and its cousins. Integration is the fundamental reason why holonomic functions are so important: integrals of holonomic functions are holonomic. Algorithms to perform this operation enable many applications, including: various kinds of coefficient extractions in combinatorics, families of parametrized integrals in mathematical physics, proofs of irrationality in number theory, and computations of moments in optimization.

Given a function $F(\mathbf{t},\mathbf{x})$ of two blocks of variables $\mathbf{t}=t_1,\dots,t_s$ and $\mathbf{x}=x_1,\dots,x_n$, and an integration domain $\Omega(\mathbf{t})\subseteq\mathbb{R}^n$, how to compute the function

$$G(\mathbf{t}) = \int_{\Omega(\mathbf{t})} F(\mathbf{t},\mathbf{x})\,d\mathbf{x}\;?$$

Concretely, $F(\mathbf{t},\mathbf{x})$ is described by a system of linear PDEs with polynomial coefficients, $\Omega(\mathbf{t})$ is given by polynomial inequalities, and we want a system of PDEs describing $G(\mathbf{t})$. Note here the presence of parameters, which makes it possible to describe the result of integration with PDEs. When there are no parameters, the result is a numerical constant. Even though we deal with such constants in an entirely different way (see §3.5), we still mostly rely on symbolic integration with parameters.
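
As a minimal illustration of the setting (a SymPy sketch of ours, which of course evaluates the toy integral in closed form instead of running the algorithms discussed here), take s = n = 1, $F(t,x)=\exp(-tx^2)$ and $\Omega(t)=\mathbb{R}$: the integral $G(t)=\sqrt{\pi/t}$ satisfies the linear ODE $2t\,G'(t)+G(t)=0$, which is the kind of description of G that symbolic integration is expected to output.

```python
# Illustration only: for F(t, x) = exp(-t*x^2) integrated over the real line,
# the parametrized integral G(t) satisfies the linear ODE 2*t*G'(t) + G(t) = 0.
import sympy as sp

t, x = sp.symbols('t x', positive=True)
G = sp.integrate(sp.exp(-t * x**2), (x, -sp.oo, sp.oo))   # sqrt(pi)/sqrt(t)

print(G)
print(sp.simplify(2 * t * sp.diff(G, t) + G))             # 0
```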

From the algebraic and computational point of view, integration has several analogues. Discrete sums are the prominent example, but there are also q-analogues, Mahlerian functions, and some others. Broadly speaking, algorithms for symbolic integration, or its analogues, perform a sort of elimination in a ring of differential operators. There are some links with elimination theory and related algorithms as developed for the study of polynomial systems of equations.

Symbolic integration is an historical focus of MATHEXP's founding members with many significant contributions. Compared to our previous activities, we want to put more emphasis on software development. We are at a point where the theory is well understood but the lack of efficient implementations hinders many applications. Naturally, this effort will rest on the results obtained in §3.1.

3.2.1 Integrals with boundaries

The algebraic aspects of symbolic integration are best understood when the integration domain has no boundary: typically $\mathbb{R}^n$ or a topological cycle in $\mathbb{C}^n$. Indeed, in this context we have the so-called telescopic relation, which states that the integral of a derivative vanishes: for example, if $H(\mathbf{t},\mathbf{x})$ is rapidly decreasing, then

$$\int_{\mathbb{R}^n} \frac{\partial H}{\partial x_i}\,d\mathbf{x} = 0.$$

It gives a nice algebraic flavor to the problem of symbolic integration and reduces it to the study of the quotient space $\mathcal{F}/(\partial_{x_1}\mathcal{F}+\dots+\partial_{x_n}\mathcal{F})$, where $\mathcal{F}$ is a suitable function space containing the integrand. A large part of the algorithms developed so far focuses on this case. Yet, many applications do not fit in this idealized setting. For example, Beukers' proof of the irrationality of $\zeta(3)$  40 uses the two integrals

$$\int_{\gamma} R\,dx\,dy\,dz \quad\text{and}\quad \int_{[0,1]^3} R\,dx\,dy\,dz, \qquad \text{where} \quad R(t,x,y,z) = \frac{1}{1-(1-xy)z - t\,xyz(1-x)(1-y)(1-z)}.$$

The first one, where the integration domain is some complex cycle γ, is well handled by current algorithms. The second is not, and this is unsatisfactory for further applications of symbolic integration in number theory. In this particular case, we may think of an algorithm that would reduce the integration on the cube to an integration without boundary and an integration on the boundary of the cube. This boundary just consists of 6 squares, which calls for a recursive procedure. Unfortunately, the integration domain touches the poles of the integrand, so operations like integrating only part of a function, integration by parts, or differentiation under the integral sign may not be meaningful, for lack of integrability. It is not known how to deal with this issue automatically. For more general domains of integration, it is not even clear what kind of recursive procedure can be applied.
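
To fix ideas on the second integral, at t = 0 it can be reduced by hand: integrating z out over [0,1] and then expanding 1/(1-xy) as a geometric series gives the classical evaluation below, and automating this kind of boundary manipulation in general is exactly the goal.

```latex
% The second integral at t = 0: a boundary reduction done by hand.
\[
  \int_{[0,1]^3} \frac{dx\,dy\,dz}{1-(1-xy)z}
  \;=\; \int_{[0,1]^2} \frac{-\log(xy)}{1-xy}\,dx\,dy
  \;=\; \sum_{k\ge 0} \frac{2}{(k+1)^3}
  \;=\; 2\,\zeta(3).
\]
```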

The next generation of symbolic integration algorithms must deal with integrals defined on domains with boundaries. The framework of algebraic D-modules seems very appropriate and already features some algorithms. But this is not the end of the story, as this line of research has not yet led to efficient implementations. We identified two courses of action to reach this goal. Firstly, existing algorithms  117, 118 put too much emphasis on computing a minimal-order equation for the integral. While this is an interesting property, other kinds of integration algorithms have successfully relaxed this condition. For example, for integrating rational functions, the state-of-the-art algorithm  104 depends on a parameter r>0. The computed equation is minimal only for r large enough, which corresponds to the degeneration rank of some spectral sequence  75. In practice, this has never been an obstacle: most of the time we obtain a minimal equation with a small value of r. For the few remaining cases, we will soon propose a generalized procedure to minimize the equation a posteriori; this will be a consequence of a work on univariate guessing (see §3.4.1) that builds and expands on  53. With the small values of r that apply in most cases, the algorithm already outperforms previous ones in terms of computational complexity 51 and practical performance, being able to compute integrals that were previously out of reach. We consider it to be a special case of the general algorithm that we want to develop, and a proof of feasibility. However, the effort will be in vain without significant progress on the computation of Gröbner bases in Weyl algebras. Fortunately, and this is the second course of action, we think that the framework of algebraic D-modules enables efficient data structures modeled on recent progress in the context of polynomial systems. Progress in this direction (as explained in §3.1.1) will immediately lead to significant improvement for symbolic integration.

3.2.2 Reduction-based creative telescoping

The approach to symbolic integration based on creative telescoping is a definite expertise of the team. Although the approach is difficult to use for integrals with boundaries, it still has many appealing features. In particular, it generalizes well to discrete analogues. Recently, the team has initiated the development of a new line of algorithms, called reduction-based. Despite continuing work, this line has not yet been extended to full generality 47, 96. These recent theoretical developments are not yet reflected in current software packages (only prototype implementations exist) and therefore their practical applicability, and how the algorithms compare, is not yet fully understood. Filling these gaps will be a good starting point for us, but the ultimate goal will be to formulate analogous algorithms for the difference case (summation of holonomic sequences), for the q-case, and for general mixed cases. We expect that these advances in the theory will have a great impact on various applications.
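
For readers less familiar with the method, here is the smallest discrete instance of creative telescoping (a standard exercise, not taken from the cited papers): with $F(n,k)=\binom{n}{k}/2^n$, the certificate $G(n,k)=-\binom{n}{k-1}/2^{n+1}$ satisfies the telescopic identity below; summing it over all integers k makes the right-hand side vanish and proves $S(n+1)=S(n)$ for $S(n)=\sum_k F(n,k)$, that is, $\sum_k \binom{n}{k}=2^n$, the telescoper being the operator $S_n-1$. Reduction-based algorithms construct such pairs (telescoper, certificate) by reducing F and its shifts modulo differences (respectively, derivatives).

```latex
% Creative telescoping in its simplest discrete instance,
% with F(n,k) = \binom{n}{k}/2^n and G(n,k) = -\binom{n}{k-1}/2^{n+1}.
\[
  F(n+1,k) - F(n,k)
  \;=\; \frac{\binom{n}{k-1} - \binom{n}{k}}{2^{n+1}}
  \;=\; G(n,k+1) - G(n,k).
\]
```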

3.2.3 Holonomic moment methods

In applied mathematics, the method of moments provides a computational approach to several important problems involving polynomial functions and polynomial constraints: polynomial optimization, volume estimation, computation of Nash equilibria, ...  109. This method considers infinite-dimensional linear optimization problems over the space of Borel measures on some space contained in $\mathbb{R}^n$. They admit finite-dimensional relaxations in terms of linear matrix inequalities, where a measure μ is represented approximately by a finite number of moments $\int \mathbf{x}^{\alpha}\,d\mu$.

From the holonomic point of view, the generating function of the moments of μ — or, equivalently, the characteristic function $\phi(\mathbf{u})=\int \exp(i\,\mathbf{u}\cdot\mathbf{x})\,d\mu$ — is holonomic for a large class of measures μ (which includes all measures that appear in current applications of the method of moments). This remark already unlocks some applications where the current bottleneck is the computation of many moments: differential equations on $\phi(\mathbf{u})$ reflect recurrence relations on the moments, and computing the former with symbolic integration will lead to efficient algorithms for computing the moments.

A line of research developed recently 99, 110, 107, 129, 108 focuses on reducing the size of the matrices in the linear matrix inequalities (LMI) involved in the relaxations by using pushforward measures. For example, let us consider a polynomial $f\in\mathbb{R}[x_1,\dots,x_n]$ and the problem of computing the volume of $\{p\in[0,1]^n \mid f(p)\ge 0\}$. The article  93 solves this problem with a linear program over Borel measures on $[0,1]^n$. Using the pushforward measure, the work  99 reduces it to a linear program over measures on $\mathbb{R}$, supposedly much easier to solve. However, this comes at the cost of computing the moments $\mu_k=\int_{[0,1]^n} f^k\,dx_1\cdots dx_n$ for increasingly large values of k. While this is an elementary task (it is enough to expand $f^k$), the number of monomials to compute is $\sim\frac{1}{n!}\,(k\deg f)^n$ for large k, and this becomes the bottleneck of the method. The computation of the generating series $\sum_{k\ge 0}\mu_k t^k$ using symbolic integration enables the computation of a linear recurrence relation, of size at most $\deg(f)^n$, for the moments $\mu_k$, and we can compute $\mu_0,\dots,\mu_k$ in only $O(k\deg(f)^n)$ arithmetic operations, or $\tilde O(k^2\deg(f)^n)$ bit operations. This should be a low-hanging fruit as soon as we have reasonable implementations of symbolic integration on domains with boundaries (see §3.2.1). Naturally, the constant in the big O hides the size of the ODE of which the generating function is a solution, and it may be exponential in n. But this is only a worst-case bound, and any nongeneric geometric property will tend to make this ODE smaller.
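
The gain is easy to see on a toy instance (our own illustration, in plain Python): for n = 1 and f = x(1-x) on [0,1], the moments $\mu_k=\int_0^1 (x(1-x))^k\,dx=(k!)^2/(2k+1)!$ satisfy the first-order recurrence $2(2k+3)\,\mu_{k+1}=(k+1)\,\mu_k$, the kind of recurrence that symbolic integration extracts from the generating series; the sketch below compares computing the moments by direct expansion of $f^k$ with computing them by the recurrence, the latter in time linear in k.

```python
# Toy illustration (n = 1, f = x*(1-x) on [0,1]): the moments mu_k = int_0^1 f^k dx
# computed by direct expansion of f^k, versus by a linear recurrence of the kind
# produced by symbolic integration applied to the generating series.
from fractions import Fraction
from math import comb

K = 12

def moment_by_expansion(k):
    """Expand (x*(1-x))^k = sum_j C(k,j)*(-1)^j*x^(k+j) and integrate term by term."""
    return sum(Fraction(comb(k, j) * (-1) ** j, k + j + 1) for j in range(k + 1))

# The recurrence 2*(2k+3)*mu_{k+1} = (k+1)*mu_k, with mu_0 = 1:
# a linear number of arithmetic operations in K.
mu = [Fraction(1)]
for k in range(K):
    mu.append(mu[k] * (k + 1) / (2 * (2 * k + 3)))

print(all(mu[k] == moment_by_expansion(k) for k in range(K + 1)))   # True
```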

One step further, we will try to interpret the whole moment method in the holonomic setting. The differential equation for $\sum_{k\ge 0}\mu_k t^k$ not only enables the computation of the moments $\mu_k$, it somehow encodes all the $\mu_k$. Recovering numerical values, such as the volume, from this differential equation is akin to the seminumerical algorithms we know (see §3.5.1). As a next step, we will study how some optimization problems treated by the method of moments behave in this holonomic setting. We think especially of the problems of chance optimization and chance constrained optimization  98: in the former, one maximizes the probability of success over the design parameters; in the latter, one optimizes a goal while ensuring that some probability remains low.

3.2.4 New aspects of symbolic integration

Mahlerian telescopers.

Here the aim is to determine the relations satisfied by a solution of a Mahler equation (see §3.3.3). A natural generalization is to search for relations among solutions of different Mahler equations. Our objective is to provide an algorithmic answer to this generalization, for (Laurent) power series $y_i$ that are solutions of inhomogeneous first-order equations of the form $y_i(z^p)+a_i(z)\,y_i(z)=b_i(z)$, with coefficients in $\overline{\mathbb{Q}}(z)$. We will start with the easy case where all the $a_i$'s are equal to 1. Under this assumption, a theorem of Hardouin and Singer guarantees that there exists an algebraic relation with coefficients in $\overline{\mathbb{Q}}(z)$ between $y_1,\dots,y_n$ if and only if there exists a Mahlerian telescoper between the $b_i(z)$. (This originates in Hardouin's PhD thesis and was generalized in  91.) We will work on making such an existence test algorithmic and, if possible, on the calculation of such telescopers. For this, we will be inspired by existing algorithms for calculating telescopers for other types of functional operators.

D-algebraicity and elliptic telescoping.

Random walks confined to the quarter plane are a well-studied topic, as testified by the book  83. A new algebraic approach, relying on the Galois theory of difference equations, has been introduced in  77 to determine the nature of the generating series of such walks. This approach gives access to the D-algebraicity of the generating functions, that is, to the knowledge of whether they satisfy some differential equations (linear or non-linear). More precisely, D-algebraicity is shown to be equivalent to the fact that a certain telescopic equation, similar to the one appearing in the classical context of creative telescoping, but defined on an elliptic curve attached to the walk model, admits solutions in the function field of that curve. For the moment, the corresponding telescoping equations are solved by hand, in a rather ad hoc fashion, using case-by-case treatment. We aim at developing a systematic and automated approach for solving this kind of elliptic creative telescoping problems. To this end, we will import and adapt our algorithmic methods from the classical case to the elliptic framework.

3.2.5 Software

Because the software pertaining to symbolic integration depends on the developments on multivariate systems, our goals related to software for symbolic integration have been described in §3.1.4.

3.3 Computerized classification of functions and numbers

Classifying objects, determining their nature, is often the culmination of a mature theory. But even the best established theories can be impracticable on a concrete instance, either by a lack of effectiveness or by a computational barrier. In both cases, an algorithm is missing: we have to systematize, but also effectivize and automate efficiently. This is what we propose to do, in order to solve classification problems relating to numbers, analytical functions, and combinatorial generating series.

3.3.1 Practical tests of algebraicity and transcendence for holonomic functions

Whether one can decide if all solutions of a given linear differential equation are algebraic is an old question, addressed by Fuchs in the 1870s. Singer showed in  128 that there exists an algorithm which takes as input a linear differential equation with coefficients in $\mathbb{Q}[x]$, and decides in a finite number of steps whether or not it has a full basis of algebraic solutions. If the answer is negative, this does not automatically exclude the possibility that a particular solution is algebraic. (For instance, the linear differential equation $(xy')'=0$ has not only the algebraic solution 1, but also the transcendental holonomic solution $y(x)=\log(x)$.) However, a recent refinement of Singer's 1979 method can be used to solve, in principle, Stanley's open problem  132: given a holonomic power series $y(x)$ by an annihilating linear differential equation and sufficiently many initial terms in its expansion, decide if $y(x)$ is algebraic or transcendental. Unfortunately, the corresponding algorithm is too slow in practice, because of its high computational complexity. An interesting question is to find efficient alternatives that are able to answer Stanley's question on concrete difficult examples.

An approach that always works is the algorithmic guess-and-prove paradigm (see §3.4): one guesses a concrete polynomial witness, and then post-certifies it. This method is very robust and works perfectly well, but it may fail on examples whose minimal polynomial is much larger than the input differential equation. For instance, in an open question by Zagier  137, the input differential equations have order 4, but the (estimated) algebraic degree of the desired solution is 155520, hence much too large to allow the computation of the minimal polynomial. (Note that the estimate is obtained using the seminumerical methods evoked in §3.5.)

We aim at designing various pragmatic algorithmic methods for proving algebraicity or transcendence in such difficult cases. First, the algebraic nature of the holonomic function is tested heuristically, using a mixture of numeric and p-adic methods (e.g., monodromy estimates and p-curvature computations). In cases where transcendence is conjectured, the method we will develop is an application of the minimization algorithms in §3.4.1: after finding a minimal-order ODE, an analysis of singularities is sufficient to decide transcendence, at least for interesting subclasses of inputs (e.g., a certain class of generating series of binomial sums). In cases where algebraicity is conjectured, we plan to apply computational strategies inspired by effective differential Galois theory and effective invariant theory, in particular by the recent work  32.

3.3.2 Algorithmic determination of algebraic values of E-functions

E-functions are holonomic and entire power series subject to some arithmetic conditions; they generalize the exponential function. The class contains most of the holonomic exponential generating functions in combinatorics and many special functions such as the Airy and the Bessel functions. Given an E-function f represented implicitly by a linear differential equation (and enough initial terms), the question is to determine algorithmically the algebraic numbers α such that f(α) is algebraic. A recent article by Adamczewski and Rivoal  27 proves that the problem is decidable. It relies on important works by Siegel  127, Shidlovskii  126, and Beukers  41. However, the underlying algorithm has no practical applicability. We will obtain an improved version of this algorithm, by accelerating its bottleneck, which consists in computing a linear differential operator of minimal order satisfied by f. This will take advantage of the results obtained in §3.4.1. By continuing the line of work opened in 53, the idea is now to exploit the particular structure of differential equations satisfied by E-functions, and to use bounds produced by calculation on the considered equation rather than theoretical bounds such as “multiplicity lemmas”. Our previous improvements will make this algorithm practical. We will also address an extension of the theory that determines cases of algebraic dependency between evaluations of E-functions 85.

3.3.3 Rational solutions of Riccati-like Mahler equations and hypertranscendence

Mahler equations are functional equations that relate a function $f(z)$ with $f(z^p)$, $f(z^{p^2})$, etc., for some integer $p\ge 2$. The study of Mahler equations is motivated by Mahler's work in transcendence, as well as by the study of automatic sequences, produced by finite automata (see §3.1.3). From a computer algebra perspective, the basic tasks concerning Mahler equations are poorly understood, compared to differential or recurrence equations.

Roques designed an algorithm for the computation of the Galois group of Mahler equations of order 2  124. This group reflects the algebraic relations between the solutions. So its computation is relevant in transcendence theory. Roques' algorithm relies on deciding the existence of rational solutions to some nonlinear Mahler equations that are analogues of Riccati differential equations. For this task, Roques proposes an algorithm reminiscent of Petkovšek's algorithm  119, with an exponential arithmetic complexity as it has to iterate through all monic factors of well-identified polynomials. Building on recent progress in the linear case 67, we want to obtain a polynomial-time algorithm for this decidability problem, or at least one that is not exponentially sensitive to the degree of the polynomial coefficients of the equation.

An application of this work will be a new algorithm to decide the differential transcendence of solutions of Mahler equations of order 2, following a criterion given by Dreyfus, Hardouin and Roques (see 76, 124). This would make it possible to prove new results about some classical Mahler functions and the relations between them. An example will be to reprove and extend the hypertranscendence of the solutions to the Mahler equation satisfied by the generating series of the Stern sequence  74.

3.3.4 Algorithmic determination of algebraic values of Mahler functions

We aim at studying the special values of Mahler functions, going through the search for algebraic values and more generally for algebraic relations between values. We will resume the analysis of the algorithm in  26, to highlight its computational limitations, before optimizing its subtasks. We are thinking in particular of the rationality test, for which an algorithm was given in  34 and another with better complexity appeared recently 67, and of the search for minimal equations, for which structured linear algebra techniques should allow for practical efficiency.

3.3.5 Efficient resolution of functional equations with 1 catalytic variable

In enumerative combinatorics, many classes of objects have generating functions that satisfy functional equations with “catalytic” variables, relating the complete function with the partial functions obtained by specializing the catalytic variables. For equations with a single catalytic variable, either linear or nonlinear, solutions are invariably algebraic. This is a consequence of Popescu's theorem on Artin approximation with nested conditions  122, a deep result in commutative algebra. However, the proof of this qualitative result is not constructive. Hence, to go further, towards quantitative results, different approaches are needed. Bousquet-Mélou and Jehanne proposed in  54 a method which applies in principle to any equation of the form $P(F(t;x),F_1(t),\dots,F_k(t),t,x)=0$, where x is a (single) catalytic variable, that admits a unique solution $(F,F_1,\dots,F_k)\in\mathbb{Q}[x][[t]]\times\mathbb{Q}[[t]]^k$. The method is based on a systematic constructive approach, which first derives from the functional equation a (highly structured) algebraic elimination problem over $\mathbb{Q}(t)$ with $3k$ unknowns and $3k$ polynomial equations, whose degree is linear in the degree δ of the input functional equation. The problem is already nontrivial for k=1, but most interesting combinatorial applications require k>1, and current methods are only able to tackle functional equations with small values of k (at most 3) and small total degree δ (at most 4). We will provide unified, systematic and robust algorithms for computing polynomial equations exhibiting the algebraicity of solutions for functional equations with one catalytic variable, building on  54. The ideal goal is to be able to exploit the geometry and symmetries of the elimination problems arising from the approach in  54. The final objective is to produce efficient implementations that can be used by combinatorialists in order to solve their functional equations with one catalytic variable in one click.

3.3.6 Classification of solutions for functional equations with 2 catalytic variables

When several catalytic variables are involved, Popescu's theorem does not hold anymore. The solutions are not necessarily algebraic anymore, and holonomy is not guaranteed either, even in the linear case.

In the linear case, our main objective is to fully automate the resolution of linear equations with two catalytic variables coming from lattice walk questions, when the walk model admits Tutte invariants and decoupling functions. A first nontrivial challenge will be to produce a new computer-assisted proof of algebraicity for the famous Gessel model, different in spirit from the first proof 50: instead of guess-and-prove, we will be inspired by the recent “human” proofs in  55, 36 relying on Tutte invariants. There are several nontrivial subproblems, both on the mathematical and algorithmic sides. One of them is to determine if a model admits invariants and decoupling functions, and if so, to compute them. A first step in this direction was recently taken by Buchacher, Kauers and Pogudin  59, in the simpler case when one looks for polynomials instead of rational functions.

In the nonlinear case with two catalytic variables, few results exist, and almost no general theory. These equations occur systematically when counting planar maps equipped with an additional structure, for instance a colouring (or a spanning tree, a self-avoiding walk, etc.). On this side, the study will be of a more prospective nature. However, we envision the resolution of several challenges. A first objective will be to test various guess-and-prove methods on Tutte's equation  135 satisfied by the generating function of properly q-colored triangulations of the sphere. Any kind of progress on it would be an important success, for instance proving algebraicity of the solution by a computer-driven approach, even for particular values of q such as q=2 and q=3. A second objective will be to automate the strategy based on Tutte invariants employed by Bernardi and Bousquet-Mélou, and to solve the more general equation (Potts model on planar maps) in an automated fashion. This is interesting already for q=2; in this case, proofs already exist in  35, but they use various ad hoc tricks. We aim at solving the conjectures in  35 for q=3, concerning the enumeration of properly 3-colored near-cubic maps, by any combination of methods (guess-and-prove, geometry-driven elimination for structured polynomial systems, Tutte invariants).

3.3.7 Deciding integrality of a sequence

Given enough terms of a sequence, it is possible to reconstruct a linear recurrence relation of which the sequence is a solution, if there is one. For example, with the nine numbers 1, 3, 13, 63, 321, 1683, 8989, 48639 and 265729, one can reconstruct the recurrence relation $(n+1)u_n-(6n+9)u_{n+1}+(n+2)u_{n+2}=0$ for the Delannoy numbers. We would also like to be able to reconstruct the closed form $u_n=\sum_{k=0}^{n}\binom{n}{k}\binom{n+k}{k}$, because it reveals arithmetic information absent from the recurrence, such as the integrality of the numbers $u_n$. The search for a closed form can start by obtaining candidates in a heuristic way, since the summation algorithms make it possible to rigorously prove or disprove a posteriori that the reconstructed closed form is indeed correct.
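
The guess-and-check loop on this example is easily scripted (plain Python, our own illustration): the sketch below recomputes the nine numbers from the conjectured closed form and verifies that they satisfy the reconstructed recurrence; turning such checks into rigorous proofs is precisely what the summation algorithms provide.

```python
# Illustration: check the guessed closed form and recurrence for the
# central Delannoy numbers u_n = sum_k C(n,k)*C(n+k,k).
from math import comb

def delannoy(n):
    return sum(comb(n, k) * comb(n + k, k) for k in range(n + 1))

u = [delannoy(n) for n in range(9)]
print(u)     # [1, 3, 13, 63, 321, 1683, 8989, 48639, 265729]

# The reconstructed recurrence (n+1)*u_n - (6n+9)*u_{n+1} + (n+2)*u_{n+2} = 0.
print(all((n + 1) * u[n] - (6 * n + 9) * u[n + 1] + (n + 2) * u[n + 2] == 0
          for n in range(7)))   # True
```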

3.3.8 Algorithmic resolution of Padé-approximation problems

Most of the proofs of irrationality of some constant c construct a sequence of rational numbers approximating c with a tight control on the growth of the denominator. Typically, c=F(1) for some holonomic function F, and approximations of F by rational functions may lead to rational approximations of c, by evaluating at 1. Good candidates for approximating F are the Padé approximants of F, originating in Hermite's work  94. But approximations that actually lead to interesting Diophantine results are rare gems. More recently, a general course of action has emerged 42, 131, 84 to deal with the case of multiple zeta values (MZV). It is based on the simultaneous approximation of polylogarithm functions by rational functions. We are looking to automate this approach and to extend its field of application.

We will use computer-assisted symbolic and numerical computations for the construction of a relevant Padé-approximation problem. Then, the resolution of the problem must be automated. This is fundamentally a computational problem in a holonomic setting. The natural approach here is guess-and-prove: we first guess what could be a closed-form formula for the solution by computing explicitly the solutions for some fixed values of n, then we prove that the guess indeed leads to a solution (which must be unique if the original problem is well-posed). The last step will typically use symbolic integration and Gröbner bases. Similar guess-and-prove approaches in a holonomic setting have already given several Diophantine results 138, but Padé approximation has not yet been tackled in this way.
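
As a down-to-earth illustration of the mechanism (a plain mpmath sketch, far from the structured constructions needed for actual irrationality proofs), the diagonal Padé approximants of log(1+x) at 0, evaluated at x = 1, give rapidly converging rational approximations of log 2, the prototypical constant approached in this way.

```python
# Illustration: diagonal Pade approximants of log(1+x), evaluated at x = 1,
# as rational approximations of log(2).
from mpmath import mp, mpf, pade, polyval, log

mp.dps = 30
N = 8
# Taylor coefficients of log(1+x) at 0: 0, 1, -1/2, 1/3, -1/4, ...
a = [mpf(0)] + [mpf((-1) ** (k + 1)) / k for k in range(1, 2 * N + 1)]

p, q = pade(a, N, N)                    # numerator and denominator coefficients
approx = polyval(p[::-1], 1) / polyval(q[::-1], 1)

print(approx)
print(abs(approx - log(2)))             # already a very small error for N = 8
```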

3.3.9 Software

Our future algorithm for computing a linear differential equation of minimal order satisfied by a given holonomic function will be implemented and made available to users. This may include the application to the determination of algebraic values of E-functions. We will do the same concerning linear Mahler equations of minimal order satisfied by given Mahler functions, and concerning the determination of their algebraic values. Our work on solving equations with catalytic variables started rather recently, so it is still too early to decide the form that related software should take, but we definitely aim to provide combinatorialists with an implementation that exhibits the algebraic and/or differential equations they are after.

3.4 Guess-and-prove

Pólya theorized and popularized a “guess-and-prove” approach to mathematics in remarkable books  121, 120. It has now become an essential ingredient of experimental mathematics, whose power is greatly enhanced when it is used in conjunction with modern computer algebra algorithms. This paradigm is a keystone of recent spectacular applications in experimental mathematics, such as 50, 102, 103. The first half (the guessing part) is based on a “functional interpolation” phase, which consists in recovering equations starting from (truncations of) solutions. The second half (the proving part) is based on fast manipulations (e.g., resultants and factorization) with exact algebraic objects (e.g., polynomials and differential operators).

In what follows we mostly focus on the guessing phase. It is called algebraic approximation  57 or differential approximation  100, depending on the type of equations to be reconstructed. For instance, differential approximation is an operation to get an ODE likely to be satisfied by a given approximate series expansion of an unknown function. This kind of reconstruction technique has been used at least since the 1970s by physicists  89, 90, 97, under the name recurrence relation method, for investigating critical phenomena and phase transitions in statistical physics. Modern versions are based on subtle algorithms for Hermite–Padé approximants  33; efficient differential and algebraic guessing procedures are implemented in most computer algebra systems.
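
As a small-scale illustration of algebraic guessing (a self-contained SymPy sketch of ours, not a production implementation like the ones mentioned in §3.4.3), the code below recovers, from the first coefficients of the Catalan generating function $y(x)=\sum_n C_n x^n$, a polynomial P(x,y) of bidegree at most (1,2) such that P(x, y(x)) vanishes to high order; it prints a scalar multiple of $xy^2-y+1$, the equation that the proving phase would then certify.

```python
# Illustration of algebraic guessing: reconstruct a polynomial P(x, y) of
# bidegree at most (1, 2) annihilating the Catalan generating function
# y(x) = sum_n binomial(2n, n)/(n+1) x^n, from its first N coefficients.
import sympy as sp

x, y = sp.symbols('x y')
N = 12
catalan = [sp.binomial(2 * n, n) / (n + 1) for n in range(N)]
ys = sum(c * x**n for n, c in enumerate(catalan))      # truncated series

monomials = [x**i * y**j for i in range(2) for j in range(3)]
coeffs = sp.symbols('c0:6')
ansatz = sum(c * m for c, m in zip(coeffs, monomials))

# Substitute the truncated series and require vanishing up to order N.
expanded = sp.expand(ansatz.subs(y, ys))
eqs = [sp.Eq(expanded.coeff(x, k), 0) for k in range(N)]
sol = sp.solve(eqs, coeffs, dict=True)[0]

print(sp.factor(ansatz.subs(sol)))      # a scalar multiple of x*y**2 - y + 1
```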

In the following subsections, we describe improvements that we will work on.

3.4.1 Univariate guessing

Minimization.

A first task is to optimize the search for the minimal-order ODE satisfied by a given holonomic series. Feasibility is already known from the recent work  27, but the corresponding algorithm is not efficient in practice, because it relies on pessimistic degree bounds and on pessimistic multiplicity estimates. We will design and implement a much more efficient minimization algorithm, which will combine efficient differential guessing with a dynamic computation of tight degree bounds.

Post-certification.

“Multiplicity lemmas” are theorems concluding that an expression representing a formal power series is exactly zero under the weaker assumption that the expression is zero when truncated to some order. In general, the expression is a differential polynomial in a series, but interesting subcases are non-differential polynomials, to test algebraicity, and linear differential expressions, to test holonomicity. In good situations, multiplicity lemmas turn guessing into a proving method or even a decision algorithm. A particularly nice form of a multiplicity lemma is available for polynomial expressions 46, and a similar result exists for linear ODEs 39. We will implement such bounds as proving procedures, and we will generalize the approach to other kinds of expressions, e.g., expressions in divided-difference operators that appear in combinatorics, e.g., in map enumeration  54.

Recombination.

Generating functions appear in a variety of classes of increasing complexity, in relation to the equations they satisfy. A third subtask relates to the search for an element of a lower complexity class inside the solution set of a higher complexity class. For instance, can a linear or some other combination of non-holonomic series be holonomic? Can a linear combination of holonomic series be algebraic, or even rational? A promising ongoing work, carried out incidentally with the work on Riccati-type solutions for Mahler equations (see §3.3.3), performs a similar guessing by a suitable search for constrained Hermite–Padé approximants after computing the whole module of approximants. But the main expected impact of the approach would be for differential analogues, and we will strive to generalize the approach, taking advantage of the formal analogy between many types of linear operators.

Preparing data.

As guessing often requires to first prepare a lot of data, developing fast expansion algorithms for classes of equations is also related to guessing. In this direction, we plan to design a fast algorithm for the high-order expansion of a DD-finite series (i.e., a series satisfying a linear differential equation with holonomic coefficients). The complexity of the analogous problem for a linear ODE with power series coefficients is quasi-linear in the truncation order; that for a linear ODE with polynomial coefficients is just linear. For DD-finite series, we plan to interlace the two approaches without first expanding the series coefficients of the input equation to the desired order, so as to avoid a large constant and a logarithmic factor.
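
The linear-complexity case is the familiar one, illustrated below with a plain Python sketch of ours: from the ODE $(1-4x)\,y'=2y$ satisfied by $y(x)=(1-4x)^{-1/2}$, coefficient extraction gives the recurrence $(n+1)\,a_{n+1}=(4n+2)\,a_n$, and the first N coefficients (here, the central binomial coefficients) follow in O(N) arithmetic operations.

```python
# Linear-time expansion of a holonomic series from its ODE-derived recurrence:
# y = (1-4x)^(-1/2) satisfies (1-4x)*y' = 2*y, hence (n+1)*a_{n+1} = (4n+2)*a_n
# for its coefficients a_n, which are the central binomial coefficients C(2n, n).
from math import comb

N = 10
a = [1]
for n in range(N - 1):
    a.append(a[n] * (4 * n + 2) // (n + 1))     # exact integer division

print(a)                                        # [1, 2, 6, 20, 70, ...]
print(a == [comb(2 * n, n) for n in range(N)])  # True
```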

3.4.2 Multivariate guessing

Multivariate aspects of guessing relate to activities that we plan to develop as a means of strengthening scientific collaborations with colleagues in Paris (PolSys, Sorbonne U.) and Linz (Johannes Kepler University Linz, Austria). How soon the research happens will depend on how interaction with those colleagues evolves.

Trading order for degree.

An established technique in the univariate case is known as “trading order for degree”. It is based on the observation that minimal-order operators tend to have very high degree, while operators of slightly higher order often have much smaller degrees and are therefore easier to guess. A candidate for the minimal-order operator is then obtained as the greatest common right divisor of two guessed operators of nonminimal order. We will extend this successful technique to the multivariate case. The desired output in this case is a Gröbner basis of a zero-dimensional annihilating ideal. The coefficients of the Gröbner basis elements are high-degree polynomials, and the idea is, as in the univariate case, not to guess them directly, but to guess ideal elements of smaller total size and to compute the Gröbner basis from them. As Gröbner basis computations can be costly, the alternative operators will clearly already have to be “close” to a Gröbner basis in order for the idea to be beneficial. The questions are: what should being close to a Gröbner basis mean, how close should the operators be chosen, how much degree drop can be expected then, and how do the answers to these questions depend on the monomial order?

Exploiting nested structures.

In another direction, we plan to exploit the generalized Hankel structure of the matrices that appear when the guessing of linear recurrence relations is modeled through linear algebra. Regarding relations with constant coefficients, this finds applications in polynomial system solving through the Sparse-FGLM algorithm 81, 82 for finding a lexicographic Gröbner basis. The linear system is block-Hankel with blocks sharing the same structure, and this recursive structure has the same depth as the number of variables. Yet, up to now, only one layer of the structure is handled using fast univariate polynomial arithmetic; the other ones are dealt with by noting that the matrix has a quasi-Hankel structure and by using fast algorithms for this type of matrix 49. However, the displacement rank of this matrix is not small; hence, not taking into account the full structure of the matrix is suboptimal. This is related to 38 for computing linear recurrence relations with constant coefficients using polynomial arithmetic and to 115 for computing multivariate Padé approximants. Analogously, the linear system modeled for guessing linear recurrence relations with polynomial coefficients is highly structured. It is the concatenation of matrices as above, yet these matrices are not independent, as they are all built from the same sequence. Even in the univariate case, the Beckermann–Labahn algorithm is not able to exploit this extra structure in order to be quasi-optimal in the input size. Hence, we would like to investigate how to do so.
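For the univariate constant-coefficient layer of this structure, the following SymPy sketch (a toy example of ours, not the structured algorithms discussed above) shows the Hankel matrix that guessing builds from sequence data; the fast algorithms exploit precisely this structure instead of running generic linear algebra on it.

    import sympy as sp

    # Terms of an "unknown" C-finite sequence (here: Fibonacci).
    a = [0, 1]
    for _ in range(18):
        a.append(a[-1] + a[-2])

    r = 2                                              # guessed order of the recurrence
    H = sp.Matrix(10, r + 1, lambda i, j: a[i + j])    # Hankel matrix built from the data
    # Prints one vector, (-1, -1, 1): the recurrence a(n+2) = a(n+1) + a(n).
    print(H.nullspace())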

In addition to the structure in the modeling, we want to exploit the structure of the sequences that come from applications. For instance, in the enumeration of lattice walks, the nonzero terms often lie in a cone and a lattice, and they are invariant under the action of a finite group. The goal is to take this structure into account in order to build smaller systems for the guessing, and to avoid the generation of more sequence terms than necessary.

3.4.3 Software

We will implement fast algorithms for computing Hermite–Padé approximants of various types 33. This will include coefficients given as modular integers or as integers (via modular reconstruction), as well as simple and simultaneous approximants. With such a fast, robust implementation at hand, we will also be able to address the guessing of algebraic differential equations (ADEs), going beyond the linear case. Our use of state-of-the-art algorithms for computing approximants (including the “superfast” one) should allow us to outperform earlier implementations such as Guess (by Hebisch and Rubey) and GuessFunc (by Pantone). We will also develop a variant of trading order for degree for the nonlinear setting. Our implementation will automate the critical selection of derivatives, powers, and coefficient degrees needed to reconstruct an ADE.
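As an elementary illustration of the linear case (a SymPy sketch of ours, far from the superfast algorithms alluded to above), a differential Hermite–Padé-type problem reduces to a structured linear system; here it recovers an annihilating operator of arcsin from a truncation of its series.

    import sympy as sp

    t = sp.symbols('t')
    f = sp.series(sp.asin(t), t, 0, 30).removeO()      # truncated series of a holonomic function
    fs = [f, sp.diff(f, t), sp.diff(f, t, 2)]          # f, f', f''

    # Ansatz p0(t) f + p1(t) f' + p2(t) f'' = O(t^20) with deg p_j <= 2.
    unknowns, expr = [], 0
    for j in range(3):
        for i in range(3):
            cji = sp.Symbol(f'c{j}{i}')
            unknowns.append(cji)
            expr += cji * t**i * fs[j]
    expr = sp.expand(expr)
    eqs = [expr.coeff(t, k) for k in range(20)]
    A, _ = sp.linear_eq_to_matrix(eqs, unknowns)
    # One kernel vector, proportional to the operator (1 - t^2) f'' - t f' = 0.
    print(A.nullspace())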

3.5 Seminumerical methods in computer algebra

The methods in this research axis deal directly with numbers but, following Knuth 101, they are properly called seminumerical because they lie on the borderline between symbolic and numeric computations. While numerical methods process numerical data and generate further numerical data, our seminumerical methods process exact data, generate high-precision numerical data, and reconstruct exact data. In this perspective, the basic unit is not the IEEE-754 floating-point number, but the arbitrary-precision number, typically with several thousand decimal places, sometimes more. The crux is not numerical stability, but computational complexity as the number of significant digits goes to infinity. When a number is known to such a high precision, it reveals fundamental structures: rationality, algebraicity, relations with other constants, etc. High-precision computation is a recurring and useful tool in the field of experimental mathematics  30. In some situations, it enables a guess-and-prove approach. In others, we are unable to step from “guess” to “prove”, but overwhelming numerical evidence is enough to shape a conviction. A celebrated example is the experimental discovery of the BBP formula for π  31, which was proved after its initial guessing. More recently, all the conjectures (some of which became theorems) about multiple zeta values, a hot topic in number theory and mathematical physics, start from high-precision numerical data.
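As a small illustration of how high precision reveals structure (our own sketch using the mpmath library; the constant and the working precision are chosen just for the example), the PSLQ algorithm recovers an integer relation, hence a minimal polynomial, from a purely numerical value.

    from mpmath import mp, mpf, pslq

    mp.dps = 60                              # work with 60 significant digits
    alpha = mpf(2)**(mpf(1)/3) + 1           # pretend this value is only known numerically

    # Look for an integer relation among 1, alpha, alpha^2, alpha^3.
    rel = pslq([alpha**k for k in range(4)])
    # Up to sign, rel is [-3, 3, -3, 1]: alpha^3 - 3 alpha^2 + 3 alpha - 3 = 0,
    # the minimal polynomial of 1 + 2^(1/3).
    print(rel)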

3.5.1 Seminumerical algorithms for linear differential equations

We promote linear differential equations as a data structure to represent and compute with functions (see §3.1). In truth, this data structure represents functions up to finitely many constants. It determines a global behavior but misses the pointwise aspect. Seminumerical methods combine both. They are an important tool for experimental mathematics because they can give strong indications about the nature of a function in very general situations (see §3.3.1).

Factorization.

Alexandre Goyer and Raphaël Pagès have started PhD theses on the factorization of differential operators. Factorization is a fundamental operation for solving linear differential equations, or at least for elucidating the nature of their solutions. Goyer considers seminumerical methods, which rely on numerical evaluations of the solutions of the differential operators to guess a factorization numerically. High precision makes it possible to reconstruct the factors exactly, and a simple multiplication certifies the computation. Pagès considers a discrete analogue of numerical evaluation: reduction modulo a prime number.

Effective analytic continuation.

The main tool for computing high-precision evaluations of functions or integrals is effective analytic continuation of solutions of linear differential equations. It is a form of numerical ODE solver, specialized for linear equations and able to carry out high precision all along the continuation path.

Numerical ODE solvers are a very classical topic in numerical analysis 60, with popular methods like Runge–Kutta or multistep methods. A much less known family of symbolic-numeric algorithms, which we could call rigorous Taylor methods, originates in works of the Chudnovsky brothers in the 1980s and 1990s 66, 65 and has later been developed by van der Hoeven  95 and Mezzarobba  112, 113. This family of algorithms only handles linear ODEs with polynomial coefficients, which is precisely the nature of the ODEs arising in the context of this document. But contrary to classical methods, they provide very strong guarantees even in difficult situations, especially rigorous error bounds and correct behavior at singular points, all very desirable features in experimental mathematics. Furthermore, they feature a quasi-optimal complexity with respect to precision, meaning that one can easily compute with thousands of digits of precision: computing twice as many digits takes roughly twice as much time. This contrasts with fixed-order methods, which cannot reach such precision. For example, to compute 10,000 digits, the classical order-four Runge–Kutta method would typically need about 10^2500 steps. This quest for precision is crucial in experimental mathematics and theoretical physics  30.
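The following bare-bones sketch in Python with mpmath (ours; fixed steps, no rigorous error bounds, so only a caricature of the algorithms cited above) illustrates the principle of a Taylor method on the toy equation y'' = -y: the ODE itself provides the recurrence for the Taylor coefficients at each expansion point, and the attainable precision is limited only by the working precision.

    from mpmath import mp, mpf, sin

    mp.dps = 60

    def taylor_step(y0, y1, h, nterms=80):
        """One Taylor step for y'' = -y: from (y(a), y'(a)) to (y(a+h), y'(a+h))."""
        c = [y0, y1]                                   # Taylor coefficients of y at the point a
        for k in range(nterms - 2):
            c.append(-c[k] / ((k + 1) * (k + 2)))      # from the ODE: c_{k+2} = -c_k / ((k+1)(k+2))
        y = sum(c[k] * h**k for k in reversed(range(nterms)))
        dy = sum(k * c[k] * h**(k - 1) for k in reversed(range(1, nterms)))
        return y, dy

    # Continue the solution with y(0) = 0, y'(0) = 1 (i.e. sin) from 0 to 1 in four steps.
    y, dy = mpf(0), mpf(1)
    for _ in range(4):
        y, dy = taylor_step(y, dy, mpf(1)/4)
    print(y)
    print(sin(1))    # both values agree to roughly 60 digits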

Yet, as advanced as these algorithms may be, they struggle with the huge ODEs coming from our applications. The reason is easily explained: most algorithms and implementations are designed for small operators and large precision, and focus on a quasilinear complexity with respect to precision. Our situation is quite the opposite, with large ODEs and comparatively modest precision. It may be interesting to consider algorithms that are quadratic in the precision if the complexity with respect to the size of the ODE gets better. This is a genuinely blocking issue that must be addressed to enable new applications. To solve the problem, we will endeavor to provide new software that takes care to implement algorithms for all regimes of degrees and orders at moderate precision.

3.5.2 Period computation

Periods are numerical integrals that can be computed to high precision with symbolic-numeric integration, even though current algorithms fall far short of real applications in algebraic geometry beyond the case of curves. Algorithms for computing periods of curves are mature 73, 116, 114, 70, 58 and have been used, for example, for the computation of the endomorphism rings of genus-2 curves in the LMFDB 71. Algorithms in higher dimension are only emerging 78, 72, 125. Their current status does not make them suitable for many applications. Firstly, they are limited in generality: the articles 78, 72 deal with special double coverings of ℙ² or ℙ³, at low precision, while 125 deals with smooth projective hypersurfaces. In terms of efficiency, we are only able to treat some lucky quartic surfaces (and some very special quintic surfaces or cubic threefolds) for which the underlying ODEs are not too big.

With current methods, we managed to compute the periods of 180 000 quartic surfaces defined by sparse polynomials 106. This corpus of quartic surfaces was discovered by a random walk. Indeed, we are not able to compute (in a reasonable amount of time) the periods of an arbitrary given quartic surface, so we resorted to a random walk guided by ease of computation. This severely hinders applicability. Yet, it shows the feasibility of transcendental continuation to obtain algebraic invariants that are currently unreachable by any other means.

The seminumerical algorithms that we develop open perspectives in algebraic geometry. Some integrals of algebraic origin, called periods, carry interesting algebraic invariants. High-precision computation may reveal them where purely algebraic methods fail 106. These algebraic invariants are crucial to determine the fine structure of algebraic varieties. We aim at designing algorithms to compute periods efficiently for varieties of general interest, in particular K3 surfaces, quintic surfaces, Calabi–Yau threefolds and cubic fourfolds.
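As a down-scaled example of a period computation (our own mpmath sketch; one-dimensional, hence very far from the surfaces and threefolds targeted here), the real period of the elliptic curve y² = x³ − x can be evaluated to high precision and matched against a classical closed form, the kind of hidden algebraic structure that high precision makes visible.

    from mpmath import mp, mpf, sqrt, gamma, pi, quad, inf

    mp.dps = 30
    # Real period of the elliptic curve y^2 = x^3 - x: the integral of dx/y over a real cycle.
    omega = 2 * quad(lambda x: 1 / sqrt(x**3 - x), [1, 2, inf])
    # Classical closed form in terms of a special value of the Gamma function.
    closed_form = gamma(mpf(1)/4)**2 / sqrt(2*pi)
    print(omega)
    print(closed_form)   # the two values agree to roughly the working precision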

3.5.3 Scattering amplitudes in quantum field theory

In quantum field theory, Feynman integrals appear when computing scattering amplitudes with perturbative methods. In practice, computing Feynman integrals is the most effective way to obtain predictions from a quantum field theory. Precise predictions require higher-order perturbative terms, leading to more complex integrals and daunting computational challenges. For example, 29 reports on the methods used, the difficulties encountered and the limitations met when performing precision calculations for teraelectronvolt collisions at the Large Hadron Collider (LHC).

As far as mathematics is concerned, Feynman integrals are periods. Although this makes the evaluation of Feynman integrals look like just a special case of symbolic-numeric integration, it would be naive to pretend that our methods apply without effort: it is clear that the computations are so challenging that only specialized methods may succeed. Current methods include sector decomposition 130 (where the integration domain is decomposed into smaller pieces on which traditional numerical integration algorithms perform well) and the use of differential equations 92 in a similar fashion to what we propose here, namely the symbolic computation of integrals with a parameter combined with numerical ODE solving. In the longer term, we expect that an efficient toolbox to deal with holonomic ideals would improve computations with Feynman integrals. It is however too early to say.

In the short term, the experimental mathematics toolbox that we want to develop may be useful to understand the geometry underlying some Feynman integrals. The typical outcome is simple analytic formulas 44, 43 allowing for fast and precise computations. In this context, identifying key algebraic invariants before engaging in further mathematical work is crucial. For example, a key fact in the analysis of a three-loop graph in 43 is that the generic member of some family of K3 surfaces has Picard rank 19. For other graphs, cubic fourfolds appear, which we cannot investigate numerically at the moment. An expected outcome of the previously exposed objectives is the computation of the periods of such varieties. This is a first step towards a more systematic development of this interface with high-energy physics.

3.5.4 Software

Solid software foundations for effective analytic continuation (see §3.5.1) will be important for the other tasks in this section. We currently use the relevant part of the ore_algebra package, developed by Marc Mezzarobba, but it is a bottleneck for several algorithms. The plan for the software development (improvement of ore_algebra, or a whole new package) is not fixed yet: it depends on the nature of the algorithmic ideas that will emerge.
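For reference, here is a minimal usage sketch of the analytic continuation features of ore_algebra, to be run in a SageMath session with the package installed; the calls below follow the interface documented for the package, and the operator is just an arbitrary example.

    from ore_algebra import DifferentialOperators

    Dops, x, Dx = DifferentialOperators(QQ, 'x')
    dop = (x**2 + 1)*Dx**2 + 2*x*Dx            # a Fuchsian operator annihilating arctan(x)
    # Analytic continuation of the solution with y(0) = 0, y'(0) = 1 along the path 0 -> 1,
    # with a rigorous error bound of 1e-50; the result encloses arctan(1) = pi/4.
    print(dop.numerical_solution([0, 1], [0, 1], 1e-50))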

4 Application domains

As already expressed in §2.3, our natural application domains are:

  • Combinatorics,
  • Probability theory,
  • Number theory,
  • Algebraic geometry,
  • Statistical physics,
  • Quantum mechanics.

5 Highlights of the year

5.1 Institutional life

In 2022, the activity of the MATHEXP team was affected by the degraded morale and heavy dysfunctions within the institute.

  • The disastrous failure of the deployment of a new financial and human-resources information system (Eksae) negatively impacts all the administrative colleagues with whom researchers interact. It forces the former to duplicate their work, when it does not leave them unable to perform even simple tasks. It also directly impacts researchers, who cannot oversee their budget in any reliable way and occasionally have to delay simple purchases as a consequence of a reprioritization of overburdened administrative forces.
  • The continuously diminishing emphasis on research in the presentation of Inria, and the growing fuzziness around the status of researchers and the role and position of the institute, undermine the meaning of our work and our collective values.
  • The extremely degraded relationship between the Inria Commission d'Évaluation and the current Inria leadership is unbearable. It causes a general distrust in the leadership's intentions and in the outcome of future hiring and promotion processes. The team deplores the current situation. It thanks the Commission d'Évaluation for its outstanding communication to the researchers it represents and for its continued efforts in organizing and participating in Inria hiring and promotion committees.

5.2 Creation of the project-team

Project-team MATHEXP was officially created on April 1, 2022. It evolves from the former SPECFUN project-team, with a more definite bias towards experimental mathematics.

5.3 ERC project: 10,000 DIGITS

Lairez's ERC proposal, which was selected for funding with a grant of roughly 1.4 million euros, has started its activities, including the first hires. The project focuses on the foundations of transcendental methods in numerical nonlinear algebra.

5.4 David P. Robbins Prize

Alin Bostan, of the team, together with Irina Kurkova and Kilian Raschel, has received the 2022 AMS David P. Robbins Prize for their paper “A human proof of Gessel's lattice path conjecture,” published in the Transactions of the American Mathematical Society in 2017. The paper proves highly nontrivial enumeration results on a family of lattice paths known as Gessel walks.

5.5 Promotion

Frédéric Chyzak was appointed “Directeur de Recherche” in 2022.

6 New results

Participants: Alin Bostan, Frédéric Chyzak, Guy Fayolle, Alexandre Goyer, Pierre Lairez, Rafael Mohr, Hadrien Notarantonio, Raphaël Pagès, Eric Pichon-Pharabod, Sergey Yurkevich.

6.1 Algorithms for discrete differential equations of order 1

Discrete differential equations of order 1 relate in a polynomial fashion a power series F(t,u) in t whose coefficients are polynomial in a “catalytic” variable u to one of its specializations, say F(t,1). Such equations are ubiquitous in combinatorics, notably in the enumeration of maps and walks. When the solution F is unique, a celebrated result by Bousquet-Mélou and Jehanne, reminiscent of Popescu's theorem in commutative algebra, states that F is algebraic. In 12, Alin Bostan, Frédéric Chyzak, and Hadrien Notarantonio, together with Mohab Safey El Din (Sorbonne Université), address algorithmic and complexity questions related to this result. In generic situations, they first revisited and analyzed known algorithms, based either on polynomial elimination or on the guess-and-prove paradigm. They then designed two new algorithms: the first has a geometric flavor, the second blends elimination and guess-and-prove. In the general case (no genericity assumptions), they proved that the total arithmetic size of the algebraic equations for F(t,1) is bounded polynomially in the size of the input discrete differential equation, and that one can compute such equations in polynomial time.

6.2 Symbolic-numeric factorization of differential operators

Frédéric Chyzak and Alexandre Goyer, together with Marc Mezzarobba (LIX), presented in 13 a symbolic-numeric Las Vegas algorithm for factoring Fuchsian ordinary differential operators with rational function coefficients. The new algorithm combines ideas of van Hoeij's “local-to-global” method and of the “analytic” approach proposed by van der Hoeven. It essentially reduces to the former in “easy” cases where the local-to-global method succeeds, and to an optimized variant of the latter in the “hardest” cases, while handling intermediate cases more efficiently than both.

6.3 Effective algebraicity for solutions of systems of functional equations with one catalytic variable

In 21, Hadrien Notarantonio and Sergey Yurkevich studied systems of discrete differential equations of general order in one catalytic variable. They provided a constructive and elementary proof of algebraicity of the solutions of such systems. They obtained effective bounds and developed a systematic method for computing the minimal polynomials. Their approach was a generalization of the pioneering work by Bousquet-Mélou and Jehanne (2006).

6.4 Reflected Brownian motion in a non-convex cone

Guy Fayolle, in collaboration with S. Franceschi (Télécom SudParis, Institut Polytechnique de Paris) and K. Raschel (CNRS, Université d'Angers, LAREMA), studies the stationary reflected Brownian motion in a non-convex wedge, which has been much more rarely analyzed in the probabilistic literature than its convex analogue. Two approaches are proposed.

  1. In 8, he proved that the stationary distribution can be found by solving a two-dimensional vector boundary value problem (BVP) on a single curve (a hyperbola) for the associated Laplace transform. The reduction to this kind of vector BVP seems to be quite new in the literature. As a matter of comparison, a single boundary condition is sufficient in the convex case. When the parameters of the model (drift, reflection angles and covariance matrix) are symmetric with respect to the bisector line of the cone, the model is reducible to a standard reflected Brownian motion in a convex cone. He additionally constructed a one-parameter family of distributions, which surprisingly provides for any wedge (convex or not) a particular example of stationary distribution of a reflected Brownian motion.
  2. The main result in 9 is to show that the stationary distribution can be obtained by solving a boundary value problem of the same kind as the one encountered in the quarter plane, up to various dualities and symmetries. The idea is to start from Fourier (and not Laplace) transforms, allowing to get a functional equation for a single function of two complex variables.

6.5 Stability and string stability of car-following models with reaction-time delay

In 10, Guy Fayolle, in collaboration with J.-M. Lasgouttes (Inria Paris) and C. Flores (Institute of Transportation Studies, UCB), has investigated the transfer function emanating from the linearization of a car-following model for human drivers when taking into account their reaction time, which is known to be a cause of the stop-and-go traffic phenomenon. This led to a non-rational transfer function, implying nontrivial stability conditions that they could give explicitly. These conditions are in particular satisfied whenever string stability holds. It is also shown how the reaction time can introduce a regime of partial string stability, where the transfer function modulus remains smaller than 1 up to some critical frequency. The authors explored conditions in the parameter space that discriminate between three different regimes (stability, string stability, partial string stability).

6.6 On the representability of sequences as constant terms

A constant term sequence is a sequence of rational numbers whose n-th term is the constant term of P(x)^n Q(x), where P(x) and Q(x) are multivariate Laurent polynomials. While the generating functions of such sequences are invariably diagonals of multivariate rational functions, and hence special period functions, it is a famous open question, raised by Don Zagier, to classify those diagonals which are constant terms. In 17, Alin Bostan and Sergey Yurkevich, in collaboration with Armin Straub (University of South Alabama, USA), provided such a classification in the case of sequences satisfying linear recurrences with constant coefficients. They further considered the case of hypergeometric sequences and, for a simple illustrative family of hypergeometric sequences, classified those that are constant terms.
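To fix ideas, here is a toy SymPy computation of ours (independent of 17): the constant terms of the powers of the Laurent polynomial x + 2 + 1/x form the central binomial sequence, one of the simplest constant term sequences.

    import sympy as sp

    x = sp.symbols('x')
    P = x + 2 + 1/x          # a Laurent polynomial; here P = (1 + x)^2 / x
    Q = sp.Integer(1)

    def ct(n):
        """Constant term of P(x)^n * Q(x), extracted after clearing denominators by x^n."""
        return sp.expand(P**n * Q * x**n).coeff(x, n)

    print([ct(n) for n in range(8)])                     # 1, 2, 6, 20, 70, ...
    print([sp.binomial(2*n, n) for n in range(8)])       # the central binomial coefficients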

6.7 Solving Rupert's problem algorithmically

A polyhedron 𝐏 ⊂ ℝ³ has Rupert's property if a hole can be cut into it such that a copy of 𝐏 can pass through this hole. There are several works investigating this property for specific polyhedra: for example, it is known that all 5 Platonic and 9 out of the 13 Archimedean solids admit Rupert's property. A commonly believed conjecture states that every convex polyhedron is Rupert. The paper 22 by Sergey Yurkevich in collaboration with Jakob Steininger is an extended abstract of their joint work in which they proved that Rupert's problem is algorithmically decidable for polyhedra with algebraic coordinates. There they also designed a probabilistic algorithm which can efficiently prove that a given polyhedron is Rupert. Using this algorithm they not only confirmed this property for the known Platonic and Archimedean solids, but also proved it for one of the remaining Archimedean polyhedra and many others. Moreover, they significantly improved on almost all known Nieuwland numbers and conjectured, based on statistical evidence, that the Rhombicosidodecahedron is in fact not Rupert.

6.8 The art of algorithmic guessing in gfun

The technique of guessing can be very fruitful when dealing with sequences that arise in practice. This holds true especially when guessing is performed algorithmically and efficiently. An ideal tool for this task is the gfun package for the Maple computer algebra system. In the now-published paper 11, Sergey Yurkevich explores and explains some of gfun's possibilities and illustrates them on two examples from recent mathematical research by him and his collaborators.

6.9 Fast computation of the N-th term of a q-holonomic sequence and applications

In 1977, Strassen invented a famous baby-step/giant-step algorithm that computes the factorial N! in arithmetic complexity quasi-linear in √N. In 1988, the Chudnovsky brothers generalized Strassen's algorithm to the computation of the N-th term of any holonomic sequence in essentially the same arithmetic complexity. In 6, Alin Bostan and Sergey Yurkevich designed q-analogues of these algorithms. They first extend Strassen's algorithm to the computation of the q-factorial of N, then the Chudnovskys' algorithm to the computation of the N-th term of any q-holonomic sequence. Both algorithms work in arithmetic complexity quasi-linear in √N; surprisingly, they are simpler than their analogues in the holonomic case. They provide a detailed cost analysis, in both the arithmetic and bit complexity models. Moreover, they describe various algorithmic consequences, including the acceleration of polynomial and rational solving of linear q-differential equations, and the fast evaluation of large classes of polynomials, including a family recently considered by Nogneng and Schost.
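The structure of these algorithms can be illustrated on the original factorial case by the following naive Python sketch (ours): the factorial is cut into about √N blocks, all values of a single polynomial; here the blocks are evaluated one by one by Horner's rule, whereas the actual algorithms evaluate all of them at once by fast multipoint evaluation to reach the quasi-linear cost in √N.

    from math import factorial, isqrt

    def factorial_mod(N, p):
        """N! mod p via the baby-step/giant-step product structure."""
        if N < 2:
            return 1 % p
        s = isqrt(N)
        # Baby steps: coefficients of f(x) = (x+1)(x+2)...(x+s) over Z/pZ.
        f = [1]
        for a in range(1, s + 1):
            g = [0] * (len(f) + 1)
            for i, coef in enumerate(f):
                g[i] = (g[i] + coef * a) % p      # contribution of the constant term a
                g[i + 1] = (g[i + 1] + coef) % p  # contribution of the term x
            f = g
        # Giant steps: N! = f(0) f(s) f(2s) ... f((s-1)s) * (s^2 + 1) * ... * N.
        result = 1
        for k in range(0, s * s, s):
            val = 0
            for coef in reversed(f):              # naive Horner evaluation; the fast algorithm
                val = (val * k + coef) % p        # evaluates all s points at once
            result = (result * val) % p
        for m in range(s * s + 1, N + 1):         # at most about 2*sqrt(N) leftover factors
            result = (result * m) % p
        return result

    p = 10**9 + 7
    assert all(factorial_mod(n, p) == factorial(n) % p for n in range(100))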

6.10 On a class of hypergeometric diagonals

In 7, Alin Bostan and Sergey Yurkevich proved that the diagonal of any finite product of algebraic functions of the form

(1 − x₁ − ⋯ − xₙ)^R,   with R ∈ ℚ,

is a generalized hypergeometric function, and they provided an explicit description of its parameters. The particular case (1 − x − y)^R/(1 − x − y − z) corresponds to the main identity of Abdelaziz, Koutschan and Maillard in  23. The result in 7 is useful in both directions: on the one hand, it shows that Christol's conjecture holds true for a large class of hypergeometric functions; on the other hand, it allows for a very explicit and general viewpoint on the diagonals of algebraic functions of the type above. Finally, in contrast to  23, the new proof is completely elementary and does not require any algorithmic help.
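As a sanity check of the simplest bivariate instance (a SymPy computation derived independently of 7, with the exponent R = 1/3 chosen arbitrarily), the diagonal of (1 − x − y)^R agrees coefficient by coefficient with the Gauss hypergeometric series ₂F₁(−R/2, (1−R)/2; 1; 4t).

    import sympy as sp

    R = sp.Rational(1, 3)   # an arbitrary rational exponent for the test
    terms = 8

    # Diagonal coefficients: [x^k y^k] (1 - x - y)^R = binomial(R, 2k) * binomial(2k, k).
    diag = [sp.binomial(R, 2*k) * sp.binomial(2*k, k) for k in range(terms)]

    # Coefficients of the candidate 2F1(-R/2, (1 - R)/2; 1; 4t).
    hyp = [sp.rf(-R/2, k) * sp.rf((1 - R)/2, k) * 4**k / sp.factorial(k)**2
           for k in range(terms)]

    assert diag == hyp   # the two series agree coefficient by coefficient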

6.11 A hypergeometric proof that 𝖨𝗌𝗈 is bijective

A short and elementary proof of the main technical result of the recent article “On the uniqueness of Clifford torus with prescribed isoperimetric ratio” 136 by Thomas Yu and Jingmin Chen has been found by Alin Bostan and Sergey Yurkevich in 5. The key to the new proof is an explicit expression of the central function (Iso, proved to be bijective) as a quotient of Gaussian hypergeometric functions.

6.12 A short proof of a non-vanishing result by Conca, Krattenthaler and Watanabe

In their 2009 paper Regular sequences of symmetric polynomials, Aldo Conca, Christian Krattenthaler and Junzo Watanabe needed to prove, as an intermediate result, the fact that all terms of a family of binomial sums indexed by an integer h are non-zero, except for h=3. The proof in their paper (Appendix, pp. 190–199) performs a long and quite intricate 3-adic analysis. In 3, Alin Bostan proposes a shorter and elementary proof.

6.13 Persistence probabilities and Mallows-Riordan polynomials

Mallows-Riordan polynomials, sometimes also called inversion polynomials, form a family of polynomials with integer coefficients appearing in many counting problems in enumerative combinatorics. They are also connected with the cumulant generating function of the classical log-normal distribution in probability theory. In 1, Alin Bostan, together with his probabilist co-authors Gerold Alsmeyer (U. Münster), Kilian Raschel (CNRS, U. Angers) and Thomas Simon (U. Lille), provides a probabilistic interpretation of the Mallows-Riordan polynomials that is not only quite different from the classical connection with the log-normal distribution, but in fact also rather unexpected. More precisely, they establish exact formulae in terms of Mallows-Riordan polynomials for the persistence probabilities of a class of order-one autoregressive processes with symmetric uniform innovations. These exact formulae then lead to precise asymptotics of the corresponding persistence probabilities. The connection of the Mallows-Riordan polynomials with the volumes of certain polytopes is also discussed. Two further results provide general factorizations of AR(1) models with continuous symmetric innovations, one for negative and one for positive drift. The second factorization extends a classical universal formula of Sparre Andersen for symmetric random walks.

6.14 On some combinatorial sequences associated to invariant theory

In 4, Alin Bostan, together with Jordan Tirrell (Washington College, USA), Bruce W. Westbury (U. Texas at Dallas, USA) and Yi Zhang (Xi'an Jiaotong-Liverpool University, Suzhou, China), studies the enumerative and analytic properties of some sequences constructed using tensor invariant theory. The first family, containing the so-called octant sequences, is constructed from the exceptional Lie group G2. The second family, containing the so-called quadrant sequences, is constructed from the special linear group SL(3). All sequences are defined as the dimension of the subspace of invariant tensors in the tensor powers of the corresponding representation. The authors first give combinatorial interpretations for the first octant sequence, T3, based on interpretations of the sequences in the first family as lattice walks in the plane. They then show that the second octant sequence E3 is the binomial transform of T3; this result provides an unexpected connection between the invariant theory of G2 and the combinatorics of set partitions. A third result is a proof (actually three independent proofs) of a recurrence satisfied by T3 that was conjectured by Mihailovs in the early 2000s. Similar results are obtained for the quadrant sequences. These sequences also have interpretations as enumerating two-dimensional lattice walks. They are all P-recursive, and recurrence relations are proved for them. In all cases the associated differential operators are of third order and have the remarkable property that they can be solved to give closed formulae for the ordinary generating functions in terms of classical Gaussian hypergeometric functions. Moreover, it is shown that the octant sequences and the quadrant sequences are related by the branching rules for the inclusion of SL(3) in G2.

6.15 Continued fractions, orthogonal polynomials and Dirichlet series

Using an experimental mathematics approach, Alin Bostan together with Frédéric Chapoton (CNRS, IRMA Strasbourg) obtained in 15 new relations between the Dirichlet series for certain periodic coefficients and the moments of certain families of orthogonal polynomials. In addition to the classical hypergeometric orthogonal polynomials, of Racah type and continuous dual Hahn, a new similar family of orthogonal polynomials was discovered.

6.16 Minimization of differential equations and algebraic values of E-functions

Given a power series as the solution of a linear differential equation with appropriate initial conditions, minimization consists in finding a non-trivial linear differential equation of minimal order having this power series as a solution. This problem exists in both homogeneous and inhomogeneous variants; it is distinct from, but related to, the classical problem of factorization of differential operators. Recently, minimization has found applications in transcendental number theory, more specifically in the computation of non-zero algebraic points where Siegel's E-functions take algebraic values. In 16, Alin Bostan, together with Bruno Salvy (Inria, ENS Lyon) and Tanguy Rivoal (CNRS, IF Grenoble), presents algorithms for these questions and discusses implementation and experiments.

6.17 Gröbner bases and critical values: the asymptotic combinatorics of determinantal systems

In 2 Alin Bostan, together with co-authors Jérémy Berthomieu, Andrew Ferguson and Mohab Safey El Din (all from Sorbonne Université), studied determinantal polynomial systems. These are polynomial systems involving maximal minors of some given matrix. An important situation where these arise is the computation of the critical values of a polynomial map restricted to an algebraic set. This leads directly to a strategy for, among other problems, polynomial optimization.

Computing Gröbner bases is a classical method for solving polynomial systems in general. For practical computations, this consists of two main stages. First, a Gröbner basis is computed with respect to a DRL (degree reverse lexicographic) ordering. Then, a change of ordering algorithm, such as Sparse-FGLM, designed by Faugère and Mou, is used to find a Gröbner basis of the same system but with respect to a lexicographic ordering. The complexity of this latter step, in terms of the number of arithmetic operations in the ground field, is O(mD²), where D is the degree of the ideal generated by the input and m is the number of non-trivial columns of a certain D×D matrix.

While asymptotic estimates are known for m in the case of generic polynomial systems, thus far, the complexity of Sparse-FGLM was unknown for the class of determinantal systems.

By assuming Fröberg's conjecture, a classical conjecture in commutative algebra, and thus ensuring that the Hilbert series of generic determinantal ideals have the necessary structure, the authors expand the work of Moreno-Socías by detailing the structure of the DRL staircase in the determinantal setting. Then, they study the asymptotics of the quantity m by relating it to the coefficients of these Hilbert series. Consequently, they arrive at a new bound on the complexity of the Sparse-FGLM algorithm for generic determinantal systems and, in particular, for generic critical point systems.

The ideal is considered inside the polynomial ring 𝕂[x₁, …, xₙ], where 𝕂 is some infinite field, and is generated by p generic polynomials of degree d and by the maximal minors of a p×(n−1) polynomial matrix with generic entries of degree d−1. Then, in this setting, for the case d = 2 and for n ≥ p, the paper 2 establishes an exact formula for m in terms of n and p. Moreover, for d ≥ 3, it gives a tight asymptotic formula, as n → ∞, for m in terms of n, p and d.

6.18 A signature-based algorithm for computing the nondegenerate locus of a polynomial system

Polynomial system solving arises in many application areas to model non-linear geometric properties. In such settings, polynomial systems may come with degenerate components which the end-user wants to exclude from the solution set. The nondegenerate locus of a polynomial system is the set of points where the codimension of the solution set matches the number of equations. Computing the nondegenerate locus is classically done through ideal-theoretic operations in commutative algebra, such as saturation of ideals or equidimensional decompositions, to extract the component of maximal codimension. By exploiting the algebraic features of signature-based Gröbner basis algorithms, the authors design an algorithm which computes a Gröbner basis of the equations describing the closure of the nondegenerate locus of a polynomial system, without first computing a Gröbner basis for the whole polynomial system.

This is joint work of Pierre Lairez with Christian Eder, Rafael Mohr and Mohab Safey El Din 18.

6.19 Algorithms for minimal Picard-Fuchs operators of Feynman integrals

In even space-time dimensions, multi-loop Feynman integrals are integrals of rational functions in projective space. By using an algorithm that extends the Griffiths–Dwork reduction to the case of projective hypersurfaces with singularities, the authors derive Fuchsian linear differential equations, the Picard–Fuchs equations, with respect to kinematic parameters for a large class of massive multi-loop Feynman integrals. With this approach the authors obtain the differential operator for Feynman integrals of high multiplicities and high loop orders. Using recent factorisation algorithms, the authors give the minimal-order differential operator in most of the cases studied in the paper. Amongst the results is that the order of the Picard–Fuchs operator for the generic massive two-point (n−1)-loop sunset integral in two dimensions is 2^n − (n+1 choose ⌊(n+1)/2⌋), supporting the conjecture that the sunset Feynman integrals are relative periods of Calabi–Yau varieties of dimension n−2. The authors have checked this explicitly up to six loops. As well, the authors obtain a particular Picard–Fuchs operator of order 11 for the massive five-point tardigrade non-planar two-loop integral in four dimensions, for generic mass and kinematic configurations, suggesting that it arises from a K3 surface with Picard number 11. The authors also determine Picard–Fuchs operators of two-loop graphs with various multiplicities in four dimensions, finding Fuchsian differential operators with either Liouvillian or elliptic solutions.

This is a work of Pierre Lairez, together with Pierre Vanhove 20.

6.20 Axioms for a theory of signature bases

Twenty years after the discovery of the F5 algorithm, Gröbner bases with signatures are still challenging to understand and to adapt to different settings. This contrasts with Buchberger's algorithm, which we can bend in many directions while keeping correctness and termination obvious. Pierre Lairez 19 proposes an axiomatic approach to Gröbner bases with signatures, with the purpose of uncoupling the theory from the algorithms and of giving general results applicable in many different settings (e.g., Gröbner bases for submodules, F4-style reduction, noncommutative rings, non-Noetherian settings, etc.).

6.21 Factoring differential operators on algebraic curves in positive characteristic

The factorisation of linear differential operators and systems in positive characteristic has been studied before by van der Put 123 and by Cluzeau 69, who both made use of the p-curvature to find a first factorisation of differential operators and systems, respectively. Unfortunately, the p-curvature alone is not enough to completely factor operators into a product of irreducible differential operators when its characteristic polynomial is not squarefree.

In particular, up to this point, no algorithm was known to factor central operators in polynomial time.

In 14, the author presents a refinement of this method to factor differential operators on algebraic curves in positive characteristic, relying on tools of algebraic geometry such as Riemann–Roch spaces and the degree-0 Picard group, which should solve this issue. Additionally, this work should allow for better control of the size of the output of the factorisation.

A full article presenting this work in more detail should be submitted in 2023.

7 Partnerships and cooperations

7.1 International initiatives

7.1.1 Participation in other International Programs

PRCI “EAGLES”

Participants: Alin Bostan (PI), Frédéric Chyzak.

  • Title:
    Efficient Algorithms for Guessing, inequaLitiEs and Summation
  • Partner Institution(s):
    • JKU (Linz), Austria
    • RICAM (Linz), Austria
    • Inria (Saclay), France
    • Sorbonne Univ. (Paris), France
  • Date/Duration: March 2023 – March 2027
  • Additional info/keywords:
    This is a PRCI ANR/FWF project between two computer algebra teams in France and two computer algebra teams in Austria. The Austrian PI is Manuel Kauers from JKU Linz. The goal is to work together on four axes: structured and multivariate guessing, inequalities and D-finiteness, creative telescoping, and applications in combinatorics, number theory and theoretical physics. The funding obtained totals 770,000 euros, a major part of which will be used to fund 4 PhD theses.
PhD project of Yurkevich
  • Title:
    Integer sequences, algebraic series and differential operators.
  • Partner Institution(s):
    • University of Vienna, Austria.
  • Date/Duration: September 2020 – August 2023
  • Additional info/keywords:
    The PhD thesis project of Sergey Yurkevich is a cotutelle with the University of Vienna (Austria). The supervisors are Alin Bostan on the French side and Herwig Hauser on the Austrian side. The investigation topic covers on the one hand integer sequences naturally arising in various scientific disciplines such as number theory, combinatorics and physics, and on the other hand solutions to special kinds of differential equations.

7.2 European initiatives

7.2.1 Horizon Europe

  • ERC Starting Grant “10000 DIGITS”. This project, led by Pierre Lairez, spans five years starting in April 2022. It funds three PhD theses and three 2-year post-doctoral positions. Its goal is to develop algorithms and software to compute with high precision integrals with a geometric origin, especially periods of algebraic varieties, and to tackle applications in diophantine approximation, quantum field theory, and optimization.

7.3 National initiatives

  • De rerum natura. This project, set up by the team, was accepted this year and will be funded until 2023. It gathers over 20 experts from four fields: computer algebra; the Galois theories of linear functional equations; number theory; combinatorics and probability. Our goal is to obtain classification algorithms for number theory and combinatorics, particularly so for deciding irrationality and transcendence. (Permanent members with pm listed: Bostan, Chyzak, Lairez.)
  • ∂ifference. This project, led by Olivier Bournez (LIX), started in November 2020. Its objective is to consider a novel approach between two worlds: discrete-oriented computation on the one side and differential equations on the other. It aims at providing new insights into classical complexity theory, computability and logic through this prism, and at introducing new perspectives in algorithmic methods for solving differential equations and in computer science applications. (Permanent members with pm listed: Bostan, Chyzak.)

7.4 Regional initiatives

  • Alin Bostan is co-leader of the Amadeus (Campus France) bilateral project “Integer Sequences arising in Number Theory, Combinatorics and Physics” between France and Austria. The Austrian co-leader is Herwig Hauser (U. Vienna, Austria).

8 Dissemination

Participants: Alin Bostan, Frédéric Chyzak, Guy Fayolle, Alexandre Goyer, Alaa Ibrahim, Pierre Lairez, Hadrien Notarantonio, Raphaël Pagès, Eric Pichon-Pharabod, Sergey Yurkevich.

8.1 Promoting scientific activities

8.1.1 Scientific events: organisation

General chair, scientific chair
  • Alin Bostan is part of the Scientific advisory board of the conference series Effective Methods in Algebraic Geometry (MEGA).
  • Since 2020, and for a period of 5 years, Alin Bostan has been a member of the steering committee of the Journées Nationales de Calcul Formel (JNCF), the annual meeting of the French computer algebra community.
  • Alin Bostan is part of the scientific committee of the GDR EFI (“Functional Equations and Interactions”) dependent on the mathematical institute (INSMI) of the CNRS. The goal of this GDR is to bring together various research communities in France working on functional equations in fields of computer science and mathematics.
Member of the organizing committees
  • Alin Bostan co-organizes, with Lucia Di Vizio, the Séminaire Différentiel between U. Versailles and Inria Saclay, with a bi-annual frequency.
  • Alin Bostan co-organizes, with Lucia Di Vizio and Kilian Raschel, the working group Transcendance et Combinatoire, at Institut Henri Poincaré (Paris), with a weekly frequency.
  • Alin Bostan, together with Mohab Safey El Din, Bruno Salvy and Gilles Villard, organizes a thematic program, Recent Trends in Computer Algebra (RTCA), to be held in 2023 in Paris and Lyon. The proposal has been accepted, the main funders being IHP (120,000 euros) and Labex Milyon (60,000 euros).

8.1.2 Scientific events: selection

Reviewer

8.1.3 Journal

Member of the editorial boards
Reviewer - reviewing activities

8.1.4 Invited talks

8.1.5 Scientific expertise

8.1.6 Research administration

  • Alin Bostan is one of the two members of the scientific search committee of the Inria Saclay Center.

8.2 Teaching - Supervision - Juries

  • Bachelor:
    • Alexandre Goyer, Mathématiques générales pour la biologie (LSMA100A), 60h, L1, Université de Versailles Saint-Quentin-en-Yvelines, France.
    • Eric Pichon-Pharabod, Integration, Taylor expansion, Ordinary differential equations (MAA105), 64h, 2nd semester Bachelor Polytechnique, France.
    • Eric Pichon-Pharabod, Topology and multivariable calculus (MAA105), 64h, 3rd semester Bachelor Polytechnique, France.
    • Hadrien Notarantonio, Structure de données (LU2IN006), 64h, 2nd semester, Sorbonne Université, France.
  • Master:
    • Alin Bostan, Algorithmes efficaces en calcul formel, 22.5h, M2, MPRI, France.
    • Alin Bostan, Modern Algorithms for Symbolic Summation and Integration, 18h, M2, ENS Lyon, France.
    • Frédéric Chyzak, Algorithmes efficaces en calcul formel, 9h, M2, MPRI, France. (Also responsible for the course until 2022.)
    • Pierre Lairez, Algorithmes efficaces en calcul formel, 12h, M2, MPRI, France.
    • Pierre Lairez, Competitive programming (INF473A), TD, 40h, M2, École polytechnique, France.
    • Pierre Lairez, Les bases de la programmation et de l'algorithmique (INF411), TD, 40h, M1, École polytechnique, France.

8.2.1 Supervision

  • Master internships:
    • Alin Bostan co-supervised together with Mohab Safey El Din (Sorbonne U.) and Bruno Salvy (Inria, ENS Lyon) the Master thesis of Alaa Ibrahim on the topic “Preuves automatiques d'inégalités entre fonctions spéciales”.
  • PhD theses:
    • Alin Bostan co-supervises together with Xavier Caruso (CNRS, IMB Bordeaux) the PhD thesis of Raphaël Pagès on “Algorithms for factoring linear differential operators in positive characteristic”.
    • Alin Bostan co-supervises together with Herwig Hauser (U. Vienna, Austria) the PhD thesis of Sergey Yurkevich on “Integer sequences arising in number theory, combinatorics and physics”.
    • Alin Bostan co-supervises together with Mohab Safey El Din (Sorbonne U.) and Bruno Salvy (Inria, ENS Lyon) the PhD thesis of Alaa Ibrahim on “Automated proofs of inequalities between special sequences and functions”.
    • Alin Bostan and Frédéric Chyzak co-supervise together with Mohab Safey El Din (Sorbonne U.) the PhD thesis of Hadrien Notarantonio on “Geometry-driven algorithms for the efficient solving of combinatorial functional equations”.
    • Frédéric Chyzak co-supervises together with Marc Mezzarobba (CNRS, LIX) the PhD thesis of Alexandre Goyer on “Symbolic-numeric algorithms in differential algebra”.
    • Frédéric Chyzak and Pierre Lairez co-supervise the PhD thesis of Hadrien Brochet on “Algorithms for D-modules”.
    • Pierre Lairez co-supervises together with Pierre Vanhove (CEA, IPhT) the PhD thesis of Eric Pichon-Pharabod on “Periods in algebraic geometry: computation and application to Feynman's integrals”.
    • Pierre Lairez co-supervises together with Christian Eder (TU Kaiserslautern) and Mohab Safey El Din (Sorbonne U.) the PhD thesis of Rafael Mohr on “Equidimensional decomposition algorithms with signature bases”.

8.2.2 Juries

  • Alin Bostan was examiner in the PhD defense jury of Andrew Ferguson, Exact Algorithms for Polynomial Optimisation, Sorbonne Univ., October 24, 2022.
  • Pierre Lairez was examiner in the PhD defense jury of Cédric Mazet (Institut mathématique de Marseille).
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Subhayan Saha, Lower bounds and reconstruction algorithms for arithmetic circuits, ENS Lyon.
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Aadil Oufkir, Quantum information, Moderate deviations and beyond, ENS Lyon.
  • Alin Bostan has served as a reviewer in the mid-PhD examination of Hippolyte Signargout, Algorithmes efficaces pour les matrices et polynômes structurés, ENS Lyon.

8.3 Scientific animation of the project-team

  • The team runs a monthly seminar, jointly with colleagues at Sorbonne Université, alternating between Palaiseau and Paris, and open to remote attendance in hybrid mode. This year, the organizers for our team were Pierre Lairez and Hadrien Notarantonio. We had 13 talks, including some by international visiting colleagues.
  • The team also runs an internal weekly meeting, where members give early talks on new results. This year we had talks by Guy Fayolle, Alexandre Goyer, Alaa Ibrahim, Pierre Lairez, Raphaël Pagès, Eric Pichon-Pharabod, and Sergey Yurkevich.
  • In December, the team hosted a one-day visit by a group of 25 master's students from Université de Versailles Saint-Quentin-en-Yvelines (M1 and M2 of the Applied Algebra master). Introductory talks about our research were given by Alin Bostan, Hadrien Brochet, Frédéric Chyzak, Alexandre Goyer, Pierre Lairez, Hadrien Notarantonio, and Eric Pichon-Pharabod.

9 Scientific production

9.1 Publications of the year

International journals

International peer-reviewed conferences

Conferences without proceedings

  • 14 Raphaël Pagès. Factoring differential operators over algebraic curves in positive characteristic. In: ISSAC 2022 – International Symposium on Symbolic and Algebraic Computation, Lille, France, May 2022.

Reports & preprints

Other scientific publications

  • 22 Jakob Steininger and Sergey Yurkevich. Extended abstract for: Solving Rupert's problem algorithmically. In: ISSAC 2022 – The International Symposium on Symbolic and Algebraic Computation, 56(2), Lille, France, June 2022, 32--35.

9.2 Cited publications

  • 23 Y. Abdelaziz, C. Koutschan and J.-M. Maillard. On Christol's conjecture. J. Phys. A 53(20), 2020, 205201, 16 pages. URL: https://doi.org/10.1088/1751-8121/ab82dc
  • 24 articleS. A.S. A. Abramov, M. A.M. A. Barkatou and M.M. van Hoeij. Apparent singularities of linear difference equations with polynomial coefficients.1722006, 117--133
  • 25 inproceedingsS. A.Sergei A. Abramov and M.Mark van Hoeij. Desingularization of linear difference operators with polynomial coefficients.ISSAC~'99Conference proceedingsACM1999
  • 26 articleB.Boris Adamczewski and C.Colin Faverjon. Méthode de Mahler, transcendance et relations linéaires: aspects effectifs.3022018, 557--573
  • 27 articleB.Boris Adamczewski and T.Tanguy Rivoal. Exceptional values of E-functions at algebraic points.5042018, 697--908
  • 28 articleM. F.Michael F. Adamer, A. C.András C. Lőrincz, A.-L.Anna-Laura Sattelberger and B.Bernd Sturmfels. Algebraic analysis of rotation data.112December 2020, 189--211
  • 29 reportS.S. Badger, J.J. Bendavid, V.V. Ciulli, A.A. Denner, R.R. Frederix, M.M. Grazzini, J.J. Huston, M.M. Schönherr, K.K. Tackmann, J.J. Thaler, C.C. Williams, J. R.J. R. Andersen, K.K. Becker, M.M. Bell, J.J. Bellm, E.E. Bothmann, R.R. Boughezal, J.J. Butterworth, S.S. Carrazza, M.M. Chiesa, L.L. Cieri, M.M. Duehrssen-Debling, G.G. Falmagne, S.S. Forte, P.P. Francavilla, M.M. Freytsis, J.J. Gao, P.P. Gras, N.N. Greiner, D.D. Grellscheid, G.G. Heinrich, G.G. Hesketh, S.S. Höche, L.L. Hofer, T.-J.T.-J. Hou, A.A. Huss, J.J. Isaacson, A.A. Jueid, S.S. Kallweit, D.D. Kar, Z.Z. Kassabov, V.V. Konstantinides, F.F. Krauss, S.S. Kuttimalai, A.A. Lazapoulos, P.P. Lenzi, Y.Y. Li, J. M.J. M. Lindert, X.X. Liu, G.G. Luisoni, L.L. Lönnblad, P.P. Maierhöfer, D.D. Maître, A. C.A. C. Marini, G.G. Montagna, M.M. Moretti, P. M.P. M. Nadolsky, G.G. Nail, D.D. Napoletano, O.O. Nicrosini, C.C. Oleari, D.D. Pagani, C.C. Pandini, L.L. Perrozzi, F.F. Petriello, F.F. Piccinini, S.S. Plätzer, I.I. Pogrebnyak, S.S. Pozzorini, S.S. Prestel, C.C. Reuschle, J.J. Rojo, L.L. Russo, P.P. Schichtel, S.S. Schumann, A.A. Siódmok, P.P. Skands, D.D. Soper, G.G. Soyez, P.P. Sun, F. J.F. J. Tackmann, E.E. Takasugi, S.S. Uccirati, U.U. Utku, L.L. Viliani, E.E. Vryonidou, B. T.B. T. Wang, B.B. Waugh, M. A.M. A. Weber, J.J. Winter, K. P.K. P. Xie, C.-P.C.-P. Yuan, F.F. Yuan, K.K. Zapp and M.M. Zaro. Les Houches 2015: Physics at TeV colliders standard model working group report.May 2016
  • 30 David Bailey and J. M. Borwein. High-precision numerical integration: progress and challenges. 46(7), 2011, 741--754
  • 31 David Bailey, Peter Borwein and Simon Plouffe. On the rapid computation of various polylogarithmic constants. 66(218), 1997, 903--913
  • 32 inproceedingsM.Moulay Barkatou, T.Thomas Cluzeau, L.Lucia Di Vizio and J.-A.Jacques-Arthur Weil. Computing the Lie algebra of the differential Galois group of a linear differential system.ISSAC~'2016Conference proceedingsACM2016, 63--70
  • 33 Bernhard Beckermann and George Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. 15(3), 1994, 804--823
  • 34 articleJ. P.Jason P. Bell and M.Michael Coons. Transcendence tests for Mahler functions.14532017, 1061--1070
  • 35 articleO.Olivier Bernardi and M.Mireille Bousquet-Mélou. Counting colored planar maps: algebraicity results.10152011, 315--377URL: https://doi.org/10.1016/j.jctb.2011.02.003
  • 36 articleO.Olivier Bernardi, M.Mireille Bousquet-Mélou and K.Kilian Raschel. Counting quadrant walks via Tutte's invariant method.12021
  • 37 inproceedingsJ.Jérémy Berthomieu, C.Christian Eder and M.Mohab Safey El Din. Msolve: a library for solving polynomial systems.ISSAC~'21Conference proceedingsACMJuly 2021, 51--58
  • 38 inproceedingsJ.Jérémy Berthomieu and J.-C.Jean-Charles Faugère. A polynomial-division-based algorithm for computing linear recurrence relations.ISSAC~'18Conference proceedings2018, 79--86
  • 39 Daniel Bertrand and Frits Beukers. Équations différentielles linéaires et majorations de multiplicités. 18(1), 1985, 181--192
  • 40 articleF.F. Beukers. A note on the irrationality of (2) and (3).1131979, 268--272
  • 41 articleF.F. Beukers. A refined version of the Siegel-Shidlovskii theorem.16312006, 369--379URL: https://doi.org/10.4007/annals.2006.163.369
  • 42 incollectionF.F. Beukers. Padé-approximations in number theory.Padé Approximation and its Applications, Amsterdam 1980888Lecture Notes in Math.Springer1981, 90--99
  • 43 articleS.Spencer Bloch, M.Matt Kerr and P.Pierre Vanhove. A Feynman integral via higher normal functions.151122015, 2329--2375
  • 44 articleS.Spencer Bloch and P.Pierre Vanhove. The elliptic dilogarithm for the sunset graph.1482015, 328--364
  • 45 bookJ.Jonathan Borwein and D.David Bailey. Mathematics by experiment.Plausible reasoning in the 21st centuryA K Peters2008, xii+377
  • 46 A. Bostan, F. Chyzak, M. Giusti, R. Lebreton, G. Lecerf, B. Salvy and É. Schost. Algorithmes efficaces en calcul formel. 686 pages. CreateSpace, 2017
  • 47 inproceedingsA.Alin Bostan, F.Frédéric Chyzak, P.Pierre Lairez and B.Bruno Salvy. Generalized Hermite reduction, creative telescoping and definite integration of D-finite functions.ISSAC~'18Conference proceedingsACM2018, 95--102
  • 48 inproceedingsA.Alin Bostan, F.Frédéric Chyzak, G.Grégoire Lecerf, B.Bruno Salvy and É.Éric Schost. Differential equations for algebraic functions.ISSAC~'07Conference proceedings2007, 25--32
  • 49 articleA.A. Bostan, C.-P.C.-P. Jeannerod, C.C. Mouilleron and É.É. Schost. On matrices with displacement structure: generalized operators and faster algorithms.3832017, 733--775URL: https://doi.org/10.1137/16M1062855
  • 50 articleA.A. Bostan and M.M. Kauers. The complete generating function for Gessel walks is algebraic.1389With an appendix by Mark van Hoeij2010, 3063--3078
  • 51 inproceedingsA.Alin Bostan, P.Pierre Lairez and B.Bruno Salvy. Creative telescoping for rational functions using the Griffiths–Dwork method.ISSAC~'13Conference proceedingsACM2013, 93--100
  • 52 articleA.Alin Bostan, P.Pierre Lairez and B.Bruno Salvy. Multiple binomial sums.80part 22017, 351--386URL: https://doi.org/10.1016/j.jsc.2016.04.002
  • 53 articleA.Alin Bostan, T.Tanguy Rivoal and B.Bruno Salvy. Explicit degree bounds for right factors of linear differential operators.5312020, 53--62
  • 54 Mireille Bousquet-Mélou and Arnaud Jehanne. Polynomial equations with one catalytic variable, algebraic series and map enumeration. 96(5), 2006, 623--672
  • 55 articleM.Mireille Bousquet-Mélou. Square lattice walks avoiding a quadrant.1442016, 37--79
  • 56 inproceedingsB.Brice Boyer, C.Christian Eder, J.-C.Jean-Charles Faugère, S.Sylvian Lachartre and F.Fayssal Martani. GBLA: Gröbner basis linear algebra package.ISSAC~'16Conference proceedingsACM2016, 135--142
  • 57 articleR.R. Brak and A. J.A. J. Guttmann. Algebraic approximants: a new method of series analysis.23241990, L1331--L1337
  • 58 Nils Bruin, Jeroen Sijsling and Alexandre Zotine. Numerical computation of endomorphism rings of Jacobians. 2(1), 13th Algorithmic Number Theory Symposium, 2019, 155--171.
  • 59 Manfred Buchacher, Manuel Kauers and Gleb Pogudin. Separating variables in bivariate polynomial ideals. ISSAC '20, ACM, 2020, 54--61.
  • 60 J. C. Butcher. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016, xxiii+513.
  • 61 Shaoshi Chen, Maximilian Jaroschek, Manuel Kauers and Michael F. Singer. Desingularization explains order-degree curves for Ore operators. ISSAC '13, ACM, 2013, 157--164.
  • 62 Shaoshi Chen, Manuel Kauers, Ziming Li and Yi Zhang. Apparent singularities of D-finite systems. 95, 2019, 217--237.
  • 63 Shaoshi Chen and Manuel Kauers. Order-degree curves for hypergeometric creative telescoping. ISSAC '12, ACM, 2012, 122--129.
  • 64 Shaoshi Chen and Manuel Kauers. Trading order for degree in creative telescoping. 47(8), 2012, 968--995.
  • 65 David V. Chudnovsky and Gregory V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory. Computers in Mathematics (Stanford, CA, 1986), 125, Lecture Notes in Pure and Appl. Math., Dekker, 1990, 109--232.
  • 66 David V. Chudnovsky and Gregory V. Chudnovsky. Computer assisted number theory with applications. Number theory (New York, 1984--1985), 1240, Lecture Notes in Mathematics, Springer, 1987, 1--68.
  • 67 Frédéric Chyzak, Thomas Dreyfus, Philippe Dumas and Marc Mezzarobba. Computing solutions of linear Mahler equations. 87(314), 2018, 2977--3021.
  • 68 Frédéric Chyzak and Philippe Dumas. A Gröbner-basis theory for divide-and-conquer recurrences. ISSAC '20, ACM, July 2020.
  • 69 Thomas Cluzeau. Factorization of differential systems in characteristic p. Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation, ACM, New York, 2003, 58--65. URL: https://doi.org/10.1145/860854.860875
  • 70 Edgar Costa, Nicolas Mascot, Jeroen Sijsling and John Voight. Rigorous computation of the endomorphism ring of a Jacobian. 88(317), 2019, 1303--1339.
  • 71 John Cremona. The L-functions and modular forms database project. 16(6), 2016, 1541--1553.
  • 72 Sławomir Cynk and Duco van Straten. Periods of rigid double octic Calabi-Yau threefolds. 123(1), 2019, 243--258.
  • 73 Bernard Deconinck and Mark van Hoeij. Computing Riemann matrices of algebraic curves. 152-153, special issue to honor Vladimir Zakharov, May 2001, 28--46.
  • 74 Karl Dilcher and Kenneth B. Stolarsky. A polynomial analogue to the Stern sequence. 3(1), 2007, 85--103.
  • 75 Alexandru Dimca. On the de Rham cohomology of a hypersurface complement. 113(4), 1991, 763--771.
  • 76 Thomas Dreyfus, Charlotte Hardouin and Julien Roques. Hypertranscendance of solutions of Mahler equations. 20, 2018, 2209--2238.
  • 77 Thomas Dreyfus, Charlotte Hardouin, Julien Roques and Michael F. Singer. On the nature of the generating series of walks in the quarter plane. 213(1), 2018, 139--203.
  • 78 Andreas-Stephan Elsenhans and Jörg Jahnel. Real and complex multiplication on K3 surfaces via period integration. February 2018.
  • 79 Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases (F4). 139(1-3), Effective methods in algebraic geometry (Saint-Malo, 1998), 1999, 61--88.
  • 80 Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). ISSAC '02, ACM, 2002, 75--83.
  • 81 J.-Ch. Faugère and Ch. Mou. Fast algorithm for change of ordering of zero-dimensional Gröbner bases with sparse multiplication matrices. ISSAC '11, 2011, 115--122.
  • 82 J.-Ch. Faugère and Ch. Mou. Sparse FGLM algorithms. 80(3), 2017, 538--569.
  • 83 Guy Fayolle, Roudolf Iasnogorodski and Vadim Malyshev. Random walks in the quarter plane. 40, Probability Theory and Stochastic Modelling, Springer, 2017, xvii+248. URL: https://doi.org/10.1007/978-3-319-50930-3
  • 84 Stéphane Fischler and Tanguy Rivoal. Approximants de Padé et séries hypergéométriques équilibrées. 82(10), 2003, 1369--1394.
  • 85 S. Fischler and T. Rivoal. Effective algebraic independence of values of E-functions. Preprint, arXiv, 2019.
  • 86 Joachim von zur Gathen and Jürgen Gerhard. Modern computer algebra. Cambridge University Press, Cambridge, 2013, xiv+795. URL: https://doi.org/10.1017/CBO9781139856065
  • 87 Paul Görlach, Christian Lehn and Anna-Laura Sattelberger. Algebraic analysis of the hypergeometric function 1F1 of a matrix argument. 62(2), 2021, 397--427.
  • 88 D. Yu. Grigor'ev. Complexity of factoring and calculating the GCD of linear ordinary differential operators. 10(1), 1990, 7--37.
  • 89 A. J. Guttmann and G. S. Joyce. On a new method of series analysis in lattice statistics. 5(9), 1972, 81--84.
  • 90 A. J. Guttmann. On the recurrence relation method of series analysis. 8(7), 1975, 1081--1088.
  • 91 Charlotte Hardouin and Michael F. Singer. Differential Galois theory of linear difference equations. 342(2), 2008, 333--377. URL: http://dx.doi.org/10.1007/s00208-008-0238-z
  • 92 Johannes M. Henn. Lectures on differential equations for Feynman integrals. 48(15), 2015, 153001.
  • 93 Didier Henrion, Jean B. Lasserre and Carlo Savorgnan. Approximate volume and integration for basic semialgebraic sets. 51(4), 2009, 722--743.
  • 94 Charles Hermite. Sur la fonction exponentielle. 77, 1873, 18--24.
  • 95 Joris van der Hoeven. Fast evaluation of holonomic functions. 210(1), 1999, 199--215.
  • 96 Joris van der Hoeven. Constructing reductions for creative telescoping. 2020.
  • 97 D. L. Hunter and G. A. Baker Jr. Methods of series analysis III. Integral approximant methods. 19(7), 1979, 3808--3821.
  • 98 A. M. Jasour, N. S. Aybat and C. M. Lagoa. Semidefinite programming for chance constrained optimization over semialgebraic sets. 25(3), 2015, 1411--1440.
  • 99 Ashkan M. Jasour, Andreas Hofmann and Brian C. Williams. Moment-sum-of-squares approach for fast risk estimation in uncertain environments. 2018 IEEE Conference on Decision and Control (CDC), 2018, 2445--2451.
  • 100 M. A. H. Khan. High-order differential approximants. 149(2), 2002, 457--468.
  • 101 Donald E. Knuth. The art of computer programming. Vol. 2: Seminumerical algorithms. Third edition [of MR0286318], Addison-Wesley, Reading, MA, 1998.
  • 102 Christoph Koutschan, Manuel Kauers and Doron Zeilberger. Proof of George Andrews's and David Robbins's q-TSPP conjecture. 108(6), 2011, 2196--2199.
  • 103 Christoph Koutschan and Thotsaporn Thanatipanonda. Advanced computer algebra for determinants. 17(3), 2013, 509--523.
  • 104 Pierre Lairez. Computing periods of rational integrals. 85(300), 2016, 1719--1752.
  • 105 Pierre Lairez, Marc Mezzarobba and Mohab Safey El Din. Computing the volume of compact semi-algebraic sets. ISSAC '19, ACM, 2019, 259--266.
  • 106 Pierre Lairez and Emre Can Sertöz. A numerical transcendental method in algebraic geometry: computation of Picard groups and related invariants. 3(4), 2019, 559--584.
  • 107 Jean Bernard Lasserre. Connecting optimization with spectral analysis of tri-diagonal matrices. Unpublished, March 2020.
  • 108 Jean Bernard Lasserre, Victor Magron, Swann Marx and Olivier Zahm. Minimizing rational functions: a hierarchy of approximations via pushforward measures. Unpublished, December 2020.
  • 109 Jean Bernard Lasserre. Moments, positive polynomials and their applications. 1, Series on Optimization and its Applications, Imperial College Press, October 2009, xxii+361.
  • 110 Jean Bernard Lasserre. Volume of sublevel sets of homogeneous polynomials. 3(2), 2019, 372--389.
  • 111 Laura Felicia Matusevich. Weyl closure of hypergeometric systems. 60(2), June 2009, 147--158.
  • 112 Marc Mezzarobba. NumGfun: a package for numerical and analytic computation with D-finite functions. ISSAC '10, ACM, 2010, 139--146.
  • 113 Marc Mezzarobba. Rigorous multiple-precision evaluation of D-finite functions in SageMath. July 2016.
  • 114 Pascal Molin and Christian Neurohr. Computing period matrices and the Abel-Jacobi map of superelliptic curves. 88(316), 2019, 847--888.
  • 115 S. Naldi and V. Neiger. A divide-and-conquer algorithm for computing Gröbner bases of syzygies in finite dimension. ISSAC '20, 2020, 380--387.
  • 116 Christian Neurohr. Efficient integration on Riemann surfaces and applications. Ph.D. Thesis, Universität Oldenburg, 2018.
  • 117 Toshinori Oaku. Algorithms for b-functions, restrictions, and algebraic local cohomology groups of D-modules. 19(1), 1997, 61--105.
  • 118 Toshinori Oaku and Nobuki Takayama. Algorithms for D-modules: restriction, tensor product, localization, and local cohomology groups. 156(2-3), 2001, 267--308.
  • 119 Marko Petkovšek. Hypergeometric solutions of linear recurrences with polynomial coefficients. 14, 1992, 243--264.
  • 120 G. Pólya. Mathematics and plausible reasoning. Induction and analogy in mathematics. Princeton U. Press, 1954, xvi+280.
  • 121 G. Pólya. How to solve it. A new aspect of mathematical method. Princeton U. Press, 1945, xxviii+253.
  • 122 Dorin Popescu. General Néron desingularization and approximation. 104, 1986, 85--115. URL: https://doi.org/10.1017/S0027763000022698
  • 123 Marius van der Put. Differential equations in characteristic p. 97(1-2), special issue in honour of Frans Oort, 1995, 227--251. URL: http://www.numdam.org/item?id=CM_1995__97_1-2_227_0
  • 124 Julien Roques. On the algebraic relations between Mahler functions. 370(1), 2018, 321--355.
  • 125 Emre Can Sertöz. Computing periods of hypersurfaces. 88(320), 2019, 2987--3022.
  • 126 A. B. Šidlovskiĭ. On transcendentality of the values of a class of entire functions satisfying linear differential equations. 105, 1955, 35--37.
  • 127 Carl L. Siegel. Über einige Anwendungen diophantischer Approximationen [reprint of Abhandlungen der Preußischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse 1929, Nr. 1]. In: On some applications of Diophantine approximations, 2, Quad./Monogr., Ed. Norm., Pisa, 2014, 81--138.
  • 128 Michael F. Singer. Algebraic solutions of nth order linear differential equations. Proceedings of the Queen's Number Theory Conference, 1979 (Kingston, Ont., 1979), 54, Queen's Papers in Pure and Appl. Math., Queen's Univ., Kingston, Ont., 1980, 379--420.
  • 129 Lucas Slot and Monique Laurent. Near-optimal analysis of Lasserre's univariate measure-based bounds for multivariate polynomial optimization. 2020.
  • 130 A. V. Smirnov and M. N. Tentyukov. Feynman integral evaluation by a sector decomposition approach (FIESTA). 180(5), 2009, 735--746.
  • 131 V. N. Sorokin. A transcendence measure for π². 187(12), 1996, 1819--1852.
  • 132 R. P. Stanley. Differentiably finite power series. 1(2), 1980, 175--188.
  • 133 Harrison Tsai. Algorithms for associated primes, Weyl closure, and local cohomology of D-modules. Local cohomology and its applications (Guanajuato, 1999), 226, Lecture Notes in Pure and Appl. Math., Dekker, 2002, 169--194.
  • 134 Harrison Tsai. Weyl closure of a linear differential operator. 29(4-5), Special issue on Symbolic Computation in Algebra, Analysis, and Geometry (Berkeley, CA, 1998), 2000, 747--775. URL: http://dx.doi.org/10.1006/jsco.1999.0400
  • 135 W. T. Tutte. Chromatic sums for rooted planar triangulations: the cases λ = 1 and λ = 2. 25, 1973, 426--447. URL: https://doi.org/10.4153/CJM-1973-043-3
  • 136 Thomas Yu and Jingmin Chen. On the Uniqueness of Clifford Torus with Prescribed Isoperimetric Ratio. Technical Report arXiv:2003.13116 [math.DG], 2020.
  • 137 Don Zagier. The arithmetic and topology of differential equations. European Congress of Mathematics, European Mathematical Society, Zürich, 2018, 717--776.
  • 138 Doron Zeilberger and Wadim Zudilin. The irrationality measure of π is at most 7.103205334137…. 9(4), 2020, 407--419.
  1. It relies, among other costly operations, on factoring differential operators, which is known to be a highly expensive procedure, of complexity $(\ell N)^{O(r^4)}$, where $\ell$ is the bitsize of the input operator, $r$ its order, and $N \le \exp(\ell\cdot 2^r)\,2^r$  88. It also relies on deciding whether a non-linear (Riccati-type) differential equation of order $r-1$ has an algebraic solution of degree at most $M := (49r)^{r^2}$; this step itself relies on deciding the non-emptiness of a constructible set defined by polynomials in $M$ variables (and of potentially huge degrees). It also relies on the famously difficult Abel problem: given an algebraic function $u$, decide whether $y'/y = u$ has an algebraic solution.
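To give a concrete sense of why the degree bound $M := (49r)^{r^2}$ in the footnote above makes this step impractical, here is a rough, illustrative evaluation for small orders (values rounded; the numerical estimates are ours, not taken from the cited source):
\[
\begin{aligned}
r = 2 &: \quad M = (49\cdot 2)^{4} = 98^{4} \approx 9.2\cdot 10^{7},\\
r = 3 &: \quad M = (49\cdot 3)^{9} = 147^{9} \approx 3.2\cdot 10^{19},\\
r = 4 &: \quad M = (49\cdot 4)^{16} = 196^{16} \approx 4.7\cdot 10^{36}.
\end{aligned}
\]
Already for order 3, deciding non-emptiness of a constructible set defined by polynomials in roughly $10^{19}$ variables is far beyond any foreseeable computation.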