“Experimental mathematics” is the study of mathematical phenomena by computational means. “Computer algebra” is the art of doing effective and efficient exact mathematics on a computer. The MATHEXP team develops both themes in parallel, in order to discover and prove new mathematical results, often out of reach of classical, purely human means. It is our strong belief that modern mathematics will benefit more and more from computer tools. We aim to provide mathematical users with appropriate algorithmic theories and implementations.

Besides the classification by mathematical and methodological axes to be presented in §3, MATHEXP's research falls into four interconnected categories, corresponding to four different ways to produce science. The raison d'être of the team is solving core questions that arise in the practice of experimental mathematics. Through the experimental mathematics approach, we aim at applications in diverse areas of mathematics and physics. All rests on computer algebra, in its symbolic and seminumerical aspects. Lastly, software development is a significant part of our activities, with the aim of enabling cutting-edge applications and disseminating our tools. Each of these four levels is reflected in the thematic axes of the research program.

In science, observation and experiment play an important role in formulating hypotheses. In mathematics, this role is overshadowed by the primacy of deductive proofs, which turn hypotheses into theorems, but it is no less important. The art of looking for patterns, of gathering computational evidence in support of mathematical assertions, lies at the heart of experimental mathematics, as practiced by Euler, Gauss and Ramanujan. These prominent mathematicians spent much of their time doing computations in order to refine their intuitions and to explore new territories before inventing new theories. Computations led them to plausible conjectures, by an approach similar to that used in the natural sciences. Nowadays, experimental mathematics has become a full-fledged field, with prominent promoters like Bailey and Borwein. In their words 55, experimental mathematics is “the methodology of doing mathematics that includes the use of computation for […]”.

At a fundamental level, we manipulate several kinds of algebraic objects that are characteristic of computer algebra: arbitrary-precision numbers (big integers and big floating-point numbers, typically with tens of thousands of digits), polynomials, matrices, and differential and recurrence operators. The first three form the common ground of computer algebra 99. We benefit from years of research on them and from broadly used efficient software: general-purpose computer-algebra systems like Maple, Magma, Mathematica, Sage, Singular; and also special-purpose libraries like Arb, FGb, Flint, msolve, NTL. Current developments, whether software implementations, algorithm design or new complexity analyses, directly impact us. The fourth kind of algebraic object, differential and recurrence operators, is more specific to our research and we concentrate our efforts on it. There, we try to understand the basic operations in terms of computational complexity. Complexity is also our guide when we recombine basic operations into elaborate algorithms. In the end, we want fast implementations of efficient algorithms.
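As a concrete illustration of the fourth kind of object, a recurrence operator can be unrolled to produce exact terms of the sequence it annihilates. The following sketch is our own plain-Python illustration, not tied to any of the libraries above; it computes Apéry numbers from their classical order-2 recurrence with polynomial coefficients, using exact rational arithmetic throughout.

```python
from fractions import Fraction

def unroll(rec, initial, count):
    """Unroll the recurrence c_0(n)*a(n) + ... + c_r(n)*a(n+r) = 0.

    `rec(n)` returns the exact coefficient list [c_0(n), ..., c_r(n)];
    `initial` gives a(0), ..., a(r-1).
    """
    a = [Fraction(x) for x in initial]
    r = len(initial)
    while len(a) < count:
        n = len(a) - r
        c = rec(n)
        s = sum(Fraction(c[i]) * a[n + i] for i in range(r))
        a.append(-s / Fraction(c[r]))
    return a

# Apery's recurrence (behind his proof that zeta(3) is irrational):
#   (n+2)^3 a(n+2) - (34n^3 + 153n^2 + 231n + 117) a(n+1) + (n+1)^3 a(n) = 0
apery = unroll(
    lambda n: [(n + 1)**3, -(34*n**3 + 153*n**2 + 231*n + 117), (n + 2)**3],
    [1, 5],
    6,
)
print([int(x) for x in apery])  # → [1, 5, 73, 1445, 33001, 819005]
```

Although the recurrence divides by the leading coefficient at each step, exact rational arithmetic confirms that every term is an integer.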

Here are some of the typical questions we are interested in:

Getting involved in applications is both an objective and a methodology. The applications shape the tools that we design and foster their dissemination.

Combinatorics is a longstanding application of computer algebra, and
conversely, computer algebra has a deep impact on the field. The study of
random walks in lattices, first motivated by statistical physics and queueing
theory, features prominent examples of experimental mathematics and
computer-assisted proofs. Our main collaborators in combinatorics are Mireille
Bousquet-Mélou (Université de Bordeaux), Stephen Melczer (University of
Waterloo) and Kilian Raschel (Université d'Angers).

Probability theory.
Apart from the already mentioned interest in random walks,
a classical topic in probability theory
on which our group has an expert, Guy Fayolle,
the main applications we have in mind concern integrals arising from:
2D fluctuation theory (generalizing arc-sine laws in 1D);
moments of the quadrant occupation time of planar Brownian motion;
persistence probabilities (survival functions of first-passage times for real stochastic processes);
and volumes of structured families of polytopes, which also arise in polyhedral geometry and combinatorics.
Our main interactions on these topics are with
Gerold Alsmeyer (U. Münster),
Dan Betea (KU Leuven),
and Thomas Simon (U. Lille).

Number theory, and especially Diophantine approximation,
are also fields with longstanding use of computer algebra tools.
For example, the recently discovered sequence of integrals

whose analysis leads to the best known measure of irrationality
comes from automata theory and appears in several of our research axes.
Philippe Dumas, in our group, and Boris Adamczewski, already mentioned,
have long been experts in this topic.

In algebraic geometry,
in spite of tremendous theoretical achievements, it is a challenge to apply general theories to specific examples.
We focus on putting into practice transcendental methods through symbolic integration and seminumerical methods.
Our main collaborators are Emre Sertöz (Max Planck Institute for Mathematics) and Duco van Straten (Gutenberg University).

In statistical physics,
the Ising model, and its generalization, the Potts model,
are classical in the study of phase transitions.
Although the Ising model with no magnetic field is one of the most important
exactly solved models in statistical mechanics (it was solved by Onsager, the 1968 Nobel laureate), its magnetic susceptibility
remains an unsolved aspect of the model.
In the absence of an exact closed form, the susceptibility is approached analytically, via the singularities of certain multiple integrals with parameters.
Experimental mathematics is a key tool in their study.
Our main collaborators are Jean-Marie Maillard (SU, LPTMC)
and Tony Guttmann (U. Melbourne).

In quantum mechanics, turning theories into predictions requires the computation of Feynman integrals.
For example, the reference values for the experiments carried out in particle accelerators are obtained in this way.
The analysis of the structure of Feynman integrals benefits from tools in experimental mathematics.
Our main collaborator in this field is Pierre Vanhove (CEA, IPhT).

We aim to provide efficient software libraries that perform the core tasks that we need in experimental mathematics. We target especially four tasks of general interest: algebraic algorithms for manipulating systems of linear PDEs, univariate and multivariate guessing, symbolic integration, and seminumerical integration.

For several reasons, we want to stay away from a development model that is too tied to commercial computer algebra systems. Firstly, they restrict dissemination and interoperability. Secondly, they do not offer the level of control that we need to implement these foundations efficiently. Concretely, we will develop open-source libraries in C++ for the most fundamental tasks in our research area. Computer algebra systems, like Sagemath or Maple, are good at coordinating primitive algorithms, but too high-level to implement them efficiently. We seek solid software foundations that provide the primitive algorithms that we need. This is necessary to implement the new higher-level algorithms that we design, but also to reach a performance level that enables new applications. Still, we will strive to interface our libraries with the prominent computer-algebra systems, especially Maple and Sagemath, which many colleagues use.

Besides, there is a growing interest in the programming language Julia for computer algebra, as shown by the Oscar project. We already use Julia internally, and occasionally some of the libraries Oscar is built upon, and we want to promote this young ecosystem. Contributing to it is very attractive but, on the flip side, the ecosystem is too young to offer the same usability as Maple, or even Sagemath. So there is a deliberate element of risk in our intention to also make our libraries available in Julia.

At large, MATHEXP deals with algebraic and seminumerical methods.
This part goes through the fundamental aspects of the algebraic side.
As opposed to numerical analysis where numerical evaluations underlie the basic algorithms,
algebraic methods manipulate functions through functional equations.
Depending on the context, different kinds of functional equations are appropriate.
Algebraic functions are handled through polynomial equations
and the classical theory of polynomial systems.
To deal with integrals, systems of linear partial differential equations (PDEs)
are appropriate.
In combinatorics and number theory appears the need for non-linear ordinary differential equations (ODEs).
We also consider other kinds of functional equations more related to discrete structures, namely linear recurrence relations,

The various types of functional equations raise similar questions: is a given equation a consequence of a set of other equations? What are the solutions of a certain type (polynomial, rational, power series, etc.)? What is the local behavior of the solutions? Algorithms to solve these problems support an important part of our research activity.

One of the major data structures that we consider is systems of linear PDEs with
polynomial coefficients. A system that has a finite-dimensional solution space
is called holonomic, and a function that is a solution of a holonomic system
is called holonomic too.
The theory of holonomy is important because it allows for an algebraic theory of analysis and integration (on this aspect see also §3.2).
The basic objects of holonomy theory are linear differential operators, which are a kind of quasicommutative polynomial, and ideals in rings of linear differential operators, called Weyl algebras.
In this respect, holonomy theory is analogous to the theory of polynomial systems, where the basic objects are commutative polynomials and ideals in polynomial rings.
Some of the important concepts, for example the concept of Gröbner basis, are also similar.
Gröbner bases are a way to describe all the consequences of a set of equations.
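As a toy commutative illustration (a standard textbook example, not specific to our work): for the ideal generated by $f_1 = x^2 - y$ and $f_2 = xy - 1$ under the lexicographic order with $x > y$, the combination
\[ y\,f_1 - x\,f_2 = x - y^2 \]
is a consequence of $f_1 = f_2 = 0$ with a smaller leading term, and one finds the Gröbner basis $\{\,x - y^2,\ y^3 - 1\,\}$. Membership in the ideal then becomes a mechanical division: a polynomial is a consequence of $f_1$ and $f_2$ exactly when it reduces to zero against these two basis elements.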

As much as Gröbner bases in polynomial rings are the backbone of effective commutative algebra, Gröbner bases in Weyl algebras of differential operators are the backbone of effective holonomy theory, which includes integration. In a commutative setting, there has been a long way from the early work of Buchberger to today's state-of-the-art polynomial system solving libraries 47. We will develop a similar enterprise in the noncommutative setting of Weyl algebras. It will unlock a lot of applications of holonomy theory.

Following the commutative case, progress in a differential context will come from an
appropriate theory and efficient data structures. We will first develop a matrix
approach to handle simultaneous reduction of differential operators, as the F4
algorithm does in the polynomial case 90. The real challenge here
is more practical than theoretical.
It is not difficult to come up with some F4 algorithm in the differential case.
But will it be efficient? From the experience of modern Gröbner engines
in the commutative case, we know that efficient implementation of simultaneous
reduction requires a significant amount of low-level programming to deal with sparse matrices with a special structure.
We also know that many choices,
irrelevant to the mathematical theory, strongly influence the running times.
The noncommutativity of differential operators adds extra complications, whose
consequences are still to be understood at this level.
We want to reuse, as much as possible, the specialized linear algebra libraries
that have been developed in the polynomial context 66, 47,
but we may have to work around the densification of products induced by noncommutativity.

On a more theoretical aspect, one step further in the analysis is that the possible analogues of the F5 algorithm 91 are not fully explored in a differential setting. We may expect not only faster algorithms, but also new algorithms for operating on holonomic functions (Weyl closure for example, see §3.1.2). Rafael Mohr started a PhD thesis in the team on using F5 for computing equidimensional decompositions in the commutative case.

Among the structural properties of systems of linear differential or difference equations with polynomial coefficients,
the question of understanding and simplifying their singularity structure pops up regularly.
Indeed, an equation or a system of equations may exhibit singularities that no solution has,
which are then called apparent singularities.
Desingularization is the process of removing them 58,
and we want to promote further this idea of trading minimality of order for minimality of total size,
with the goal of improved speed.
On the other hand,
apparent singularities have been defined only recently
in the multivariate holonomic case
72.

Our project includes developing good notions and
fast heuristic methods for the desingularization of a

Moreover, fast algorithms will be obtained for testing the separability of special functions: in a nutshell, the problem is to decide whether the solutions to a given system also satisfy linear differential or difference equations in a single variable; algorithmically, this corresponds to obtaining structured multiples of operators, with a structure similar to that arising in desingularization.

In the multivariate case, the operation of saturating
an ideal in the Weyl algebra
by factoring out (and removing) all polynomial factors on the left
is known under the name of Weyl closure.
This relates to desingularization as the Weyl closure of an ideal contains
all desingularized operators.
Weyl closure is also a relative of the radical of an ideal in commutative algebra:
given an ideal of linear differential operators,
its Weyl closure is the (larger) ideal of all operators that annihilate
any function solution to the initial ideal.
Computing Weyl closure applies to symbolic integration,
and algorithms exist to compute it
149, 148,
although they are slow in practice.
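A standard univariate example illustrates the notion. In the Weyl algebra in $x$ and $\partial$, consider the left ideal $I = \langle x\partial - 1 \rangle$, whose solution space consists of the constant multiples of $x$. A direct computation gives
\[ \partial\,(x\partial - 1) = x\partial^2, \]
so $x\partial^2$ lies in $I$; removing the polynomial factor $x$ on the left yields $\partial^2$, which indeed annihilates $x$ and hence belongs to the Weyl closure of $I$, although $\partial^2$ itself does not belong to $I$.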
Weyl closure also plays an important role in applications
to the theory of special functions,
e.g., in the study of GKZ systems (a.k.a. A-hypergeometric systems).
Converting a linear Mahler equation with polynomial coefficients (see §3.3.3)
into a constraint on the coefficient sequence of its series solutions
results in a recurrence between coefficients indexed with rational numbers,
which must be interpreted to be zero at noninteger indices.
The recurrence can be replaced with a system
of recurrences, by cases depending on residues modulo some power of the base 78.
Building on work that developed a Gröbner-bases theory as a prerequisite for this goal,
we will address these problems of conversion and well-foundedness.

Software development regarding the symbolic manipulation of linear PDEs is a real challenge.
While symbolic integration has
gained more and more recognition, its use is still reserved to experts.
Providing a highly efficient software library with functionalities that come as
close as possible to the actual integrals, rather than some idealized form, will
foster adoption and applications.
In the past, the lack of solid software foundations has been an obstacle in
implementing newly developed algorithms and in disseminating our work.
This was the case, for example, for our work on binomial sums 62,
or the computation of volumes 118,
where having to use an integration algorithm implemented in Magma was a major obstacle.

What is lacking is a complete tool chain integrating the following three layers:

The first layer of the toolchain will be developed in C++, for performance but also to open the way to integration into free computer algebra systems, like Sagemath or Macaulay2. We will benefit from years of experience of the community and close colleagues in implementing Gröbner basis algorithms in the commutative case. The third layer of the toolchain should be easily accessible to users, so at least available in Sagemath. Some of our current software development, related to the second layer, already happens in Julia (as part of R. Mohr's PhD work).

Among common operations on functions, integration is the most delicate. For
example, differentiation transforms functions of a certain kind into functions of the same kind; integration does not: the derivative of a rational function is rational, but $\int \mathrm{d}x/x = \log x$ leaves the class of rational functions.
For this reason, integration is also
expressive: it is an essential tool for defining new functions or solving
equations, not to mention the ubiquitous Fourier transform and its cousins.
Integration is the fundamental reason why holonomic functions are so important:
integrals of holonomic functions are holonomic.
Algorithms to perform this operation enable many applications, including:
various kinds of coefficient extractions in combinatorics,
families of parametrized integrals in mathematical physics,
proofs of irrationality in number theory,
and computations of moments in optimization.
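A one-line instance of this closure property: $f(x) = e^{-x^2}$ satisfies the first-order equation $f' + 2x f = 0$, and its integral $F(x) = \int_0^x e^{-t^2}\,\mathrm{d}t$ (essentially the error function) satisfies
\[ F'' + 2x\,F' = 0, \]
obtained simply by differentiating $F' = e^{-x^2}$ once more: the integral is again annihilated by a linear differential equation with polynomial coefficients.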

Given a function

Concretely,

From the algebraic and computational point of view, integration has several
analogues. Discrete sums are the prominent example, but there are also

Symbolic integration is a historical focus of MATHEXP's founding members, with many significant contributions. Compared to our previous activities, we want to put more emphasis on software development. We are at a point where the theory is well understood but the lack of efficient implementations hinders many applications. Naturally, this effort will rest on the results obtained in §3.1.

The algebraic aspects of symbolic integration are best understood when the
integration domain has no boundary, typically a cycle. The key identity is then the telescopic relation, which states that the integral of a derivative
vanishes: for example, if

It gives a nice algebraic flavor to the problem of symbolic integration
and reduces it to the study of the quotient space

The first one, where the integration domain is some complex cycle

The next generation of symbolic integration algorithms must deal with integrals defined on domains with boundaries.
The framework of algebraic D-modules seems very appropriate and already features some algorithms. But this is not the end of the story, as this line of research has not yet led to efficient implementations.
We identified two ways of action to reach this goal. Firstly, existing algorithms 130, 131 put too much
emphasis on computing a minimal-order equation for the integral.
While this is an
interesting property, other kinds of integration algorithms have successfully
relaxed this condition. For example, for integrating rational functions, the
state-of-the-art algorithm 117 depends on a
parameter a posteriori;
this will be a consequence of a work on univariate guessing (see §3.4.1)
that bases and expands on 63.
The algorithm of 61 has demonstrated practical performance, being able to compute integrals that were previously out of reach. We consider it to be a special case of the general algorithm that we want to develop, and a proof of feasibility.
However, the effort will be in vain without significant progress on the computation of Gröbner bases in Weyl algebras.
Fortunately, and this is the second way of action, we think that the framework of algebraic D-modules enables efficient data structures modeled on recent progress in the context of polynomial systems.
Progress in this direction (as explained in §3.1.1) will immediately lead to significant improvement for symbolic integration.

The approach to symbolic integration based on creative telescoping
is a definite expertise of the team.
Although the approach is difficult to use for integrals with boundaries, it still has many appeals.
In particular, it generalizes well to discrete analogues.
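For reference, the principle can be stated in one line in the differential case. If one finds a telescoper $P(x,\partial_x)$ and a certificate $Q$ such that
\[ P(x,\partial_x)\,f(x,t) \;=\; \partial_t\bigl(Q(x,t,\partial_x,\partial_t)\,f(x,t)\bigr), \]
then integrating over a cycle $\gamma$ (so that boundary terms vanish) yields $P \int_\gamma f\,\mathrm{d}t = 0$, a linear differential equation for the integral.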
Recently, the team initiated the development of a new line of algorithms, called reduction-based.
Despite continued work, this line has not yet been extended
to full generality 57, 109.
These recent theoretical
developments are not yet reflected in current software packages (only
prototype implementations exist) and therefore their practical applicability,
and how the algorithms compare, is not yet fully understood. Filling
these gaps will be a good starting point for us, but the ultimate goal will be
to formulate analogue algorithms for the difference case (summation of
holonomic sequences), for the

In applied mathematics, the method of moments provides a computational approach to several important problems involving polynomial functions and polynomial constraints: polynomial optimization, volume estimation, computation of Nash equilibria, ... 122.
This method considers infinite-dimensional linear optimization problems over the space of Borel measures on some space

From the holonomic point of view, the generating function of the moments of

A line of research developed recently
112, 123, 120, 144, 121
focuses on reducing the size of the matrices in the linear matrix inequalities (LMI) involved in
the relaxations, by using pushforward measures.
For example, let us consider a polynomial

One step further, we will try to interpret the whole moment method in the holonomic setting.
The differential equation for all the

Here the aim is to determine the relations satisfied by a solution of a Mahler equation (see §3.3.3).
A natural generalization is to search for relations among solutions of different Mahler equations.
Our objective is to provide an algorithmic answer to this generalization,
for (Laurent) power series

Random walks confined to the quarter plane are a well-studied topic, as testified by the book 95.
A new algebraic approach, relying on the Galois theory of difference equations, has been introduced in 88
to determine the nature of the generating series of such walks.
This approach gives access to the D-algebraicity of the generating functions, that is, to the knowledge of whether they satisfy some differential equations (linear or non-linear).
More precisely, D-algebraicity is shown to be equivalent to the fact that a certain telescopic equation, similar to the one appearing in the classical context of creative telescoping,
but defined on an elliptic curve attached to the walk model, admits solutions in the function field of that curve.
For the moment, the corresponding telescoping equations are solved by hand, in a rather ad hoc, case-by-case fashion.
We aim at developing a systematic and automated approach for solving this kind of elliptic creative telescoping problem.
To this end, we will import and adapt our algorithmic methods
from the classical case to the elliptic framework.

Because of the dependency of the software pertaining to symbolic integration on developments on multivariate systems, our goals related to software on symbolic integration have been described in §3.1.4.

Classifying objects, determining their nature, is often the culmination of a
mature theory. But even the best established theories can be impracticable on
a concrete instance, either by a lack of effectiveness or by a computational
barrier. In both cases, an algorithm is missing: we have to
systematize, but also effectivize and automate efficiently. This is
what we propose to do, in order to solve classification problems relating to
numbers, analytical functions, and combinatorial generating series.

It is an old question, addressed by Fuchs in the 1870s,
whether one can decide if all solutions of a given linear differential
equation are algebraic.
Singer showed in 143 that there
exists an algorithm which takes as input a linear differential equation with
rational-function coefficients and decides this property. In principle, this answers
Stanley's open problem 147: given a holonomic power
series $y(x)$, specified by an annihilating linear differential equation and sufficiently
many initial terms of its expansion, decide whether $y(x)$ is algebraic or transcendental.
Unfortunately, the corresponding algorithm is too slow in practice,
because of its high computational complexity. An interesting question is to find efficient alternatives that
are able to answer Stanley's question on concrete difficult examples.

An approach that always works is the algorithmic guess-and-prove
paradigm (see §3.4): one guesses a concrete polynomial witness,
and then certifies it a posteriori. This method is very robust in practice,
but it may fail on examples whose minimal polynomial is much larger than the
input differential equation. For instance, in an open question of
Zagier 151, the input differential equations have order 4, but the
(estimated) algebraic degree of the desired solution is 155520, hence much too
large to allow the computation of the minimal polynomial. (Note that the
estimate is obtained using the seminumerical methods evoked
in §3.5.)

We aim at designing various pragmatic algorithmic methods for proving
algebraicity or transcendence in such difficult cases. First, the algebraic
nature of the holonomic function is tested heuristically, using a mixture of
numeric and

$E$-functions are holonomic and entire power series subject to some arithmetic conditions; they generalize the exponential function.
The class contains most of the holonomic exponential generating functions in combinatorics
and many special functions such as the Airy and the Bessel functions.
Given an

Mahler equations are functional equations that relate a function $f(x)$ to its iterates $f(x^b), f(x^{b^2}), \ldots$ under the substitution $x \mapsto x^b$, for a fixed integer base $b \geq 2$.

Roques
designed an algorithm for the computation of the Galois group of Mahler equations of order 2 139.
This group reflects the algebraic relations between the solutions.
So its computation is relevant in transcendence theory.
Roques' algorithm relies on deciding the existence of rational solutions to some nonlinear Mahler equations that are analogues of Riccati differential equations.
For this task, Roques proposes an algorithm
reminiscent of Petkovšek's algorithm 134,
with an exponential arithmetic complexity
as it has to iterate through all monic factors of well-identified polynomials.
Building on recent progress in the linear case 77, we want to obtain a polynomial-time algorithm for this decidability problem,
or at least one that is not exponentially sensitive to the degree of the polynomial coefficients of the equation.

An application of this work will be a new algorithm to decide the differential transcendence of solutions of Mahler equations of order 2, following a criterion given by Dreyfus, Hardouin and Roques (see 87, 139). This would make it possible to prove new results about some classical Mahler functions and the relations between them. An example will be to reprove and extend the hypertranscendence of the solutions to the Mahler equation satisfied by the generating series of the Stern sequence 85.

We aim at studying the special values of Mahler functions, through the search for algebraic values and, more generally, for algebraic relations between values.
We will resume the analysis of the algorithm in 36, to highlight its computational limitations, before optimizing its subtasks.
We are thinking in particular of the rationality test,
for which an algorithm was given in 44 and another of better complexity
has appeared recently 77,
and of the search for minimal equations, for which structured linear algebra techniques should allow practical efficiency.

In enumerative combinatorics, many classes of objects have generating
functions that satisfy functional equations with “catalytic” variables,
relating the complete function with the partial functions obtained by
specializing the catalytic variables.
For equations with a single catalytic variable, whether linear or nonlinear,
the solutions are invariably algebraic.
This is a consequence of Popescu's
theorem on Artin approximation with nested conditions 137,
a deep result in commutative algebra.
However, the proof of this qualitative result is not constructive.
Hence, to go further, towards quantitative results, different
approaches are needed.
Bousquet-Mélou and Jehanne proposed in 64
a method which applies in principle to any such equation
and often solves concrete instances in a click.

When several catalytic variables are involved, Popescu's theorem does not hold anymore. The solutions are not necessarily algebraic, and holonomy is not even guaranteed, even in the linear case.

In the linear case, our main objective is to fully automate the resolution of linear equations with two catalytic variables coming from lattice-walk questions, when the walk model admits Tutte invariants and decoupling functions.
A first nontrivial challenge will be to produce a new computer-assisted proof of algebraicity for the famous Gessel model, different in spirit from the first proof 60: instead of guess-and-prove, we will take inspiration from the recent “human” proofs in 65, 46 relying on Tutte invariants.
There are several nontrivial subproblems, both on the mathematical and algorithmic sides. One of them is to determine if a model admits invariants and decoupling functions, and if so, to compute them.
A first step in this direction was recently done by Buchacher, Kauers and Pogudin 69, in the simpler case when one looks for polynomials instead of rational functions.

In the nonlinear case with two catalytic variables, few results exist, and almost no general theory.
These equations occur systematically when counting planar maps equipped with an additional structure, for instance a colouring (or, a spanning tree, a self-avoiding walk, etc.).
On this side, the study will be of a more prospective nature. However, we envision the resolution of several challenges.
A first objective will be to test various guess-and-prove methods on Tutte's equation 150
satisfied by the generating function of properly

Given enough terms of a sequence, it is possible to reconstruct a linear
recurrence relation of which the sequence is a solution, if there is one.
For example, with the nine numbers 1, 3, 13, 63,
321, 1683, 8989, 48639 and 265729, one can reconstruct the recurrence relation
It then remains to prove a posteriori that the reconstructed closed form is indeed correct.
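The reconstruction step can be sketched with exact linear algebra. The sketch below is our own plain-Python illustration (not the team's software); it assumes a candidate recurrence of order 2 with affine (degree at most 1) polynomial coefficients, which the nine numbers above, the central Delannoy numbers, do satisfy.

```python
from fractions import Fraction

def kernel_vector(rows, ncols):
    """One nonzero rational kernel vector of the matrix, or None."""
    mat = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(mat)) if mat[i][c] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        mat[r] = [x / mat[r][c] for x in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c] != 0:
                f = mat[i][c]
                mat[i] = [u - f * w for u, w in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    if not free:
        return None
    v = [Fraction(0)] * ncols
    v[free[0]] = Fraction(1)
    for i, c in enumerate(pivots):
        v[c] = -mat[i][free[0]]
    return v

def guess_recurrence(seq, order=2, deg=1):
    """Coefficients c[i*(deg+1)+j] with sum_{i,j} c_{i,j} n^j seq[n+i] = 0."""
    rows = [[Fraction(seq[n + i]) * n**j
             for i in range(order + 1) for j in range(deg + 1)]
            for n in range(len(seq) - order)]
    return kernel_vector(rows, (order + 1) * (deg + 1))

terms = [1, 3, 13, 63, 321, 1683, 8989, 48639, 265729]
c = guess_recurrence(terms)
# Use the guessed recurrence at n = 7 to predict the next term:
n = 7
nxt = -((c[0] + c[1]*n) * terms[n] + (c[2] + c[3]*n) * terms[n + 1]) \
      / (c[4] + c[5]*n)
print(nxt)  # → 1462563, the next central Delannoy number
```

In practice one would verify the guessed operator against further terms; the proving step then certifies it rigorously.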

Most of the proofs of irrationality of a given constant rely on Padé approximants of

We will use computer-assisted symbolic and numerical computations
for the construction of a relevant Padé-approximation problem.
Then, the resolution of the problem must be automated. This is fundamentally a
computational problem in a holonomic setting. The natural approach here is
guess-and-prove: we first guess what could be a closed-form formula for
the solution by computing explicitly the solutions for some fixed values of

Our future algorithm for computing a linear differential equation of minimal order satisfied by a given holonomic function will be implemented and made available to users. This may include the application to the determination of algebraic values of E-functions. We will do the same concerning linear Mahler equations of minimal order satisfied by given Mahler functions, and concerning the determination of their algebraic values. Our work on solving equations with catalytic variables started rather recently, so it is still too early to decide the form that related software should take, but we definitely aim to provide combinatorialists with an implementation that exhibits the algebraic and/or differential equations they are after.

Pólya theorized and popularized a “guess-and-prove” approach to mathematics in remarkable books 136, 135.
It has now become an essential ingredient of experimental
mathematics, whose power is greatly enhanced when used in conjunction with
modern computer algebra algorithms.
This paradigm is a keystone of recent
spectacular applications in experimental mathematics, such
as 60, 115, 116. The first half (the guessing part) is
based on a “functional interpolation” phase, which consists in recovering
equations starting from (truncations of) solutions. The second half (the
proving part) is based on fast manipulations (e.g., resultants and
factorization) of exact algebraic objects (e.g., polynomials and
differential operators).

In what follows we mostly focus on the guessing phase. It is called
algebraic approximation 67 or differential
approximation 113, depending on the type of equations to be
reconstructed. For instance, differential approximation is an operation to get
an ODE likely to be satisfied by a given approximate series expansion of an
unknown function. This kind of reconstruction technique has been used at least
since the 1970s by physicists 102, 103, 110, under the name
recurrence relation method, for investigating critical phenomena and
phase transitions in statistical physics. Modern
versions are based on subtle
algorithms for Hermite–Padé approximants 43;
efficient
differential and algebraic guessing procedures are implemented in most
computer algebra systems.
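
As a toy instance of algebraic guessing (our own illustrative sketch, not one of the implementations cited above), one can recover a polynomial equation for the generating function of the Catalan numbers from a truncated expansion, again by a kernel computation; the truncation order N = 12 and the degree bounds are hand-picked assumptions:

```python
import math
from sympy import Matrix

N = 12
cat = [math.comb(2 * n, n) // (n + 1) for n in range(N)]  # Catalan numbers

def mul(a, b):  # product of truncated power series mod x^N
    out = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

one = [1] + [0] * (N - 1)
ypow = [one, cat, mul(cat, cat)]  # truncations of y^0, y^1, y^2

# Ansatz: sum over i in {0,1}, j in {0,1,2} of a[i,j] * x^i * y^j = 0 (mod x^N);
# each column is the truncated series of x^i * y^j.
cols = [[0] * i + ypow[j][:N - i] for i in range(2) for j in range(3)]
kernel = Matrix(cols).T.nullspace()
for v in kernel:
    print(list(v))  # candidate coefficients (a00, a01, a02, a10, a11, a12)
```

Here the kernel contains (1, −1, 0, 0, 0, 1), i.e., the guessed equation 1 − y + x y² = 0, equivalently the Catalan functional equation y = 1 + x y²; turning this guess into a theorem is precisely the role of the proving phase.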

In the following subsections, we describe improvements that we will work on.

A first task is to optimize the search for the minimal-order ODE satisfied by a given holonomic series. Feasibility is already known from the recent work 37, but the corresponding algorithm is not efficient in practice, because it relies on pessimistic degree bounds and on pessimistic multiplicity estimates. We will design and implement a much more efficient minimization algorithm, which will combine efficient differential guessing with a dynamic computation of tight degree bounds.

“Multiplicity lemmas” are theorems concluding
that an expression representing a formal power series is exactly zero
under the weaker assumption that the expression is zero
when truncated to some order.
In general,
the expression is a differential polynomial in a series,
but interesting subcases are
non-differential polynomials, to test algebraicity,
and linear differential expressions, to test holonomicity.
In good situations,
multiplicity lemmas turn guessing into a proving method or even a decision algorithm.
A particularly nice form of a multiplicity lemma is available
for polynomial expressions 56,
and a similar result exists for linear ODEs 49. We will implement such bounds as proving procedures,
and we will generalize the approach to other kinds of expressions,
e.g., expressions in divided-difference operators that appear in combinatorics, for instance in map enumeration 64.

Generating functions appear in a variety of classes of increasing complexity, in relation to the equations they satisfy. A third subtask relates to the search for an element of a lower complexity class inside the solution set of a higher complexity class. For instance, can a linear or some other combination of non-holonomic series be holonomic? Can a linear combination of holonomic series be algebraic, or even rational? A promising ongoing work, started incidentally during the study of Riccati-type solutions for Mahler equations (see §3.3.3), performs such a guessing by a suitable search for constrained Hermite–Padé approximants after computing the whole module of approximants. But the main expected impact of the approach would be for differential analogues, and we will strive to generalize it, taking advantage of the formal analogy between many types of linear operators.

As guessing often requires first preparing a lot of data, developing fast expansion algorithms for classes of equations is also related to guessing. In this direction, we plan to design a fast algorithm for the high-order expansion of a DD-finite series (i.e., a series satisfying a linear differential equation with holonomic coefficients). The complexity of the analogous problem for a linear ODE with series coefficients is quasi-linear in the truncation order; that for a linear ODE with polynomial coefficients is just linear. For DD-finite series, we plan to interlace the two approaches without first expanding the series coefficients of the input equation to the desired order, so as to avoid a large constant and a logarithmic factor.
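
To illustrate why expansion is only linear-time for a linear ODE with polynomial coefficients: the ODE translates into a linear recurrence with polynomial coefficients for the Taylor coefficients, so each new term costs a constant number of arithmetic operations. A hand-worked toy example for y = (1 − 4x)^(−1/2), which satisfies (1 − 4x)y′ = 2y and hence (n + 1)c(n+1) = (4n + 2)c(n):

```python
def expand_central_binomial(N):
    """First N Taylor coefficients of (1 - 4x)**(-1/2), i.e. the central
    binomial coefficients, via the recurrence (n+1) c(n+1) = (4n+2) c(n)."""
    c, out = 1, []
    for n in range(N):
        out.append(c)
        c = c * (4 * n + 2) // (n + 1)  # exact integer division here
    return out

print(expand_central_binomial(6))  # [1, 2, 6, 20, 70, 252]
```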

Multivariate aspects of guessing relate to activities that we plan to develop as a means of strengthening scientific collaborations with colleagues in Paris (PolSys, Sorbonne U.) and Linz (Johannes Kepler University Linz, Austria). How soon the research happens will depend on how interaction with those colleagues evolves.

An established technique in the univariate case is known as “trading order for degree”. It is based on the observation that minimal-order operators tend to have very high degree, while operators of slightly higher order often have much smaller degrees and are therefore easier to guess. A candidate for the minimal-order operator is then obtained as a greatest common right divisor of two guessed operators of nonminimal order. We will extend this successful technique to the multivariate case. The desired output in this case is a Gröbner basis of a zero-dimensional annihilating ideal. The coefficients of the Gröbner basis elements are high-degree polynomials, and the idea is, as in the univariate case, not to guess them directly, but to guess ideal elements of smaller total size and to compute the Gröbner basis from them. As Gröbner basis computations can be costly, the alternative operators will already have to be “close” to a Gröbner basis in order for the idea to be beneficial. The questions are: what should “close to a Gröbner basis” mean, how close should the operators be chosen, how much degree drop can then be expected, and how do the answers to these questions depend on the monomial order?

In another direction, we plan to exploit the generalized Hankel structure of the matrices that
appear when the guessing of linear recurrence relations is modeled through linear
algebra. Regarding relations with constant coefficients, this finds
applications in polynomial system solving through the spFGLM algorithm 92, 93 for finding a lexicographic Gröbner
basis. The linear system is block-Hankel with blocks sharing the same
structure, and this recursive structure has the same depth as the number of
variables. Yet, up to now, only one layer of the structure is handled using
fast univariate polynomial arithmetic, then the other ones are dealt with by
noting that the matrix has a quasi-Hankel structure and using fast algorithms
for this type of matrix 59. However, the displacement rank of
this matrix is not small; hence, not taking into account the full structure of
the matrix is suboptimal. This is related to 48 for
computing linear recurrence relations with constant coefficients using
polynomial arithmetic and 128 for computing multivariate Padé
approximants. Analogously, the linear system modeled for guessing linear
recurrence relations with polynomial coefficients is highly structured. It is
the concatenation of matrices as above, yet these matrices are not
independent, as they are all built from the same sequence. Even in the
univariate case, the Beckermann–Labahn algorithm is not able to exploit this
extra structure in order to be quasi-optimal in the input size. Hence, we
would like to investigate how to do so.

In addition to the structure in the modeling, we want to exploit the structure of the sequences that come from applications. For instance, in the enumeration of lattice walks, the nonzero terms often lie in a cone and a lattice, and they are invariant under the action of a finite group. The goal is to take this structure into account in order to build smaller systems for the guessing, and to avoid the generation of more sequence terms than necessary.

We will implement fast algorithms
for computing Hermite–Padé approximants of various types 43.
This will include modular integers, integers (via modular reconstruction),
simple approximants, and simultaneous approximants.
With such a fast, robust implementation at hand,
we will also be able to address the guessing of algebraic differential equations (ADE), going beyond the linear case.
Our use of state-of-the-art algorithms for computing approximants
(including the “superfast” one)
will ensure that we outperform earlier implementations such as
Guess (by Hebisch and Rubey)
and GuessFunc (by Pantone).
We will also develop a variant of trading order for degree for the nonlinear setting.
Our implementation will automate the critical selection
of derivatives, powers, and coefficient degrees
needed to reconstruct an ADE.
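
For concreteness, here is the naive linear-algebra computation of a simple Padé approximant, the (2,2) approximant of exp (a quadratic-time sketch of our own; the fast and “superfast” algorithms mentioned above organize this computation very differently):

```python
from math import factorial
from sympy import Matrix, Rational

f = [Rational(1, factorial(n)) for n in range(5)]  # Taylor coefficients of exp

# Find q = 1 + q1*x + q2*x^2 such that the coefficients of x^3 and x^4
# in f*q vanish, i.e. f*q - p = O(x^5) with deg p <= 2.
A = Matrix([[f[2], f[1]], [f[3], f[2]]])
b = Matrix([-f[3], -f[4]])
q1, q2 = A.solve(b)
q = [Rational(1), q1, q2]

# The numerator p is the truncation of f*q to degree 2.
p = [sum(f[k - j] * q[j] for j in range(k + 1)) for k in range(3)]
print(p, q)  # [1, 1/2, 1/12] and [1, -1/2, 1/12]
```

One recovers the classical approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12) of exp.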

The methods in this research axis deal directly with numbers but, following
Knuth 114, they are properly called seminumerical because they lie
on the borderline between symbolic and numeric computations. While numerical
methods process numerical data and generate further numerical data, our
seminumerical methods process exact data, generate high-precision numerical data
and reconstruct exact data. In this perspective, the basic unit is not the
IEEE-754 floating-point number, but the arbitrary-precision number, typically with
several thousand decimal places, sometimes more.
The crux is not numerical stability, but computational complexity as the number of significant digits goes to infinity.
When a number is known at such a high precision, it reveals fundamental structures: rationality, algebraicity, relations with other constants, etc.
High-precision computation is a recurring useful tool in the field of experimental mathematics 40.
In some situations, it enables a guess-and-prove approach. In some others, we are unable to step from “guess” to “prove” but overwhelming numerical evidence is enough to shape a conviction.
A celebrated example is the experimental discovery of the BBP formula for π.
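
A minimal illustration of the principle with mpmath's off-the-shelf PSLQ routine (a toy example of ours, unrelated to the cited discovery): given √2 + √3 to 60 digits, an integer-relation search among its powers recovers its minimal polynomial.

```python
from mpmath import mp, sqrt, pslq

mp.dps = 60  # work with 60 significant digits
x = sqrt(2) + sqrt(3)

# Search for small integers c_0..c_4 with c_0 + c_1*x + ... + c_4*x^4 ~ 0.
rel = pslq([x**k for k in range(5)], maxcoeff=10**6)
print(rel)  # a multiple of (1, 0, -10, 0, 1), i.e. x^4 - 10 x^2 + 1 = 0
```

The high precision is what makes the detected relation overwhelmingly convincing: at 60 digits, a spurious relation with such small coefficients is essentially impossible.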

We promote linear differential equations as a data structure to represent and compute with functions (see §3.1). In truth, this data structure represents functions up to finitely many constants. It determines a global behavior but misses the pointwise aspect. Seminumerical methods combine both. They are an important tool for experimental mathematics because they can give strong indications about the nature of a function in very general situations (see §3.3.1).

Alexandre Goyer and Raphaël Pagès both started PhD theses on the factorization of differential operators. Factorization is a fundamental operation for solving linear differential equations, or, at least, for elucidating the nature of their solutions. Goyer considers seminumerical methods, which rely on numerical evaluations of the solutions of the differential operators to guess a factorization numerically. High precision makes it possible to reconstruct the factors exactly, and a simple multiplication certifies the computation. Pagès considers a discrete analogue of numerical evaluation: reduction modulo a prime number.

The main tool for computing high-precision evaluations of functions or integrals is the effective analytic continuation of solutions of linear differential equations. It is a form of numerical ODE solver, specialized for linear equations and able to maintain high precision all along the continuation path.

Numerical ODE
solvers are a very classical topic in numerical analysis
70, with popular methods, like Runge–Kutta or multistep
methods.
A much less known family of symbolic-numeric algorithms, which we could call rigorous Taylor methods,
originates from works of the Chudnovsky brothers in the 1980s and 1990s
76, 75 and has later been
developed by van der Hoeven 108 and
Mezzarobba 125, 126.
This family of algorithms only
handles linear ODEs with polynomial coefficients, which is precisely the nature
of ODEs arising in the context of this document.
But contrary to classical
methods, they provide very strong guarantees even in difficult situations,
especially rigorous error bounds and correct behavior at singular points,
all very desirable features in experimental mathematics.
Furthermore, they feature a quasi-optimal complexity with respect to precision,
meaning that one can easily compute with thousands of digits of precision:
computing twice as many digits takes roughly twice as much time.
This contrasts with fixed-order methods, which cannot reach such precision.
For example, to compute 10,000 digits, the
classical order-four Runge–Kutta method would typically need an astronomically large number of steps, since its error decreases only like the fourth power of the step size.
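
The contrast can be made concrete on the simplest possible case, y′ = y: the ODE gives the exact recurrence (n + 1)c(n+1) = c(n) for the Taylor coefficients, so a high-order truncation can be summed exactly (a toy sketch that reproduces none of the rigorous error bounds or fast algorithms discussed above):

```python
from fractions import Fraction

# y' = y, y(0) = 1: the Taylor coefficients at 0 satisfy (n+1) c(n+1) = c(n);
# summing the series at x = 1 gives e with rapidly decreasing exact terms.
c = Fraction(1)
s = Fraction(1)
for n in range(60):
    c /= n + 1   # c is now 1/(n+1)!
    s += c       # partial sum of sum_k 1/k!

# 30 correct decimals of e: the tail after 60 terms is below 2/61! < 10^-80.
print(s.numerator * 10**30 // s.denominator)
```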

Yet, as advanced as these algorithms may be, they struggle with the huge ODEs coming from our applications. The reason is easily explained: most algorithms and implementations are designed for small operators and large precision, and focus on a quasi-linear complexity with respect to precision. Our situation is quite the opposite, with large ODEs and comparatively modest precision. It may be interesting to consider algorithms that are quadratic-time with respect to precision, if the complexity with respect to the size of the ODE gets better. This is a blocking issue that must be addressed to enable new applications. To solve the problem, we will endeavor to provide new software that takes care to implement algorithms suited to all regimes of degrees and orders at moderate precision.

Periods are numerical integrals that can
be computed to high precision with symbolic-numeric integration, even though
current algorithms are far from sufficient to tackle real applications in algebraic
geometry beyond the case of curves. Algorithms for computing periods of curves are mature
83, 129, 127, 80, 68
and have been used, for example, for the computation of the endomorphism ring of
genus 2 curves in the LMFDB 81. Algorithms in higher
dimension are only emerging 89, 82, 140. Their
current status does not make them suitable for many applications.
Firstly, they are limited in generality.
The articles 89, 82
deal with special double coverings of

With current methods, we managed to compute the periods of 180 000 quartic
surfaces defined by sparse polynomials 119. This
corpus of quartic surfaces was discovered by a random walk. In fact, we are not
able to compute (in a reasonable amount of time) the periods of an arbitrary given
quartic surface, so we resorted to a random walk guided by ease of computation.
This severely hinders applicability. Yet, it shows the feasibility of using
transcendental continuation to obtain algebraic invariants that are currently
unreachable by any other means.

The seminumerical algorithms that we develop open perspectives in algebraic geometry.
Some integrals with algebraic origin, called periods, convey some interesting algebraic invariants.
High-precision computation may unravel them where purely algebraic methods fail 119.
These algebraic invariants are crucial to determine the fine structure of algebraic varieties.
We aim at designing algorithms to compute periods
efficiently for varieties of general interest, in particular K3 surfaces,
quintic surfaces, Calabi–Yau threefolds and cubic fourfolds.

In quantum field theory, Feynman integrals appear when computing scattering amplitudes with perturbative methods. In practice, computing Feynman integrals is the most effective way to obtain predictions from a quantum field theory. Precise predictions require higher-order perturbative terms, leading to more complex integrals and daunting computational challenges. For example, 39 reports on the methods used, the difficulties encountered and the limitations met when carrying out precision calculations for teraelectronvolt collisions at the Large Hadron Collider (LHC).

As far as mathematics is concerned, Feynman integrals are periods.
Although this makes the evaluation of Feynman integrals look like just a
special case of symbolic-numeric integration, it would be naive to
pretend that our methods apply without effort: it is clear that the computations are so
challenging that only specialized methods may succeed.
Current methods include sector decomposition 145 (where the integration domain is decomposed into smaller pieces on which traditional numerical integration algorithms perform well) and the use of differential equations 105 in a fashion similar to what we propose here,
namely the
symbolic computation of integrals with a parameter combined with numerical ODE solving.
In the longer term, we expect that an efficient toolbox to deal with holonomic ideals would improve computations with Feynman integrals. It is however too early to say.

In the short term, the experimental mathematics toolbox that we want to develop may be useful to understand the geometry underlying some Feynman integrals. The typical outcome is simple analytic formulas 54, 53 allowing for fast and precise computations. In this context, identifying key algebraic invariants before engaging in further mathematical analysis is crucial. For example, a key fact in the analysis of a three-loop graph by 53 is that the generic member of some family of K3 surfaces has Picard rank 19. For other graphs, cubic fourfolds appear, which we cannot investigate numerically at the moment. An expected outcome of the previously exposed objectives is the computation of the periods of such varieties. This is a first step towards a more systematic development of this interface with high-energy physics.

Solid software foundations for effective analytic continuation (see §3.5.1) will be important for the other tasks in this section. We currently use the part of the package ore_algebra developed by Marc Mezzarobba, but it is a bottleneck for several algorithms. The plan for the software development (improvement of ore_algebra, or a whole new package) is not fixed yet: it depends on the nature of the algorithmic ideas that will emerge.

As already expressed in §2.3, our natural application domains are:

The year 2023 has seen a lot of international meetings in France, in the framework of the cycle of research schools and workshops Recent Trends in Computer Algebra 2023: 1 week in Luminy, 3 weeks in Lyon, 6 weeks in Paris, for the formal program, plus a 3-month presence at Institut Henri Poincaré while international visitors were present.

This has led to a huge involvement of the team, both because some of the members were among the organizers (Alin Bostan for the whole event and two weeks; Pierre Lairez for four weeks), and because most of the team members attended at least half of the 10 weeks of events.

The team has contributed almost 25 k to the funding of the events,
thanks to the ERC project “10000 DIGITS”
and to the ANR project De rerum natura.
This has induced an increased activity of our assistant,
Bahar Carabetta.

In 18, Notarantonio and Yurkevich
studied systems of

Discrete Differential Equations (DDEs) are functional equations
that relate polynomially a power series

In 32,
Notarantonio and Yurkevich studied systems of

Guy Fayolle, in collaboration with S. Franceschi (Télécom SudParis, Institut Polytechnique de Paris) and K. Raschel (CNRS, Université d'Angers, LAREMA), studies the stationary reflected Brownian motion in a non-convex wedge, which, compared to its convex analogue, has been analyzed much more rarely in the probabilistic literature. Two approaches are proposed for the three-quarter plane.

Following up on their previous analysis 96, G. Fayolle and P. Mühlethaler analyzed in 29 the so-called back-off technique of the IEEE 802.11 protocol in broadcast mode with waiting queues. In contrast to existing models, packets arriving when a station (or node) is in back-off state are not discarded, but are stored in a buffer of infinite capacity. As in previous studies, the key point of their analysis hinges on the assumption that the time on the channel is viewed as a random succession of transmission slots, whose duration corresponds to the length of a packet, and mini-slots during which the back-off of the station is decremented. These events occur independently, with given probabilities. The state of a node is represented by a three-dimensional discrete-time Markov chain, formed by the back-off counter, the number of packets at the station, and the back-off stage. The stationary behaviour can be determined explicitly. In particular, Fayolle and Mühlethaler obtained stability (ergodicity) conditions and interpreted them in terms of maximum throughput.

A constant term sequence is a sequence of rational numbers whose

A polyhedron that is not Rupert.

In 30, Sergey Yurkevich, together with Florian Fürnsinn (U. Vienna), provides a complete classification of the algebraicity of (generalized) hypergeometric functions with no restriction on the set of their parameters. Their characterization relies on the interlacing criteria of Christol (1987) and Beukers–Heckman (1989) for globally bounded and algebraic hypergeometric functions, but in a more general setting which allows arbitrary complex parameters with possibly integral differences. They also showcase the adapted criterion on a variety of examples.

In 1977, Strassen invented a famous baby-step/giant-step algorithm that computes the factorial of an integer N in a number of arithmetic operations that is softly linear in √N.
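
The shape of the idea, in a deliberately naive sketch (Strassen's actual algorithm gains its speed by evaluating the block polynomial on an arithmetic progression via fast multipoint evaluation, typically modulo an integer): split the product into about √N blocks of √N consecutive factors.

```python
from math import isqrt

def factorial_bsgs(N):
    """N! via ~sqrt(N) blocks f(x) = (x+1)...(x+s) evaluated at x = 0, s, 2s, ..."""
    s = max(1, isqrt(N))

    def block(x):  # giant step: one block of s consecutive factors
        r = 1
        for k in range(1, s + 1):
            r *= x + k
        return r

    r, j = 1, 0
    while (j + 1) * s <= N:
        r *= block(j * s)
        j += 1
    for k in range(j * s + 1, N + 1):  # baby steps: leftover factors
        r *= k
    return r

print(factorial_bsgs(10))  # 3628800
```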

In their 2009 paper Regular sequences of symmetric polynomials,
Aldo Conca, Christian Krattenthaler and Junzo Watanabe needed to prove, as an
intermediate result,
the fact that all terms of a family of binomial sums indexed by an integer

Mallows-Riordan polynomials, sometimes also called inversion
polynomials, form a family of polynomials with integer coefficients appearing
in many counting problems in enumerative combinatorics. They are also
connected with the cumulant generating function of the classical log-normal
distribution in probability theory. In 1
Alin Bostan, together with his probabilist co-authors Gerold Alsmeyer (U. Münster),
Kilian Raschel (CNRS, U. Angers) and Thomas Simon (U. Lille), provide a
probabilistic interpretation of the Mallows-Riordan polynomials that is not
only quite different from the classical connection with the log-normal
distribution, but in fact also rather unexpected. More precisely, they
establish exact formulae in terms of Mallows-Riordan polynomials for the
persistence probabilities of a class of order-one autoregressive processes
with symmetric uniform innovations. These exact formulae then lead to precise
asymptotics of the corresponding persistence probabilities. The connection of
the Mallows-Riordan polynomials with the volumes of certain polytopes is also
discussed. Two further results provide general factorizations of AR(1) models
with continuous symmetric innovations, one for negative and one for positive
drift. The second factorization extends a classical universal formula of
Sparre Andersen for symmetric random walks.

Using an experimental mathematics approach, Alin Bostan together with Frédéric Chapoton (CNRS, IRMA Strasbourg) obtained in 23 new relations between the Dirichlet series for certain periodic coefficients and the moments of certain families of orthogonal polynomials. In addition to the classical hypergeometric orthogonal polynomials, of Racah type and continuous dual Hahn, a new similar family of orthogonal polynomials was discovered.

A power series being given as the solution of a linear differential
equation with appropriate initial conditions, minimization consists in
finding a non-trivial linear differential equation of minimal order
having this power series as a solution. This problem exists in both
homogeneous and inhomogeneous variants; it is distinct from, but related to,
the classical problem of factorization of differential operators.
Recently, minimization has found applications in Transcendental Number
Theory, more specifically in the computation of non-zero algebraic points where
Siegel's

In 21, Alin Bostan, together with Boris Adamczewski (ICJ, Lyon) and
Xavier Caruso (IMB, Bordeaux), provide a new proof of the multivariate version of Christol's
theorem about algebraic power series with coefficients in finite fields, as well as of its
extension to perfect ground fields of positive characteristic obtained independently by Denef and
Lipshitz, Sharif and Woodcock, and Harase. Their new proof is elementary, effective, and allows for
much sharper estimates. They discuss various applications of such estimates, in particular to a
problem raised by Deligne concerning the algebraicity degree of reductions modulo

Given a linear differential equation with coefficients in

In 24, Alin Bostan and Frédéric Chyzak,
together with Vincent Pilaud (CNRS & LIX, Palaiseau)
provided short product formulas for the

The complexity is improved from softly linear to purely linear in

In 6, Alin Bostan and Sergey Yurkevich answer a question posed by Michael
Aissen in 1979 about the

In 17, Alaa Ibrahim, together with Bruno Salvy (ARIC team), consider linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue. They give an algorithm that proves or disproves positivity of solutions provided the initial conditions satisfy a precisely defined genericity condition. For positive sequences, the algorithm produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence.

Polynomial system solving arises in many application areas to model non-linear geometric properties. In such settings, polynomial systems may come with degenerate solutions which the end-user wants to exclude from the solution set. The nondegenerate locus of a polynomial system is the set of points where the codimension of the solution set matches the number of equations. Computing the nondegenerate locus is classically done through ideal-theoretic operations in commutative algebra, such as saturation ideals or equidimensional decompositions, to extract the component of maximal codimension. By exploiting the algebraic features of signature-based Gröbner basis algorithms, the authors designed an algorithm which computes a Gröbner basis of the equations describing the closure of the nondegenerate locus of a polynomial system, without first computing a Gröbner basis for the whole polynomial system.

This is a work of Pierre Lairez, together with Christian Eder, Rafael Mohr and Mohab Safey El Din 8.

In even space-time dimensions, multi-loop Feynman integrals are integrals of rational functions in projective space. By using an algorithm that extends the Griffiths–Dwork reduction to the case of projective hypersurfaces with singularities, the authors derive Fuchsian linear differential equations, the Picard–Fuchs equations, with respect to kinematic parameters for a large class of massive multi-loop Feynman integrals. With this approach the authors obtain the differential operator for Feynman integrals at high multiplicities and high loop orders. Using recent factorisation algorithms, the authors give the minimal-order differential operator in most of the cases studied in the paper. Amongst their results is that the order of the Picard–Fuchs operator for the generic massive two-point

This is a work of Pierre Lairez, together with Pierre Vanhove, published in 2023 12.

Factorisation of linear differential operators and systems in positive
characteristic has been studied before by van der Put in 138 and by
Thomas Cluzeau in 79, where both made use of the

In particular, up to this point, no algorithm was known to factor central operators in polynomial time.

In 132, Raphaël Pagès presented a refinement of this method to factor differential operators on algebraic curves of positive characteristic, relying on tools of algebraic geometry such as Riemann–Roch spaces and the degree-0 Picard group, which should solve this issue. Additionally, this work should allow for a better control of the size of the output of the factorisation.

R. Pagès will defend his PhD in Spring 2024 and is preparing a corresponding submission.

Creative telescoping is an algorithmic method introduced by Zeilberger in the 1990s to compute parametrized definite sums. It proceeds by synthesizing special summands, called certificates, which have the specific property to telescope; correspondingly, this determines a recurrence equation, called telescoper, satisfied by the definite sum. Hadrien Brochet and Bruno Salvy (ARIC team) described a creative telescoping algorithm that computes telescopers for definite sums of D-finite sequences as well as the associated certificates in a compact form. Their algorithm relies on a discrete analogue of the generalized Hermite reduction, or equivalently, on a generalization of the Abramov–Petkovšek reduction. They provide a Maple implementation with good timings on a variety of examples. An article was submitted this year 26.
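
The telescoper/certificate mechanism can be illustrated on a classical textbook identity (this is not the Brochet–Salvy algorithm, only the shape of its output): for F(n, k) = C(n, k), the certificate G(n, k) = −C(n, k−1) witnesses F(n+1, k) − 2F(n, k) = G(n, k+1) − G(n, k) by Pascal's rule, and summing over k telescopes the right-hand side to zero, proving that S(n) = Σ_k C(n, k) satisfies the telescoper S(n+1) − 2S(n) = 0.

```python
from math import comb

def C(n, k):  # binomial coefficient, zero outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

def G(n, k):  # the certificate
    return -C(n, k - 1)

# Check the telescoping identity F(n+1,k) - 2 F(n,k) = G(n,k+1) - G(n,k)
ok = all(C(n + 1, k) - 2 * C(n, k) == G(n, k + 1) - G(n, k)
         for n in range(12) for k in range(-2, 15))
print(ok)  # True
```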

Catherine St-Pierre and Éric Schost (U. Waterloo) presented
a

A long version of this manuscript 19 has been submitted.

In her PhD thesis,
Catherine St-Pierre presented an algorithm to find the local structure of intersections of plane curves.
More precisely, she addressed the question of describing the scheme of the quotient ring of a bivariate zero-dimensional ideal, i.e., finding its points (maximal ideals) and the local structure at each of them.
It relies on a structural result about the syzygies in such a basis due to Conca and Valla 33,
from which arises an explicit map between ideals in a stratum (or Gröbner cell) and points in the associated moduli space.
The thesis also qualified what makes a maximal ideal good: when the base field is
large enough, endowed with an Archimedean or ultrametric valuation, and admits a fraction reconstruction algorithm,
this result is used to give a complete description of the untangling and tangling of good maximal ideals,
operations introduced by van der Hoeven and Lecerf 84.
The two maps form an isomorphism, used to find points with an isomorphic local structure
and to bind them at the origin.
Together with Hyun, Melczer and Schost (U. Waterloo), she gave a slightly faster tangling algorithm and discussed new applications of these techniques.
They showed how to extend these ideas to bivariate settings and gave a bound on the arithmetic complexity for certain algebras.

The thesis was carried out at the University of Waterloo, and the writing was completed during St-Pierre's time with MATHEXP. Catherine St-Pierre successfully defended in April 2023.

The Tardigrade graph is a two-loop Feynman graph.
It describes a family of K3 surfaces, obtained as quartic hypersurfaces in

At large scales, the mass configuration of space takes a web-like structure consisting of nodes, connected by filaments, themselves connected by walls, separated by voids. This structure and its evolution with time have an impact on the zoology of galaxies one may find at a given point. In 27, Eric Pichon-Pharabod, together with Corentin Cadiou, Christophe Pichon and Dmitri Pogosyan provided a model to predict this evolution ab initio. They applied it to recover the probability distribution function of satellite-merger separation, the distribution of mergers with respect to peak rarity, and they analysed the typical spin brought by mergers.

In 31, Pierre Lairez and Eric Pichon-Pharabod, together with Pierre Vanhove (IPhT), provided an algorithm to compute an effective description of the homology of complex projective hypersurfaces, relying on Picard–Lefschetz theory. They then used this description to compute high-precision numerical approximations of the periods of the hypersurface. This is an improvement over existing algorithms, as the new method allows for the computation of the periods of a smooth quartic surface in an hour on a laptop, which could not be attained with previous methods. The general theory presented in the paper generalises to varieties other than hypersurfaces, such as elliptic fibrations, as showcased on an example coming from Feynman graphs. The algorithm comes with a SageMath implementation.

In 16, Pierre Lairez and Rafael Mohr, together with Christian Eder (U. Kaiserslautern), and Mohab Safey El Din (Sorbonne U.) described a recursive algorithm that decomposes an algebraic set into locally closed equidimensional sets, i.e. sets which each have irreducible components of the same dimension. At the core of this algorithm, they combined ideas from the theory of triangular sets, a.k.a. regular chains, with Gröbner bases to encode and work with locally closed algebraic sets. Equipped with this, their algorithm avoids projections of the algebraic sets that are decomposed and certain genericity assumptions frequently made when decomposing polynomial systems, such as assumptions about Noether position. This makes it produce fine decompositions on more structured systems where ensuring genericity assumptions often destroys the structure of the system at hand. Practical experiments demonstrate its efficiency compared to state-of-the-art implementations.

Sergey Yurkevich successfully completed his PhD studies in July 2023. The defense of his thesis 20 took place in Vienna and the committee consisted of Andreas Cap (president), Gilles Villard, Wadim Zudilin (reviewers), Alin Bostan, Stéphane Fischler, Herwig Hauser (examiners).

The team runs a monthly seminar, jointly with colleagues at Sorbonne Université, alternating between Palaiseau and Paris, and open to remote attendance in hybrid mode. The organizers for our team are Pierre Lairez and Hadrien Notarantonio. Because of the intense activity induced by the semester Recent Trends in Computer Algebra 2023, we decided to reduce the activity of our seminar during the past year. Nevertheless, we were able to hold 9 talks, including some by international visiting colleagues.