## Section: New Results

### Fundamental algorithms and structured polynomial systems

#### Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences

The so-called Berlekamp–Massey–Sakata algorithm computes a Gröbner basis of a 0-dimensional ideal of relations satisfied by an input table. It extends the Berlekamp–Massey algorithm to $n$-dimensional tables, for $n>1$.

In the extended version [6], we investigate this problem and design several algorithms for computing such a Gröbner basis of an ideal of relations using linear algebra techniques. The first one performs many table queries and is analogous to a change of variables on the ideal of relations.

As each query to the table can be expensive, we design a second algorithm that, in general, requires fewer queries. This FGLM-like algorithm computes the relations of the table by extracting a full-rank submatrix of a *multi-Hankel* matrix (a multivariate generalization of Hankel matrices).

Under some additional assumptions, we design a third, adaptive algorithm that further reduces the number of table queries. We then relate the number of queries of this third algorithm to the *geometry* of the final staircase and show that it is essentially linear in the size of the output when the staircase is convex. As a direct application, we decode $n$-cyclic codes, a generalization to dimension $n$ of Reed–Solomon codes.

We show that the multi-Hankel matrices are heavily structured when using the LEX ordering and that we can speed up the computations using fast algorithms for quasi-Hankel matrices. Finally, we design algorithms for computing the generating series of a linear recursive table.
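As a toy illustration of the multi-Hankel viewpoint (a minimal sketch, not the algorithms of [6]), one can recover a linear recurrence of Pascal's table $u_{i,j}=\binom{i+j}{i}$ from the kernel of a small multi-Hankel matrix, e.g. with SymPy:

```python
from sympy import Matrix, binomial

# Toy 2-D table: u[i][j] = C(i+j, i) (Pascal's table).
N = 5
u = [[binomial(i + j, i) for j in range(N)] for i in range(N)]

# Columns: the monomials 1, x, y, xy, i.e. the shifts (0,0), (1,0), (0,1), (1,1).
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]

# Rows: evaluation points (r, s); entry M[(r,s), (a,b)] = u[r+a][s+b].
points = [(r, s) for r in range(3) for s in range(3)]
M = Matrix([[u[r + a][s + b] for (a, b) in shifts] for (r, s) in points])

# A kernel vector of the multi-Hankel matrix encodes a recurrence
# sum_{(a,b)} c_{a,b} u[i+a][j+b] = 0 valid on the sampled region.
ker = M.nullspace()
rel = ker[0] / ker[0][3]   # normalise the xy coefficient to 1
```

Here the (one-dimensional) kernel yields the relation $u_{i+1,j+1}=u_{i+1,j}+u_{i,j+1}$, i.e. Pascal's rule; the query-efficient algorithms of [6] aim precisely at building such a full-rank submatrix with few table entries.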

#### Guessing Linear Recurrence Relations of Sequence Tuples and P-recursive Sequences with Linear Algebra

Given several $n$-dimensional sequences, we first present in [23] an algorithm for computing the Gröbner basis of their module of linear recurrence relations.

A P-recursive sequence $(u_{\mathbf{i}})_{\mathbf{i}\in\mathbb{N}^{n}}$ satisfies linear recurrence relations with polynomial coefficients in $\mathbf{i}$, as defined by Stanley in 1980. Calling the aforementioned algorithm directly on the tuple of sequences $\left(\left(\mathbf{i}^{\mathbf{j}}\,u_{\mathbf{i}}\right)_{\mathbf{i}\in\mathbb{N}^{n}}\right)_{\mathbf{j}}$ to retrieve the relations yields redundant relations. Since the module of relations of a P-recursive sequence also has the extra structure of a 0-dimensional right ideal of an Ore algebra, we design a more efficient algorithm that takes advantage of this structure for computing the relations.
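The guessing-by-linear-algebra idea can be sketched on a classical 1-D P-recursive sequence, $u_i=i!$ with $u_{i+1}=(i+1)\,u_i$; the ansatz and the sequence are illustrative choices, not the paper's $n$-dimensional algorithm:

```python
from sympy import Matrix, factorial

# Toy 1-D P-recursive sequence: u_i = i!.
N = 8
u = [factorial(i) for i in range(N)]

# Ansatz with polynomial coefficients of degree <= 1 and shift <= 1:
#   c0*u[i] + c1*i*u[i] + c2*u[i+1] + c3*i*u[i+1] = 0  for all i.
row = lambda i: [u[i], i * u[i], u[i + 1], i * u[i + 1]]
M = Matrix([row(i) for i in range(N - 1)])

ker = M.nullspace()
rel = ker[0] / ker[0][2]   # normalise the u[i+1] coefficient to 1
```

The one-dimensional kernel recovers $u_{i+1}-(1+i)\,u_i=0$; the algorithm of [23] plays the same game on tuples of $n$-dimensional sequences while pruning the redundant relations.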

Finally, we show how to incorporate Gröbner bases computations in an Ore algebra $\mathbb{K}\langle t_{1},\dots,t_{n},x_{1},\dots,x_{n}\rangle$, with commutators $x_{k}\,x_{\ell}-x_{\ell}\,x_{k}=t_{k}\,t_{\ell}-t_{\ell}\,t_{k}=t_{k}\,x_{\ell}-x_{\ell}\,t_{k}=0$ for $k\ne\ell$ and $t_{k}\,x_{k}-x_{k}\,t_{k}=x_{k}$, into the algorithm designed for P-recursive sequences. This allows us to compute the Gröbner basis of the ideal spanned by the first relations faster, as in examples of walks in 2D/3D space.

#### On the Connection Between Ritt Characteristic Sets and Buchberger-Gröbner Bases

For any polynomial ideal $I$, let the minimal triangular set contained in the reduced Buchberger–Gröbner basis of $I$ with respect to the purely lexicographical term order be called the $W$-characteristic set of $I$. In [18], we establish a strong connection between Ritt’s characteristic sets and Buchberger’s Gröbner bases of polynomial ideals by showing that the $W$-characteristic set $C$ of $I$ is a Ritt characteristic set of $I$ whenever $C$ is an ascending set, and a Ritt characteristic set of $I$ can always be computed from $C$ with simple pseudo-division when $C$ is regular. We also prove that under a certain variable ordering, either the $W$-characteristic set of $I$ is normal, or irregularity occurs for the $j$th, but not the $(j+1)$th, elimination ideal of $I$ for some $j$. In the latter case, we provide explicit pseudo-divisibility relations, which lead to nontrivial factorizations of certain polynomials in the Buchberger–Gröbner basis and thus reveal the structure of such polynomials. The pseudo-divisibility relations may be used to devise an algorithm that decomposes arbitrary polynomial sets into normal triangular sets based on Buchberger–Gröbner bases computation.
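The extraction of the $W$-characteristic set from a reduced lexicographic Gröbner basis can be sketched as follows; the ideal $\langle x^{2}+y,\;xy+x\rangle$ is an assumed toy example, and the greedy selection follows the definition above:

```python
from sympy import symbols, groebner, Poly

# Assumed toy ideal (not from [18]): I = <x^2 + y, x*y + x>, lex with x > y.
x, y = symbols('x y')
gens = (x, y)

G = groebner([x**2 + y, x*y + x], *gens, order='lex')
polys = [Poly(g, *gens) for g in G.exprs]

def main_var_index(p):
    """Index of the greatest variable (0 = x) appearing in p."""
    for i, v in enumerate(gens):
        if p.degree(v) > 0:
            return i
    return len(gens)

# Sort the basis by leading monomial, ascending in lex.
polys.sort(key=lambda p: p.monoms(order='lex')[0])

# Greedy extraction of the minimal triangular set contained in the GB:
# keep a polynomial exactly when its main variable is strictly greater
# than that of the last polynomial kept.
w_char, last = [], len(gens)
for p in polys:
    mv = main_var_index(p)
    if mv < last:          # smaller index = greater variable
        w_char.append(p.as_expr())
        last = mv
```

For this ideal the reduced basis is $\{x^{2}+y,\;xy+x,\;y^{2}+y\}$ and the extracted $W$-characteristic set $\{y^{2}+y,\;xy+x\}$ is an ascending set, so by the result above it is a Ritt characteristic set of this toy ideal.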

#### On the complexity of computing Gröbner bases for weighted homogeneous systems

Solving polynomial systems arising from applications is frequently made easier by the structure of the systems. Weighted homogeneity (or quasi-homogeneity) is one example of such a structure: given a system of weights $W=(w_{1},\dots,w_{n})$, $W$-homogeneous polynomials are polynomials that are homogeneous w.r.t. the weighted degree $\deg_{W}(X_{1}^{\alpha_{1}}\cdots X_{n}^{\alpha_{n}})=\sum w_{i}\alpha_{i}$.

Gröbner bases for weighted homogeneous systems can be computed by adapting existing algorithms for homogeneous systems to the weighted homogeneous case. In [12], we show that in this case, the complexity estimate for Algorithm F5, $O\left(\binom{n+d_{\max}-1}{d_{\max}}^{\omega}\right)$, can be divided by a factor $\left(\prod w_{i}\right)^{\omega}$. For zero-dimensional systems, the complexity $O(nD^{\omega})$ of Algorithm FGLM (where $D$ is the number of solutions of the system) can be divided by the same factor $\left(\prod w_{i}\right)^{\omega}$. Under genericity assumptions, for zero-dimensional weighted homogeneous systems of $W$-degree $(d_{1},\dots,d_{n})$, these complexity estimates are polynomial in the weighted Bézout bound $\prod_{i=1}^{n}d_{i}/\prod_{i=1}^{n}w_{i}$.

Furthermore, the maximum degree reached in a run of Algorithm F5 is bounded by the weighted Macaulay bound $\sum ({d}_{i}-{w}_{i})+{w}_{n}$, and this bound is sharp if we can order the weights so that ${w}_{n}=1$. For overdetermined semi-regular systems, estimates from the homogeneous case can be adapted to the weighted case.
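A minimal numeric sketch of these quantities, for an assumed toy instance with weights $W=(2,1,1)$ and $W$-degrees $(4,4,4)$:

```python
from math import prod

# Assumed toy data: n = 3 variables, weights W = (2, 1, 1),
# a generic zero-dimensional system of W-degrees (4, 4, 4).
W = (2, 1, 1)
degs = (4, 4, 4)

def wdeg(alpha):
    """Weighted degree of the monomial X^alpha: sum of w_i * alpha_i."""
    return sum(w * a for w, a in zip(W, alpha))

# Weighted Bezout bound: prod(d_i) / prod(w_i).
bezout = prod(degs) // prod(W)

# Weighted Macaulay bound: sum(d_i - w_i) + w_n; the weights are
# ordered so that w_n = 1, which makes the bound sharp.
macaulay = sum(d - w for d, w in zip(degs, W)) + W[-1]
```

For this instance the weighted Bézout bound is $64/2=32$ solutions and the weighted Macaulay bound is $9$, versus $64$ and $10$ for the unweighted estimates.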

We provide some experimental results based on systems arising from a cryptography problem and from polynomial inversion problems. They show that taking advantage of the weighted homogeneous structure yields substantial speed-ups, and allows us to solve systems which were otherwise out of reach.

#### A Superfast Randomized Algorithm to Decompose Binary Forms

Symmetric Tensor Decomposition is a major problem that arises in areas such as signal processing, statistics, data analysis and computational neuroscience. It is equivalent to writing a homogeneous polynomial in $n$ variables of degree $D$ as a sum of $D$th powers of linear forms, using the minimal number of summands. This minimal number is called the rank of the polynomial/tensor. We consider the decomposition of binary forms, which corresponds to the decomposition of symmetric tensors of dimension 2 and order $D$. This problem has its roots in Invariant Theory, where the decompositions are known as canonical forms. As part of that theory, different algorithms were proposed for decomposing binary forms. In recent years, those algorithms were extended to the general symmetric tensor decomposition problem. We present in [22] a new randomized algorithm that enhances the previous approaches with results from structured linear algebra and techniques from linear recurrent sequences. It achieves a softly linear arithmetic complexity bound. To the best of our knowledge, the previously known algorithms have quadratic complexity bounds.
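The classical Sylvester-style approach that [22] accelerates can be sketched on a small rank-2 cubic; the form $f=x^{3}+(x+y)^{3}$ is an illustrative choice:

```python
from sympy import Matrix, symbols, roots

# Toy binary cubic f = x^3 + (x+y)^3 = 2x^3 + 3x^2 y + 3x y^2 + y^3,
# encoded by the sequence a_i with f = sum_i C(3,i) * a_i * x^(3-i) * y^i.
a = [2, 1, 1, 1]

# Hankel (catalecticant) matrix of the coefficient sequence; its kernel
# gives a polynomial whose roots are the ratios beta/alpha of the
# linear forms alpha*x + beta*y in the decomposition.
H = Matrix([[a[0], a[1], a[2]],
            [a[1], a[2], a[3]]])
c = H.nullspace()[0]

t = symbols('t')
p = c[0] + c[1] * t + c[2] * t**2
```

Here $p$ is proportional to $t(t-1)$, whose roots $0$ and $1$ recover the linear forms $x$ and $x+y$; the algorithm of [22] reaches such a decomposition in softly linear time by exploiting the Hankel structure rather than dense kernel computations.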

#### On the Bit Complexity of Solving Bilinear Polynomial Systems

In [29] we bound the Boolean complexity of computing isolating hyperboxes for all complex roots of systems of bilinear polynomials. The resultant of such systems admits a family of determinantal Sylvester-type formulas, which we make explicit by means of homological complexes. The computation of the determinant of the resultant matrix is a bottleneck for the overall complexity. We exploit the quasi-Toeplitz structure to reduce the problem to efficient matrix-vector products, corresponding to multivariate polynomial multiplication. For zero-dimensional systems, we arrive at a primitive element and a rational univariate representation of the roots. The overall bit complexity of our probabilistic algorithm is ${\tilde{O}}_{B}({n}^{4}{D}^{4}+{n}^{2}{D}^{4}\tau )$, where $n$ is the number of variables, $D$ equals the bilinear Bézout bound, and $\tau $ is the maximum coefficient bitsize. In addition, a careful infinitesimal symbolic perturbation of the system allows us to treat degenerate and positive dimensional systems, thus making our algorithms and complexity analysis applicable to the general case.
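On a toy bilinear system (an assumed example with one variable per block, not taken from [29]), the number of complex roots matches the bilinear Bézout bound $\binom{n_{x}+n_{y}}{n_{x}}$:

```python
from math import comb
from sympy import symbols, solve

# Assumed toy bilinear system: blocks {x} and {y} with n_x = n_y = 1,
# two equations of bidegree at most (1, 1) in (x, y).
x, y = symbols('x y')
f1 = x*y - 2
f2 = x*y + x - y - 2

sols = solve([f1, f2], [x, y])

# Bilinear Bezout bound for n_x = n_y = 1: C(n_x + n_y, n_x) = 2.
bound = comb(2, 1)
```

Subtracting the equations gives $x=y$, hence $x^{2}=2$: exactly two roots $(\pm\sqrt{2},\pm\sqrt{2})$, attaining the bound $D=2$ that enters the bit-complexity estimate above.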