Section: New Results
Fundamental algorithms and structured polynomial systems
Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences
The so-called Berlekamp–Massey–Sakata algorithm computes a Gröbner basis of a 0-dimensional ideal of relations satisfied by an input table. It extends the Berlekamp–Massey algorithm to n-dimensional tables, i.e. multidimensional sequences.
In [1], we investigate this problem and design several algorithms for computing such a Gröbner basis of an ideal of relations using linear algebra techniques. The first one performs many table queries and is analogous to a change of variables on the ideal of relations.
As each query to the table can be expensive, we design a second algorithm requiring fewer queries in general. This FGLM-like algorithm computes the relations of the table by extracting a full-rank submatrix of a multi-Hankel matrix (a multivariate generalization of Hankel matrices).
Under some additional assumptions, we design a third, adaptive algorithm that further reduces the number of table queries. We then relate the number of queries of this third algorithm to the geometry of the final staircase and show that it is essentially linear in the size of the output when the staircase is convex.
As a direct application, we decode n-dimensional cyclic codes, a generalization of Reed–Solomon codes.
We show that the multi-Hankel matrices are heavily structured when using the LEX ordering and that we can speed up the computations using fast algorithms for quasi-Hankel matrices. Finally, we design algorithms for computing the generating series of a linear recursive table.
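As a small illustration of the multi-Hankel construction (a sketch of ours, not code from [1]; the table u_{i,j} = 2^i 3^j and the monomial sets are choices made for the example), the relations of a 2-dimensional linearly recurrent table can be read off the kernel of a matrix whose rows and columns are indexed by monomials:

    import numpy as np

    def u(i, j):
        # Table u_{i,j} = 2^i 3^j, which satisfies u_{i+1,j} = 2 u_{i,j}
        # and u_{i,j+1} = 3 u_{i,j}.
        return 2.0**i * 3.0**j

    # Multi-Hankel matrix: rows and columns are indexed by monomials x^a y^b,
    # and the entry for (row (a,b), column (c,d)) is the table value u_{a+c,b+d}.
    rows = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]   # 1, x, y, x^2, xy, y^2
    cols = [(0, 0), (1, 0), (0, 1)]                           # 1, x, y
    H = np.array([[u(a + c, b + d) for (c, d) in cols] for (a, b) in rows])

    # A vector in the kernel of H encodes a relation on the column monomials.
    # Here the relations x - 2 and y - 3, i.e. u_{i+1,j} - 2 u_{i,j} = 0 and
    # u_{i,j+1} - 3 u_{i,j} = 0, are both detected.
    for name, vec in [("x - 2", [-2.0, 1.0, 0.0]), ("y - 3", [-3.0, 0.0, 1.0])]:
        print(name, "is a relation:", np.allclose(H @ np.array(vec), 0.0))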
In-depth comparison of the Berlekamp–Massey–Sakata and the Scalar-FGLM algorithms: the non-adaptive variants
In [22], we thoroughly compare the Berlekamp–Massey–Sakata algorithm and the Scalar-FGLM algorithm, which both compute the ideal of relations of a multidimensional linear recurrent sequence.
Surprisingly, their behaviors differ. We detail how they differ and prove that it is not possible to tweak one of the algorithms so as to mimic exactly the behavior of the other.
Resultants and Discriminants for Bivariate Tensor-product Polynomials
Optimal resultant formulas have been systematically constructed mostly for unmixed polynomial systems, that is, systems of polynomials which all have the same support. However, such a condition is restrictive, since mixed systems of equations arise frequently in practical problems. In [16] we present a square, Koszul-type matrix expressing the resultant of arbitrary (mixed) bivariate tensor-product systems. The formula generalizes the classical Sylvester matrix of two univariate polynomials, since it expresses a map of degree one, that is, the entries of the matrix are simply coefficients of the input polynomials. Interestingly, the matrix expresses a primal-dual multiplication map, that is, the tensor product of a univariate multiplication map with a map expressing derivation in a dual space. Moreover, for tensor-product systems with more than two (affine) variables, we prove an impossibility result: no universal degree-one formulas are possible, unless the system is unmixed. We also present applications of the new construction in the computation of discriminants and mixed discriminants as well as in solving systems of bivariate polynomials with tensor-product structure.
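To fix ideas, the classical univariate case that this construction generalizes can be sketched as follows (an illustration of ours, not code from [16]; the two polynomials are arbitrary): the Sylvester matrix of two univariate polynomials has their coefficients as entries, and its determinant, the resultant, vanishes exactly when the polynomials have a common root.

    import numpy as np

    def sylvester(f, g):
        # f and g are coefficient lists, highest degree first.
        m, n = len(f) - 1, len(g) - 1          # degrees of f and g
        S = np.zeros((m + n, m + n))
        for i in range(n):                     # n shifted copies of f
            S[i, i:i + m + 1] = f
        for i in range(m):                     # m shifted copies of g
            S[n + i, i:i + n + 1] = g
        return S

    f = [1.0, -3.0, 2.0]                       # x^2 - 3x + 2 = (x - 1)(x - 2)
    g = [1.0, -1.0]                            # x - 1 shares the root x = 1 with f
    print(np.linalg.det(sylvester(f, g)))      # ~0: the resultant vanishes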
Sparse Rational Univariate Representation
In [15] we present explicit worst-case degree and height bounds for the rational univariate representation of the isolated roots of polynomial systems based on mixed volume. We base our estimates on height bounds for resultants, and we consider the case of 0-dimensional, positive-dimensional, and parametric polynomial systems.
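For illustration (a toy example of ours, not taken from [15]), a rational univariate representation describes all the solutions of a system through one univariate polynomial and parametrizations of the coordinates. For the system x + y = 3, xy = 2 with the separating form t = x, one such representation (with trivial denominators in this toy case) is
\[
  w(t) = t^2 - 3t + 2, \qquad x = t, \qquad y = 3 - t,
\]
so the solutions (1, 2) and (2, 1) are recovered from the roots t = 1, 2 of w.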
Bit complexity for multi-homogeneous polynomial system solving
Multi-homogeneous polynomial systems arise in many applications. In [11], we provide bit complexity estimates for representing the solutions of such systems; these are currently the best known bounds. The assumptions essentially imply that the Jacobian matrix of the system under study has maximal rank on the solution set and that this solution set is finite.
We not only obtain bounds but also give an algorithm for solving such systems. Under some genericity assumptions, we give bit complexity estimates which, up to a few extra factors, are quadratic in the number of solutions and linear in the height of the input system.
The algorithm is probabilistic and a probability analysis is provided. Next, we apply these results to the problem of optimizing a linear map on the real trace of an algebraic set. Under some genericity assumptions, we provide bit complexity estimates for solving this polynomial minimization problem.
Improving Root Separation Bounds
Let f be a square-free polynomial. The root separation of f is the minimum of the pairwise distances between its distinct complex roots; a root separation bound is a lower bound on this quantity.
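For context (a classical bound recalled here as an illustration, not the new bounds of this work), Mahler's inequality for a square-free polynomial f of degree d states
\[
  \operatorname{sep}(f) \;>\; \frac{\sqrt{3}\,\lvert\operatorname{disc}(f)\rvert^{1/2}}{d^{(d+2)/2}\,\lVert f\rVert_2^{\,d-1}},
\]
and sharper bounds of this type directly improve the complexity estimates of root isolation algorithms that rely on them.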
Accelerated Approximation of the Complex Roots and Factors of a Univariate Polynomial
The known algorithms approximate the roots of a complex univariate polynomial in nearly optimal arithmetic and Boolean time. They are, however, quite involved and require a high precision of computing when the degree of the input polynomial is large, which causes numerical stability problems. In [8] we observe that these difficulties do not appear at the initial stages of the algorithms; we extend one of these stages, analyze it, and avoid the cited problems, still achieving nearly optimal complexity estimates, provided that some mild initial isolation of the roots of the input polynomial has been ensured. The resulting algorithms promise to be of practical value for root-finding and can be extended to the problem of polynomial factorization, which is of interest in its own right. We conclude by outlining such an extension, which enables us to cover the cases of isolated multiple roots and root clusters.
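As a down-to-earth illustration of the role played by initial isolation (a sketch of ours, not the algorithm of [8]; the polynomial and starting points are arbitrary), once every root is mildly isolated by a crude approximation, a plain Newton iteration refines it with quadratic convergence, so the remaining work is cheap:

    import numpy as np

    p = np.poly1d([1.0, 0.0, -7.0, 6.0])       # x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3)
    dp = p.deriv()

    def refine(z, steps=8):
        # Newton iteration z <- z - p(z)/p'(z); it converges quadratically
        # once z is closer to one root than to all the others.
        for _ in range(steps):
            z = z - p(z) / dp(z)
        return z

    for z0 in (0.8, 2.3, -3.4):                # crude, isolated starting points
        print(refine(z0))                      # -> 1.0, 2.0, -3.0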
Nearly optimal computations with structured matrices
In [9] we estimate the Boolean complexity of
multiplication of structured matrices by a vector and the solution of
nonsingular linear systems of equations with these matrices. We study
the four most basic and popular classes, that is, Toeplitz, Hankel, Cauchy
and Vandermonde matrices, for which the cited computational problems
are equivalent to the task of polynomial multiplication and division
and polynomial and rational multipoint evaluation and
interpolation. The Boolean cost estimates for the latter problems have
been obtained by Kirrinnis, except for rational interpolation. We
supply them now as well as the Boolean complexity estimates for the
important problems of multiplication of a transposed Vandermonde matrix
and of its inverse by a vector. All known Boolean cost estimates
for such problems rely on using the Kronecker product, which, in the
worst case, implies a d-fold increase of the computing precision for an
output of degree d.
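As a minimal illustration of the equivalences exploited here (a sketch of ours, with an arbitrary matrix and vector, not code from [9]), the product of a Toeplitz matrix by a vector is a slice of a polynomial product, so a fast polynomial multiplication yields a fast matrix-vector product:

    import numpy as np

    n = 4
    c = np.arange(1.0, 2 * n)                  # the 2n-1 diagonals: T[i, j] = c[i - j + n - 1]
    T = np.array([[c[i - j + n - 1] for j in range(n)] for i in range(n)])
    v = np.array([1.0, -2.0, 0.5, 3.0])

    # np.convolve(c, v) is the coefficient vector of the product of the
    # polynomials with coefficients c and v; its middle n entries equal T @ v.
    # Replacing the quadratic convolution by an FFT-based product gives the
    # softly linear running time.
    fast = np.convolve(c, v)[n - 1:2 * n - 1]
    print(np.allclose(T @ v, fast))            # True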
Sliding solutions of second-order differential equations with discontinuous right-hand side
In [2], we consider second-order ordinary differential equations with discontinuous right-hand side. We analyze the concept of solution for this kind of equation and determine analytical conditions that are satisfied by typical solutions. Moreover, we study the existence and uniqueness of solutions and of sliding solutions.
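As a simple numerical illustration (a sketch of ours, not taken from [2]; the dry-friction oscillator x'' = -x - mu*sign(x') and all constants are chosen for the example), on the switching surface x' = 0 with |x| <= mu the vector field points towards the surface from both sides, so the Filippov solution slides (sticks) there instead of crossing:

    import numpy as np

    mu, dt = 0.3, 1e-4
    x, v = 1.4, 0.0                            # start at rest outside the stick zone
    for _ in range(int(25.0 / dt)):
        if v == 0.0 and abs(x) <= mu:
            continue                           # sliding motion: x' remains identically 0
        a = -x - mu * np.sign(v) if v != 0.0 else -x + mu * np.sign(x)
        v_next = v + dt * a
        if v * v_next < 0.0 and abs(x) <= mu:
            v_next = 0.0                       # the trajectory is captured by the surface
        x, v = x + dt * v_next, v_next

    print(f"final state: x = {x:.3f}, x' = {v:.3f}")   # stuck with |x| <= mu and x' = 0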
Sparse FGLM algorithms
Given a zero-dimensional ideal