The main objective of the SALSA project is to solve systems of polynomial equations and inequations. We emphasize algebraic methods, which are more robust and frequently more efficient than purely numerical tools.
Polynomial systems have many applications in various scientific domains, academic as well as industrial. However, much work is still needed to define specifications for the output of the algorithms that are well adapted to the problems.
The variety of these applications implies that our software needs to be robust. In fact, almost all the problems we deal with are highly numerically unstable, and therefore the correctness of the result needs to be guaranteed.
Thus, a key target is to provide software that is competitive in terms of efficiency while preserving certified outputs. Therefore, we restrict ourselves to algorithms which verify the assumptions made on the input and check the correctness of possible random choices made during a computation, without sacrificing efficiency. The theoretical complexity of our algorithms is only a preliminary step of our work, which culminates in efficient implementations designed to solve significant applications.
A consequence of our way of working is that many of our contributions are related to applied topics such as cryptography, error correcting codes, robotics and signal theory. We emphasize that these applied contributions rely on a long-term and global management of the project, with clear and constant objectives leading to deep theoretical advances.
For polynomial system solving, the mathematical specification of the result of a computation, in particular when the number of solutions is infinite, is itself a difficult problem. Sorting the questions most frequently raised by the applications, one distinguishes several classes of problems which differ either by their mathematical structure or by the meaning that one can give to the word "solving".
Some of the following questions have a different meaning in the real case and in the complex case; others are posed only in the real case:
zerodimensional systems (with a finite number of complex solutions, which includes the particular case of univariate polynomials); the questions are in general well defined (numerical approximation, number of solutions, etc.) and the mathematical objects handled are relatively simple and well known;
parametric systems; they are generally zerodimensional for almost all values of the parameters. The goal is to characterize the solutions of the system (number of real solutions, existence of a parameterization, etc.) with respect to the parameters' values;
positive dimensional systems; for a direct application, the first question is the existence of zeros of a particular type (for example real, real positive, in a finite field). The resolution of such systems can be considered as a black box for the study of more general problems (semialgebraic sets for example), and the information to be extracted is generally the computation of a point per connected component in the real case;
constructible and semialgebraic sets; as opposed to what occurs numerically, the addition of constraints or inequalities complicates the problem. Even if semialgebraic sets represent the basic object of real geometry, their automatic and effective study remains a major challenge. To date, the state of the art is poor, since only two classes of methods exist:
the Cylindrical Algebraic Decomposition, which basically computes a partition of the ambient space into cells where the signs of a given set of polynomials are constant;
deformation-based methods that turn the problem into the study of algebraic varieties.
The first is limited in terms of performance (at most 3 or 4 variables) because of its recursive treatment variable by variable; the second is also limited, because of the use of a sophisticated arithmetic (formal infinitesimals).
quantified formulas; deciding efficiently whether a first order formula is valid is certainly one of the greatest challenges in "effective" real algebraic geometry. However, this problem is relatively well circumscribed, since it can always be rewritten as the conjunction of (supposedly) simpler problems such as the computation of a point per connected component of a semialgebraic set.
As explained in some parts of this document, the unicity of the studied mathematical objects does not imply the unicity of the related algorithms.
The pressure is relatively strong on zerodimensional systems, since we enter directly into competition with numerical methods (Newton, homotopy, etc.), seminumerical methods (interval analysis, eigenvalue computations, etc.) and formal methods (geometric resolution, resultant-based strategies, the XL algorithm in cryptography, etc.). The pressure is much lower on the other subjects: except for the groups working on the Cylindrical Algebraic Decomposition, very few studies with a practical vocation are to be found, and even fewer software achievements.
The priorities we put on our algorithmic work are generally dictated by the applications. Thus, one finds the classification stated above among the identified research topics of our project for the algorithmic part. For each of these goals, our work is to design the most efficient possible algorithms: there is thus a strong correlation between implementations and applications, but a significant part of the work is dedicated to the identification of black boxes allowing a modular approach to the problems. For example, the resolution of zerodimensional systems is a prerequisite for the algorithms treating parametric or positive dimensional systems.
An essential class of black boxes developed in the project does not appear directly in the objectives listed above: the "algebraic or complex" resolutions. They are mostly reformulations, more algorithmically usable, of the studied systems. One distinguishes two categories of complementary objects:
ideal representations; from a computational point of view, these are the structures which are used in the first steps;
variety representations; the algebraic variety, or more generally the constructible or semialgebraic set, is the studied object.
To give a simple example, in K[X, Y] the variety {(0, 0)} can be seen as the zero set of more or less complicated ideals (for example ideal(X, Y), ideal(X^2, Y), ideal(X^2, XY, Y^3), etc.). The input which is given to us is a system of equations, i.e. an ideal. It is essential, in many cases, to understand the structure of this object to be able to treat the degenerate cases correctly. A striking example is certainly the study of the singularities. To take again the preceding example, the variety is not singular, but this cannot be detected by the blind application of the Jacobian criterion (one could wrongly think that all the points are singular, contradicting, for example, Sard's lemma).
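The pitfall above can be made concrete with a small numerical sketch (our own illustration, not project software). The ideal (X^2, Y) and the radical ideal (X, Y) define the same single point (0, 0), yet the Jacobian of the generators [X^2, Y] drops rank there, so a blind rank test wrongly flags the point as singular:

```python
def jacobian_rank_at(partials, point, tol=1e-12):
    """Rank of a Jacobian given as a matrix of callables, evaluated at a point."""
    m = [[d(*point) for d in row] for row in partials]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        # Find a pivot in column c below row r (plain Gaussian elimination).
        piv = next((i for i in range(r, rows) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Partial derivatives of the non-radical generators [X^2, Y], hand-coded.
jac_nonradical = [[lambda x, y: 2 * x, lambda x, y: 0.0],   # d(X^2)/dX, d(X^2)/dY
                  [lambda x, y: 0.0,   lambda x, y: 1.0]]   # d(Y)/dX,   d(Y)/dY
# Partial derivatives of the radical generators [X, Y] of the same point.
jac_radical = [[lambda x, y: 1.0, lambda x, y: 0.0],
               [lambda x, y: 0.0, lambda x, y: 1.0]]

print(jacobian_rank_at(jac_nonradical, (0.0, 0.0)))  # 1: the point *looks* singular
print(jacobian_rank_at(jac_radical, (0.0, 0.0)))     # 2: the point is in fact smooth
```

Only after passing to the radical does the rank of the Jacobian reflect the true (smooth) geometry of the point.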
The basic tools that we develop and use to understand the algebraic and geometric structures in an automatic way are, on the one hand, Gröbner bases (the best-known object used to represent an ideal without loss of information) and, on the other hand, triangular sets (an effective way to represent varieties).
On these two points, the pressure is strong, since many teams work on these two objects. To date, however, our project has a significant lead in the computation of Gröbner bases (Faugère's F4 and F5 algorithms) and is a main contributor to the homogenization and comprehension of triangular structures.
Let us denote by K[X_1, ..., X_n] the ring of polynomials with coefficients in a field K and indeterminates X_1, ..., X_n, and let S = {P_1, ..., P_s} be any subset of K[X_1, ..., X_n]. A point x is a zero of S if P_i(x) = 0 for all i in [1 ... s].
The ideal I generated by P_1, ..., P_s is the set of polynomials in K[X_1, ..., X_n] constituted by all the combinations q_1 P_1 + ... + q_s P_s with q_i in K[X_1, ..., X_n]. Every element of I vanishes at each zero of S. We denote by V_C(S) (resp. V_R(S)) the set of complex (resp. real) zeros of S, where R is a real closed field containing K and C its algebraic closure.
One of the main properties of Gröbner bases is to provide an algorithmic method for deciding whether a polynomial belongs to an ideal, through a reduction function denoted "Reduce" from now on.
If G is a Gröbner basis of an ideal I for any monomial ordering <, then:
a polynomial p belongs to I if and only if Reduce(p, G, <) = 0;
Reduce(p, G, <) does not depend on the order of the polynomials in the list G; thus it is a canonical reduced expression modulo I, and the Reduce function can be used as a simplification function.
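The reduction (normal form) computation can be sketched in a few lines of exact arithmetic. In this illustrative sketch (ours, not the project's implementation), a polynomial is a dictionary mapping exponent tuples to rational coefficients, monomials are compared lexicographically, and the set G = {X^2, Y} is used as a toy lexicographic Gröbner basis (it is one, since its leading terms are coprime):

```python
from fractions import Fraction

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def reduce_poly(p, G):
    """Normal form of polynomial p modulo the list G, w.r.t. lex order.
    A polynomial is a dict mapping exponent tuples to Fraction coefficients."""
    p = {m: Fraction(c) for m, c in p.items() if c}
    remainder = {}
    while p:
        lm = max(p)                 # exponent tuples compare lexicographically
        lc = p.pop(lm)
        for g in G:
            glm = max(g)
            if divides(glm, lm):
                # Cancel lm with the multiple (lc/lc(g)) * X^(lm-glm) * g.
                factor = lc / g[glm]
                shift = tuple(a - b for a, b in zip(lm, glm))
                for m, c in g.items():
                    if m == glm:
                        continue
                    mm = tuple(a + b for a, b in zip(m, shift))
                    p[mm] = p.get(mm, Fraction(0)) - factor * c
                    if not p[mm]:
                        del p[mm]
                break
        else:
            remainder[lm] = lc      # lm is reducible by no element of G
    return remainder

# G = {X^2, Y} is a lex (X > Y) Groebner basis of the ideal it generates.
G = [{(2, 0): Fraction(1)}, {(0, 1): Fraction(1)}]

p = {(3, 0): Fraction(1), (1, 1): Fraction(1), (0, 2): Fraction(1)}  # X^3 + X*Y + Y^2
q = {(1, 0): Fraction(1), (0, 0): Fraction(1)}                       # X + 1

print(reduce_poly(p, G))  # {} : p reduces to 0, so p lies in the ideal
print(reduce_poly(q, G))  # X + 1 is left unchanged, so q does not
```

The membership test of the text is exactly the "reduces to 0" check; the returned remainder is the canonical representative modulo the ideal.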
Gröbner bases are computable objects. The most popular method for computing them is Buchberger's algorithm. It has several variants and is implemented in most general computer algebra systems like Maple or Mathematica. The computation of Gröbner bases using Buchberger's original strategies faces two kinds of problems:
(A) arbitrary choices: the order in which the computations are done has a dramatic influence on the computation time;
(B) useless computations: the original algorithm spends most of its time computing reductions to zero.
For problem (A), J.C. Faugère proposed, with the F4 algorithm, a new generation of powerful algorithms based on the intensive use of linear algebra techniques. In short, the arbitrary choices are left to computational strategies related to classical linear algebra problems (matrix inversions, linear systems, etc.).
For problem (B), J.C. Faugère proposed a new criterion for detecting useless computations. Under some regularity conditions on the system, it is now proved that the algorithm never performs useless computations.
A new algorithm named F5 was built using these two key results. Even if it still computes a Gröbner basis, the gap with the other existing strategies is substantial. In particular, given the range of examples that become computable, Gröbner bases can now be considered as reasonably computable objects in large applications.
We pay particular attention to Gröbner bases computed for elimination orderings, since they provide a way of "simplifying" the system (an equivalent system with a structured shape). For example, a lexicographic Gröbner basis always has a shape structured by blocks of variables (some of the polynomials may be identically null). A well-known property is that the zeros of the first non-null polynomial define the Zariski closure (classical closure in the case of complex coefficients) of the projection onto the coordinate space associated with the smallest variables.
A triangular set is a system of the shape {t_1(X_1), t_2(X_1, X_2), ..., t_n(X_1, ..., X_n)}, where each t_i involves only the variables X_1, ..., X_i (some polynomials may be identically null).
Such systems are algorithmically easy to use, for computing numerical approximations of the solutions in the zerodimensional case or for studying the singularities of the associated variety (triangular minors in the Jacobian matrices). Except when they are linear, algebraic systems cannot, in general, be rewritten as a single triangular set; one then speaks of decomposing the system into several triangular sets.
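The ease of use in the zerodimensional case comes from plain back substitution: solve the first (univariate) polynomial, substitute each root into the second, and so on. The following toy sketch (a hypothetical example of ours, with every t_i quadratic in its main variable) solves the triangular set {X1^2 - 2, X2^2 - X1} over the reals:

```python
from math import sqrt

# Toy zero-dimensional triangular set:
#   t1 = X1^2 - 2         (univariate in X1)
#   t2 = X2^2 - X1        (univariate in X2 once X1 is fixed)

def real_sqrt_roots(value):
    """Real roots of Z^2 - value."""
    if value < 0:
        return []
    if value == 0:
        return [0.0]
    r = sqrt(value)
    return [-r, r]

solutions = []
for x1 in real_sqrt_roots(2):          # roots of t1
    for x2 in real_sqrt_roots(x1):     # roots of t2 after substituting X1 = x1
        solutions.append((x1, x2))

# Only the branch X1 = +sqrt(2) yields real values for X2,
# so exactly two of the four complex solutions are real.
for s in sorted(solutions):
    print(s)
```

Note how the real branch X1 = -sqrt(2) is discarded at the second level: the triangular shape makes such case analysis local to one variable at a time.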
Triangular sets appear under various names in the field of algebraic systems. In 1932, J.F. Ritt introduced them as characteristic sets for prime ideals in the context of differential algebra. His constructive algebraic tools were adapted by W.T. Wu in the late seventies for geometric applications. Wu presented an algorithm for computing characteristic sets of finite polynomial sets which do not necessarily generate prime ideals. Following Wu, several authors such as S.C. Chou, X.S. Gao, G. Gallo, B. Mishra and D. Wang developed this approach to make it more efficient.
In 1991, Lazard and Kalkbrener presented triangular decomposition algorithms with nicer properties for their outputs, based on additional requirements on the triangular sets and on a generalization of the gcd of univariate polynomials over a product of fields.
The concept of regular chain is adapted for recursive computations in a univariate way and provides a membership test and a zerodivisor test for the strongly unmixed dimensional ideal it defines. Kalkbrener defined regular triangular sets and showed how to decompose algebraic varieties as unions of Zariski closures of zeros of regular triangular sets. Gallo showed that the principal component of a triangular decomposition can be computed in O(d^{O(n^2)}) operations (n the number of variables, d the degree in the variables). During the 90s, implementations of various decomposition strategies multiplied, but with relatively heterogeneous specifications.
Following Kalkbrener's work, Aubry presented an algorithm for decomposing the radical of an ideal into separable regular chains that define radical strongly unmixed dimensional ideals.
P. Aubry and D. Lazard contributed to the homogenization of the work completed in this field by proposing a series of specifications and definitions gathering the whole of the former work. Two concepts essential for the use of these sets (regularity, separability) now make it possible both to establish a simple link with the studied varieties and to specify the computed objects precisely.
For a polynomial p, we denote by mvar(p) (and we call main variable of p) the greatest variable appearing in p w.r.t. a fixed ordering on the variables. For a triangular set T = {t_1, ..., t_n}, we denote by:
h_i the leading coefficient of t_i (when t_i ≠ 0, t_i being seen as a univariate polynomial in its main variable), called its initial, and h the product of the nonzero initials;
s_i the separant of t_i (when t_i ≠ 0), i.e. the derivative of t_i with respect to its main variable;
sat(T) the saturation of the ideal generated by T by h. The variety of T is V(sat(T)), which is the Zariski closure of the set of zeros of T that do not cancel h (elementary property of localization).
A triangular set T is said to be regular (resp. separable) if, for every i in {1, ..., n} such that t_i ≠ 0, the normalization of its initial h_i (resp. of its separant s_i) is a nonzero polynomial.
One can always decompose a variety as the union of the varieties of regular and separable triangular sets.
A remarkable and fundamental property in our use of triangular sets is that the ideals sat(T_i), for regular and separable triangular sets, are radical and equidimensional. These properties are essential for some of our algorithms. For example, having radical and equidimensional ideals allows us to compute straightforwardly the singular locus of a variety by canceling the minors of the right dimension in the Jacobian matrix of the system. This is naturally a basic tool for some algorithms in real algebraic geometry.
Techniques based on triangular sets are efficient for specific problems like computing Galois ideals, but the implementations of direct decompositions into triangular sets do not currently reach the level of efficiency of Gröbner bases in terms of computable classes of examples. In any case, our team benefits from the progress carried out in this last field, since we currently perform decompositions into regular and separable triangular sets through lexicographic Gröbner bases computations (the process provides, in the meantime, Gröbner bases of the ideals sat(T_i)).
A system is zerodimensional if the set of the solutions in an algebraically closed field is finite. In this case, the set of solutions does not depend on the chosen algebraically closed field.
Such a situation can easily be detected on a Gröbner basis for any admissible monomial ordering.
These systems are mathematically particular, since solving them can systematically be reduced to linear algebra problems. More precisely, the algebra K[X_1, ..., X_n]/I is a K-vector space of dimension equal to the number of complex roots of the system (counted with multiplicities). We chose to exploit this structure. Accordingly, computing a basis of K[X_1, ..., X_n]/I is essential. A Gröbner basis gives a canonical projection from K[X_1, ..., X_n] to K[X_1, ..., X_n]/I, and thus provides a basis of the quotient algebra and much other information more or less straightforwardly (the number of complex roots, for example).
The use of this vector-space structure is well known and at the origin of one of the best-known algorithms of the field: it allows one to deduce, starting from a Gröbner basis for any ordering, a Gröbner basis for any other ordering (in practice, a lexicographic basis, which is very difficult to compute directly). It is also common to certain seminumerical methods, since it allows one to obtain quite simply (by an eigenvalue computation, for example) the numerical approximation of the solutions (this type of algorithm is developed, for example, in the INRIA Galaad project).
Contrary to what is written in part of the literature, the computation of Gröbner bases is not "doubly exponential" for all classes of problems. In the case of zerodimensional systems, it is even known to be simply exponential in the number of variables, for a degree ordering and for systems without zeros at infinity. Thus, an effective strategy consists in computing a Gröbner basis for a favorable ordering and then deducing, by linear algebra techniques, a Gröbner basis for a lexicographic ordering.
The case of zerodimensional systems is also specific for triangular sets. Indeed, in this particular case, we have designed algorithms that compute them efficiently starting from a lexicographic Gröbner basis. Note that, in the case of zerodimensional systems, regular triangular sets are Gröbner bases for a lexicographic order.
Many teams work on Gröbner bases, and some use triangular sets in the case of zerodimensional systems, but to our knowledge, very few carry the work through to a numerical resolution, and even fewer tackle the specific problem of computing the real roots. It is illusory, in practice, to hope to obtain in a reliable way a numerical approximation of the solutions straightforwardly from a lexicographic basis, or even from a triangular set. This is mainly due to the size of the coefficients in the result (rational numbers).
Our specificity is to carry out the computations to their term thanks to two types of results:
the computation of the Rational Univariate Representation: we have shown that any zerodimensional system, depending on the variables X_1, ..., X_n, can systematically be rewritten, without loss of information (multiplicities, real roots), in the form f(T) = 0, X_i = g_i(T)/g(T), i = 1 ... n, where the polynomials f, g, g_1, ..., g_n have coefficients in the same ground field as those of the system and where T is a new variable (independent from X_1, ..., X_n);
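Once a Rational Univariate Representation is known, solving reduces to one univariate problem plus rational-function evaluations. The sketch below (our own hypothetical RUR for the toy system {X^2 - 2 = 0, Y - X - 1 = 0}, with T a separating variable, f(T) = T^2 - 2, g(T) = 2T, g_X(T) = 2T^2, g_Y(T) = 2T^2 + 2T) isolates the real roots of f by exact bisection and then recovers the coordinates:

```python
from fractions import Fraction

def horner(coeffs, t):
    """Evaluate a polynomial given by coefficients [a_n, ..., a_0] at t."""
    v = Fraction(0)
    for c in coeffs:
        v = v * t + c
    return v

def bisect_root(coeffs, lo, hi, steps=60):
    """Refine a sign-changing interval of the polynomial by bisection."""
    lo, hi = Fraction(lo), Fraction(hi)
    assert horner(coeffs, lo) * horner(coeffs, hi) < 0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if horner(coeffs, lo) * horner(coeffs, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical RUR of {X^2 - 2 = 0, Y - X - 1 = 0} (our example):
#   f(T) = T^2 - 2,  g(T) = 2T,  gX(T) = 2T^2,  gY(T) = 2T^2 + 2T
f, g = [1, 0, -2], [2, 0]
gX, gY = [2, 0, 0], [2, 2, 0]

roots = [bisect_root(f, -2, 0), bisect_root(f, 0, 2)]   # the two real roots of f
solutions = [(horner(gX, t) / horner(g, t), horner(gY, t) / horner(g, t))
             for t in roots]
for x, y in solutions:
    print(float(x), float(y))   # approx. (-1.414, -0.414) and (1.414, 2.414)
```

All intermediate arithmetic stays in exact rationals, which is precisely why coefficient growth (mentioned above) matters in practice.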
efficient algorithms for solving univariate polynomials (real root isolation and counting).
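One classical tool behind real root counting is the Sturm sequence: the difference in sign variations of the sequence at the interval endpoints counts the distinct real roots inside. This is only a minimal textbook sketch in exact rational arithmetic (the project's actual solvers use more sophisticated methods):

```python
from fractions import Fraction

def strip_leading_zeros(p):
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_rem(num, den):
    """Remainder of num / den (coefficient lists, highest degree first)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(num) >= len(den):
        factor = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= factor * den[i]
        num.pop(0)          # the leading coefficient is now zero
    return strip_leading_zeros(num) or [Fraction(0)]

def derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [0]

def sturm_sequence(p):
    seq = [p, derivative(p)]
    while True:
        r = poly_rem(seq[-2], seq[-1])
        if len(r) == 1 and r[0] == 0:
            return seq
        seq.append([-c for c in r])     # next term is minus the remainder

def horner(p, t):
    v = Fraction(0)
    for c in p:
        v = v * t + c
    return v

def sign_variations(seq, t):
    signs = [s for s in (horner(q, t) for q in seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def count_real_roots(p, a, b):
    """Number of distinct real roots of p in the interval (a, b]."""
    seq = sturm_sequence(p)
    return sign_variations(seq, Fraction(a)) - sign_variations(seq, Fraction(b))

p = [1, 0, -3, 1]                    # X^3 - 3X + 1 has three real roots
print(count_real_roots(p, -2, 2))    # 3
print(count_real_roots(p, 0, 2))     # 2
```

Repeated counting on bisected intervals turns this counter into a root isolation procedure.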
Thus, the use of innovative algorithms for Gröbner bases computations and for Rational Univariate Representations (in the "shape position" case as well as in the general case) allows us to use zerodimensional solving as a subtask in other algorithms.
When a system is positive dimensional (with an infinite number of complex roots), it is no longer possible to enumerate the solutions. Therefore, the solving process reduces to decomposing the set of solutions into subsets which have a well-defined geometry. One may perform such a decomposition from an algebraic point of view or from a geometric one, the latter meaning that multiplicities are not taken into account (the structure of primary components of the ideal is lost).
Although there exist algorithms for both approaches, the algebraic point of view is presently out of reach of practical computation, and we restrict ourselves to geometric decompositions.
When one studies the solutions in an algebraically closed field, the useful decompositions are the equidimensional decomposition (which consists in considering separately the isolated solutions, the curves, the surfaces, ...) and the prime decomposition (which decomposes the variety into irreducible components). In practice, our team works on algorithms for decomposing the system into regular separable triangular sets, which corresponds to a decomposition into equidimensional but not necessarily irreducible components. The irreducible components may eventually be obtained by using polynomial factorization.
However, in many situations one is looking only for the real solutions satisfying some inequalities (P_i > 0 or P_i ≥ 0).
There are general algorithms for such tasks, which rely on Tarski's quantifier elimination. Unfortunately, these problems have a very high complexity, usually doubly exponential in the number of variables or in the number of blocks of quantifiers, and these general algorithms are intractable. It follows that the output of a solver should be restricted to a partial description of the topology or of the geometry of the set of solutions, and our research consists in looking for more specific problems, which are interesting for the applications and which may be solved with a reasonable complexity.
We focus on two main problems:
computing one point on each connected component of a semialgebraic set;
solving systems of equalities and inequalities depending on parameters.
The most widespread algorithm computing sampling points in a semialgebraic set is the Cylindrical Algebraic Decomposition algorithm due to Collins. With slight modifications, this algorithm also solves the problem of quantifier elimination. It is based on the recursive elimination of the variables one after another, ensuring nice properties between the components of the studied semialgebraic set and the components of the semialgebraic sets defined by the polynomial families obtained by eliminating variables. It is doubly exponential in the number of variables, and its best implementations are limited to problems in 3 or 4 variables.
Since the end of the eighties, alternative strategies with a single exponential complexity in the number of variables have been developed. They are based on the progressive construction of the following subroutines:
(a) solving zerodimensional systems: this can be performed by computing a Rational Univariate Representation;
(b) computing sampling points in a real hypersurface: after some infinitesimal deformations, this is reduced to problem (a) by computing the critical locus of a polynomial mapping reaching its extrema on each connected component of the real hypersurface;
(c) computing sampling points in a real algebraic variety defined by a polynomial system: this is reduced to problem (b) by considering the sum of squares of the polynomials;
(d) computing sampling points in a semialgebraic set: this is reduced to problem (c) by applying an infinitesimal deformation.
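The sum-of-squares reduction from a system to a single hypersurface rests on a simple real fact: a sum of squares of real numbers vanishes exactly when every term vanishes. A tiny sanity-check sketch (our own toy polynomials, not from the project):

```python
# Over the reals, the zero set of {p1 = 0, p2 = 0} equals the zero set of the
# single polynomial p1^2 + p2^2.

def p1(x, y):
    return x + y - 1

def p2(x, y):
    return x - y

def sum_of_squares(x, y):
    return p1(x, y) ** 2 + p2(x, y) ** 2

# Sample points: the common zero (0.5, 0.5) and a few points off the variety.
samples = [(0.5, 0.5), (1.0, 0.0), (0.0, 0.0), (2.0, -1.0)]
for pt in samples:
    both_vanish = (p1(*pt) == 0 and p2(*pt) == 0)
    sos_vanishes = (sum_of_squares(*pt) == 0)
    print(pt, both_vanish, sos_vanishes)   # the two flags agree at every point
```

Note that this equivalence is specific to the real case: over the complex numbers a sum of squares can vanish without each term vanishing.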
On the one hand, the relevance of this approach is based on the fact that its complexity is asymptotically optimal. On the other hand, some important algorithmic developments have been necessary to obtain efficient implementations of subroutines (b) and (c).
During the last years, we focused on providing efficient algorithms solving problems (b) and (c). The method used relies on finding a polynomial mapping reaching its extrema on each connected component of the studied variety, such that its critical locus is zerodimensional. For example, in the case of a smooth hypersurface whose real counterpart is compact, choosing a projection on a line is sufficient. This method is called in the sequel the critical point method. We started by studying problem (b).
Even if we showed that our solution can solve new classes of problems, we have chosen to skip the reduction to problem (b), which is now considered as a particular case of problem (c), in order to avoid an artificial growth of the degree and the introduction of singularities and infinitesimals.
Putting the critical point method into practice in the general case requires dropping some hypotheses. First, the compactness assumption, which is in fact intimately related to an implicit properness assumption, has to be dropped. Second, algebraic characterizations of critical loci are based on nondegeneracy assumptions on the rank of the Jacobian matrix associated with the studied polynomial system. These hypotheses are not satisfied as soon as this system defines a nonradical ideal, a non equidimensional variety, or a nonsmooth variety. Our contributions consist in overcoming these obstacles efficiently, and several strategies have been developed.
The properness assumption can be dropped by considering the square of the distance function to a generic point instead of a projection function: indeed, each connected component contains at least a point minimizing this function locally. Performing a radical and equidimensional decomposition of the ideal generated by the studied polynomial system makes it possible to avoid some degeneracies of its associated Jacobian matrix. Finally, the recursive study of nested singular loci allows us to deal with the case of nonsmooth varieties. These algorithmic advances yield a first algorithm with reasonable practical performance.
Since projection functions are linear while the distance function is quadratic, computing the critical points of the former is easier, and we have also investigated their use. A first approach consists in recursively studying the critical locus of projection functions on nested affine subspaces containing coordinate axes, combined with the study of their sets of nonproperness. A more efficient one, avoiding the study of sets of nonproperness, is obtained by iteratively considering projections on generic affine subspaces restricted to the studied variety, together with fibers over arbitrary points of these subspaces intersected with the critical locus of the corresponding projection. The underlying algorithm is the most efficient we have obtained.
These algorithms are provided in the Maple library RAGLib, which is built upon the software packages Gb and RS. It contains functionalities for computing sample points in real algebraic varieties and in semialgebraic sets defined by nonstrict inequalities, as well as the radical and equidimensional decomposition of an ideal. The experimental version of the most recent algorithm will be included in the next release, which is in preparation.
Most of the applications we recently solved (celestial mechanics, cuspidal robots, statistics, etc.) require the study of semialgebraic systems depending on parameters. Although we tackled these subjects in an independent way, some general algorithms for the resolution of this type of system can be proposed from these experiments.
The general philosophy consists in studying the generic solutions separately from algebraic subvarieties (which we call from now on discriminant varieties) of dimension lower than that of the semialgebraic set considered. The study of the varieties thus excluded can be done separately to obtain a complete answer to the problem, or is simply neglected if one is interested only in the generic solutions, which is the case in some applications.
We recently proposed a new framework for studying basic constructible (resp. semialgebraic) sets defined as systems of equations and inequations (resp. inequalities) depending on parameters. Let us consider the basic semialgebraic set S = {x : p_1(x) = 0, ..., p_s(x) = 0, f_1(x) > 0, ..., f_l(x) > 0} and the basic constructible set C = {x : p_1(x) = 0, ..., p_s(x) = 0, f_1(x) ≠ 0, ..., f_l(x) ≠ 0}, where the p_i, f_j are polynomials with rational coefficients.
[U, X] = [U_1, ..., U_d, X_{d+1}, ..., X_n] is the set of indeterminates or variables, while U = [U_1, ..., U_d] is the set of parameters and X = [X_{d+1}, ..., X_n] is the set of unknowns;
{p_1, ..., p_s} is the set of polynomials defining the equations;
{f_1, ..., f_l} is the set of polynomials defining the inequations in the complex case (resp. the inequalities in the real case);
for any u in C^d, we denote by φ_u the specialization of the parameters U to u;
Π_U denotes the canonical projection on the parameters' space;
given any ideal I, we denote by V(I) the associated (algebraic) variety. If a variety is defined as the zero set of polynomials with coefficients in Q, we call it a Q-algebraic variety; we extend this notation naturally in order to talk about Q-irreducible components, Q-Zariski closure, etc.;
for any set A, we denote by cl(A) its Zariski closure.
In most applications, C ∩ Π_U^{-1}(u) as well as S ∩ Π_U^{-1}(u) are finite and nonempty for almost all parameters' values u. Most algorithms that study C or S (number of real roots w.r.t. the parameters, parameterizations of the solutions, etc.) compute in any case a Zariski closed set W such that, for any u outside W, there exists a neighborhood V_u of u with the following property: Π_U^{-1}(V_u) ∩ C is an analytic covering of V_u; this implies that the f_j do not vanish (and so have constant sign in the real case) on the connected components of Π_U^{-1}(V_u) ∩ C.
We recently showed that the set of parameters' values u for which no neighborhood with the above analytic covering property exists is a Zariski closed set which can be computed exactly. We name it the minimal discriminant variety of C with respect to Π_U, and we also propose a definition in the case of non generically zerodimensional systems.
Being able to compute the minimal discriminant variety makes it possible to reduce a problem depending on n variables to a similar problem depending on d variables (the parameters): it is sufficient to describe its complement in the parameters' space (or in the closure of the projection of the variety in the general case) to get the full information about the generic solutions (here generic means for parameters' values outside the discriminant variety).
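A one-parameter toy case (our own illustration, worked by hand rather than by the project's algorithms) makes the notion concrete. For the system X^2 - u = 0, the minimal discriminant variety w.r.t. the projection on u is {u = 0}: on each connected component of its complement, (-∞, 0) and (0, +∞), the number of real roots is constant, so one sample value per component describes all generic behaviours:

```python
from math import sqrt

# Parametric system X^2 - u = 0, one parameter u.
# Discriminant variety: {u = 0}, where the two simple roots collapse.
def real_roots(u):
    if u < 0:
        return []                 # no real root on the component u < 0
    if u == 0:
        return [0.0]              # on the discriminant variety: a double root
    return [-sqrt(u), sqrt(u)]    # two simple real roots on the component u > 0

# One sample parameter value per connected component of the complement suffices.
print(len(real_roots(-3.0)))      # 0 real roots, as for every u < 0
print(len(real_roots(5.0)))       # 2 real roots, as for every u > 0
```

The general algorithms do exactly this, with the discriminant variety computed symbolically and the components of its complement described by the real-geometry tools above.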
Then, being able to describe the connected components of the complement of the discriminant variety in the parameters' space becomes a main challenge, which is strongly linked to the work done on positive dimensional systems. Moreover, rewriting the systems involved and solving zerodimensional systems are major components of the algorithms we plan to build up.
We currently propose several computational strategies. An a priori decomposition into equidimensional components, given as zeros of radical ideals, simplifies the computation and the use of discriminant varieties. This preliminary computation is however sometimes expensive, so we are developing adaptive solutions where such decompositions are called by need. The main progress is that the resulting methods are fast on easy (generic) problems and slower on problems with strong geometrical contents.
We also defined (large) discriminant varieties of C with respect to Π_U as being any Zariski closed set W containing the minimal discriminant variety of C with respect to Π_U (the minimal discriminant variety is the smallest discriminant variety, and it is uniquely defined).
The existing implementations of algorithms able to "solve" (to get some information about the roots of) parametric systems all compute, directly or indirectly, discriminant varieties, but none computes optimal objects (the minimal discriminant variety). The consequence is that their outputs (case distinctions w.r.t. the parameters' values) are huge compared with the results we can provide.
Algorithms based on "Comprehensive Gröbner bases" also compute, implicitly or explicitly, discriminant varieties. In the case of parametric systems, such a discriminant variety contains the parameters' values for which a Gröbner basis does not specialize properly. Again, it is far from being optimal, since it contains the parameters' values where the staircase varies, which depend on the strategy used (for example the choice of a monomial ordering).
Applications are fundamental for our research for several reasons.
The first one is that they are the only source of fair tests for the algorithms. In fact, the complexity of the solving process depends very irregularly on the problem itself. Therefore, random tests do not give a right idea of the practical behavior of a program, and the complexity analysis, when possible, does not necessarily provide realistic information.
A second reason is that, as noted above, we need real world problems to determine which specifications of the algorithms are really useful. Conversely, it is frequently by solving specific problems through ad hoc methods that we find new algorithms with a general impact.
Finally, obtaining successes with problems which are intractable by the other known approaches is the best proof for the quality of our work.
On the other hand, there is a specific difficulty. The problems which may be solved with our methods can be formulated in many different ways, and their usual formulation is rarely well suited for polynomial system solving or for exact computations. Frequently, it is not even clear that the problem is purely algebraic, because researchers and engineers are used to formulating problems in a differential way or to linearizing them.
Therefore, our software cannot be used as black boxes, and we have to understand the origin of the problem in order to translate it into a form which is well suited for our solvers.
It follows that many of our results, published or in preparation, are classified in scientific domains which are different from ours, like cryptography, error correcting codes, robotics, signal processing, statistics or biophysics.
The idea of using multivariate (quadratic) equations as a basis for building public key cryptosystems appeared with the Matsumoto-Imai cryptosystem. This system was first broken by Patarin and, shortly after, Patarin proposed to repair it and thus devised the hidden field equation (HFE) cryptosystem.
The basic idea of HFE is simple: build the secret key as a univariate polynomial S(x) over some (big) finite field. Clearly, such a polynomial can be easily evaluated; moreover, under reasonable hypotheses, it can also be ``inverted'' quite efficiently. By inverting, we mean finding any solution to the equation S(x) = y, when such a solution exists. The secret transformations (decryption and/or signature) are based on this efficient inversion. Of course, in order to build a cryptosystem, the polynomial S must be presented as a public transformation which hides the original structure and prevents inversion. This is done by viewing the finite field as a vector space over its base field and by choosing two linear transformations L_1 and L_2 of this vector space. Then the public transformation is the composition of L_1, S and L_2. Moreover, if all the exponents appearing in the polynomial S(x) have Hamming weight 2, then it is obvious that all the (multivariate) polynomials of the public key are of degree two.
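The construction above can be sketched over a tiny field. The following toy (GF(2³), secret map x ↦ x³, random invertible GF(2)-linear maps; all sizes and choices are ours for illustration and bear no resemblance to a secure instance) shows the composition L₂ ∘ S ∘ L₁ and its trapdoor inversion:

```python
import random

# Toy HFE-style construction over GF(2**3) -- illustrative only.
N = 3
IRRED = 0b1011  # x**3 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two GF(2**N) elements encoded as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= IRRED
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def S(v):
    # Secret map: exponent 3 = 2**1 + 2**0 has Hamming weight 2, so the
    # public multivariate polynomials are quadratic over GF(2).
    return gf_pow(v, 3)

def mat_vec(M, v):
    """Apply a GF(2) matrix (rows stored as bitmasks) to a bit-vector."""
    out = 0
    for i, row in enumerate(M):
        out |= (bin(row & v).count('1') & 1) << i
    return out

def random_invertible(rng):
    while True:
        M = [rng.randrange(1, 1 << N) for _ in range(N)]
        if len({mat_vec(M, v) for v in range(1 << N)}) == 1 << N:
            return M

rng = random.Random(0)
L1, L2 = random_invertible(rng), random_invertible(rng)
pub = lambda v: mat_vec(L2, S(mat_vec(L1, v)))      # public transformation
inv1 = {mat_vec(L1, v): v for v in range(1 << N)}   # secret inverses
inv2 = {mat_vec(L2, v): v for v in range(1 << N)}
dec = lambda y: inv1[gf_pow(inv2[y], 5)]            # 5 = 3**(-1) mod 7

assert all(dec(pub(v)) == v for v in range(1 << N))
```

Legitimate decryption inverts S by raising to the inverse exponent; an attacker only sees the quadratic public polynomials, whence the connection to Gröbner basis computations discussed below.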
By using fast algorithms for computing Gröbner bases, it was possible to break the first HFE challenge (real cryptographic size, 80 bits, and a symbolic prize of 500 US$) in only two days of CPU time. More precisely, we used the F5/2 version of the fast F5 algorithm for computing Gröbner bases (implemented in C). The algorithms available up to now (Buchberger) were extremely slow and could not have been used to break the code (they would have needed at least a few centuries of computation). The new algorithm is thousands of times faster than previous algorithms. Several matrices have to be reduced (echelon form) during the computation: the biggest one has no less than 1.6 million columns and requires 8 gigabytes of memory. Implementing the algorithm thus required significant programming work and especially efficient memory management.
A new result is that the weakness of the systems of equations coming from HFE instances can be explained by the algebraic properties of the secret key (work presented at Crypto 2003 in collaboration with A. Joux). From this study we are able to predict the maximal degree occurring in the Gröbner basis computation, so that we can establish precisely the complexity of the Gröbner attack and compare it with the theoretical bounds.
Since it is easy to transform many cryptographic problems into polynomial equations, our group is in a position to apply this general method to other cryptosystems. Thus we have a new general cryptanalysis approach, called algebraic cryptanalysis. The team is currently testing the robustness of cryptosystems based on nonlinear filter generators (with G. Ars, J.C. Faugère) and in collaboration with the Codes team and DGA (Celar).
Another relevant tool in the study of cryptographic problems is the LLL algorithm which is able to compute in polynomial time a ``good'' approximation for the shortest vector problem. Since a Gröbner basis can be seen as the set of smallest polynomials in an ideal with respect to the divisibility of leading terms, it is natural to compare both algorithms: an interesting link between LLL (polynomial version) and Gröbner bases was suggested by a member of our group.
A standard algorithm for implementing the arithmetic of Jacobian groups of curves is LLL. By replacing LLL by the FGLM algorithm, we establish a new structure theorem for Gröbner bases; consequently, on a generic input, we were able to establish explicit and optimized formulas for basic arithmetic operations in the Jacobian groups of C_{34} curves.
As an application of the LLL algorithm, we have presented an algorithm for converting a Gröbner basis of an ideal with respect to any given ordering into a Gröbner basis with respect to any other ordering. This algorithm is based on a modified version of the LLL algorithm. In the worst case, its theoretical complexity is not necessarily better than that of the FGLM algorithm, but when the output (the final Gröbner basis) is small, it is experimentally more efficient.
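The order-conversion problem itself can be tried with sympy, which ships an FGLM implementation for zero-dimensional ideals (this illustrates the conversion task, not the LLL-based variant described above; the system below is our own toy example):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
polys = [x**2 + y**2 - 1, x - y]  # zero-dimensional toy system

# Compute a basis for the cheap 'grevlex' order first, then convert it
# to the 'lex' order with FGLM instead of recomputing from scratch.
G_fast = groebner(polys, x, y, order='grevlex')
G_lex = G_fast.fglm('lex')
print(G_lex.exprs)
```

On larger systems the grevlex-then-convert route is typically much faster than a direct lex computation, which is exactly why conversion algorithms matter.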
The (parallel) manipulators we study are general parallel robots: the hexapods are complex mechanisms made up of six (often identical) kinematic chains, of a base (fixed rigid body including six joints or articulations) and of a platform (mobile rigid body containing six other joints).
The design and the study of parallel robots require the resolution of direct geometrical models (computation of the absolute coordinates of the joints of the platform knowing the position and the geometry of the base, the geometry of the platform as well as the distances between the joints of the kinematic chains at the base and the platform) and inverse geometrical models (distances between the joints of the kinematic chains at the base and the platform knowing the absolute positions of the base and the platform).
Since the inverse geometrical models can be easily solved, we focus on the resolution of the direct geometrical models.
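A planar toy analogue (hypothetical data: two legs anchoring a point platform, nothing like a real hexapod) shows the flavor of a direct model: given the leg lengths, the platform position is a solution of a polynomial system, here the intersection of two circles, with up to two solutions:

```python
from sympy import symbols, solve

x, y = symbols('x y', real=True)

# Base joints at (0, 0) and (4, 0); both legs of length 3 (made-up values).
# Direct model: solve the distance equations for the platform point (x, y).
eqs = [x**2 + y**2 - 9, (x - 4)**2 + y**2 - 9]
sols = solve(eqs, [x, y], dict=True)
print(sols)  # two mirror positions: (2, -sqrt(5)) and (2, sqrt(5))
```

For the real hexapod the same phenomenon occurs on a larger scale: up to 40 assembly modes for one set of leg lengths, which is why certified solvers are needed.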
The study of the direct geometrical model is a recurrent activity for several members of the project. One can say that the progress achieved in this field perfectly illustrates the evolution of the methods for solving algebraic systems. Interest in this subject is longstanding. The first work in which members of the project took part primarily concerned the study of the number of (complex) solutions of the problem. The results were often illustrated by Gröbner basis computations done with the Gb software. One of the remarkable points of this study is certainly the classification it suggested. The next efforts were related to the real roots and the effective computation of the solutions. The studies then continued following the various algorithmic advances, until the developed tools made it possible to solve non-academic problems. In 1999, these efforts were concretized by an industrial contract with the SME CMW (Constructions Mécaniques des Vosges - Marioni) for studying a robot dedicated to machine tools.
We conceived, in collaboration with the COPRIN project, a prototype of a simulator for validating a fixed trajectory, i.e.:
check that the trajectory is non-singular for a series of functions modeling the lengths of the legs: let us recall that for given values of the legs' lengths, there exist up to 40 possible positions. To check that the trajectory is non-singular, one must ensure, for example, that two possible trajectories do not intersect (in which case the robot cannot be controlled);
measure without ambiguity the difference between two trajectories corresponding to different leg-length functions (this is a tool for checking "numerical" singularities).
This tool is unique in the world: concurrent solutions exist but can treat only particular robots (planar, symmetrical, fewer joints, etc.). It is necessary to know how to solve the general case because a small modification of the design parameters (unavoidable in practice) has serious consequences on the behavior of the robot. For example, the theoretical robot we use for our study (based on the left-hand parallel manipulator due to J.-P. Merlet) admits at most 36 solutions for the direct geometrical model (i.e., up to 36 possible trajectories for fixed leg-length functions), whereas the actually built robot (with small errors on the positions of the joints) has up to 40 possible positions.
The main algorithmic tool in this simulator (partially presented in an earlier publication) is a hybrid method, mixing computer algebra, numerical computation and interval analysis, for solving the direct geometrical model.
A part of the work was to develop a semi-numerical method (based on Newton's method), powerful in terms of computation time (4000 computations per minute) and certified, i.e. always returning a correct result: the answer is either a set of numerical values with a certified precision or a failure message. The strategy combines interval arithmetic and convergence results. Failure remains exceptional (less than 10 percent of the practical problems) and, when it occurs, the result is obtained using a special version of the F4 algorithm for Gröbner basis computation and an optimized version (adapted to that particular class of systems) of the Rational Univariate Representation algorithm.
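The certification idea can be sketched with a univariate interval Newton step (a simplified stand-in, using exact rationals, for the multivariate machinery described above): if the Newton image of an interval lands strictly inside it, the interval provably contains a unique root.

```python
from fractions import Fraction as F

# Interval Newton step for f(x) = x**2 - 2 on X = [1, 2], exact arithmetic.
lo, hi = F(1), F(2)
m = (lo + hi) / 2
fm = m * m - 2                 # f at the midpoint (here 1/4 > 0)
dlo, dhi = 2 * lo, 2 * hi      # enclosure of f'(x) = 2*x on [lo, hi]

# N(X) = m - f(m) / f'(X); division by an interval not containing 0.
n_lo, n_hi = m - fm / dlo, m - fm / dhi
n_lo, n_hi = min(n_lo, n_hi), max(n_lo, n_hi)

# N(X) strictly inside X certifies existence and uniqueness of a root in X.
certified = lo < n_lo and n_hi < hi
print((n_lo, n_hi), certified)
```

One such verified contraction, iterated, both refines the enclosure and keeps the certificate; this is the kind of guarantee the simulator's seminumerical solver provides before falling back to the symbolic route.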
Our simulator has been used to diagnose the problems related to the solutions currently employed (CAD, lookahead, algorithms for interpolating the trajectories, etc).
Industrial (serial) robotic manipulators with 3 degrees of freedom are currently designed with very simple geometric rules on the design parameters; the ratios between them are always of the same kind. In order to enlarge the possibilities of such manipulators, it may be interesting to relax some constraints on the parameters.
However, the diversity of the tasks to be done leads to the study of other types of robots whose design parameters differ from what is usual and which may have new properties, like stability or the existence of new kinds of trajectories.
An important difficulty slows down the industrial use of such new robots: recent studies showed that they may have a behavior which is qualitatively different from that of the robots currently used in industry, allowing new changes of posture. These robots, called cuspidal, cannot be controlled like the others. The majority of robots are in fact cuspidal: the industrial robots currently on the market form a very restricted subclass of all possible robots.
A full characterization of all the cuspidal robots is of great interest for the designer and the user. Such a project forms part of a current tendency in robotics which consists in designing a robot so that its performance is optimal for a given application while preserving the possibility of using it for another task, that is to say, specializing it as much as possible for an application in order to reduce its cost and increase its operational safety.
The study of the behavior at a change of posture is identical, from the computer algebra point of view, to solving a system of equalities and inequalities depending on three or four parameters which correspond to the design parameters of this kind of robot. The method we initially used was ad hoc, and no known automatic computer algebra method was able to solve the problem completely before the work done in collaboration with the COPRIN (INRIA Sophia), IRMAR (University of Rennes I) and IRCCyN (CNRS, Nantes) teams.
From a robotic point of view, the result obtained is a full classification of a class of serial robots with three degrees of freedom according to their cuspidal character. Since then, towards the end of the MathSTIC project "Robots Cuspidaux", these results have been simplified and analyzed to allow a better description of the workspace of such mechanisms.
The computations done for this application were also critical for the development of general and systematic methods for solving parametric systems. We have shown that these general methods can now be used in place of ad hoc computations to obtain the same classification, and a recent experiment even shows that they make it possible to relax one more parameter and thus to solve a more general problem.
Some problems in signal theory are naturally formulated in terms of algebraic systems. We studied the Kovacevic-Vetterli family of filter banks. To be used for image compression, a wavelet transformation must be defined by a function having a maximal number of partial derivatives vanishing at the corners of the image. These conditions can be translated into polynomial systems that can be solved with our methods. We showed that, to get physically acceptable solutions, it was necessary to choose the number of conditions so that the solution space has dimension 0, 2 or 4 (according to the size of the filter). This result (a parametric family of filters) is the subject of a patent. To exploit these filters in practice, it remains to choose the best transformation, according to non-algebraic criteria, which is easily done with traditional optimization tools (with a reduced number of variables).
As for most of the applications on which we work, it took more than three years to obtain concrete results bringing real practical progress (the early results were only partial), and still a few more years to be able to disseminate information toward our community. Our software tools are now used to solve related problems.
Our activity in signal processing started again a few months ago through a collaboration with the APICS project (collaboration with F. Seyfert) on the synthesis and identification of hyperfrequency filters made of coupled resonant cavities. It is now part of our research goals.
One specificity of computer algebra is the manipulation of huge objects whose size varies during an algorithm. Having a specific memory manager, adapted to the objects handled in the various implementations, is thus essential. Based on a concept suggested by J.-C. Faugère in his PhD thesis, several versions implemented in C are used in different software packages of the project (GB, FGb, RS) as well as in implementations due to collaborators (F. Boulier, LIFL). The various implementations are very simple, and it seems preferable to precisely describe the process and its use in some key situations rather than to propose a standardized implementation as a library.
See http://fgbrs.lip6.fr/salsa/. The result of a Gröbner basis computation may be huge and is the main input of our high-level algorithms. The time needed for transferring such an object using ASCII files or pipes may be greater than the computation time.
UDX is a piece of software for binary data exchange. It was initially developed to show the power of a new protocol, the object of a patent by INRIA and UPMC. The resulting code, written in ANSI C (9500 lines), is very portable and very efficient, even when the patented protocol is not used. UDX is composed of five independent modules:
base: optimized system of buffers and synchronization of the input and output channels;
supports: read/write operations on various supports (sockets, files, shared memory, etc.);
protocols: various exchange protocols (patented protocol, XDR, etc.);
exchange of composite types: floating-point numbers (single and double precision), multiprecision integers, rational numbers;
interfaces: user interfaces implementing high level callings to the four other modules.
MPFI is a library for multiprecision interval arithmetic, written in C (approximately 1000 lines) and based on MPFR. It is developed in collaboration with N. Revol (ARENAIRE project). Initially, MPFI was developed for the needs of a new hybrid algorithm for the isolation of the real roots of polynomials with rational coefficients. MPFI contains the same operations and functions as MPFR; the code is available and documented.
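For readers without the C library at hand, the same idea (multiprecision intervals that enclose every rounding error) can be tried in Python with mpmath's interval type; this is only an illustrative stand-in for MPFI, not its API:

```python
from mpmath import iv

iv.prec = 53  # working precision in bits

# 0.1 is not exactly representable in binary: iv.mpf returns a tight
# enclosing interval, and interval arithmetic propagates the enclosure.
tenth = iv.mpf('0.1')
total = sum([tenth] * 10)

# The exact value 1 is guaranteed to lie inside the computed interval.
print(total, total.delta)
```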
MPAI is a library for computing with algebraic infinitesimals. The infinitesimals are represented as truncated series. The library provides all the arithmetic functions needed to perform computations with infinitesimals. The interface is both GMP- and RS-compliant. It is implemented in C and represents approximately 1000 lines of code. The algorithms implemented in MPAI include Karatsuba's product and the short product. The code is available.
In order to ease the use of the various software packages developed in the project, some conventions for the exchange of ASCII and binary files were developed; they allow a flexible use of the servers Gb, FGb or RS.
To make the use of our servers from general computer algebra systems such as Maple or MuPAD transparent, we currently propose a common distribution for Gb, FGb and RS including the servers as well as the interfaces for Maple and MuPAD. The instructions are illustrated by concrete examples and a simple installation process.
Gb is one of the most powerful software packages for computing Gröbner bases currently distributed. Implemented in C/C++ (approximately 100,000 lines), it has been distributed since 1994 in the form of specific servers (direct computations, changes of orders, Hilbert's function, etc.). The initial interface (an interactive command system) was abandoned for lighter solutions (ASCII interface, udx protocol) which are more powerful but for the moment more rudimentary. With the new algorithms proposed by J.-C. Faugère (F_n, n > 1), GB is still maintained but no longer developed. Indeed, since the data structures and the basic algorithms necessary for implementing these new methods are radically different from the previous ones, the GB servers will gradually be replaced by FGb servers. The existing interface prototypes (ASCII, Maple, MuPAD, RS) were homogenized in order to provide a framework, initially based on GB but able to evolve (towards FGb) in a transparent way. They will be kept. Future evolutions will then follow algorithmic progress.
As mentioned above, current implementations of the F5 algorithm depend on many options. For efficiency reasons, it is currently preferable to compile specific servers, setting algorithm parameters such as matrix sizes and linear algebra strategies for sparse matrices by hand. This has already been done successfully for path planning problems (parallel robots), and it helps to understand the main constraints in order to provide, in the future, software solutions that are independent of the type of system to be solved.
RS is a software package dedicated to the study of the real roots of algebraic systems. It is entirely developed in C (approximately 100,000 lines) and succeeds RealSolving, developed during the European projects PoSSo and FRISCO. RS mainly contains functions for counting and isolating the real zeros of zero-dimensional systems. The user interfaces of RS are entirely compatible with those of Gb/FGb (ASCII, MuPAD, Maple). RS has been used in the project for several years, and several development versions have been installed by numerous other teams. Future evolutions will depend on algorithmic progress and the users' needs (many internal functions are exported on demand).
Triangular Decomposition is a library devoted to the decomposition of systems of polynomial equations and inequations which provides some tools for working with triangular sets. It decomposes the radical of the ideal generated by a family of polynomials into regular triangular sets that represent radical equidimensional ideals. It also performs the computation of polynomial gcds over an extension field, or a product of such fields, given by a triangular set.
A first version of this library is implemented in the Axiom computer algebra system. It is now developed in Magma.
RAGLib (Real Algebraic Geometry Library) is a Maple library of symbolic algorithms devoted to some problems of effective real algebraic geometry, and more particularly to the study of the real solutions of polynomial systems of equations and inequalities. It contains algorithms performing:
the equidimensional decomposition of an ideal generated by a polynomial family;
the emptiness test of a real algebraic variety defined by a polynomial system of equations;
the computation of at least one point in each connected component of a real algebraic variety defined by a polynomial system of equations;
the emptiness test of a semi-algebraic set defined by a polynomial system of equations and non-strict inequalities;
the computation of at least one point in each connected component of a semi-algebraic set defined by a polynomial system of equations and non-strict inequalities.
As soon as they come from real applications, the polynomials resulting from elimination processes (Gröbner bases, triangular sets) are very often too large to be studied with general computer algebra systems.
In the case of polynomials in two variables, a certified plot is in many cases enough to solve the problem under study (this was the case, in particular, for some applications in celestial mechanics). This type of plot is now possible thanks to the various tools developed in the project.
Two components are currently under development: the computation routine (taking as input the polynomial function and returning a set of points) is stable (about 2000 lines of C, using the internal libraries of RS) and can be used as a black box in standalone mode or through Maple; the plotting routine is under study.
The purpose of the TSPR project (Trajectory Simulator for Parallel Robots) is to homogenize and export the tools developed within the framework of our applications in parallel robotics. The software components of TSPR (about 1500 lines) are primarily written in C following the standards of RS and FGb, with algorithms implemented in Maple using the interface prototypes for FGb and RS. The encapsulation of all these components in a single distribution is available but not downloadable.
Prototypes of components for the real-time resolution of certain types of systems now also use the ALIAS library developed by the INRIA project COPRIN.
We benefited for nearly one year from the assistance of Étienne Petitjean (SED, LORIA) for this development.
The developed tool (about 11,000 lines of C++) makes it possible to use software installed on a network, which then becomes usable even if it is not ready to be distributed; this allows the experimental validation of algorithms proposed in articles, and allows an external user to get a more precise idea of the possibilities offered by recent algorithmic progress. It will be useful in our project for exporting some of our prototypes.
NetTask is very flexible since it doesn't need any change in the software to be demonstrated. It is compatible with the current safety requirements and contains some powerful tools:
allocation of the tasks launched by the users according to the availability of the software but also the load of the nodes on a given network of machines;
a queueing system for the tasks;
complete control of the launched tasks by the user himself;
many configuration options, such as the declaration of the machines and tasks available on a given network, or the maximum number of tasks per user.
We explain how Bernstein's basis, widely used in Computer Aided Geometric Design, provides an efficient method for real root isolation, using de Casteljau's algorithm. We also explain the link between this approach and more classical methods for real root isolation. Most of the content of the paper can be found in earlier works; however, we present a new, improved method for isolating real roots in the Bernstein basis, inspired by previous work.
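A minimal version of the subdivision scheme (exact rational arithmetic, sign variations of Bernstein coefficients, de Casteljau splitting at 1/2; the squarefree and no-root-at-a-dyadic-point assumptions, and the sample polynomial, are ours) can be written as:

```python
from fractions import Fraction
from math import comb

def power_to_bernstein(a):
    """Power-basis coefficients a[0..n] -> Bernstein coefficients on [0, 1]."""
    n = len(a) - 1
    return [sum(Fraction(comb(i, k), comb(n, k)) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def split(c):
    """De Casteljau subdivision at t = 1/2: Bernstein coeffs of both halves."""
    left, right, cur = [c[0]], [c[-1]], list(c)
    while len(cur) > 1:
        cur = [(cur[i] + cur[i + 1]) / 2 for i in range(len(cur) - 1)]
        left.append(cur[0])
        right.append(cur[-1])
    return left, right[::-1]

def sign_changes(c):
    s = [v for v in c if v != 0]
    return sum(1 for p, q in zip(s, s[1:]) if p * q < 0)

def isolate(c, lo=Fraction(0), hi=Fraction(1)):
    """Intervals of (0, 1) each containing exactly one root (squarefree input,
    no root at a dyadic subdivision point -- both assumed here)."""
    v = sign_changes(c)
    if v == 0:
        return []
    if v == 1:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    l, r = split(c)
    return isolate(l, lo, mid) + isolate(r, mid, hi)

# p(x) = 15*x**2 - 11*x + 2 = (3*x - 1)*(5*x - 2): roots 1/3 and 2/5.
coeffs = power_to_bernstein([Fraction(2), Fraction(-11), Fraction(15)])
print(isolate(coeffs))  # two disjoint isolating intervals
```

Zero sign variations exclude roots, exactly one certifies a unique root (a Descartes-type statement), and otherwise the interval is split; for squarefree input the recursion terminates.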
Let k[x] = k[x_1, ..., x_n] be the ring of n-variate polynomials over the field k, and I an ideal of k[x]. The problem tackled here is: finding an effective representation of the quotient algebra k[x]/I. This is the cornerstone of many studies of the variety defined by I. Efficient ways to reach this goal already exist; the most used is to first compute a Gröbner basis and then derive an exact parameterization of the solution set. Though the work of Faugère made the Gröbner step rather efficient, the computation of Gröbner bases still suffers from very unstable behavior: the time and space needed for computing a Gröbner basis can vary greatly from one system to another. Generalized normal forms can be thought of as an extension of Gröbner basis computations where attention can be paid to numerical issues.
Up to now, the computation of generalized normal forms has relied on choice functions refining the usual degree of polynomials. This assumption forbids the computation of elimination ideals with generalized normal forms. We show how to drop this assumption. First, we define the notion of a refining graduation, and from this we state the definition of a choice function refining a reducing graduation. Next, we give an algorithm to compute the normal form of any monomial modulo I. We prove that, given a monomial m, if we use a choice function refining a reducing graduation, then there exists a moment in the algorithm when we can compute the normal form of m. As long as I is zero-dimensional, only the existence of such a moment needs to be proved to ensure the termination and correctness of the whole method. The paper received the distinguished paper award at the ISSAC 2005 conference.
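In the classical (Gröbner) setting, a normal form modulo I is what sympy's reduce computes; this toy sketch shows the quotient-algebra representation at work (it illustrates the standard Gröbner normal form, not the generalized one discussed above; the system is our own example):

```python
from sympy import symbols, groebner, expand

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='grevlex')

# Normal form of x**3 modulo I: r is the canonical representative
# of x**3 in the quotient algebra k[x, y]/I.
f = x**3
coeffs, r = G.reduce(f)

# Sanity check: f = sum(coeffs[i]*G[i]) + r, so r is f modulo I.
assert expand(sum(c*g for c, g in zip(coeffs, G.exprs)) + r - f) == 0
print(r)
```

Because r is canonical for a fixed basis and order, equality in k[x]/I reduces to equality of normal forms, which is what makes such representations the cornerstone mentioned above.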
In a recent paper (an extended version has been submitted to the Journal of Symbolic Computation), we propose some new results about the computation and use of the Rational Univariate Representation (RUR). The first result is an algorithm that computes a RUR candidate (with no guarantee on the choice of a separating element), taking as input a multiplication table of the quotient algebra, with an arithmetic cost slightly better than that of the previous algorithm. The proposed algorithm is mainly the same (based on the baby-step/giant-step algorithm) but computes the so-called ``transposed product'' (the main operation) differently. In addition, the output preserves the multiplicities. Basically, it generalizes the known formulas: the coefficients of the RUR candidate can be expressed directly in terms of l(X_j t^i), where l is a ``sufficiently generic'' linear form. Taking for l the trace map (the trace of the multiplication in the quotient algebra), one recovers the classical formulas.
The second result shows that one can check that the obtained RUR candidate is indeed a RUR within a comparable number of arithmetic operations, so that the algorithm produces a certified output in the same computing time as the probabilistic computation of the RUR candidate. We also propose an algorithm that computes all the sign conditions realized by a set of polynomials at the zeros of a zero-dimensional system; its computing time is D times smaller than that of the corresponding algorithm (algorithm 11.18, p. 375) for the same input. Given any polynomial f, the known formulas can be used to compute a rational function g_{t,f}/g_t which takes the same values as f at the zeros of I. According to the above results, given a set of s polynomials, one can compute a RUR and the rational functions g_{t,f_i} in O(t_1 + s D^2), where t_1 is the computing time of the RUR. Thus, computing the sign conditions realized by the f_i reduces to computing the sign conditions realized by the g_{t,f_i} at the roots of f_t. We can then replace the ``Sturm query'' algorithm based on Hermite's quadratic form (computing time in O(D^3)) by its equivalent for univariate polynomials (computing time in O(D^2) for the naive version).
In two recent papers (a preliminary extended version has been submitted to the Journal of Symbolic Computation; the article is currently under revision), we present a new algorithm for solving basic parametric constructible or semi-algebraic systems, where U = [U_1, ..., U_d] is the set of parameters and X = [X_{d+1}, ..., X_n] the set of unknowns.
If Π_U denotes the canonical projection onto the parameters' space, solving such a system leads to computing submanifolds of the parameters' space over which the projection restricted to the solution set is an analytic covering (we say that these submanifolds have the covering property). This guarantees that the number of solutions is locally constant over each such submanifold and that the solution set above it is a finite collection of sheets, all locally homeomorphic to the submanifold. In the case where the projection of the solution set is dense in the parameters' space, the known algorithms compute, implicitly or explicitly, a Zariski closed subset W such that any submanifold of the complement of W has the covering property.
We introduce the discriminant varieties of the system with respect to Π_U, which are algebraic sets with the above property. We then show that the set of points which do not have any neighborhood with the covering property is a Zariski closed set; this defines the minimal discriminant variety of the system with respect to Π_U, and we propose an algorithm to compute it efficiently. Thus, solving the system is reduced to describing the complement of the discriminant variety in the parameters' space, which can be done using critical point methods or partial CAD based strategies.
A preliminary implementation of the Maple package DV was presented at ISSAC by G. Moroz, and first results about the theoretical complexity of computing discriminant varieties (a singly exponential algorithm) have been obtained.
The knowledge on the complexity of Gröbner basis computation has been improved in two directions. Firstly, in the few known complexity bounds, the size of the input is measured by d^n, where d is the maximal degree of the input polynomials and n the number of variables; the maximal degree has been replaced by the mean value of the degrees. Secondly, a bit complexity polynomial in d^n (mean value of the degrees raised to the number of variables) has been obtained for zero-dimensional systems of equations.
For subclasses of polynomial systems, the complexity may be much smaller. Of particular importance is the class of regular sequences of polynomials.
It is well known that (after a generic linear change of variables) the complexity of the computation for the degree-reverse-lexicographic order is simply exponential in the number of variables. Moreover, the maximal degree occurring in the computation of such a Gröbner basis is bounded by the Macaulay bound (the sum of the total degrees of the equations minus the number of equations, plus one). We give similar complexity bounds for overdetermined systems, for a class of systems that we call semi-regular.
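The bound just stated is a one-line formula; the following sketch (our own helper, with a made-up example of three quadrics) makes it concrete:

```python
def macaulay_bound(degrees):
    """Macaulay bound on the maximal degree occurring in a grevlex Groebner
    basis computation for a regular sequence with the given total degrees:
    sum of (d_i - 1) plus one."""
    return sum(d - 1 for d in degrees) + 1

print(macaulay_bound([2, 2, 2]))  # 4: three generic quadrics
```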
The interest in overdetermined systems is not purely academic: there are a number of applications, such as error correcting codes (decoding of cyclic codes), robotics, calibration, and cryptography. We have computed the asymptotic expansion of the degree of regularity for semi-regular sequences of algebraic equations. This degree yields complexity bounds for Gröbner basis algorithms, in particular the F5 algorithm, and for algorithms like XL, better known in the cryptographic community.
A by-product of this study is a classification of the complexity of computing Gröbner bases of ``generic'' systems depending on the ratio of the number of equations to the number of unknowns. For instance, a Gröbner basis of n log(n) algebraically independent polynomials in n variables can be computed in subexponential time (in this context, independent means semi-regular).
In , we had designed an efficient algorithm computing at least one point in each connected component of the real counterpart of a smooth equidimensional algebraic variety defined by a radical ideal. This has been obtained by considering critical points of some generic projection functions. These critical points are defined by the vanishing of some minors of the Jacobian matrix of the input polynomial system and leads to solve overdetermined zerodimensional polynomial systems for which classical Bézout bounds are naturally pessimistic. In , we have :
generalized the algorithm of to the non equidimensional case;
proved that the set of computed critical points has cardinality which is bounded by the bihomogeneous Bézout bound.
proved a strong bihomogeneous Bézout theorem in the case of a radical ideal, which corrects some mistakes in .
The complexity of our algorithm is polynomial in the bihomogeneous Bézout bound. This work has been submitted to the Journal of Complexity.
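On a toy example (the unit circle, our choice), the critical-point construction reduces to intersecting the variety with the vanishing of a minor of the Jacobian:

```python
from sympy import symbols, solve

# Sketch (toy curve, our choice): critical points of the projection
# (x, y) -> x restricted to f = 0 are cut out by f together with the
# Jacobian minor df/dy.
x, y = symbols('x y')
f = x**2 + y**2 - 1                    # the unit circle

sols = solve([f, f.diff(y)], [x, y], dict=True)
print(sols)    # the two points (-1, 0) and (1, 0)
```

Each connected component of the real curve (here, one) contains at least one of the computed critical points, which is the guarantee exploited by the algorithm.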
This work follows and . We consider here the case of singular hypersurfaces and provide an algorithm which computes at least one point in each connected component of the real counterpart of the hypersurface under consideration. It is based on a generalization of and , i.e. the efficient use of deformation techniques without explicitly introducing an infinitesimal, and the computation of the critical points of some generic projections. We prove that the classical Bézout bounds are never reached in the singular case. We also relate the decrease of degree to the degree of the set of generalized critical values (see ). The complexity of our algorithm is polynomial in the Bézout bound. In practice, this algorithm is more efficient than the strategy proposed in and . An implementation based on the SALSA software FGb (written by J.-C. Faugère) and RS (written by F. Rouillier) is available in RAGLib, and a Magma package based on the Kronecker Magma package (written by G. Lecerf) has been developed.
This work has been accepted for presentation at the MEGA conference and has been submitted to the Journal of Symbolic Computation.
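The deformation idea can be sketched on a toy singular curve (the cuspidal cubic, our choice); instead of an infinitesimal we take an explicit small deformation value:

```python
from sympy import symbols, solve, Rational

# Sketch (toy singular curve, our choice): the cusp f = y^2 - x^3 is
# singular at the origin, but the deformed level set f = eps is smooth
# for small eps != 0, and its critical points for the projection
# (x, y) -> x are given by f - eps = 0, df/dy = 0.
x, y = symbols('x y')
f = y**2 - x**3

eps = Rational(1, 100)                 # explicit stand-in for an infinitesimal
sols = solve([f - eps, f.diff(y)], [x, y], dict=True)
real_pts = [s for s in sols if all(v.is_real for v in s.values())]
print(real_pts)                        # a single real critical point near the cusp
```

The actual algorithm avoids introducing the infinitesimal explicitly; this sketch only illustrates why the deformed, smooth situation is the one where critical points behave well.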
We consider here the problem of computing roadmaps of a smooth real algebraic set. A roadmap is an algebraic variety contained in the given set which has a nonempty and connected intersection with each of its connected components. The computation of such objects has applications in robotics (mainly in robot motion planning); roadmaps are also used to count the number of connected components of a real algebraic set or to provide a semialgebraic description of the connected components of the considered real algebraic set.
Such an object can be extracted from a cylindrical algebraic decomposition, but the complexity of such a strategy is doubly exponential in the number of variables. More recent algorithms (see ), which follow the geometric strategy of based on the computation of some critical loci, have a complexity within arithmetic operations, where D bounds the degree of the input polynomials and n is the number of variables.
This work was done during the predoctoral internship of M. Mezzarobba (ENS Paris). It has been submitted to the conference TC'06.
A collaboration with the Vegas project (LORIA) has led to a robust and efficient algorithm to compute a (non-rational) parameterization of the intersection of two quadric surfaces. For any input, this algorithm is nearly optimal with respect to the number of square roots to extract ( and ). The contribution of SALSA lies mainly in the management of the numerous singular or decomposed cases.
A new trend consists in devising efficient geometric algorithms using Gröbner bases and other algebraic tools as black boxes. Two results are related to this trend. The first one is the notion of discriminant variety (already described in this document); the second one concerns parameterized curves and surfaces (and more generally varieties).
Given a smooth surface, a blue (red) ridge is a curve along which the maximum (minimum) principal curvature has an extremum along its curvature line. Ridges are curves of extremal curvature and therefore encode important information used in segmentation, registration, matching and surface analysis. State of the art methods for ridge extraction either report red and blue ridges simultaneously or separately, in which case a local orientation procedure for the principal directions is needed; moreover, no method developed so far topologically certifies the curves reported.
In , on the way to developing certified algorithms independent from local orientation procedures, we make the following fundamental contribution. For any smooth parametric surface, we exhibit the implicit equation P = 0 of a singular curve encoding all ridges of the surface (blue and red), and show how to recover the colors from the factors of P. Exploiting P = 0, we also derive a zero-dimensional system coding the so-called turning points, from which elliptic and hyperbolic ridge sections of the two colors can be derived. Both contributions exploit properties of the Weingarten map of the surface and require computer algebra. Algorithms exploiting the structure of P for algebraic surfaces are developed in .
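The Weingarten map underlying these computations can be set up symbolically; a minimal sketch on a surface of our own choosing (not one from the cited work):

```python
from sympy import symbols, Matrix

# Sketch (toy surface, our choice): the Weingarten map W = I^(-1) * II of a
# parametric surface, whose eigenvalues are the principal curvatures used
# in the definition of ridges.
u, v = symbols('u v')
S = Matrix([u, v, u**2 + v**2])          # an elliptic paraboloid

Su, Sv = S.diff(u), S.diff(v)
n = Su.cross(Sv)
n = n / n.norm()                          # unit normal

I2 = Matrix([[Su.dot(Su), Su.dot(Sv)],
             [Su.dot(Sv), Sv.dot(Sv)]])  # first fundamental form
II = Matrix([[S.diff(u, 2).dot(n), S.diff(u, v).dot(n)],
             [S.diff(u, v).dot(n), S.diff(v, 2).dot(n)]])  # second fundamental form

W = I2.inv() * II                         # Weingarten map
print(W.subs({u: 0, v: 0}).eigenvals())   # both principal curvatures equal 2 at the origin
```

Ridge extraction then asks where a principal curvature is extremal along its own curvature line, which is exactly where the algebra becomes delicate and computer algebra is required.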
In and , we present some efficient tools for solving some problems in robotics. The article is mostly an introduction to some modern algorithms from computer algebra, but it also contains recent advances in solving the famous Direct Kinematics Problem for parallel manipulators. We show that a judicious use of recent algorithms like F4 or F5 essentially reduces the problem to the study of an algebraic system depending on 3 variables. One can then obtain all the real solutions in a certified way (real character as well as accuracy) using basic algorithms from linear algebra jointly with multiprecision interval arithmetic.
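The two-step strategy (Gröbner basis reduction, then certified real solving) can be sketched on a toy zero-dimensional system (our choice, not an actual manipulator):

```python
from sympy import symbols, groebner, real_roots

# Sketch (toy system, our choice): a lexicographic Gröbner basis reduces
# the system to a univariate polynomial, whose real roots are then
# isolated by certified, exact methods.
x, y = symbols('x y')
F = [x**2 + y**2 - 4, x*y - 1]

G = groebner(F, x, y, order='lex')
univ = [g for g in G.exprs if not g.has(x)][0]   # the eliminant, in y only

roots = real_roots(univ, y)                      # exact, isolated real roots
print(len(roots))                                # number of real values of y
```

The certification comes from the root isolation step: each root is represented exactly (here as an algebraic number), so the real character and the accuracy of the solutions are both guaranteed.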
The security of many cryptographic primitives depends on the difficulty of system solving: primitives can sometimes be cracked if one can find a solution to an associated system of algebraic equations over a finite field. This is known as algebraic cryptanalysis and is currently one of the ``hot'' topics in cryptography. Unfortunately, computer algebra tools are not well known in the crypto community; for instance, the algorithm mainly used for solving algebraic equations is the XL algorithm (Shamir, Patarin, Courtois). In , we have proved that XL is always slower than any algorithm for computing Gröbner bases; in practice it is much slower (by several orders of magnitude) than the most efficient algorithms (F4 and F5).
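A minimal sketch of the approach on a toy system over GF(2) (our own choice of equations; the field equations x^2 + x, etc., restrict solutions to GF(2)):

```python
from sympy import symbols, groebner

# Sketch (toy system, our choice): algebraic cryptanalysis models a
# primitive as polynomial equations over a finite field; a Gröbner basis
# over GF(2) pins down the secret values directly.
x, y, z = symbols('x y z')
F = [x*y + z + 1, y*z + x, x + y + z + 1,
     x**2 + x, y**2 + y, z**2 + z]     # field equations: solutions lie in GF(2)

G = groebner(F, x, y, z, modulus=2, order='lex')
print(G.exprs)    # the basis exposes the unique solution x = 0, y = 0, z = 1
```

Real cryptanalytic instances have thousands of variables; the point of F4/F5 is to make this basis computation feasible at such sizes.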
Several cryptosystems (mainly stream ciphers) have been shown to be very weak against algebraic attacks: for instance, the Boolean function used as the filtering function in a stream cipher must be of high algebraic degree in order to resist the Berlekamp-Massey attack; from the algebraic point of view, the most important criterion is the minimal degree of the relations induced by the Boolean function: this is what is called the algebraic immunity of a Boolean function. In , we give a theoretical upper bound on the algebraic immunity.
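The notion can be made concrete by brute force (our own illustration, not the method of the cited work): the algebraic immunity of f is the smallest d such that the monomials of degree at most d, restricted to the support of f (or of f + 1), are linearly dependent over GF(2), i.e. some nonzero g of degree at most d annihilates f or f + 1.

```python
from itertools import combinations, product

# Brute-force sketch (our own illustration): algebraic immunity of a
# Boolean function f on n variables.

def monomials(n, d):
    # monomials of degree <= d, encoded as tuples of variable indices
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an integer bitmask
    pivots, rank = {}, 0
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]
            else:
                pivots[lead] = row
                rank += 1
                break
    return rank

def algebraic_immunity(f, n):
    points = list(product([0, 1], repeat=n))
    for d in range(n + 1):
        mons = monomials(n, d)
        for target in (1, 0):          # annihilate f, then f + 1
            support = [p for p in points if f(p) == target]
            rows = []
            for m in mons:             # evaluate each monomial on the support
                bits = 0
                for i, p in enumerate(support):
                    if all(p[j] for j in m):
                        bits |= 1 << i
                rows.append(bits)
            if gf2_rank(rows) < len(mons):   # dependent => nonzero annihilator
                return d
    return n

maj = lambda p: int(sum(p) >= 2)       # majority on 3 bits
print(algebraic_immunity(maj, 3))      # → 2
```

This exhaustive search is only practical for very small n; for realistic filtering functions one relies on the theoretical bounds and algebraic techniques discussed above.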
Another use of multivariate polynomials in cryptography is the design of new cryptosystems (HFE, SFlash, TTM, 2R, Quartz, IP, C* ...). Recently we have broken two of them:
In , we present an efficient attack on the C* cryptosystem based on fast algorithms for computing Gröbner bases. The attack consists in computing a Gröbner basis of the public key. The efficiency of this attack depends strongly on the choice of the algorithm for computing the Gröbner basis: it was possible to break cryptochallenge 11 (proposed by Hans Dobbertin, http://www.mysterytwister.com/) in only 3 hours and 11 minutes of CPU time (PC Pentium Xeon 2.8 GHz) by using the F5 algorithm implemented in C. As a result, we recommend increasing the value of the design parameter to n ≥ 26.
Due to its numerous applications, the Isomorphism of Polynomials problem (IP, Patarin) is one of the most fundamental problems in multivariate cryptography. The problem consists in recovering a particular transformation between two sets of multivariate polynomials. In , we address two complementary aspects of IP: its theoretical and its practical difficulty. Using classical results of group theory, we present an upper bound on the theoretical complexity of IP-like problems. Our bound is obtained by introducing a generic description of these problems. From a practical point of view, we employ a fast Gröbner bases algorithm (F5) to solve the corresponding algebraic system. This approach is efficient in practice and forces a modification of the current security criteria for IP. We have indeed broken several challenges proposed in the literature. For instance, we solved a challenge proposed by Billet and Gilbert at Asiacrypt'03 in less than one second.
A contract has been signed with the Canadian company Waterloo Maple Inc. in 2005. The objective is to integrate SALSA software into one of the best known general computer algebra systems (Maple).
The basic term of the contract is four years (renewable).
Under a first contract (finished in 2005), the objective was to evaluate, on examples of realistic size, algebraic attacks (in particular Gröbner bases) on filtered registers (stream ciphers).
A second contract was signed in June 2005: the goal is to study the impact of algebraic cryptanalysis on symmetric cryptosystems such as AES.
The objective of this research is to find solutions to speed up the core of the Gröbner bases computation (algorithms F4 and F5), which is the linear algebra part. A CIFRE grant (S. Lacharte) is related to the contract.
Chinese Salsa is an associate team created in January 2006. It brings together most of the members of SALSA and researchers from Beijing (Beihang University and the Academy of Sciences). The general objectives of Chinese Salsa are mainly the same as those of SALSA.
The project SILA ( http://wwwsop.inria.fr/apics/SILA/WebPage/) was approved by INRIA in early 2005 for two years. It is managed by Fabien Seifert (APICS project, INRIA Sophia Antipolis) and involves three teams (SALSA, APICS and IRCOMM from the University of Limoges).
The goal of this project is to study (synthesis and identification) hyperfrequency filters made of coupled resonant cavities.
This project started in 2002 and will end in 2006.
The SPACES project takes part in the European project RAAG (Real Algebraic and Analytic Geometry), F. Rouillier being involved in the ``applications and connections with industry'' item.
RAAG is a Research Training Network of the "Human Potential" program of the European Commission, sponsored for 48 months starting from March 2002. The network is managed by the University of Passau (Germany), the coordination of the French team being ensured by the team of real geometry and computer algebra of the University of Rennes I.
The main goal of this project is to increase the links between the various fields of research listed in the topics of the project (real algebraic geometry, analytic geometry, complexity, computer algebra, applications, etc.) by means of conferences, schools and exchanges of young researchers. It brings together a great number of European teams, in particular the majority of the French teams working in the scientific fields falling under the topics of the project.
J.C. Faugère : BFCA 2005 (First Workshop on Boolean Functions : Cryptography and Applications), CASC 2005 (International Workshop on Computer Algebra in Scientific Computing), Groebner Bases Semester in Linz;
F. Rouillier : JNCF 2005 (Journées Nationales de Calcul Formel), Algebraic Geometric Methods in Engineering during the IMA Semester at the University of Minnesota.
J.C. Faugère : JNRR 2005 , JNCF 2005 (Journées Nationales de Calcul Formel);
F. Rouillier : Banff, Passau.
J.C. Faugère and F. Rouillier are guest editors of a special issue of the Journal of Symbolic Computation.