The main objective of the SALSA project is to solve systems of polynomial equations and inequations. We emphasize algebraic methods, which are more robust and frequently more efficient than purely numerical tools.
Polynomial systems have many applications in various scientific domains, academic as well as industrial. However, much work is still needed to define specifications for the output of the algorithms that are well adapted to the problems.
The variety of these applications implies that our software needs to be robust. In fact, almost all the problems we deal with are numerically highly unstable, and therefore the correctness of the result needs to be guaranteed.
Thus, a key target is to provide software that is competitive in terms of efficiency while preserving certified outputs. Therefore, we restrict ourselves to algorithms which verify the assumptions made on the input and check the correctness of possible random choices made during a computation, without sacrificing efficiency. Theoretical complexity analysis is only a preliminary step of our work, which culminates in efficient implementations designed to solve significant applications.
A consequence of our way of working is that many of our contributions are related to applied topics such as cryptography, error-correcting codes, robotics and signal theory. We must emphasize that these applied contributions rely on a long-term and global management of the project, with clear and constant objectives leading to deep theoretical advances.
For polynomial system solving, the mathematical specification of the result of a computation, in particular when the number of solutions is infinite, is itself a difficult problem. Sorting the most frequently asked questions appearing in the applications, one distinguishes several classes of problems which differ either by their mathematical structure or by the meaning that one can give to the word "solving".
Some of the following questions have a different meaning in the real case and in the complex case; others are posed only in the real case:
zero-dimensional systems (with a finite number of complex solutions, which include the particular case of univariate polynomials); the questions are in general well defined (numerical approximation, number of solutions, etc.) and the mathematical objects handled are relatively simple and well known;
parametric systems; they are generally zero-dimensional for almost all values of the parameters. The goal is to characterize the solutions of the system (number of real solutions, existence of a parameterization, etc.) with respect to the parameters' values.
positive-dimensional systems; for a direct application, the first question is the existence of zeros of a particular type (for example real, real positive, in a finite field). The resolution of such systems can be considered as a black box for the study of more general problems (semialgebraic sets for example), and the information to be extracted is generally a point per connected component in the real case.
constructible and semialgebraic sets; as opposed to what occurs numerically, the addition of constraints or inequalities complicates the problem. Even if semialgebraic sets are the basic object of real geometry, their automatic and effective study remains a major challenge. To date, the state of the art is poor, since only two classes of methods exist:
the Cylindrical Algebraic Decomposition, which basically computes a partition of the ambient space into cells where the signs of a given set of polynomials are constant;
deformation-based methods that turn the problem into the resolution of algebraic varieties.
The first solution is limited in terms of performance (at most 3 or 4 variables) because of a recursive treatment variable by variable; the second is also limited because of the use of a sophisticated arithmetic (formal infinitesimals).
quantified formulas; deciding efficiently whether a first-order formula is valid is certainly one of the greatest challenges in "effective" real algebraic geometry. However, this problem is relatively well circumscribed, since it can always be rewritten as the conjunction of (supposedly) simpler problems, such as the computation of a point per connected component of a semialgebraic set.
As explained in some parts of this document, the uniqueness of the studied mathematical objects does not imply the uniqueness of the related algorithms.
The competitive pressure is relatively strong on zero-dimensional systems, since we enter directly into competition with numerical methods (Newton, homotopy, etc.), semi-numerical methods (interval analysis, eigenvalue computations, etc.) and formal methods (geometric resolution, resultant-based strategies, the XL algorithm in cryptography, etc.). The pressure is much weaker on the other subjects: except for the groups working on the Cylindrical Algebraic Decomposition, very few studies with a practical vocation are to be found, and even fewer software achievements.
The priorities of our algorithmic work are generally dictated by the applications. Thus, the above items naturally structure the algorithmic part of our research topics. For each of these goals, our work is to design the most efficient possible algorithms: there is thus a strong correlation between implementations and applications, but a significant part of the work is dedicated to the identification of black boxes allowing a modular approach to the problems. For example, the resolution of zero-dimensional systems is a prerequisite for the algorithms treating parametric or positive-dimensional systems.
An essential class of black boxes developed in the project does not appear directly in the objectives listed above: the "algebraic or complex" resolutions. They are mostly reformulations, more algorithmically usable, of the studied systems. One distinguishes two categories of complementary objects:
ideal representations; from a computational point of view these are the structures used in the first steps;
variety representations; the algebraic variety, or more generally the constructible or semialgebraic set, is the studied object.
To give a simple example, in K[X, Y] the variety {(0, 0)} can be seen as the zero set of more or less complicated ideals (for example ideal(X, Y), ideal(X^2, Y), ideal(X^2, XY, Y^3), etc.). The input which is given to us is a system of equations, i.e. an ideal. It is essential, in many cases, to understand the structure of this object in order to treat the degenerate cases correctly. A striking example is certainly the study of singularities. Returning to the preceding example, the variety is not singular, but this cannot be detected by a blind application of the Jacobian criterion (one could wrongly conclude that all the points are singular, contradicting, for example, Sard's lemma).
The basic tools that we develop and use to understand the algebraic and geometric structures in an automatic way are, on the one hand, Gröbner bases (the best-known object used to represent an ideal without loss of information) and, on the other hand, triangular sets (an effective way to represent varieties).
On these two points, the pressure is strong, since many teams work on these two objects. To date, however, our project has a substantial lead in the computation of Gröbner bases (Faugère's F_4 and F_5 algorithms) and is a main contributor to the homogenization and understanding of triangular structures.
Let us denote by K[X_{1}, ..., X_{n}] the ring of polynomials with coefficients in a field K and indeterminates X_{1}, ..., X_{n}, and by S = {P_{1}, ..., P_{s}} any subset of K[X_{1}, ..., X_{n}]. A point x is a zero of S if P_{i}(x) = 0 for all i in [1...s].
The ideal generated by P_{1}, ..., P_{s} is the set of polynomials in K[X_{1}, ..., X_{n}] constituted by all the combinations q_{1}P_{1} + ... + q_{s}P_{s} with q_{i} in K[X_{1}, ..., X_{n}]. Every element of this ideal vanishes at each zero of S. We denote by V_{C}(S) (resp. V_{R}(S)) the set of complex (resp. real) zeros of S, where R is a real closed field containing K and C its algebraic closure.
One of the main properties of a Gröbner basis is to provide an algorithmic method for deciding whether or not a polynomial belongs to an ideal, through a reduction function denoted "Reduce" from now on. If G is a Gröbner basis of an ideal I for any monomial ordering <, then:
a polynomial p belongs to I if and only if Reduce(p, G, <) = 0;
Reduce(p, G, <) does not depend on the order of the polynomials in the list G; thus, this is a canonical reduced expression modulo I, and the Reduce function can be used as a simplification function.
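The membership test can be illustrated with a small SymPy sketch (SymPy is used here only for illustration; it is not the project's software, and the ideal below is a hypothetical two-variable example):

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')
# A hypothetical ideal, generated by a circle and a line
F = [x**2 + y**2 - 1, x - y]
G = groebner(F, x, y, order='lex')

# Ideal membership test: p belongs to the ideal iff Reduce(p, G, <) = 0
p = expand((x - y) * (x + y))          # a combination of the generators
q, r = reduced(p, list(G.exprs), x, y)  # returns (quotients, remainder)
print(r)  # 0: p belongs to the ideal
```

The remainder r plays the role of the canonical reduced expression modulo the ideal described above.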
Gröbner bases are computable objects. The most popular method for computing them is Buchberger's algorithm. It has several variants and it is implemented in most general computer algebra systems like Maple or Mathematica. The computation of Gröbner bases using Buchberger's original strategies faces two kinds of problems:
(A) arbitrary choices: the order in which the computations are done has a dramatic influence on the computation time;
(B) useless computations: the original algorithm spends most of its time computing 0.
For problem (A), J.C. Faugère proposed a new generation of powerful algorithms (the F_4 algorithm) based on the intensive use of linear algebra techniques. In short, the arbitrary choices are left to computational strategies related to classical linear algebra problems (matrix inversions, linear systems, etc.).
For problem (B), J.C. Faugère proposed a new criterion for detecting useless computations. Under some regularity conditions on the system, it is now proved that the algorithm never performs useless computations.
A new algorithm named F_5 was built using these two key results. Even if it still computes a Gröbner basis, the gap with the other existing strategies is substantial. In particular, given the range of examples that become computable, Gröbner bases can be considered as reasonably computable objects in large applications.
We pay particular attention to Gröbner bases computed for elimination orderings, since they provide a way of "simplifying" the system (an equivalent system with a structured shape). For example, a lexicographic Gröbner basis always has the following shape:
f_{1}(X_{1}), ..., f_{k_{1}}(X_{1}),
f_{k_{1}+1}(X_{1}, X_{2}), ..., f_{k_{2}}(X_{1}, X_{2}),
...
f_{k_{n-1}+1}(X_{1}, ..., X_{n}), ..., f_{k_{n}}(X_{1}, ..., X_{n})
(some of the polynomials may be identically null). A well-known property is that the zeros of the first non-null polynomial define the Zariski closure (classical closure in the case of complex coefficients) of the projection on the coordinate space associated with the smallest variables.
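This elimination property can be sketched in SymPy on a hypothetical circle/hyperbola system (the project's own tools are Gb and RS; SymPy stands in for illustration):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# Hypothetical example: a circle and the hyperbola x*y = 1, with x > y
# in the lexicographic order, so x is the variable eliminated first.
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')

# The basis elements involving only the smallest variable y describe
# the Zariski closure of the projection onto the y-coordinate.
elim = [g for g in G.exprs if x not in g.free_symbols]
print(elim)  # [y**4 - y**2 + 1]
```

The univariate polynomial obtained is exactly the eliminant one would get by substituting x = 1/y into the circle equation and clearing denominators.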
A triangular set is a system with the following shape:
t_{1}(X_{1}),
t_{2}(X_{1}, X_{2}),
...
t_{n}(X_{1}, ..., X_{n})
(some polynomials may be identically null).
Such systems are algorithmically easy to use, for computing numerical approximations of the solutions in the zero-dimensional case or for studying the singularities of the associated variety (triangular minors in the Jacobian matrices). Unless they are linear, algebraic systems cannot, in general, be rewritten as a single triangular set; one then speaks of a decomposition of the system into several triangular sets.
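The back-substitution process on a zero-dimensional triangular set can be sketched numerically; the system below is a hypothetical example, and NumPy's generic univariate root finder stands in for the certified solvers discussed later:

```python
import numpy as np

# Solving a zero-dimensional triangular system numerically by
# back-substitution (hypothetical example):
#   t1(x)    = x^2 - 2 = 0
#   t2(x, y) = y^2 - x = 0
solutions = []
for xr in np.roots([1, 0, -2]):          # complex roots of t1
    for yr in np.roots([1, 0, -xr]):     # roots of t2 once x is fixed
        solutions.append((xr, yr))

print(len(solutions))  # 4 complex solutions (degree 2 x 2)
```

Floating-point back-substitution like this is exactly where numerical instability appears in practice, which motivates the certified univariate solvers mentioned below.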
Triangular sets appear under various names in the field of algebraic systems. In 1932, J.F. Ritt introduced them as characteristic sets for prime ideals in the context of differential algebra. His constructive algebraic tools were adapted by W.T. Wu in the late seventies for geometric applications. Wu presented an algorithm for computing characteristic sets of finite polynomial sets which do not necessarily generate prime ideals. Following Wu, several authors such as S.C. Chou, X.S. Gao, G. Gallo, B. Mishra and D. Wang developed this approach to make it more efficient.
In 1991, Lazard and Kalkbrener presented triangular decomposition algorithms with nicer properties for their outputs, based on additional requirements for the triangular sets and a generalization of the gcd of univariate polynomials over a product of fields.
The concept of regular chain is adapted to recursive computations in a univariate way and provides a membership test and a zero-divisor test for the strongly unmixed-dimensional ideal it defines. Kalkbrener defined regular triangular sets and showed how to decompose algebraic varieties as a union of Zariski closures of the zeros of regular triangular sets. Gallo showed that the principal component of a triangular decomposition can be computed in O(d^{O(n^2)}) operations (n = number of variables, d = degree in the variables). During the 90s, implementations of various decomposition strategies multiplied, but their specifications remained relatively heterogeneous.
Following Kalkbrener's work, Aubry presented an algorithm for decomposing the radical of an ideal into separable regular chains that define radical, strongly unmixed-dimensional ideals.
P. Aubry and D. Lazard contributed to the homogenization of the work completed in this field by proposing a series of specifications and definitions gathering the whole of the former work. Two essential concepts for the use of these sets (regularity, separability) now make it possible both to establish a simple link with the studied varieties and to specify the computed objects precisely.
For p in K[X_{1}, ..., X_{n}], we denote by mvar(p) (and we call it the main variable of p) the greatest variable appearing in p w.r.t. a fixed ordering on the variables. Given a triangular set T = {t_{1}, ..., t_{n}}, we denote by:
h_{i} the leading coefficient of t_{i} (when t_{i} is not 0, it is seen as a univariate polynomial in its main variable), and by h the product of the h_{i};
s_{i} the separant of t_{i} (when t_{i} is not 0), i.e. the derivative of t_{i} with respect to its main variable;
sat(T) the saturated ideal of T with respect to h; the variety of T is the zero set of sat(T), and it equals the Zariski closure of V(T) \ V(h) (elementary property of localization).
A triangular set T is said to be regular (resp. separable) if, for all i in {1, ..., n} such that t_{i} is not 0, the normalization of its initial h_{i} (resp. of its separant s_{i}) is a non-zero polynomial.
One can always decompose a variety as the union of the varieties of regular and separable triangular sets:
V(S) = V(sat(T_{1})) ∪ ... ∪ V(sat(T_{s})).
A remarkable and fundamental property for our use of triangular sets is that the ideals sat(T_{i}), for regular and separable triangular sets, are radical and equidimensional. These properties are essential for some of our algorithms. For example, having radical and equidimensional ideals allows us to compute straightforwardly the singular locus of a variety by canceling minors of the right dimension in the Jacobian matrix of the system. This is naturally a basic tool for some algorithms in real algebraic geometry.
Techniques based on triangular sets are efficient for specific problems like computing Galois ideals, but the implementations of direct decompositions into triangular sets do not currently reach the level of efficiency of Gröbner bases in terms of computable classes of examples. Nevertheless, our team benefits from the progress carried out in this last field, since we currently perform decompositions into regular and separable triangular sets through lexicographic Gröbner basis computations (the process provides in the meantime Gröbner bases of the ideals sat(T_{i})).
A system is zero-dimensional if the set of its solutions in an algebraically closed field is finite. In this case, the set of solutions does not depend on the chosen algebraically closed field. Such a situation can easily be detected on a Gröbner basis for any admissible monomial ordering.
These systems are mathematically particular since one can systematically bring them back to linear algebra problems. More precisely, the algebra K[X_{1}, ..., X_{n}]/I is in fact a K-vector space of dimension equal to the number of complex roots of the system (counted with multiplicities). We chose to exploit this structure. Accordingly, computing a basis of K[X_{1}, ..., X_{n}]/I is essential. A Gröbner basis gives a canonical projection from K[X_{1}, ..., X_{n}] to K[X_{1}, ..., X_{n}]/I, and thus provides a basis of the quotient algebra and much other information more or less straightforwardly (the number of complex roots, for example).
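As a sketch of this vector-space structure, the following SymPy fragment counts the standard monomials under the "staircase" of leading monomials of a Gröbner basis; the system is a hypothetical example, and SymPy again stands in for the project's software:

```python
from itertools import product
from sympy import symbols, groebner

x, y = symbols('x y')
# Hypothetical zero-dimensional system: a circle and a parabola
G = groebner([x**2 + y**2 - 4, y - x**2], x, y, order='grevlex')

# Exponent vectors of the leading monomials of the basis elements
leads = [p.monoms(order='grevlex')[0] for p in G.polys]

# The monomials not divisible by any leading monomial form a K-basis
# of K[x, y]/I; their number is the number of complex roots, counted
# with multiplicities.
dim = sum(
    1
    for e in product(range(8), repeat=2)
    if not any(all(ei >= li for ei, li in zip(e, l)) for l in leads)
)
print(dim)  # 4
```

Here the bound range(8) is a crude cap that is safe because the staircase is finite for a zero-dimensional system.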
The use of this vector-space structure is well known and at the origin of one of the best-known algorithms of the field: it allows one to deduce, starting from a Gröbner basis for any ordering, a Gröbner basis for any other ordering (in practice, a lexicographic basis, which is very difficult to compute directly). It is also common to certain semi-numerical methods, since it allows one to obtain quite simply (by an eigenvalue computation, for example) the numerical approximation of the solutions (this type of algorithm is developed, for example, in the INRIA Galaad project).
Contrary to what is written in some of the literature, the computation of Gröbner bases is not "doubly exponential" for all classes of problems. In the case of zero-dimensional systems, it is even shown to be simply exponential in the number of variables, for a degree ordering and for systems without zeros at infinity. Thus, an effective strategy consists in computing a Gröbner basis for a favorable ordering and then deducing, by linear algebra techniques, a Gröbner basis for a lexicographic ordering.
The case of zero-dimensional systems is also specific for triangular sets. Indeed, in this particular case, we have designed algorithms that allow one to compute them efficiently starting from a lexicographic Gröbner basis. Note that, in the case of zero-dimensional systems, regular triangular sets are Gröbner bases for a lexicographic order.
Many teams work on Gröbner bases and some use triangular sets in the case of zero-dimensional systems, but to our knowledge, very few carry the work through to a numerical resolution, and even fewer tackle the specific problem of computing the real roots. It is illusory, in practice, to hope to obtain in a reliable way a numerical approximation of the solutions straightforwardly from a lexicographic basis, or even from a triangular set. This is mainly due to the size of the coefficients in the result (rational numbers).
Our specificity is to carry out the computations to their end, thanks to two types of results:
the computation of the Rational Univariate Representation: we have shown that any zero-dimensional system, depending on the variables X_{1}, ..., X_{n}, can systematically be rewritten, without loss of information (multiplicities, real roots), in the form f(T) = 0, X_{i} = g_{i}(T)/g(T), i = 1...n, where the polynomials f, g, g_{1}, ..., g_{n} have coefficients in the same ground field as those of the system and where T is a new variable (independent from X_{1}, ..., X_{n});
efficient algorithms for solving (real root isolation and counting) univariate polynomials.
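A minimal sketch of the last step, assuming the system has already been reduced to a univariate polynomial f(T): the polynomial below is a hypothetical example, and SymPy's interval-based isolation stands in for the dedicated solver RS:

```python
from sympy import symbols, Poly

t = symbols('t')
# Once a zero-dimensional system is reduced to f(T) = 0, the real
# solutions are recovered by isolating the real roots of f.
f = Poly(t**3 - 2*t - 1, t)     # hypothetical example: (t + 1)(t^2 - t - 1)
intervals = f.intervals()       # isolating intervals with rational endpoints
print(len(intervals))  # 3 real roots
```

Each interval has rational endpoints and contains exactly one real root, so signs and root counts are certified, unlike a floating-point approximation.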
Thus, the use of innovative algorithms for Gröbner basis computations and Rational Univariate Representations (for the "shape position" case as well as for the general case) makes it possible to use zero-dimensional solving as a subtask in other algorithms.
When a system is positive-dimensional (with an infinite number of complex roots), it is no longer possible to enumerate the solutions. Therefore, the solving process reduces to decomposing the set of solutions into subsets which have a well-defined geometry. One may perform such a decomposition from an algebraic point of view or from a geometrical one, the latter meaning that multiplicities are not taken into account (the structure of the primary components of the ideal is lost).
Although there exist algorithms for both approaches, the algebraic point of view is presently out of reach of practical computations, and we restrict ourselves to geometrical decompositions.
When one studies the solutions in an algebraically closed field, the useful decompositions are the equidimensional decomposition (which consists in considering separately the isolated solutions, the curves, the surfaces, ...) and the prime decomposition (which decomposes the variety into irreducible components). In practice, our team works on algorithms for decomposing the system into regular separable triangular sets, which corresponds to a decomposition into equidimensional but not necessarily irreducible components. These irreducible components may eventually be obtained by using polynomial factorization.
However, in many situations one is looking only for real solutions satisfying some inequalities (P_{i} > 0 or P_{i} >= 0).
There are general algorithms for such tasks, which rely on Tarski's quantifier elimination. Unfortunately, these problems have a very high complexity, usually doubly exponential in the number of variables or in the number of blocks of quantifiers, and these general algorithms are intractable. It follows that the output of a solver should be restricted to a partial description of the topology or of the geometry of the set of solutions, and our research consists in looking for more specific problems, which are interesting for the applications and which may be solved with a reasonable complexity.
We focus on two main problems:
computing one point on each connected component of a semialgebraic set;
solving systems of equalities and inequalities depending on parameters.
The most widespread algorithm for computing sampling points in a semialgebraic set is the Cylindrical Algebraic Decomposition algorithm due to Collins. With slight modifications, this algorithm also solves the problem of quantifier elimination. It is based on the recursive elimination of the variables one after another, ensuring nice properties between the components of the studied semialgebraic set and the components of the semialgebraic sets defined by the polynomial families obtained by the elimination of variables. It is doubly exponential in the number of variables, and its best implementations are limited to problems in 3 or 4 variables.
Since the end of the eighties, alternative strategies with a single exponential complexity in the number of variables have been developed. They are based on the progressive construction of the following subroutines:
(a) solving zero-dimensional systems: this can be performed by computing a Rational Univariate Representation;
(b) computing sampling points in a real hypersurface: after some infinitesimal deformations, this is reduced to problem (a) by computing the critical locus of a polynomial mapping reaching its extrema on each connected component of the real hypersurface;
(c) computing sampling points in a real algebraic variety defined by a polynomial system: this is reduced to problem (b) by considering the sum of squares of the polynomials;
(d) computing sampling points in a semialgebraic set: this is reduced to problem (c) by applying an infinitesimal deformation.
On the one hand, the relevance of this approach is based on the fact that its complexity is asymptotically optimal. On the other hand, some important algorithmic developments have been necessary to obtain efficient implementations of subroutines (b) and (c).
During the last years, we focused on providing efficient algorithms for solving problems (b) and (c). The method used relies on finding a polynomial mapping reaching its extrema on each connected component of the studied variety, such that its critical locus is zero-dimensional. For example, in the case of a smooth hypersurface whose real counterpart is compact, choosing a projection on a line is sufficient. This method is called in the sequel the critical point method. We started by studying problem (b).
Even if we showed that our solution may solve new classes of problems, we have chosen to skip the reduction to problem (b), which is now considered as a particular case of problem (c), in order to avoid an artificial growth of the degree and the introduction of singularities and infinitesimals.
Putting the critical point method into practice in the general case requires dropping some hypotheses. First, the compactness assumption, which is in fact intimately related to an implicit properness assumption, has to be dropped. Second, algebraic characterizations of critical loci are based on non-degeneracy assumptions on the rank of the Jacobian matrix associated with the studied polynomial system. These hypotheses are not satisfied as soon as this system defines a non-radical ideal, a non-equidimensional variety, and/or a non-smooth variety. Our contributions consist in overcoming these obstacles efficiently, and several strategies have been developed.
The properness assumption can be dropped by considering the square of the distance function to a generic point instead of a projection function: indeed, each connected component contains at least a point minimizing this function locally. Performing a radical and equidimensional decomposition of the ideal generated by the studied polynomial system makes it possible to avoid some degeneracies of its associated Jacobian matrix. Finally, the recursive study of nested singular loci allows one to deal with the case of non-smooth varieties. These algorithmic advances yield a first algorithm with reasonable practical performance.
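The distance-function idea can be sketched on a hypothetical smooth curve, using the Lagrange condition for the squared distance to a generic point; SymPy's generic solver stands in for the dedicated algorithms:

```python
from sympy import symbols, solve

x, y = symbols('x y')
f = x*y - 1          # a smooth plane curve with two connected real components

# Critical points of the squared distance to the (generic enough) point
# (0, 0), restricted to f = 0; the Lagrange condition reads
# (x - 0)*df/dy - (y - 0)*df/dx = 0.
lagrange = x*f.diff(y) - y*f.diff(x)
crit = solve([f, lagrange], [x, y], dict=True)

# Keep the real critical points: at least one per connected component
real_pts = sorted((s[x], s[y]) for s in crit if s[x].is_real and s[y].is_real)
print(real_pts)  # [(-1, -1), (1, 1)]
```

Each connected component of the hyperbola contributes its closest point to the origin, illustrating why the critical locus is a valid set of sample points even when no projection on a line is proper.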
Since projection functions are linear while the distance function is quadratic, computing their critical points is easier. Thus, we have also investigated their use. A first approach consists in studying recursively the critical locus of projection functions on nested affine subspaces containing coordinate axes, combined with the study of their set of non-properness. A more efficient approach, avoiding the study of sets of non-properness, is obtained by considering iteratively projections on generic affine subspaces restricted to the studied variety, and fibers over arbitrary points of these subspaces intersected with the critical locus of the corresponding projection. The underlying algorithm is the most efficient we have obtained.
These algorithms are provided in the Maple library RAGLib, which is built upon the software packages Gb and RS. It contains functionalities for computing sample points in real algebraic varieties and in semialgebraic sets defined by non-strict inequalities, as well as the radical and equidimensional decomposition of an ideal. The experimental version of the most recent algorithm will be included in the next release, which is in preparation.
Most of the applications we recently solved (celestial mechanics, cuspidal robots, statistics, etc.) require the study of semialgebraic systems depending on parameters. Although we covered these subjects in an independent way, some general algorithms for the resolution of this type of system can be proposed from these experiments.
The general philosophy consists in studying the generic solutions separately from algebraic subvarieties (which we call from now on discriminant varieties) of dimension lower than that of the semialgebraic set considered. The study of the varieties thus excluded can be done separately to obtain a complete answer to the problem, or is simply neglected if one is interested only in the generic solutions, which is the case in some applications.
We recently proposed a new framework for studying basic constructible (resp. semialgebraic) sets defined as systems of equations and inequations (resp. inequalities) depending on parameters. Let us consider the basic semialgebraic set
S = {x in R^{n} : p_{1}(x) = 0, ..., p_{s}(x) = 0, f_{1}(x) > 0, ..., f_{l}(x) > 0}
and the basic constructible set
C = {x in C^{n} : p_{1}(x) = 0, ..., p_{s}(x) = 0, f_{1}(x) <> 0, ..., f_{l}(x) <> 0},
where the p_{i}, f_{j} are polynomials with rational coefficients. We use the following notations:
[U, X] = [U_{1}, ..., U_{d}, X_{d+1}, ..., X_{n}] is the set of indeterminates or variables, while U = [U_{1}, ..., U_{d}] is the set of parameters and X = [X_{d+1}, ..., X_{n}] the set of unknowns;
{p_{1}, ..., p_{s}} is the set of polynomials defining the equations;
{f_{1}, ..., f_{l}} is the set of polynomials defining the inequations in the complex case (resp. the inequalities in the real case);
for any u in C^{d}, we denote by φ_{u} the specialization U = u;
Π_{U} denotes the canonical projection on the parameters' space;
given any ideal I, we denote by V(I) the associated (algebraic) variety. If a variety is defined as the zero set of polynomials with rational coefficients, we call it a Q-algebraic variety; we extend this notation naturally in order to talk about Q-irreducible components, Q-Zariski closure, etc.;
for any set A, we denote by cl(A) its Zariski closure.
In most applications, the fibers of C as well as of S over a parameters' value u are finite and non-empty for almost all u. Most algorithms that study C or S (number of real roots w.r.t. the parameters, parameterizations of the solutions, etc.) compute in any case a Zariski closed set W such that, for any u outside W, there exists a neighborhood B of u with the following property:
the restriction of Π_{U} to Π_{U}^{-1}(B) ∩ C is an analytic covering of B; this implies that the elements of {f_{1}, ..., f_{l}} do not vanish (and so have constant sign in the real case) on the connected components of Π_{U}^{-1}(B) ∩ C.
We recently showed that the set of parameters' values for which there does not exist any neighborhood with the above analytic covering property is a Zariski closed set which can be computed exactly. We name it the minimal discriminant variety of C with respect to Π_{U}, and we also propose a definition in the case of non generically zero-dimensional systems.
Being able to compute the minimal discriminant variety makes it possible to reduce a problem depending on n variables to a similar problem depending on d variables (the parameters): it is sufficient to describe its complement in the parameters' space (or in the closure of the projection of the variety in the general case) to get the full information about the generic solutions (here generic means for parameters' values outside the discriminant variety).
Then being able to describe the connected components of the complement of the discriminant variety in the parameters' space becomes a main challenge, which is strongly linked to the work done on positive-dimensional systems. Moreover, rewriting the systems involved and solving zero-dimensional systems are major components of the algorithms we plan to build.
We currently propose several computational strategies. An a priori decomposition into equidimensional components given as zeros of radical ideals simplifies the computation and the use of discriminant varieties. This preliminary computation is however sometimes expensive, so we are developing adaptive solutions where such decompositions are called by need. The main progress is that the resulting methods are fast on easy (generic) problems and slower on problems with strong geometric contents.
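On a one-parameter univariate family (a hypothetical toy example), the minimal discriminant variety is just the zero set of the classical discriminant, and the number of real roots is constant on each connected component of its complement:

```python
from sympy import symbols, discriminant, solve, real_roots, Poly

x, u = symbols('x u')
# Parametric system: x^2 + u*x + 1 = 0, with parameter u.
# Its discriminant variety is the zero set of the discriminant in u.
W = solve(discriminant(x**2 + u*x + 1, x), u)
print(sorted(W))  # [-2, 2]

# Outside W, the number of real roots is constant on each connected
# component of the parameter space: (-oo, -2), (-2, 2), (2, oo).
counts = [len(real_roots(Poly(x**2 + u0*x + 1, x))) for u0 in (-3, 0, 3)]
print(counts)  # [2, 0, 2]
```

For genuine multivariate parametric systems the discriminant variety is of course not a single resultant, which is where the algorithms described above come in.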
We also defined (large) discriminant varieties of C with respect to Π_{U} as being any Zariski closed set W containing the minimal discriminant variety of C with respect to Π_{U} (the minimal discriminant variety is the smallest discriminant variety, and it is uniquely defined).
The existing implementations of algorithms able to "solve" (to get some information about the roots of) parametric systems all compute (directly or indirectly) discriminant varieties, but none computes optimal objects (the minimal discriminant variety). The consequence is that their output (case distinctions w.r.t. parameters' values) is huge compared with the results we can provide.
Algorithms based on "Comprehensive Gröbner bases" also compute (implicitly or explicitly) discriminant varieties. In the case of parametric systems, such a discriminant variety contains the parameters' values for which a Gröbner basis does not specialize properly. Again, it is far from optimal, since it contains the parameters' values where the staircase varies, which depends on the strategy used (for example the choice of a monomial ordering).
Applications are fundamental for our research for several reasons.
The first one is that they are the only source of fair tests for the algorithms. In fact, the complexity of the solving process depends very irregularly on the problem itself. Therefore, random tests do not give an accurate idea of the practical behavior of a program, and the complexity analysis, when possible, does not necessarily provide realistic information.
A second reason is that, as noted above, we need real-world problems to determine which specifications of algorithms are really useful. Conversely, it is frequently by solving specific problems through ad hoc methods that we find new algorithms with a general impact.
Finally, obtaining successes with problems which are intractable by the other known approaches is the best proof of the quality of our work.
On the other hand, there is a specific difficulty. The problems which may be solved with our methods may be formulated in many different ways, and their usual formulation is rarely well suited for polynomial system solving or for exact computations. Frequently, it is not even clear that the problem is purely algebraic, because researchers and engineers are used to formulate them in a differential way or to linearize them.
Therefore, our software cannot be used as black boxes, and we have to understand the origin of the problem in order to translate it into a form which is well suited for our solvers.
It follows that many of our results, published or in preparation, are classified in scientific domains which are different from ours, like cryptography, error correcting codes, robotics, signal processing, statistics or biophysics.
The idea of using multivariate (quadratic) equations as a basis for building public key cryptosystems appeared with the Matsumoto-Imai cryptosystem. This system was first broken by Patarin and, shortly after, Patarin proposed to repair it and thus devised the hidden field equation (HFE) cryptosystem.
The basic idea of HFE is simple: build the secret key as a univariate polynomial S(x) over some (big) finite field, typically GF(2^n). Clearly, such a polynomial can be easily evaluated; moreover, under reasonable hypotheses, it can also be "inverted" quite efficiently. By inverting, we mean finding any solution to the equation S(x) = y, when such a solution exists. The secret transformations (decryption and/or signature) are based on this efficient inversion. Of course, in order to build a cryptosystem, the polynomial S must be presented as a public transformation which hides the original structure and prevents inversion. This is done by viewing the finite field as a vector space over GF(2) and by choosing two linear transformations of this vector space, L_{1} and L_{2}. Then the public transformation is the composition of L_{1}, S and L_{2}. Moreover, if the exponents of all the terms in the polynomial S(x) have Hamming weight 2 (in base 2), then it is obvious that all the (multivariate) polynomials of the public key are of degree two.
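The reason weight-2 exponents yield quadratic public equations is that the Frobenius map x → x² is linear over GF(2), so x^(2^i + 2^j) = x^(2^i) · x^(2^j) is a product of two linear maps. A minimal sketch in the toy field GF(8) = F2[t]/(t³ + t + 1), with elements encoded as 3-bit integers (an illustrative choice, not part of any actual HFE instance):

```python
# Elements of GF(8) = F2[t]/(t^3 + t + 1) encoded as 3-bit integers;
# addition in this field is XOR.
def gf8_mul(a, b):
    r = 0
    for i in range(3):          # carry-less (polynomial) multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce modulo t^3 + t + 1 (bits 0b1011)
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def frobenius(a):
    return gf8_mul(a, a)        # x -> x^2

# The Frobenius map is additive, hence GF(2)-linear in the coordinates:
assert all(frobenius(a ^ b) == frobenius(a) ^ frobenius(b)
           for a in range(8) for b in range(8))
```

Since every x → x^(2^i) is linear, a secret polynomial whose exponents are sums 2^i + 2^j gives coordinate functions of degree two.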
By using fast algorithms for computing Gröbner bases, it was possible to break the first HFE challenge (real cryptographic size 80 bits and a symbolic prize of 500 US$) in only two days of CPU time. More precisely, we used the F_{5}/2 version of the fast F_{5} algorithm for computing Gröbner bases (implemented in C). The algorithms available up to then (Buchberger) were extremely slow and could not have been used to break the code (they would have needed at least a few centuries of computation). The new algorithm is thousands of times faster than the previous ones. Several matrices have to be reduced (echelon form) during the computation: the biggest one has no less than 1.6 million columns and requires 8 gigabytes of memory. Implementing the algorithm thus required significant programming work, especially for efficient memory management.
A new result is that the weakness of the systems of equations coming from HFE instances can be explained by the algebraic properties of the secret key (work presented at Crypto 2003 in collaboration with A. Joux). From this study, we are able to predict the maximal degree occurring in the Gröbner basis computation, so that we can establish precisely the complexity of the Gröbner attack and compare it with the theoretical bounds.
Since it is easy to transform many cryptographic problems into polynomial equations, our group is in a position to apply this general method to other cryptosystems. Thus we have a new general cryptanalysis approach, called algebraic cryptanalysis. The team is currently testing the robustness of various cryptographic primitives using this approach , , . For instance, we are investigating the security of nonlinear filter generators (with J.-C. Faugère and L. Perret) in collaboration with the DGA (Celar).
Another relevant tool in the study of cryptographic problems is the LLL algorithm which is able to compute in polynomial time a “good” approximation for the shortest vector problem. Since a Gröbner basis can be seen as the set of smallest polynomials in an ideal with respect to the divisibility of leading terms, it is natural to compare both algorithms: an interesting link between LLL (polynomial version) and Gröbner bases was suggested by a member of our group.
A standard algorithm for implementing the arithmetic of Jacobian groups of curves is LLL. By replacing LLL with the FGLM algorithm, we establish a new structure theorem for Gröbner bases; consequently, on a generic input, we were able to establish explicit and optimized formulas for the basic arithmetic operations in the Jacobian groups of C_{34} curves .
As an application of the LLL algorithm, we have presented an algorithm for converting a Gröbner basis of an ideal with respect to any given ordering into a Gröbner basis with respect to any other ordering. This algorithm is based on a modified version of the LLL algorithm. In the worst case, the theoretical complexity of this algorithm is not necessarily better than that of the FGLM algorithm; but when the output (the final Gröbner basis) is small, this algorithm is experimentally more efficient.
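The effect of such an ordering change can be illustrated with sympy (a toy substitute for FGLM or the LLL-based conversion; the system below is a hypothetical example): a basis for a degree-compatible ordering is cheaper to compute, while a lex basis triangularizes the system for solving.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]

G_grevlex = groebner(F, x, y, order='grevlex')  # good for computing
G_lex = groebner(F, x, y, order='lex')          # good for solving

# The lex basis triangularizes the system: its last element is a
# univariate polynomial in y, namely y**4 - 5*y**2 + 4.
```

In zero-dimensional situations like this one, computing in grevlex and then converting to lex is the standard strategy that FGLM-type algorithms make efficient.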
The (parallel) manipulators we study are general parallel robots: the hexapods are complex mechanisms made up of six (often identical) kinematic chains, of a base (fixed rigid body including six joints or articulations) and of a platform (mobile rigid body containing six other joints).
The design and the study of parallel robots require the resolution of direct geometrical models (computation of the absolute coordinates of the joints of the platform knowing the position and the geometry of the base, the geometry of the platform as well as the distances between the joints of the kinematic chains at the base and the platform) and inverse geometrical models (distances between the joints of the kinematic chains at the base and the platform knowing the absolute positions of the base and the platform).
Since the inverse geometrical models can be easily solved, we focus on the resolution of the direct geometrical models.
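The inverse model is indeed straightforward: once the pose of the platform is known, each leg length is simply the Euclidean distance between a base joint and the corresponding platform joint. A minimal sketch (the joint coordinates below are hypothetical):

```python
import math

def leg_lengths(base_joints, platform_joints_world):
    """Inverse geometric model: distance between each base joint and the
    corresponding platform joint, both given in world coordinates."""
    return [math.dist(b, p) for b, p in zip(base_joints, platform_joints_world)]

# Toy example with two legs (a real hexapod has six):
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
platform = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
lengths = leg_lengths(base, platform)   # [2.0, 2.0]
```

The direct model inverts this map, which is where the algebraic difficulty (up to 40 solutions) lies.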
The study of the direct geometrical model is a recurrent activity for several members of the project. One can say that the progress achieved in this field perfectly illustrates the evolution of the methods for the resolution of algebraic systems. Interest in this subject is longstanding. The first work in which members of the project took part primarily concerned the study of the number of (complex) solutions of the problem , . The results were often illustrated by Gröbner bases computed with the Gb software. One of the remarkable points of this study is certainly the classification suggested in . The next efforts were related to the real roots and the effective computation of the solutions . The studies then continued following the various algorithmic advances, until the tools developed made it possible to solve nonacademic problems. In 1999, these efforts were concretized by an industrial contract with the SME CMW (Constructions Mécaniques des Vosges-Marioni) for studying a robot dedicated to machine tools.
We conceived, in collaboration with the COPRIN project, a prototype simulator for validating a fixed trajectory, i.e.:
check that the trajectory is nonsingular for a series of functions modeling the lengths of the legs: let us recall that, for given values of the legs' lengths, there exist up to 40 possible positions. To check that the trajectory is nonsingular, one must ensure, for example, that two possible trajectories do not intersect (in which case the robot cannot be controlled);
measure without ambiguity the difference between two trajectories corresponding to different legs' length functions (this is a tool for checking "numerical" singularities).
This tool is unique in the world: competing solutions exist but can only treat particular robots (planar, symmetrical, fewer joints, etc.). It is necessary to know how to solve the general case, because a small modification of the design parameters (unavoidable in practice) has serious consequences on the behavior of the robot. For example, the theoretical robot we use for our study (based on the left-hand parallel manipulator due to J.-P. Merlet) admits at most 36 solutions for the direct geometrical model (i.e. up to 36 possible trajectories for fixed legs' length functions), whereas the actually built robot (with small errors on the positions of the joints) has up to 40 possible positions.
The main algorithmic tool present in this simulator (partially presented in ) is a hybrid method (mixing computer algebra, numerical computation and interval analysis) for the resolution of the direct geometrical model.
A part of the work was to develop a seminumerical method (based on Newton's method), powerful in terms of computation time (4000 computations per minute) and certified, i.e. always returning a correct result: the answer is either a set of numerical values with a certified precision or a failure message. The strategy used combines interval arithmetic and convergence results. Failure remains exceptional (less than 10 percent of the practical problems) and, when it occurs, the result is obtained using a special version of the F_{4} algorithm for Gröbner bases computation and an optimized version (adapted to that particular class of systems) of the Rational Univariate Representation algorithm.
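The certification idea behind such a seminumerical method can be sketched with an interval Newton step: if the Newton image of an interval is contained in the interval, a unique root is certified inside it. A toy version for f(x) = x² − 2 on [1, 2] (illustrative only, not the method implemented in the simulator):

```python
def interval_newton_step(lo, hi):
    """One interval Newton step for f(x) = x**2 - 2 on [lo, hi] with lo > 0.
    The derivative enclosure is f'([lo, hi]) = [2*lo, 2*hi], so
    N([lo, hi]) = m - f(m) / [2*lo, 2*hi] for the midpoint m."""
    m = (lo + hi) / 2
    fm = m * m - 2
    # divide fm by the derivative interval [2*lo, 2*hi] (all positive here)
    q = sorted([fm / (2 * lo), fm / (2 * hi)])
    return m - q[1], m - q[0]

lo, hi = interval_newton_step(1.0, 2.0)
# N([1, 2]) = [1.375, 1.4375] is contained in [1, 2], so a unique root of
# x**2 - 2 (namely sqrt(2)) is certified to lie in [1, 2].
assert 1.0 <= lo <= hi <= 2.0
```

Iterating the step contracts the interval quadratically towards the root; when the containment test fails, a method like the one above must fall back to exact algebraic computations.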
Our simulator has been used to diagnose the problems related to the solutions currently employed (CAD, lookahead, algorithms for interpolating the trajectories, etc).
Industrial robotic (serial) manipulators with 3 degrees of freedom are currently designed with very simple geometric rules on the design parameters; the ratios between them are always of the same kind. In order to enlarge the possibilities of such manipulators, it may be interesting to relax some constraints on the parameters.
However, the diversity of the tasks to be done leads to the study of other types of robots whose design parameters differ from what is usual and which may have new properties, such as stability or the existence of new kinds of trajectories.
An important difficulty slows down the industrial use of such new robots: recent studies ( , , and ) showed that they may have a behavior which is qualitatively different from that of the robots currently used in industry and allows new changes of posture. These robots, called cuspidal, cannot be controlled like the others. The majority of robots are in fact cuspidal: the industrial robots currently on the market form a very restricted subclass of all possible robots.
A full characterization of all cuspidal robots is of great interest for the designer and the user. Such a project forms part of a current trend in robotics which consists in designing a robot so that its performance is optimal for a given application while preserving the possibility of using it for another task, that is to say, specializing it as much as possible for an application in order to reduce its cost and increase its operational safety.
The study of the behavior at a change of posture is identical, from the computer algebra point of view, to solving a system of equalities and inequalities depending on three or four parameters which correspond to the design parameters of this kind of robots. The method we initially used was ad hoc, and no known automatic computer algebra method was able to solve the problem completely before the work done in collaboration with the COPRIN (INRIA Sophia), IRMAR (University of Rennes I) and IRCCyN (CNRS, Nantes) teams.
From a robotic point of view, the result obtained is a full classification of a class of serial robots with three degrees of freedom according to their cuspidal character. Since then, toward the end of the Math-STIC project "Robots Cuspidaux", these results were simplified and analyzed to allow a better description of the workspace of such mechanisms.
The computations done for this application were also critical for the development of general and systematic methods for solving parametric systems. We have shown that these general methods can now be used in place of ad hoc computations to obtain the same classification, and a recent experiment even shows that they make it possible to relax one more parameter and thus to solve a more general problem.
Some problems in signal theory are naturally formulated in terms of algebraic systems. In , we studied the Kovacevic-Vetterli family of filter banks. To be used for image compression, a wavelet transformation must be defined by a function having a maximal number of partial derivatives vanishing at the corners of the image. These conditions can be translated into polynomial systems that can be solved with our methods. We showed that, to get physically acceptable solutions, it was necessary to choose the number of conditions so that the solution space is of dimension 0, 2 or 4 (according to the size of the filter). This result (a parametric family of filters) is the subject of a patent . To exploit these filters in practice, it remains to choose the best transformation according to non-algebraic criteria, which is easily done with traditional optimization tools (with a reduced number of variables).
As for most of the applications on which we work, it took more than three years to obtain concrete results bringing real practical progress (the results mentioned in are partial), and still a few more years to be able to disseminate information towards our community . Our software tools are now used to solve related problems .
Our activity in signal processing started again a few months ago through a collaboration with the APICS project (collaboration with F. Seyfert) on the synthesis and identification of hyperfrequency filters made of coupled resonant cavities. It is now part of our research goals.
One specificity of computer algebra is the manipulation of huge objects whose size varies along the algorithm. Having a specific memory manager, adapted to the objects handled in the various implementations, is thus essential. Based on a concept suggested by J.-C. Faugère in his PhD thesis, several versions implemented in C are used in different software packages of the project (GB, FGb, RS) as well as in implementations due to collaborators (F. Boulier, LIFL). The various implementations are very simple, and it seems preferable to describe the process precisely, along with its use in some key situations, than to propose a standardized implementation as a library.
See
http://
UDX is a software package for binary data exchange. It was initially developed to show the power of a new protocol, the object of a patent by INRIA and UPMC . The resulting code, written in ANSI C (9,500 lines), is very portable and very efficient, even when the patented protocol is not used. UDX is composed of five independent modules:
base: optimized system of buffers and synchronization of the input and output channels;
supports: read/write operations on various supports (sockets, files, shared memory, etc.);
protocols: various exchange protocols (patented protocol, XDR, etc.);
exchange of composite types: floating-point numbers (single and double precision), multiprecision integers, rational numbers;
interfaces: user interfaces implementing high level callings to the four other modules.
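The kind of exchange such modules implement can be sketched in a few lines: XDR, one of the protocols listed above, encodes doubles as big-endian IEEE-754, which Python's `struct` reproduces directly (a toy sketch; UDX itself is C and also supports its patented protocol):

```python
import struct

def pack_doubles(values):
    # XDR-style encoding: big-endian IEEE-754 double precision
    return struct.pack('>%dd' % len(values), *values)

def unpack_doubles(buf):
    # each double occupies 8 bytes
    return list(struct.unpack('>%dd' % (len(buf) // 8), buf))

data = [1.5, -2.25, 3.0]
assert unpack_doubles(pack_doubles(data)) == data
```

Fixing the byte order in the wire format is what makes such an exchange portable across heterogeneous machines.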
MPFI is a library for multiprecision interval arithmetic, written in C (approximately 1,000 lines) and based on MPFR. It is developed in collaboration with N. Revol (ARENAIRE project). Initially, MPFI was developed for the needs of a new hybrid algorithm for the isolation of the real roots of polynomials with rational coefficients. MPFI contains the same operations and functions as MPFR; the code is available and documented.
MPAI is a library for computing with algebraic infinitesimals. The infinitesimals are represented as truncated series. The library provides all the arithmetic functions needed to perform computations with infinitesimals. The interface is both GMP- and RS-compliant. It is implemented in the C language and represents approximately 1,000 lines of code. The algorithms provided in MPAI include Karatsuba's product and the short product. The code is available.
In order to ease the use of the various software packages developed in the project, conventions for the exchange of ASCII and binary files were developed; they allow a flexible use of the Gb, FGb or RS servers.
To make the use of our servers from general computer algebra systems such as Maple or MuPAD transparent, we currently propose a common distribution for Gb, FGb and RS including the servers as well as the interfaces for Maple and MuPAD. The instructions are illustrated by concrete examples and a simple installation process.
Gb is one of the most powerful software packages for computing Gröbner bases currently distributed. Implemented in C/C++ (approximately 100,000 lines), it has been distributed since 1994 in the form of specific servers (direct computations, changes of orderings, Hilbert function, etc.). The initial interface (an interactive command system) was abandoned for lighter solutions (ASCII interface, udx protocol) which are more powerful but, for the moment, more rudimentary. With the new algorithms proposed by J.-C. Faugère (F_{n}, n > 1), Gb is still maintained but is no longer developed. Indeed, since the data structures as well as the basic algorithms needed to implement these new methods are radically different from the previous ones, the Gb servers will be gradually replaced by FGb servers. The existing interface prototypes (ASCII, Maple, MuPAD, RS), initially based on Gb, were homogenized in order to provide a framework which can evolve towards FGb in a transparent way. They will be kept, and future evolutions will then follow algorithmic progress.
As mentioned above, current implementations of the F_{5} algorithm depend on many options. For efficiency reasons, it is currently preferable to compile specific servers, setting algorithm parameters such as matrix sizes and linear algebra strategies for sparse matrices by hand. This has already been done successfully for path planning problems (parallel robots), and it helps to understand the main constraints in order to provide, in the future, software solutions that are independent of the type of system to be solved.
RS is a software package dedicated to the study of the real roots of algebraic systems. It is entirely developed in C (approximately 100,000 lines) and succeeds RealSolving, developed during the European projects PoSSo and FRISCO. RS mainly contains functions for counting and isolating the real zeros of zero-dimensional systems. The user interfaces of RS are entirely compatible with those of Gb/FGb (ASCII, MuPAD, Maple). RS has been used in the project for several years, and several development versions have been installed by numerous other teams. Future evolutions will depend on algorithmic progress and users' needs (many internal functions are exported on demand).
RAGLib (Real Algebraic Geometry Library) is a Maple library of symbolic algorithms devoted to problems of effective real algebraic geometry and, more particularly, to the study of the real solutions of polynomial systems of equations and inequalities. It contains algorithms performing:
the equidimensional decomposition of an ideal generated by a polynomial family;
the emptiness test of a real algebraic variety defined by a polynomial system of equations;
the computation of at least one point in each connected component of a real algebraic variety defined by a polynomial system of equations;
the emptiness test of a semialgebraic set defined by a polynomial system of equations and non-strict inequalities;
the computation of at least one point in each connected component of a semialgebraic set defined by a polynomial system of equations and non-strict inequalities.
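The critical point idea behind such sampling routines can be sketched on a toy hypersurface with sympy (illustrative only, not RAGLib's algorithms): the critical points of a projection restricted to the variety yield at least one point per connected component. For the circle x² + y² = 1 and the projection onto the x-axis:

```python
from sympy import symbols, solve

x, y = symbols('x y', real=True)
f = x**2 + y**2 - 1

# Critical points of the projection (x, y) -> x restricted to {f = 0}:
# solve f = 0 together with df/dy = 0.
pts = solve([f, f.diff(y)], [x, y], dict=True)
# pts contains (-1, 0) and (1, 0): at least one point on the circle
```

For singular or positive-dimensional-fiber situations the actual algorithms are considerably more subtle, but the projection-and-critical-point mechanism is the same.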
As soon as they come from real applications, the polynomials resulting from elimination processes (Gröbner bases, triangular sets) are very often too large to be studied with general computer algebra systems.
In the case of polynomials in two variables, a certified drawing is sufficient in many cases to solve the studied problem (this was the case, in particular, for some applications in celestial mechanics). This type of drawing is now possible thanks to the various tools developed in the project.
Two components are currently under development: the computation routine (taking as input the polynomial function and returning a set of points) is stable (about 2,000 lines in the C language, using the internal libraries of RS) and can be used as a black box in standalone mode or through Maple; the drawing routine is under study.
The purpose of the TSPR project (Trajectory Simulator for Parallel Robots) is to homogenize and export the tools developed within the framework of our applications in parallel robotics. The software components of TSPR (about 1,500 lines) are primarily written in C following the standards of RS and FGb, with algorithms implemented in Maple using the interface prototypes for FGb and RS. The encapsulation of all these components in a single distribution is available but not downloadable.
Prototypes of components for the real-time resolution of certain types of systems now also use the ALIAS library developed in the INRIA project COPRIN.
The developed tool (about 11,000 lines of C++) makes possible the use of software installed on a network, which then becomes usable even if it is not ready to be distributed; this allows the experimental validation of algorithms proposed in articles, and allows an external user to get a more precise idea of the possibilities offered by recent algorithmic progress. It will be useful in our project to export some of our prototypes.
NetTask is very flexible since it does not require any change in the software to be demonstrated. It is compatible with current safety requirements and contains some powerful tools:
allocation of the tasks launched by the users according to the availability of the software and to the load of the nodes on a given network of machines;
a queueing system for the tasks;
complete control of the launched tasks by the user himself;
many configuration options, such as the declaration of the machines and tasks available on a given network, or the maximum number of tasks per user.
We show how to use the OpenMP API and POSIX Threads efficiently to achieve scalability. On multicore architectures, we obtain a speedup equivalent to the number of cores for the addition and multiplication of univariate polynomials, even for moderate degrees.
Variants of our general framework for solving zero-dimensional systems have been designed for specific uses in challenging applications such as the study of ridges in computational geometry, the resolution of the direct kinematics problem for parallel robots, or the synthesis of hyperfrequency filters .
If π_U denotes the canonical projection onto the parameters' space, solving the system leads to computing submanifolds of the parameter space over which the restriction of π_U is an analytic covering (we say that such a submanifold has the covering property). This guarantees that the cardinality of the fibers of π_U is locally constant on the submanifold and that its preimage is a finite collection of sheets which are all locally homeomorphic to the submanifold. In the case where the union of these submanifolds is dense in the parameters' space, known algorithms ( , , ) compute implicitly or explicitly a Zariski closed subset W such that any submanifold of the complement of W has the covering property.
We introduce the discriminant varieties of the system with respect to π_U, which are algebraic sets with the above property. We then show that the set of parameters' values which do not have any neighborhood with the covering property is a Zariski closed set; this defines the minimal discriminant variety of the system with respect to π_U, and we propose an algorithm to compute it efficiently. Thus, solving the system is reduced to describing the complement of the minimal discriminant variety, which can be done using critical point methods such as in or partial CAD based strategies .
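For a single parametric equation, the minimal discriminant variety reduces to the classical discriminant; e.g. for x² + bx + c with parameters (b, c), a toy sympy sketch (not the paper's algorithm):

```python
from sympy import symbols, discriminant

x, b, c = symbols('x b c')
D = discriminant(x**2 + b*x + c, x)   # b**2 - 4*c

# On each connected component of the complement of {b**2 - 4*c = 0}
# in the (b, c)-plane, the number of real roots is constant:
# two roots when b**2 - 4*c > 0, none when b**2 - 4*c < 0.
```

The general case replaces this single discriminant by an algebraic set that also accounts for non-properness and for the inequations of the system.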
Discriminant varieties have been successfully used for certifying a numerical global optimization process , .
Let f be a polynomial in Q[X_1, ..., X_n] of degree D. We focus on testing the emptiness of, and computing at least one point in each connected component of, the semialgebraic set defined by f > 0 (or f < 0, or f ≠ 0). To this end, the problem is reduced to computing at least one point in each connected component of the hypersurface defined by f - e = 0 for e positive and small enough. We provide an algorithm allowing us to determine a positive rational number e which is small enough in this sense. This is based on the efficient computation of the set of generalized critical values of the mapping defined by f, which is the union of the classical set of critical values of f and the set of asymptotic critical values of f. In , we provide a first algorithm allowing us to compute the set of generalized critical values of f. This one is based on the computation of the critical locus of a projection on a plane P. This plane P must be chosen such that some global properness properties of some projections are satisfied. First practical experiments have shown the relevance of this approach.
Nevertheless, these global properness properties, which are generically satisfied, can be difficult to check in practice. Moreover, choosing the plane P randomly induces a growth of the coefficients appearing in the computations.
This latter algorithm is the most efficient in practice to compute generalized critical values. We also give complexity estimates for the algorithms designed in and .
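For a proper polynomial such as f = x² + y² − 2, the generalized critical values reduce to the classical ones, and any e avoiding them yields a valid deformation; a toy sympy sketch of the two steps (illustrative only, not the algorithms of the papers):

```python
from sympy import symbols, solve

x, y = symbols('x y', real=True)
f = x**2 + y**2 - 2

# Classical critical values: evaluate f at the zeros of its gradient.
crit = solve([f.diff(x), f.diff(y)], [x, y], dict=True)
vals = sorted({f.subs(s) for s in crit})      # [-2]; f is proper here, so
                                              # there are no asymptotic ones
# Any e > 0 avoids them; sample {f > 0} on the deformed hypersurface
# f - e = 0, e.g. with e = 1 (cutting with the line y = 0 for simplicity):
pts = solve([f - 1, y], [x, y], dict=True)    # x = -sqrt(3) and x = sqrt(3)
```

The difficulty addressed by the papers is precisely the asymptotic critical values, which appear when f is not proper and which this toy example avoids.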
The implementations resulting from this work are integrated in the RAGlib Maple package. Their efficiency has allowed us to solve real-life applications. In particular, they have been used for the problem of determining the topology changes of the Voronoi diagram of three lines in space , (see below). This illustrates that an important general application of the critical point method is proving that a multivariate polynomial is always positive or never negative.
In , we consider the case of polynomial systems of inequalities defining bounded semialgebraic sets. The case of bounded semialgebraic sets appears frequently in applications. We provide an algorithm computing at least one point in each connected component of such a set. This algorithm follows the spirit of the ones given in , : the infinitesimal deformations occurring in previous works are replaced by computations of critical values of some polynomial mappings. Finally, the initial problem is reduced to computing sampling points in several real algebraic sets defined by polynomial systems with rational coefficients. The subsequent implementation is integrated in the RAGlib Maple package and allows us to deal with problems coming from computational geometry, robotics and artificial vision which are not tractable by the best implementations of the Cylindrical Algebraic Decomposition algorithm.
Many algorithms for solving polynomial systems involve the computation of the discriminant of a polynomial with respect to some variable. In particular, Collins' CAD has to compute and factor discriminants of discriminants. In , we show that the discriminant of a discriminant factors naturally into seven factors, four of which are generically equal to one. The three others have multiplicities of at least 1, 2 and 3, respectively, in the iterated discriminant. As these factors may be computed directly, this provides a dramatic improvement in the computation of the irreducible factors of the discriminant of a discriminant.
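The phenomenon can be observed on a toy example with sympy (chosen for illustration, not taken from the paper): for f = x³ + ax² + b, the discriminant with respect to x is −b(4a³ + 27b), and its discriminant with respect to a collapses to a single irreducible factor b with multiplicity 6.

```python
from sympy import symbols, discriminant, factor_list

x, a, b = symbols('x a b')
f = x**3 + a*x**2 + b

D1 = discriminant(f, x)       # -4*a**3*b - 27*b**2 = -b*(4*a**3 + 27*b)
D2 = discriminant(D1, a)      # iterated discriminant, a polynomial in b

const, factors = factor_list(D2)
# D2 is a constant times b**6: a single factor with high multiplicity
```

Knowing such factors and their multiplicities in advance avoids factoring a large polynomial from scratch.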
It is well known that, in the computation of Gröbner bases, an arbitrarily small perturbation in the coefficients of the polynomials may lead to a completely different staircase even if the roots of the polynomials change continuously. In , this phenomenon is called a pseudo singularity. We show how such a phenomenon may be detected and even "repaired" by adding a new variable and a binomial relation each time. To investigate how likely pseudo singularities are to happen in numerical computations of Gröbner bases, two algorithms, eBuchberger and eMatrixF5, are provided. These two algorithms are modified versions of Buchberger's algorithm and of the matrix F_{5} algorithm. Using these methods, we can compute "more stable" Gröbner bases of equivalent ideals (with the same set of zeros), and thus they are suitable for the computation of Gröbner bases of ideals generated by polynomials with floating-point coefficients. The main theorem of is that any monomial basis (containing 1) of the quotient ring can be found using the VSGB strategy; this theorem gives us great freedom to repair pseudo singularities. Experiments show that the algorithms can be used to solve some nontrivial problems.
Ridges are curves of extremal curvature and therefore encode important information used in segmentation, registration, matching and surface analysis. Our first result was to show that ridges on parametric polynomial surfaces can be viewed as the zero set of an implicit plane curve whose singular points are well identified by specialists (so-called umbilics or purple points). A consequence was that we were able to provide a certified drawing of such curves using the univariate capabilities of RS. A second result was to exploit the particular structure of the singular locus of this plane curve in order to propose a new algorithm for computing its topology (and thus the topology of the ridges) .
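Identifying the singular points of an implicit plane curve is itself a small polynomial system; e.g. for the cuspidal cubic y² = x³ (an illustrative curve, not an actual ridge curve):

```python
from sympy import symbols, solve

x, y = symbols('x y', real=True)
f = y**2 - x**3

# Singular points: common zeros of the curve and both partial derivatives.
sing = solve([f, f.diff(x), f.diff(y)], [x, y], dict=True)
# sing contains only the origin: the cusp of the curve
```

Once the singular points are isolated, computing the topology of the curve reduces to univariate real root isolation on vertical lines between them.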
We have a long-term collaboration with the VEGAS project, which began with the complete classification and parameterization of real quadric intersections , , .
This collaboration is now pursued with the study of the Voronoi diagram of linear structures in . This is a continuation of the preceding work, as the bisector of two linear objects is a quadric and the trisectors are thus intersections of quadrics.
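That the bisector of two lines is a quadric can be checked symbolically; a sympy sketch for the z-axis and a hypothetical skew line through (0, 1, 0) directed along x:

```python
from sympy import symbols, Matrix, expand, Poly

x, y, z = symbols('x y z', real=True)
p = Matrix([x, y, z])

def dist2(point, a, d):
    """Squared distance from `point` to the line through `a` with direction `d`."""
    v = (point - a).cross(d)
    return v.dot(v) / d.dot(d)

# Line 1: the z-axis; Line 2: through (0, 1, 0) with direction (1, 0, 0).
bisector = expand(dist2(p, Matrix([0, 0, 0]), Matrix([0, 0, 1]))
                - dist2(p, Matrix([0, 1, 0]), Matrix([1, 0, 0])))
# bisector == x**2 - z**2 + 2*y - 1: a quadric (degree-2) surface
assert Poly(bisector, x, y, z).total_degree() == 2
```

The trisector of three lines is therefore the intersection of two such quadrics, which is where the team's solving tools come into play.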
We have given, in , a complete description of the Voronoi diagram of three lines in general position, that is, lines that are pairwise skew and not all parallel to a common plane , . In particular, we show that the topology of the Voronoi diagram is invariant for three such lines. For the proof, we needed to rely heavily on the software of the team, especially the Gröbner basis engine FGb and the work on the critical point method (see above).
This opens the possibility of designing an efficiently implementable algorithm to compute the Voronoi diagram of a finite set of linear structures (points, segments, lines, polygons, polyhedra, polytopes, etc.). This was briefly described by D. Lazard at the workshop (without proceedings) WRSO (Workshop on Robust Shape Operations), Sophia-Antipolis, September 2007. This algorithm will involve, as basic operations, zero-dimensional solving and the computation of the Voronoi diagram of three basic objects, the case of three lines, just solved, being the most difficult.
In "Block Ciphers Sensitive to Gröbner Basis Attacks" (CT-RSA 2006), Buchmann, Pyshkin and Weinmann described two families of Feistel and SPN block ciphers called Flurry and Curry, respectively. These two families of ciphers are fully parametrizable and have a sound design strategy against classical statistical attacks (i.e. linear and differential attacks). In addition, the encryption process can easily be described by a set of algebraic equations. These ciphers are thus targets of choice for algebraic attacks. In particular, the authors have reduced the key-recovery problem to the problem of changing the ordering of a Gröbner basis. This attack, although more efficient than linear and differential attacks, remains quite limited. By using more efficient algorithms, we first propose a practical improvement of this attack , . However, this does not change the theoretical complexity of the attack. We have then investigated the possibility of using a small number of suitably chosen message/ciphertext pairs to improve algebraic attacks. It turns out that this approach makes it possible to go one step further in the (algebraic) cryptanalysis of Flurry and Curry. From our experiments, we estimate that this last approach is of polynomial-time complexity when the S-box is a power function. For instance, we have been able to break a 128-bit Flurry, with 7 rounds, in less than one hour. Note that this work was initiated by a previous study which was partially financed by a contract with the DGA.
In 2004, a new attack against SHA-1 was proposed by a team led by Wang. Our aim is to refine and improve Wang's attack by using algebraic techniques. To do so, we introduce new notions (semi-neutral bit, adjuster) and then propose an improved message modification technique based on algebraic techniques. In the case of the 58-round SHA-1, the experimental complexity of our improved attack is 2^{31} SHA-1 computations, whereas Wang's method needs 2^{34} SHA-1 computations. We have found many new collisions for the 58-round SHA-1. We also study the complexity of our attack for the full SHA-1.
Synthesis and identification of hyperfrequency filters made of coupled resonant cavities. Our contribution to the ARC SILA is summarized in : we propose a new approach to the synthesis of coupling matrices for microwave filters. It advances on existing direct and optimization methods for coupling matrix synthesis in that it exhaustively discovers all possible coupling matrix solutions for a network when more than one exists. We illustrate this by carrying out the synthesis process for two asymmetric filters of 8th and 10th degree.
Global convergence of a numerical optimization process. The global convergence of a recently proposed constant modulus (CM) and cross-correlation (CC) based algorithm (CC-CMA) is studied in and . The convergence of this process, and more generally the global convergence analysis of stochastic gradient algorithms, including , is strongly related to the study of some parametric semi-algebraic sets. In , we show that CC-CMA can converge globally if the parameter which mixes the CM and CC terms is properly selected.
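The link with parametric semi-algebraic sets can be illustrated on a hypothetical one-dimensional cost (far simpler than the actual CC-CMA cost): the number of real critical points of a CM-like cost with a mixing parameter changes with the parameter value, and the boundary values of the parameter are given by a discriminant.

```python
from sympy import symbols, diff, discriminant, real_roots

w, lam = symbols('w lam', real=True)

# Hypothetical toy cost: a constant-modulus-like term (w^2 - 1)^2
# plus a mixing term lam*w standing in for the CC term.
J = (w**2 - 1)**2 + lam * w

# Critical points are the real roots of dJ/dw, a polynomial in w
# whose coefficients depend on the parameter lam.
dJ = diff(J, w)          # 4*w**3 - 4*w + lam

# The discriminant (a polynomial in lam) vanishes exactly where the
# number of real critical points changes: it partitions the parameter
# space into semi-algebraic cells with a constant number of minima.
print(discriminant(dJ, w))

# For lam = 0 there are three critical points (two minima, one maximum);
# for lam large enough only one critical point survives.
assert len(real_roots(dJ.subs(lam, 0))) == 3
assert len(real_roots(dJ.subs(lam, 10))) == 1
```

Choosing the mixing parameter inside the right cell is what rules out spurious local minima, which is the content of the global convergence result.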
We have worked for a long time on the direct kinematics problem for parallel robots. Our latest result consists in exploiting some linear relations very early in the computations, so that the difficult part is now reduced to the resolution of a nonlinear system of equations depending on 3 variables (instead of 7 in the best known alternatives). A direct consequence is that its resolution using purely algebraic methods now takes less than 1 second for any kind of robot, yielding all the possible solutions in a certified way .
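The certified resolution step can be sketched as follows, on a hypothetical 3-variable polynomial system (not the actual kinematics equations): eliminate down to one variable via a lex Gröbner basis, then isolate the real roots of the resulting univariate polynomial with exact arithmetic, so that the count of real solutions is certified rather than guessed numerically.

```python
from sympy import symbols, groebner, real_roots

x, y, z = symbols('x y z')

# Hypothetical 3-variable system standing in for the reduced
# kinematics equations (3 unknowns after the linear reductions).
F = [x**2 + y**2 + z**2 - 3,
     x*y + z - 2,
     x - y*z]

# Lex Groebner basis: triangular, the last polynomial is univariate in z.
G = groebner(F, x, y, z, order='lex')
p = G.exprs[-1]
assert p.free_symbols == {z}

# Exact real root isolation on the eliminated polynomial: the number
# of real roots is certified (exact rational arithmetic throughout).
roots = real_roots(p)
print(len(roots))
```

Here (1, 1, 1) is a solution of the toy system, so z = 1 is among the isolated roots; the same pipeline on the reduced kinematics system yields all assembly modes of the robot.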
A contract has been signed with the Canadian company Waterloo Maple Inc. in 2005. The objective is to integrate SALSA software into one of the most well-known general computer algebra systems (Maple).
The basic term of the contract is four years (renewable).
The new contract began in September 2007; its objective is to evaluate, on examples of realistic size, how to apply multivariate decomposition techniques to recover the secret key of some symmetric cryptosystems such as Trivium or Bivium. New algorithms for efficiently solving the problem of recovering a decomposition of multivariate systems were proposed in 2006 and 2007 by Faugère and Perret.
In collaboration with France Telecom and ENSTA.
This project is to be placed in the more general context of information protection. Its research areas are cryptography and symbolic computation. We are here essentially – but not exclusively – concerned with public key cryptography. One of the main issues in public key cryptography is to identify hard problems and to propose new schemes that are not based on number theory. Following this line of research, multivariate schemes have been introduced in the mid-eighties [Diffie and Fell 85, Matsumoto and Imai 85].
In order to evaluate the security of newly proposed schemes, strong and efficient cryptanalytic methods have to be developed. The main theme we shall address in this project is the evaluation of the security of cryptographic primitives by means of algebraic methods. The idea is to model a cryptographic primitive as a system of algebraic equations. The system is constructed in such a way as to have a correspondence between the solutions of this system and some secret information of the considered primitive. Once this modeling is done, the problem is then to solve an algebraic system. Up to now, Gröbner bases appear to yield the best algorithms to do so.
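A minimal sketch of this modeling, on a hypothetical toy multivariate quadratic system over GF(2) (far below cryptographic size): the secret key is the common zero of the public equations, and computing a lex Gröbner basis recovers it.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Hypothetical toy "public equations" over GF(2) whose unique common
# zero is the secret key (x, y) = (1, 0).
F = [x*y + x + y + 1,
     x*y + x + 1,
     x**2 + x,      # field equations x^2 = x, y^2 = y over GF(2)
     y**2 + y]

# A lex Groebner basis over GF(2) (modulus=2) collapses the system
# down to the secret key.
G = groebner(F, x, y, modulus=2, order='lex')
print(G.exprs)

# Every basis element vanishes on the secret (1, 0) modulo 2.
for g in G.exprs:
    assert g.subs({x: 1, y: 0}) % 2 == 0
```

Real primitives lead to systems with hundreds of variables; the security evaluation amounts to estimating the cost of exactly this Gröbner basis computation.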
Chinese Salsa is an associate team created in January 2006. It brings together most of the members of SALSA and researchers from Beihang University, Beijing (university and academy of science). The general objectives of Chinese Salsa are mainly the same as those of SALSA.
journal “Mathematics in Computer Science” (Birkhäuser). J.C. Faugère
journal “Cryptography and Communications – Discrete Structures, Boolean Functions and Sequences” (Springer). J.C. Faugère
Special issue of the Journal of Symbolic Computation on Gröbner Bases and Applications in Cryptology and Error Correcting Codes. Editors: J.C. Faugère, L. Perret.
RISC book series (Springer, Heidelberg), “Gröbner Bases, Coding, and Cryptography”. Editors: L. Perret (with T. Mora, S. Sakata, M. Sala, C. Traverso).
Special issue of the Journal of Symbolic Computation on Efficient Computation of Gröbner Bases. Editor: J.C. Faugère.
Special issue of the Journal of Symbolic Computation on Polynomial System Solving. Editors: J.C. Faugère, F. Rouillier.
Special issue of the journal “Mathematical Aspects in Computer Science”. Editor: F. Rouillier.
International Conference on Mathematical Aspects of Computer and Information Sciences (MACIS): J.C. Faugère and F. Rouillier
Parallel Symbolic Computation, PASCO 2007: J.C. Faugère
Advisory Board MEGA Conference: J.C. Faugère
MACIS 2007: F. Rouillier (general chair), M. Safey El Din (local chair)
MACIS 2008: F. Rouillier (chair of the Steering committee)
ECRYPT PhD Summer School on “Emerging Topics in Cryptographic Design and Cryptanalysis” (2007): L. Perret (with C. Cid)
SCC 2008: J.C. Faugère and L. Perret.
M. Safey El Din: Journées Nationales du Calcul Formel, 2007