The main objective of the SALSA project is to solve systems of polynomial equations and inequations. We emphasize algebraic methods, which are more robust and frequently more efficient than purely numerical tools.

Polynomial systems have many applications in various scientific domains, academic as well as industrial. However, much work is still needed to define specifications for the output of the algorithms that are well adapted to the problems.

The variety of these applications implies that our software needs to be robust. In fact, almost all the problems we deal with are numerically highly unstable, and therefore the correctness of the result needs to be guaranteed.

Thus, a key target is to provide software which is competitive in terms of efficiency while preserving certified outputs. Therefore, we restrict ourselves to algorithms which verify the assumptions made on the input and check the correctness of any random choices made during a computation, without sacrificing efficiency. Theoretical complexity analysis is only a preliminary step of our work, which culminates in efficient implementations designed to solve significant applications.

A consequence of our way of working is that many of our contributions are related to applied topics such as cryptography, error-correcting codes, robotics and signal theory. We emphasize that these applied contributions rely on a long-term and global management of the project, with clear and constant objectives leading to deep theoretical advances.

**Computer Algebra**. Best Student Paper Award at ISSAC 2010.

**Computer Algebra**. New complexity record for roadmap computation.

**Maple**. Maple 14 release: inclusion of the RAGlib package.

**Chinese-SALSA**: Joint LIAMA project ECCA (Reliable Software theme), INRIA/CNRS/UPMC/CAS.

For polynomial system solving, the mathematical specification of the result of a computation, in particular when the number of solutions is infinite, is itself a difficult problem. Sorting the most frequently asked questions appearing in the applications, one distinguishes several classes of problems, which differ either by their mathematical structure or by the meaning that one can give to the word "solving".

Some of the following questions have a different meaning in the real case and in the complex case; others are posed only in the real case:

zero-dimensional systems (with a finite number of complex solutions, which include the particular case of univariate polynomials); the questions are in general well defined (numerical approximation, number of solutions, etc.) and the mathematical objects handled are relatively simple and well known;

parametric systems; these are generally zero-dimensional for almost all values of the parameters. The goal is to characterize the solutions of the system (number of real solutions, existence of a parameterization, etc.) with respect to the parameter values.

positive-dimensional systems; for a direct application, the first question is the existence of zeros of a particular type (for example real, real positive, or in a finite field). The resolution of such systems can also serve as a black box for the study of more general problems (semi-algebraic sets, for example), and the information to be extracted is generally one point per connected component in the real case.

constructible and semi-algebraic sets; contrary to what happens numerically, the addition of constraints or inequalities complicates the problem. Even though semi-algebraic sets are the basic objects of real geometry, their automatic and effective study remains a major challenge. To date, the state of the art is poor, since only two classes of methods exist:

the Cylindrical Algebraic Decomposition, which basically computes a partition of the ambient space into cells on which the signs of a given set of polynomials are constant;

deformation-based methods that reduce the problem to solving algebraic varieties.

The first solution is limited in terms of performance (at most 3 or 4 variables) because of its recursive variable-by-variable treatment; the second is also limited, because of its use of a sophisticated arithmetic (formal infinitesimals).

quantified formulas; deciding efficiently whether a first-order formula is valid is certainly one of the greatest challenges in "effective" real algebraic geometry. However, this problem is relatively well circumscribed, since it can always be rewritten as a conjunction of (supposedly) simpler problems, such as the computation of one point per connected component of a semi-algebraic set.

As explained in some parts of this document, the unicity of the studied mathematical objects does not imply the unicity of the related algorithms. The priorities of our algorithmic work are generally dictated by the applications. Thus, the above items naturally structure the algorithmic part of our research topics.

For each of these goals, our work is to design the most efficient possible algorithms: there is thus a strong correlation between implementations and applications, but a significant part of the work is dedicated to the identification of black boxes allowing a modular approach to the problems. For example, the resolution of zero-dimensional systems is a prerequisite for the algorithms treating parametric or positive-dimensional systems.

An essential class of black boxes developed in the project does not appear directly in the objectives listed above: the "algebraic" or "complex" resolutions. These are mostly reformulations, more algorithmically usable, of the studied systems. One distinguishes two categories of complementary objects:

ideal representations; from a computational point of view, these are the structures used in the first steps;

variety representations; the algebraic variety, or more generally the constructible or semi-algebraic set, is the studied object.

To give a simple example, in the affine plane the variety {(0, 0)} can be seen as the zero set of more or less complicated ideals (for example ideal(X, Y), ideal(X^2, Y), ideal(X^2, XY, Y^3), etc.). The input which is given to us is a system of equations, i.e. an ideal. It is essential, in many cases, to understand the structure of this object in order to treat the degenerate cases correctly. A striking example is certainly the study of singularities. Returning to the preceding example, the variety is not singular, but this cannot be detected by a blind application of the Jacobian criterion (one could wrongly conclude that all the points are singular, contradicting, for example, Sard's lemma).
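The pitfall above can be made concrete with a small computer algebra sketch (illustrative only, using sympy rather than the project's software): the same variety {(0, 0)} presented by a non-radical ideal fools the Jacobian criterion, while the radical ideal does not.

```python
from sympy import Matrix, symbols

X, Y = symbols('X Y')

def jacobian_rank_at_origin(polys):
    # Rank of the Jacobian matrix of the generators, evaluated at (0, 0).
    J = Matrix(polys).jacobian([X, Y])
    return J.subs({X: 0, Y: 0}).rank()

# Non-radical ideal (X^2, Y): Jacobian [[2X, 0], [0, 1]] has rank 1 at the
# origin, so a blind criterion would declare the point singular.
print(jacobian_rank_at_origin([X**2, Y]))

# Radical ideal (X, Y) of the same variety: full rank 2, the point is smooth.
print(jacobian_rank_at_origin([X, Y]))
```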

The basic tools that we develop and use to understand algebraic and geometric structures automatically are, on the one hand, Gröbner bases (the best-known object used to represent an ideal without loss of information) and, on the other hand, triangular sets (an effective way to represent varieties).

Let us denote by K[X_1, ..., X_n] the ring of polynomials with coefficients in a field K and indeterminates X_1, ..., X_n, and by S = {P_1, ..., P_s} any subset of K[X_1, ..., X_n]. A point x is a zero of S if P_i(x) = 0 for all i in [1 ... s].

The ideal I generated by P_1, ..., P_s is the subset of K[X_1, ..., X_n] constituted by all the combinations Q_1 P_1 + ... + Q_s P_s with Q_i in K[X_1, ..., X_n]. Since every element of I vanishes at each zero of S, we denote by V_C(S) (resp. V_R(S)) the set of complex (resp. real) zeros of S, where R is a real closed field containing K and C its algebraic closure.

One of the main properties of Gröbner bases is to provide an algorithmic method for deciding whether a polynomial belongs to an ideal, through a reduction function denoted "Reduce" from now on.

If G is a Gröbner basis of an ideal I for any monomial ordering <, then:

a polynomial p belongs to I if and only if Reduce(p, G, <) = 0;

Reduce(p, G, <) does not depend on the order of the polynomials in the list G; thus, it is a canonical reduced expression modulo I, and the Reduce function can be used as a *simplification* function.
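As an illustration (a sympy sketch, not the team's implementation), the ideal membership test via reduction by a Gröbner basis can be run as follows; the system and the tested polynomials are arbitrary examples.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

p_in  = 2*x**2 - 1   # equals (x**2 + y**2 - 1) + (x - y)*(x + y): in the ideal
p_out = x + y        # does not vanish on the zeros of the system: not in it

print(G.contains(p_in))    # membership decided by reduction to 0
print(G.contains(p_out))
```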

Gröbner bases are computable objects. The most popular method for computing them is Buchberger's algorithm. It has several variants, and it is implemented in most general computer algebra systems, such as Maple or Mathematica. The computation of Gröbner bases using Buchberger's original strategies faces two kinds of problems:

(A) arbitrary choices: the order in which the computations are done has a dramatic influence on the computation time;

(B) useless computations: the original algorithm spends most of its time on reductions to zero.

For problem (A), J.C. Faugère proposed, with the F_4 algorithm, a new generation of powerful algorithms based on the intensive use of linear algebra techniques. In short, the arbitrary choices are left to computational strategies related to classical linear algebra problems (matrix inversions, linear systems, etc.).
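The linear-algebra viewpoint can be illustrated on a toy example (a hand-built sketch, not the actual F_4 algorithm): rows encode multiples of the input polynomials over a fixed list of monomials, and an echelon form of the matrix produces new elements of the ideal.

```python
from sympy import Matrix

# Input: f1 = x + y, f2 = x*y - 1.  Columns index the monomials
# [x^2, x*y, y^2, 1]; rows encode x*f1, y*f1 and f2.
M = Matrix([
    [1, 1, 0,  0],   # x*f1 = x^2 + x*y
    [0, 1, 1,  0],   # y*f1 = x*y + y^2
    [0, 1, 0, -1],   # f2   = x*y - 1
])

# Row reduction alone produces new ideal elements: the row (1, 0, 0, 1)
# encodes x^2 + 1 = x*f1 - f2, obtained without any symbolic division.
R, _ = M.rref()
print(R)
```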

For problem (B), J.C. Faugère proposed a new criterion for detecting useless computations. Under some regularity conditions on the system, it is proved that the algorithm never performs useless computations.

A new algorithm named F_5 was built using these two key results. Even though it still computes a Gröbner basis, the gap with other existing strategies is considerable. In particular, given the range of examples that become computable, Gröbner bases can now be considered a reasonably computable object in large applications.

We pay particular attention to Gröbner bases computed for elimination orderings, since they provide a way of "simplifying" the system (an equivalent system with a structured shape). A well-known property is that the zeros of the first non-null polynomial define the Zariski closure (the classical closure in the case of complex coefficients) of the projection onto the coordinate space associated with the smallest variables.

Such systems are algorithmically easy to use, for computing numerical approximations of the solutions in the zero-dimensional case, or for studying the singularities of the associated variety (triangular minors in the Jacobian matrices).

Triangular sets have a simpler structure but, except when they are linear, algebraic systems cannot in general be rewritten as a single triangular set; one then speaks of a decomposition of the system into several triangular sets.

(Comparison: lexicographic Gröbner bases vs. triangular sets.)

Triangular sets appear under various names in the field of algebraic systems. J.F. Ritt introduced them as characteristic sets of prime ideals in differential algebra. His constructive algebraic tools were adapted by W.T. Wu in the late seventies for geometric applications. The concept of a regular chain is adapted for recursive computations in a univariate way.

It provides a membership test and a zero-divisor test for the strongly unmixed-dimensional ideal it defines. Kalkbrener defined regular triangular sets and showed how to decompose algebraic varieties as unions of Zariski closures of the zeros of regular triangular sets. Gallo showed that the principal component of a triangular decomposition can be computed in O(d^{O(n^2)}) operations (n = number of variables, d = degree in the variables). During the 90s, implementations of various decomposition strategies multiplied, but they carry relatively heterogeneous specifications.

D. Lazard contributed to homogenizing the work done in this field by proposing a series of specifications and definitions gathering the whole of the former work. Two concepts essential to the use of these sets (regularity, separability) now make it possible both to establish a simple link with the studied varieties and to specify the computed objects precisely.

A remarkable and fundamental property, in the use we make of triangular sets, is that the ideals induced by regular and separable triangular sets are radical and equidimensional. These properties are essential for some of our algorithms. For example, having radical and equidimensional ideals allows us to compute the singular locus of a variety straightforwardly, by canceling the minors of appropriate dimension in the Jacobian matrix of the system. This is naturally a basic tool for some algorithms in real algebraic geometry.

In 1993, Wang proposed a method for decomposing any polynomial system into *fine* triangular systems, which have additional properties, such as the projection property, that may be used for solving parametric systems.

Triangular-set-based techniques are efficient for specific problems, but implementations of direct decompositions into triangular sets do not currently reach the efficiency of Gröbner bases in terms of computable classes of examples. In any case, our team benefits from the progress achieved in the latter field, since we currently perform decompositions into regular and separable triangular sets through lexicographic Gröbner bases computations.

A system is zero-dimensional if its set of solutions in an algebraically closed field is finite. In this case, the set of solutions does not depend on the chosen algebraically closed field.

Such a situation can easily be detected on a Gröbner basis for any admissible monomial ordering.

These systems are mathematically particular, since one can systematically reduce them to linear algebra problems. More precisely, the algebra K[X_1, ..., X_n]/I is in fact a K-vector space whose dimension equals the number of complex roots of the system (counted with multiplicities). We chose to exploit this structure. Accordingly, computing a basis of K[X_1, ..., X_n]/I is essential. A Gröbner basis gives a canonical projection from K[X_1, ..., X_n] to K[X_1, ..., X_n]/I, and thus provides a basis of the quotient algebra and much other information more or less straightforwardly (the number of complex roots, for example).
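A minimal sketch of this dimension count with sympy (the system x^2 = y^2 = 1 is an arbitrary example with four solutions (±1, ±1)): the standard monomials, i.e. those divisible by no leading monomial of the basis, form a basis of the quotient algebra, so counting them gives the number of complex roots.

```python
from itertools import product
from sympy import symbols, groebner, Poly

x, y = symbols('x y')
G = groebner([x**2 - 1, y**2 - 1], x, y, order='lex')

# Exponent vectors of the leading monomials of the basis elements.
leading = [Poly(g, x, y).monoms()[0] for g in G.exprs]

# Standard monomials: exponent vectors divisible by no leading monomial.
standard = [(a, b) for a, b in product(range(3), repeat=2)
            if not any(a >= la and b >= lb for la, lb in leading)]

print(len(standard))   # dimension of K[x, y]/I = number of complex roots
```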

The use of this vector-space structure is well known and is at the origin of one of the best-known algorithms of the field: it allows one to deduce, starting from a Gröbner basis for any ordering, a Gröbner basis for any other ordering (in practice a lexicographic basis, which is very difficult to compute directly). It is also common to certain semi-numerical methods, since it allows one to obtain quite simply (by an eigenvalue computation, for example) numerical approximations of the solutions (this type of algorithm is developed, for example, in the INRIA Galaad project).

Contrary to what is written in parts of the literature, the computation of Gröbner bases is not "doubly exponential" for all classes of problems. In the case of zero-dimensional systems, it is even known to be simply exponential in the number of variables, for a degree ordering and for systems without zeros at infinity. Thus, an effective strategy consists in computing a Gröbner basis for a favorable ordering and then deducing, by linear algebra techniques, a Gröbner basis for a lexicographic ordering.

The case of zero-dimensional systems is also specific for triangular sets. Indeed, in this particular case, we have designed algorithms that compute them efficiently starting from a lexicographic Gröbner basis. Note that, for zero-dimensional systems, regular triangular sets are Gröbner bases for a lexicographic order.

Many teams work on Gröbner bases, and some use triangular sets in the case of zero-dimensional systems but, to our knowledge, very few carry the work through to a numerical resolution, and even fewer tackle the specific problem of computing the real roots. In practice, it is illusory to hope to obtain a reliable numerical approximation of the solutions straightforwardly from a lexicographic basis, or even from a triangular set. This is mainly due to the size of the coefficients (rational numbers) in the result.

Our specificity is to carry the computations through to the end, thanks to two types of results:

the computation of the Rational Univariate Representation: we proved that any zero-dimensional system, depending on the variables X_1, ..., X_n, can systematically be rewritten, without loss of information (multiplicities, real roots), in the form f(T) = 0, X_i = g_i(T)/g(T), i = 1, ..., n, where the polynomials f, g, g_1, ..., g_n have coefficients in the same ground field as those of the system and where T is a new variable (independent from X_1, ..., X_n);

efficient algorithms for isolating and counting the real roots of univariate polynomials.
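For instance, with sympy (an illustration of this kind of subroutine, not our implementation), the real roots of a univariate polynomial can be counted and isolated exactly, with rational interval endpoints and no floating point:

```python
from sympy import Poly, symbols

t = symbols('t')
f = Poly(t**4 - 5*t**2 + 4, t)   # real roots: -2, -1, 1, 2

print(f.count_roots(-10, 10))    # exact number of real roots in [-10, 10]
print(f.intervals())             # disjoint isolating intervals, one per root
```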

Thus, the use of innovative algorithms for Gröbner bases computations and for Rational Univariate Representations (both in the "shape position" case and in the general case) allows zero-dimensional solving to be used as a sub-task of other algorithms.

When a system is **positive dimensional** (with an infinite number of complex roots), it is no longer possible to enumerate the solutions. Therefore, the solving process reduces to decomposing the set of solutions into subsets with a well-defined geometry. One may perform such a decomposition from an algebraic point of view or from a geometrical one, the latter meaning that multiplicities are not taken into account (the structure of the primary components of the ideal is lost).

Although algorithms exist for both approaches, the algebraic point of view is presently beyond the reach of practical computations, and we restrict ourselves to geometrical decompositions.

When one studies the solutions in an algebraically closed field, the useful decompositions are the equidimensional decomposition (which consists in considering separately the isolated solutions, the curves, the surfaces, ...) and the prime decomposition (which decomposes the variety into irreducible components). In practice, our team works on algorithms for decomposing the system into *regular separable triangular sets*, which corresponds to a decomposition into equidimensional but not necessarily irreducible components. These irreducible components may eventually be obtained by using polynomial factorization.

However, in many situations one is looking only for the real solutions satisfying some inequalities (P_i > 0 or P_i ≥ 0).

There are general algorithms for such tasks, which rely on Tarski's quantifier elimination. Unfortunately, these problems have a very high complexity, usually doubly exponential in the number of variables or in the number of blocks of quantifiers, and these general algorithms are intractable. It follows that the output of a solver should be restricted to a partial description of the topology or geometry of the set of solutions; our research consists in looking for more specific problems, interesting for the applications, which may be solved with a reasonable complexity.

We focus on two main problems:

1. computing one point on each connected component of a semi-algebraic set;

2. solving systems of equalities and inequalities depending on parameters.

The most widespread algorithm for computing sampling points in a semi-algebraic set is the Cylindrical Algebraic Decomposition algorithm due to Collins. With slight modifications, this algorithm also solves the problem of quantifier elimination. It is based on the recursive elimination of variables one after another, ensuring nice properties between the components of the studied semi-algebraic set and the components of the semi-algebraic sets defined by the polynomial families obtained by this elimination. It is doubly exponential in the number of variables, and its best implementations are limited to problems in 3 or 4 variables.

Since the end of the eighties, alternative strategies with a singly exponential complexity in the number of variables have been developed. They are based on the progressive construction of the following subroutines:

(a) solving zero-dimensional systems: this can be performed by computing a Rational Univariate Representation;

(b) computing sampling points on a real hypersurface: after some infinitesimal deformations, this is reduced to problem (a) by computing the critical locus of a polynomial mapping reaching its extrema on each connected component of the real hypersurface;

(c) computing sampling points in a real algebraic variety defined by a polynomial system: this is reduced to problem (b) by considering the sum of squares of the polynomials;

(d) computing sampling points in a semi-algebraic set: this is reduced to problem (c) by applying an infinitesimal deformation.

On the one hand, the relevance of this approach lies in the fact that its complexity is asymptotically optimal. On the other hand, important algorithmic developments have been necessary to obtain efficient implementations of subroutines (b) and (c).

During the last years, we focused on providing efficient algorithms for problems (b) and (c). The method used relies on finding a polynomial mapping reaching its extrema on each connected component of the studied variety, such that its critical locus is zero-dimensional. For example, in the case of a smooth hypersurface whose real counterpart is compact, choosing a projection onto a line is sufficient. This method is called, in the sequel, the critical point method. We started by studying problem (b). Even though we showed that our solution may solve new classes of problems, we have chosen to skip the reduction to problem (b), which is now considered a particular case of problem (c), in order to avoid an artificial growth of the degree and the introduction of singularities and infinitesimals.

Putting the critical point method into practice in the general case requires dropping some hypotheses. First, the compactness assumption, which is in fact intimately related to an implicit properness assumption, has to be dropped. Second, algebraic characterizations of critical loci are based on non-degeneracy assumptions on the rank of the Jacobian matrix associated with the studied polynomial system. These hypotheses are not satisfied as soon as the system defines a non-radical ideal, a non-equidimensional variety, and/or a non-smooth variety. Our contributions consist in overcoming these obstacles efficiently, and several strategies have been developed.

The properness assumption can be dropped by considering the square of the distance function to a generic point instead of a projection function: indeed, each connected component contains at least one point minimizing this function locally. Performing a radical and equidimensional decomposition of the ideal generated by the studied polynomial system avoids some degeneracies of its associated Jacobian matrix. Finally, the recursive study of nested singular loci allows one to deal with non-smooth varieties. These algorithmic advances yield a first algorithm with reasonable practical performances.
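A minimal sketch of this distance-function variant with sympy (illustrative only; the circle and the point (1/3, 1/7) are arbitrary choices standing in for a "generic" point): the critical points of the squared distance restricted to the curve are the solutions of the curve equation together with the vanishing of a 2x2 Jacobian determinant.

```python
from sympy import symbols, Matrix, Rational, solve

x, y = symbols('x y', real=True)
f = x**2 + y**2 - 1                       # the studied real curve
a, b = Rational(1, 3), Rational(1, 7)     # "generic" base point
dist2 = (x - a)**2 + (y - b)**2           # squared distance to (a, b)

# At a critical point on f = 0, grad(f) and grad(dist2) are parallel.
crit = Matrix([[f.diff(x),     f.diff(y)],
               [dist2.diff(x), dist2.diff(y)]]).det()

# Finitely many real critical points, at least one per connected component.
sols = solve([f, crit], [x, y], dict=True)
print(sols)
```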

Since projection functions are linear while the distance function is quadratic, computing the critical points of projections is easier. Thus, we have also investigated their use. A first approach consists in studying recursively the critical loci of projection functions onto nested affine subspaces containing the coordinate axes, combined with the study of their sets of non-properness. A more efficient one, which avoids the study of sets of non-properness, is obtained by considering iteratively projections onto *generic* affine subspaces restricted to the studied variety, and the fibers over arbitrary points of these subspaces intersected with the critical locus of the corresponding projection. The underlying algorithm is the most efficient we have obtained.

Most of the applications we recently solved (celestial mechanics, cuspidal robots, statistics, etc.) require the study of semi-algebraic systems depending on parameters. Although we treated these subjects independently, some general algorithms for the resolution of this type of system can be derived from these experiments.

The general philosophy consists in studying the generic solutions apart from algebraic subvarieties (which we call discriminant varieties from now on) of dimension lower than that of the semi-algebraic set considered. The study of the excluded varieties can be done separately to obtain a complete answer to the problem, or simply neglected if one is interested only in the generic solutions, which is the case in some applications.

We recently proposed a new framework for studying basic constructible (resp. semi-algebraic) sets defined as systems of equations and inequations (resp. inequalities) depending on parameters. Let us consider the basic semi-algebraic set

S = { x in R^n : p_1(x) = 0, ..., p_s(x) = 0, f_1(x) > 0, ..., f_l(x) > 0 }

and the basic constructible set

C = { x in C^n : p_1(x) = 0, ..., p_s(x) = 0, f_1(x) ≠ 0, ..., f_l(x) ≠ 0 }

where the p_i, f_j are polynomials with rational coefficients.

[U, X] = [U_1, ..., U_d, X_{d+1}, ..., X_n] is the set of *indeterminates* or variables, U = [U_1, ..., U_d] is the set of *parameters* and X = [X_{d+1}, ..., X_n] is the set of *unknowns*;

{p_1, ..., p_s} is the set of polynomials defining the equations;

{f_1, ..., f_l} is the set of polynomials defining the inequations in the complex case (resp. the inequalities in the real case);

For any u in C^d, let φ_u be the specialization U ↦ u;

Π_U denotes the canonical projection onto the parameter space: Π_U(u, x) = u;

Given any ideal I, we denote by V(I) the associated (algebraic) variety. If a variety is defined as the zero set of polynomials with coefficients in Q, we call it a Q-algebraic variety; we extend this notation naturally in order to talk about Q-irreducible components, Q-Zariski closure, etc.

For any set W, we will denote by W̄ its Q-Zariski closure in C^n.

In most applications, the fibers Π_U^{-1}(u) ∩ C as well as Π_U^{-1}(u) ∩ S are finite and non-empty for almost all parameter values u. Most algorithms that study C or S (number of real roots w.r.t. the parameters, parameterizations of the solutions, etc.) compute in any case a Q-Zariski closed set W, a discriminant variety, such that for any u outside W there exists a neighborhood V(u) of u with the following property:

Π_U^{-1}(V(u)) ∩ C is an analytic covering of V(u); this implies that the polynomials f_j do not vanish (and so have constant sign in the real case) on the connected components of Π_U^{-1}(V(u)) ∩ S.

Being able to compute the minimal discriminant variety allows one to reduce a problem depending on n variables to a similar problem depending on d variables (the parameters): it is sufficient to describe its complement in the parameter space (or in the closure of the projection of the variety in the general case) to get full information about the generic solutions (here generic means for parameter values outside the discriminant variety).

Then, being able to describe the connected components of the complement of the discriminant variety in the parameter space becomes a main challenge, which is strongly linked to our work on positive-dimensional systems. Moreover, rewriting the systems involved and solving zero-dimensional systems are major components of the algorithms we plan to build.
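On a one-parameter toy example (a sympy sketch; the polynomial is an arbitrary illustration, not an application system), the discriminant variety and the constancy of the number of real roots on the components of its complement can be observed directly:

```python
from sympy import symbols, discriminant, Poly

x, u = symbols('x u')
p = x**2 + u*x + 1            # unknown x, parameter u

# Discriminant variety in the parameter space: {u^2 - 4 = 0}.
print(discriminant(p, x))

# Sample one parameter value in each connected component of the complement
# {u < -2}, {-2 < u < 2}, {u > 2}: the real-root count is constant on each.
for u0 in (-3, 0, 3):
    print(u0, Poly(p.subs(u, u0), x).count_roots())
```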

We currently propose several computational strategies. An a priori decomposition into equidimensional components, given as zeros of radical ideals, simplifies the computation and the use of discriminant varieties. This preliminary computation is however sometimes expensive, so we are developing adaptive solutions where such decompositions are called by need. The main progress is that the resulting methods are fast on easy (generic) problems and slower on problems with strong geometrical content.

The existing implementations of algorithms able to "solve" (i.e., to get some information about the roots of) parametric systems all compute, directly or indirectly, discriminant varieties, but none computes optimal objects (the strict discriminant variety). This is the case, for example, of the Cylindrical Algebraic Decomposition adapted to parametric systems, of algorithms based on "comprehensive Gröbner bases", and of methods that compute parameterizations of the solutions. The consequence is that their output (case distinctions w.r.t. parameter values) is huge compared with the results we can provide.

A fundamental problem in cryptography is to evaluate the security of cryptosystems against the most powerful techniques. To this end, several *general* methods have been proposed: linear cryptanalysis, differential cryptanalysis, *etc.* *Algebraic cryptanalysis* is another general method which permits one to study the security of the main public-key and secret-key cryptosystems.

Algebraic cryptanalysis can be described as a general framework that permits one to assess the security of a wide range of cryptographic schemes. In fact, the recent proposal and development of algebraic cryptanalysis is now widely considered an important breakthrough in the analysis of cryptographic primitives. It is a powerful technique that potentially applies to a large range of cryptosystems. The basic principle of such a cryptanalysis is to model a cryptographic primitive by a set of algebraic equations. The system of equations is constructed so as to have a correspondence between the solutions of this system and secret information of the cryptographic primitive (for instance, the secret key of an encryption scheme).

Although the principle of algebraic attacks can probably be traced back to the work of Shannon, algebraic cryptanalysis has only recently been investigated as a cryptanalytic tool. To summarize, an algebraic attack is divided into two steps:

Modeling, i.e., representing the cryptosystem as a polynomial system of equations;

Solving, i.e., finding the solutions of the polynomial system constructed in Step 1.

Typically, the first step leads to rather "big" algebraic systems (at least several hundred variables for modern block ciphers). Thus, solving such systems is always a challenge. To make the computation efficient, we usually have to study the structural properties of the systems (using symmetries, for instance). In addition, one also has to verify the consistency of the solutions of the algebraic system with respect to the desired solutions of the natural problem. Of course, all these steps must be constantly checked against the natural problem, which in many cases can guide the researcher to an efficient method for solving the algebraic system.
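The two steps can be played out on a deliberately tiny toy example (everything below is made up for illustration: a 2-bit key and two ad hoc keystream polynomials; real systems are vastly larger). The key is recovered by a Gröbner basis computation over GF(2), with the field equations k_i^2 - k_i added.

```python
from sympy import symbols, groebner

k1, k2 = symbols('k1 k2')

# Step 1 (modeling): observed bits c1 = 0 and c2 = 0 of the toy "cipher"
# give the equations k1*k2 + k1 = 0 and k1 + k2 + 1 = 0 over GF(2),
# together with the field equations of the key bits.
system = [k1*k2 + k1, k1 + k2 + 1, k1**2 - k1, k2**2 - k2]

# Step 2 (solving): a lex Groebner basis modulo 2 pins down the key bits.
G = groebner(system, k1, k2, order='lex', modulus=2)
print(list(G))
```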

*Multivariate cryptography* comprises any cryptographic scheme that uses multivariate polynomial systems. The use of such polynomial systems in cryptography dates back to the mid-eighties and was motivated by the need for alternatives to number-theoretic schemes. Indeed, multivariate systems enjoy low computational requirements and can yield short signatures; moreover, schemes based on the hard problem of solving multivariate equations over a finite field are not affected by the quantum computer threat, whereas, as is well known, number-theoretic schemes like RSA, DH, or ECDH are. Multivariate cryptosystems are a target of choice for algebraic cryptanalysis due to their intrinsic multivariate representation.

The most famous multivariate public key scheme is probably the Hidden Field Equation (HFE) cryptosystem proposed by Patarin . The basic idea of HFE is simple: build the secret key as a univariate polynomial S(x) over some (big) finite field (often GF(2^n)). Clearly, such a polynomial can be easily evaluated; moreover, under reasonable hypotheses, it can also be “inverted” quite efficiently. By inverting, we mean finding any solution to the equation S(x) = y, when such a solution exists. The secret transformations (decryption and/or signature) are based on this efficient inversion. Of course, in order to build a cryptosystem, the polynomial S must be presented as a public transformation which hides the original structure and prevents inversion. This is done by viewing the finite field as a vector space over GF(2) and by choosing two linear transformations L_{1} and L_{2} of this vector space. The public transformation is then the composition of L_{1}, S and L_{2}. Moreover, if the exponents of all the terms of S(x) have Hamming weight at most 2, then all the (multivariate) polynomials of the public key are of degree two.
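This quadraticity can be checked on a toy example. The sketch below (a hypothetical toy field GF(2^3) with modulus x^3 + x + 1 and the weight-2-exponent map S(x) = x^3; none of this corresponds to actual HFE parameters) computes the algebraic normal form of each output bit and confirms that its degree is at most 2, because x ↦ x^2 is GF(2)-linear, so x^3 = x^2 · x is a product of two linear maps.

```python
# Toy illustration (not actual HFE parameters): over GF(2^3),
# the map S(x) = x^3 = x^(2+1) has a weight-2 exponent, so each
# output bit is a quadratic Boolean function of the input bits.

def gf8_mul(a, b, mod=0b1011):          # multiplication in GF(2^3), modulus x^3 + x + 1
    r = 0
    for i in range(3):                  # carryless (polynomial) multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                    # reduce the degree-4 and degree-3 terms
        if (r >> i) & 1:
            r ^= mod << (i - 3)
    return r

def anf_degree(truth):                  # degree of a Boolean function on 3 bits
    t = list(truth)                     # Moebius transform yields the ANF coefficients
    for i in range(3):
        for m in range(8):
            if m & (1 << i):
                t[m] ^= t[m ^ (1 << i)]
    return max((bin(m).count("1") for m in range(8) if t[m]), default=0)

S = [gf8_mul(gf8_mul(v, v), v) for v in range(8)]   # S(x) = x^3 on all field elements
degrees = [anf_degree([(S[v] >> k) & 1 for v in range(8)]) for k in range(3)]
print(degrees)                          # every output bit has ANF degree <= 2
```

The same computation with a weight-3 exponent (e.g. x^7 = x^(4+2+1)) would produce cubic output bits, which is exactly why HFE restricts the exponents of S.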

By using fast algorithms for computing Gröbner bases, it was possible to break the first HFE challenge (real cryptographic size of 80 bits and a symbolic prize of 500 US$) in only two days of CPU time. More precisely, we used the F_{5}/2 version of the fast F_{5} algorithm for computing Gröbner bases (implemented in C). The algorithms available until then (Buchberger's) were extremely slow and could not have been used to break the code (they would have needed at least a few centuries of computation). The new algorithm is thousands of times faster than previous algorithms. Several matrices have to be reduced (to echelon form) during the computation: the biggest one has no fewer than 1.6 million columns and requires 8 gigabytes of memory. Implementing the algorithm thus required significant programming work and especially efficient memory management.

The weakness of the systems of equations coming from HFE instances can be *explained* by the algebraic properties of the secret key (work presented at Crypto 2003 in collaboration with A. Joux). From this study, it is possible to predict the maximal degree occurring in the Gröbner basis computation. This makes it possible to establish precisely the complexity of the Gröbner attack and to compare it with the theoretical bounds. The same kind of technique has since been used to successfully attack other types of multivariate cryptosystems: IP , 2R , ℓ-IC , and MinRank .

On the one hand, algebraic techniques have been successfully applied against a number of multivariate schemes and in stream cipher cryptanalysis. On the other hand, the feasibility of algebraic cryptanalysis remains a source of speculation for block ciphers, and an almost unexplored approach for hash functions. The main obstacle is that the corresponding algebraic systems are so huge (thousands of variables and equations) that nobody is able to predict correctly the complexity of solving such polynomial systems. Hence, one goal of the team is ultimately to design and implement a new generation of efficient algebraic cryptanalysis toolkits to be used against block ciphers and hash functions. To achieve this goal, we will investigate *non-conventional* approaches for modeling these problems.

Applications are fundamental for our research for several reasons.

The first one is that they are the only source of fair tests for the algorithms. In fact, the complexity of the solving process depends very irregularly on the problem itself. Therefore, random tests do not give an accurate idea of the practical behavior of a program, and the complexity analysis, when possible, does not necessarily provide realistic information.

A second reason is that, as noted above, we need real-world problems to determine which specifications of algorithms are really useful. Conversely, it is frequently by solving specific problems through ad hoc methods that we find new algorithms with general impact.

Finally, obtaining successes on problems which are intractable by other known approaches is the best proof of the quality of our work.

On the other hand, there is a specific difficulty. The problems which may be solved with our methods can be formulated in many different ways, and their usual formulation is rarely well suited for polynomial system solving or for exact computations. Frequently, it is not even clear that the problem is purely algebraic, because researchers and engineers are used to formulating problems in a differential way or to linearizing them.

Therefore, our software cannot be used as black boxes, and we have to understand the origin of the problem in order to translate it into a form which is well suited for our solvers.

It follows that many of our results, published or in preparation, are classified in scientific domains which are different from ours, like cryptography, error correcting codes, robotics, signal processing, statistics or biophysics.

The (parallel) manipulators we study are general parallel robots: the hexapods are complex mechanisms made up of six (often identical) kinematic chains, of a base (fixed rigid body including six joints or articulations) and of a platform (mobile rigid body containing six other joints). The design and the study of parallel robots require the resolution of direct geometrical models (computation of the absolute coordinates of the joints of the platform knowing the position and the geometry of the base, the geometry of the platform as well as the distances between the joints of the kinematic chains at the base and the platform) and inverse geometrical models (distances between the joints of the kinematic chains at the base and the platform knowing the absolute positions of the base and the platform).
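As an illustration, the inverse geometric model amounts to simple distance computations. The sketch below (with made-up base and platform joint coordinates and a made-up pose; all numerical values are illustrative, not those of an actual hexapod) computes the six leg lengths from a platform pose:

```python
# Inverse geometric model of a hexapod (illustrative data): given the pose
# (rotation R, translation t) of the platform, each leg length is the distance
# between a base joint and the corresponding (displaced) platform joint.
import math

# Made-up joint positions: base joints b_i (fixed frame, unit circle) and
# platform joints p_i (platform frame, circle of radius 0.5), six of each.
base = [(math.cos(a), math.sin(a), 0.0) for a in (0.1, 1.9, 2.2, 4.0, 4.3, 6.1)]
plat = [(0.5 * math.cos(a), 0.5 * math.sin(a), 0.0) for a in (0.7, 1.3, 2.8, 3.4, 4.9, 5.5)]

def leg_lengths(yaw, t):
    """Leg lengths for a platform rotated by `yaw` about z and translated by t."""
    c, s = math.cos(yaw), math.sin(yaw)
    lengths = []
    for (bx, by, bz), (px, py, pz) in zip(base, plat):
        # apply the rotation (about z) and the translation to platform joint p_i
        qx = c * px - s * py + t[0]
        qy = s * px + c * py + t[1]
        qz = pz + t[2]
        lengths.append(math.dist((qx, qy, qz), (bx, by, bz)))
    return lengths

L = leg_lengths(0.3, (0.0, 0.0, 1.2))   # six leg lengths for this pose
print(L)
```

The direct model is the hard converse: recover the pose (R, t) from the six lengths, which leads to the algebraic systems discussed below.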

Since the inverse geometric models can be easily solved, we focus on the resolution of the direct geometric models. The study of the direct geometric model is a recurrent activity for several members of the project. One can say that the progress achieved in this field perfectly illustrates the evolution of the methods for solving algebraic systems. Interest in this subject is old. The first work in which the members of the project took part primarily concerned the study of the number of (complex) solutions of the problem , . The results were often illustrated by Gröbner bases computed with the Gb software.

One of the remarkable points of this study is certainly the classification suggested in . The next efforts were related to the real roots and the effective computation of the solutions . The studies then continued following the various algorithmic advances, until the tools developed made it possible to solve non-academic problems. In 1999, these efforts culminated in an industrial contract with the SME CMW (*Constructions Mécaniques des Vosges-Marioni*) for studying a robot dedicated to machine tools. Since 2002, we have been interested in the study of singularities of manipulators (serial or parallel). The first results we obtained (characterization of all the cuspidal serial robots with 3 D.O.F.) were computed using an early variant of the Discriminant Variety . Since 2007, we have been working on the singularities of planar parallel robots (ANR grant *SIROPA*).

FGb/Gb is a powerful package for computing Gröbner bases; it is written in C/C++ (approximately 250,000 lines, counting the old *Gb* software).

RS is a package entirely developed in C (approximately 150,000 lines) dedicated to the study of real roots of algebraic systems.

RAGLib is a Maple library for computing sampling points in semi-algebraic sets.

DV stands for *Discriminant Varieties*; it is a package developed in the Maple language which contains algorithms for computing discriminant varieties, but also some variants of cylindrical algebraic decomposition (CAD).

Epsilon is a library of functions implemented in Maple and Java for polynomial elimination and decomposition with (geometric) applications.

Discriminant varieties are basic objects to compute for solving parametric systems of polynomial equations and inequalities. In , we show how to reduce this computation to the computation of the set of non-properness of a projection, and we provide degree bounds on the minimal discriminant variety of a 0-dimensional parametric system under some assumptions.

Let V be a smooth bounded real hypersurface whose set of singular points has dimension at most 0, defined by a polynomial of degree D in n variables. In , we provide an algorithm computing a *roadmap* of V, i.e. an algebraic curve having a non-empty and connected intersection with each connected component of V. The complexity of this algorithm is (nD)^{O(n^{1.5})}. Even under the considered assumptions, this result improves the best previous bound D^{O(n^{2})}, obtained 20 years ago by J. Canny.

The above result is achieved by exploiting properties on polar varieties which are defined by the vanishing of some minors of truncated jacobian matrices. Sufficient conditions to ensure the smoothness (and other properties such as equidimensionality, etc.) of polar varieties are proved in .

Solving multihomogeneous systems, as a wide range of *structured algebraic systems* occurring frequently in practical problems, is of prime importance. In , we focus on bilinear systems (i.e. bihomogeneous systems where all equations have bidegree (1, 1)). We propose new techniques to speed up the Gröbner basis computations by using the multihomogeneous structure of those systems. The contributions are theoretical and practical. First, we adapt the classical F_{5} criterion to avoid the reductions to zero which occur when the input is a set of bilinear polynomials. We also prove an explicit form of the Hilbert series of bihomogeneous ideals generated by generic bilinear polynomials and give a new upper bound on the degree of regularity of generic affine bilinear systems. Lastly, we investigate the complexity of computing a Gröbner basis for the grevlex ordering of a generic 0-dimensional affine bilinear system over k[x_{1}, ..., x_{n_{x}}, y_{1}, ..., y_{n_{y}}]. In particular, we show that this complexity is upper bounded by , which is polynomial in n_{x} + n_{y} (i.e. the number of unknowns) when min(n_{x}, n_{y}) is constant.
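As a small illustration of such a computation (a toy bilinear system of our own, not one of the paper's benchmarks, and an off-the-shelf tool rather than FGb), a grevlex Gröbner basis can be computed with SymPy:

```python
# A toy affine bilinear system in two blocks of one variable each:
# both equations have bidegree (1, 1) in the blocks (x) and (y).
from sympy import symbols, groebner

x, y = symbols('x y')
f1 = x*y + x - 1
f2 = x*y - y + 2

# Groebner basis for the grevlex (degree reverse lexicographic) ordering
G = groebner([f1, f2], x, y, order='grevlex')
print(list(G.exprs))

# f1 - f2 = x + y - 3 lies in the ideal, so it reduces to zero modulo G
assert G.reduce(x + y - 3)[1] == 0
```

Eliminating x*y by hand gives x + y - 3 = 0 and then x^2 - 4x + 1 = 0, so the system has the two solutions x = 2 ± √3, y = 1 ∓ √3, consistent with a 0-dimensional bilinear system.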

Computing the loci of rank defects of linear matrices (also called the MinRank problem) is a fundamental NP-hard problem of linear algebra which has applications in cryptology, error-correcting codes and geometry. Given a square linear matrix (i.e. a matrix whose entries are k-variate linear forms) of size n and an integer r, the problem is to find the points at which the evaluation of the matrix has rank less than r + 1. In , we obtain the most efficient algorithm to date for solving this problem. To this end, we give the theoretical and practical complexity of computing Gröbner bases for two algebraic formulations of the MinRank problem. Both modelings lead to *structured algebraic systems*.
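For very small instances the problem can even be solved by exhaustive search, which is useful for sanity-checking an algebraic modeling. The sketch below (made-up 3×3 matrices over GF(2), k = 2 variables, r = 1; all data are illustrative) enumerates the points where the evaluated matrix has rank at most r:

```python
# Brute-force MinRank over GF(2) (toy instance): find all points (x1, x2)
# in GF(2)^2 where M(x) = M0 + x1*M1 + x2*M2 has rank <= r.
# Matrix rows are stored as bitmasks.
from itertools import product

def gf2_rank(rows):
    """Rank of a GF(2) matrix given as a list of row bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                      # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# Illustrative 3x3 matrices (rows as 3-bit masks)
M0 = [0b100, 0b010, 0b000]
M1 = [0b100, 0b000, 0b000]
M2 = [0b000, 0b010, 0b000]
r = 1

solutions = []
for x1, x2 in product((0, 1), repeat=2):
    rows = [m0 ^ (x1 * m1) ^ (x2 * m2) for m0, m1, m2 in zip(M0, M1, M2)]
    if gf2_rank(rows) <= r:                       # i.e. rank less than r + 1
        solutions.append((x1, x2))
print(solutions)
```

Exhaustive search is of course exponential in k; the point of the Gröbner-basis modelings above is precisely to exploit the structure and avoid it.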

In , , , we propose a new approach to investigate the security of the McEliece cryptosystem (based on error-correcting codes). Since its invention thirty years ago, no efficient attack had been devised that managed to recover the private key. We prove that the private key of the cryptosystem satisfies a system of bi-homogeneous polynomial equations. We have used these highly structured algebraic equations to mount an efficient key-recovery attack against two recent variants of the McEliece cryptosystem that aim at reducing public key sizes.

MQQ is a multivariate public key cryptosystem (MPKC) based on multivariate quadratic quasigroups and a special transform called the “*Dobbertin transformation*”. The security of MQQ, as of any MPKC, reduces to the difficulty of solving a non-linear system of equations easily derived from the public key. It has already been observed that the algebraic systems obtained are much easier to solve than random non-linear systems of the same size. In , , we go one step further in the analysis of MQQ. We explain why the systems arising in MQQ are so easy to solve in practice.

Algebraic cryptanalysis has been successfully applied against a number of multivariate schemes and stream ciphers. Yet, its feasibility against block ciphers remains the source of much speculation. At FSE 2009, Albrecht and Cid proposed to combine differential cryptanalysis with algebraic attacks against block ciphers. The proposed attacks required Gröbner basis computations during the online phase of the attack. In , we take a different approach and only perform Gröbner basis computations in a pre-computation (or offline) phase. In other words, we study how we can improve “classical” differential cryptanalysis using algebraic tools. We apply our techniques against the block ciphers PRESENT and KTANTAN.

In , we present an extended version of the hybrid approach , suitable for polynomials of higher degree. To give easy access to our tools, we provide a MAGMA package available at http://

The work proposes a constructive use of computer algebra in algorithmic number theory with applications in cryptography. More explicitly, techniques coming from univariate polynomial solving by radicals (algorithmic Galois theory) are used for the construction of encodings into the set of points of elliptic and hyperelliptic curves defined over a finite field. Fixing a prime p verifying some natural assumptions, the general method proposed in is the first one which provides deterministic polynomial-time encoding into the rational points of all the elliptic curves and half of the genus-2 hyperelliptic curves defined over GF(p). These two types of curves are the main objects for cryptography based on algebraic curves.

FGb, a library for computing Gröbner bases, was presented at ICMS in Japan. A key component of the efficiency of FGb is a dedicated linear algebra package: in , we present a new linear algebra package written in C which contains specific algorithms for Gaussian elimination as well as a specific internal representation of matrices.

The Goppa Code Distinguishing (GD) problem consists in distinguishing the matrix of a Goppa code from a random matrix. Up to now, it was widely believed that this problem is computationally hard. The hardness of this problem was a mandatory assumption for proving the security of code-based cryptographic primitives like McEliece's cryptosystem. In , we present a polynomial-time distinguisher for alternant and Goppa codes of high rate over any field. The key ingredient is an algebraic technique already used to assess the security of McEliece's cryptosystem.

In , , we investigate the existence conditions of cusp points in the design parameter space of RPR-2PRR parallel manipulators. Cusp points make non-singular assembly-mode changing motions possible, which can increase the size of the aspect, i.e. the maximum singularity-free workspace.

Since Vassiliev (1990), we know that any knot admits a polynomial parametrization. A natural question is to give explicit and minimal parametrizations.

In [KPR10], we give (with D. Pecker and F. Rouillier) an exhaustive list of minimal parametrizations for two-bridge knots with 10 crossings or fewer. This result has been obtained by considering a zero-dimensional variety whose elements we need to isolate. Its degree is , which may be quite high. These results were presented at the MEGA 2009 conference and published in . We give a new algorithm based on a new geometric description of implicit Chebyshev curves and the computation of the real roots of polynomials in .

We proposed an algorithm to obtain the minimal polynomial of . This question is related to the diophantine trigonometric equation.

where
n_{i}and
r_{i}are rational numbers.

Chebyshev knots are polynomial analogues of Lissajous knots. They have been studied by many authors (Jones, Przytycki, Lamm, Hoste). In , we give explicit minimal parametrizations for infinite families of rational knots. In , we show that Fibonacci knots are generally not Lissajous.
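A Chebyshev space curve is parametrized coordinate-wise by Chebyshev polynomials, typically x = T_a(t), y = T_b(t), z = T_c(t + φ) for suitable integers and phase. The sketch below (with illustrative values a = 3, b = 4, c = 5, φ = 0.4, not a certified knot parametrization) implements the polynomials via the standard three-term recurrence:

```python
# Chebyshev polynomials of the first kind via the three-term recurrence
# T_0 = 1, T_1 = t, T_n = 2 t T_{n-1} - T_{n-2}; they satisfy the identity
# T_n(cos u) = cos(n u), which is handy for checking the implementation.
import math

def chebyshev(n, t):
    t0, t1 = 1.0, t
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * t * t1 - t0
    return t1

def curve_point(t, a=3, b=4, c=5, phase=0.4):
    """One point of an illustrative Chebyshev space curve (parameters are made up)."""
    return (chebyshev(a, t), chebyshev(b, t), chebyshev(c, t + phase))

p = curve_point(0.2)
print(p)
```

Deciding which triples (a, b, c) and phases yield a given knot type is exactly the kind of question that leads to the zero-dimensional systems mentioned above.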

In , we give a complete classification of harmonic knots H(a, b, c), a ≤ 4.

A contract has been signed with the Canadian company *Waterloo Maple Inc.* in 2005. The objective is to integrate *SALSA* software into one of the most well-known general computer algebra systems (*Maple*). The basic term of the contract is four years (renewable).

The goal of this contract (including a CIFRE PhD grant) is to mix side-channel attacks (DPA) and algebraic cryptanalysis.

In collaboration with the COPRIN project-team (Sophia Antipolis), IRCCyN and LINA (University / CNRS, Nantes), and IRMAR (CNRS / University of Rennes I). The goal of this project is to study the singularities of parallel robots from theoretical aspects (classifications) to practical ones (behavior).

In collaboration with France Telecom and ENSTA. This project should be viewed in the more general context of information protection. Its research areas are cryptography and symbolic computation. We are here essentially – but not exclusively – concerned with public key cryptography. One of the main issues in public key cryptography is to identify hard problems and propose new schemes that are not based on number theory. Following this line of research, *multivariate schemes* were introduced in the mid-eighties [Diffie and Fell 85, Matsumoto and Imai 85].

In order to evaluate the security of newly proposed schemes, strong and efficient cryptanalytic methods have to be developed. The main theme we shall address in this project is the evaluation of the security of cryptographic primitives by means of algebraic methods. The idea is to model a cryptographic primitive as a system of algebraic equations. The system is constructed in such a way as to have a correspondence between the solutions of this system and some secret information of the considered primitive. Once this modeling is done, the problem is then to solve an algebraic system. Up to now, Gröbner bases appear to yield the best algorithms for doing so.

The new contract CAC (“Computer Algebra and Cryptography”) began in October 2009 for a period of 4 years. This project will investigate the areas of cryptography and computer algebra, and their influence on the security and integrity of digital data. This proposal is a follow-up of the ANR MAC described below. In CAC, we plan to follow the methodology proposed in MAC, namely using basic tools of computer algebra to evaluate the security of cryptographic schemes. However, whilst ANR MAC was mainly interested in developing new algebraic tools for studying the security of multivariate public key cryptosystems, CAC will focus on three new challenging applications of algebraic techniques in cryptography, namely block ciphers, hash functions, and factorization with known bits. To this end, we will use Gröbner bases techniques but also lattice tools. In this proposal, we will explore non-conventional approaches to the algebraic cryptanalysis of these problems.

ECRYPT II – European Network of Excellence for Cryptology II – is a 4-year network of excellence funded within the Information & Communication Technologies (ICT) Programme of the European Commission's Seventh Framework Programme (FP7) under contract number ICT-2007-216676. It falls under the action line Secure, dependable and trusted infrastructures. ECRYPT II started on 1 August 2008. Its objective is to continue intensifying the collaboration of European researchers in information security. The ECRYPT II research roadmap is motivated by the changing environment and threat models in which cryptology is deployed, by the gradual erosion of the computational difficulty of the mathematical problems on which cryptology is based, and by the requirements of new applications and cryptographic implementations. Its main objective is to ensure a durable integration of European research in both academia and industry and to maintain and strengthen the European excellence in these areas. In order to reach this goal, 11 leading players have integrated their research capabilities within three virtual labs focusing on symmetric key algorithms (SymLab), public key algorithms and protocols (MAYA), and hardware and software implementations (VAMPIRE). They are joined by more than 20 adjoint members of the network who will closely collaborate with the core partners. The team joined the European Network of Excellence for Cryptology ECRYPT II this academic year as an associate member.

Royal Society Project with the crypto team at Royal Holloway, University of London, UK.

*Chinese Salsa* is an associate team created in January 2006. It brings together most of the members of SALSA and researchers from Beihang University, Beijing (university and academy of science). The general objectives of *Chinese-Salsa* are mainly the same as those of *SALSA*.

ECCA (Exact/Certified Computation with Algebraic systems) is a LIAMA project (Reliable Software Theme). The partners are INRIA, CNRS, and CAS.

The main objective of this project is to study and compute the solutions of nonlinear algebraic systems and their structures and properties with selected target applications using exact or certified computation. The project consists of one main task of basic research on the design and implementation of fundamental algorithms and four tasks of applied research on computational geometry, algebraic cryptanalysis, global optimization, and algebraic biology. It will last for three years (2010–2012) with 300 person-months of workforce. Its consortium is composed of strong research teams from France and China (KLMM, SKLOIS, and LMIB) in the area of solving algebraic systems with applications.

J.-C. Faugère is a member of the editorial boards of the journals “Mathematics in Computer Science” (Birkhäuser) and “Cryptography and Communications – Discrete Structures, Boolean Functions and Sequences” (Springer); he has been guest editor for special issues of the Journal of Symbolic Computation (Elsevier) and of “Mathematics in Computer Science” (Birkhäuser).

F. Rouillier is a member of the editorial board of the Journal of Symbolic Computation (Elsevier) and was guest editor for special issues of “Mathematics in Computer Science” (Birkhäuser).

D. Wang holds the following editorial positions:

Editor-in-Chief and Managing Editor for the journal “Mathematics in Computer Science” (published by Birkhäuser/Springer, Basel).

Executive Associate Editor-in-Chief for the journal “SCIENCE CHINA Information Sciences” (published by Science China Press, Beijing and Springer, Berlin).

Member of the Editorial Boards of:

Journal of Symbolic Computation (published by Academic Press/Elsevier, London),

Frontiers of Computer Science in China (published by Higher Education Press, Beijing and Springer, Berlin),

Texts and Monographs in Symbolic Computation (published by Springer, Wien New York),

Book Series on Mathematics Mechanization (published by Science Press, Beijing),

Book Series on Fundamentals of Information Science and Technology (published by Science Press, Beijing).

Editor for the Book Series in Computational Science (published by Tsinghua University Press, Beijing).

J.-C. Faugère is a member of the program committee of the 35th International Symposium on Symbolic and Algebraic Computation ISSAC'10 (Munich, Germany, July 25–28, 2010) and of the program committee of the 6th China International Conference on Information Security and Cryptology (Beijing, China, October 2010); program co-chair of the 2nd International Conference on Symbolic Computation and Cryptography (Royal Holloway, University of London, June 2010); member of the scientific and program committee of Yet Another Conference on Cryptography (October 4–8, 2010, Porquerolles Island, France); and member of the program committee of PASCO (Parallel and Symbolic Computation) 2010 in Grenoble.

L. Perret was a member of the program committees of the Workshop on Tools for Cryptanalysis 2010 (Royal Holloway, University of London, June 22–23, 2010), the 2nd International Conference on Symbolic Computation and Cryptography (Royal Holloway, University of London, June 23–25, 2010), the 6th China International Conference on Information Security and Cryptology (Beijing, China, October 2010), and Yet Another Conference on Cryptography (October 4–8, 2010, Porquerolles Island, France).

G. Renault was a member of the program committee of the Joint Conference of ASCM 2009 and MACIS 2009 (Fukuoka, Japan, December 14–17, 2009).

F. Rouillier was a member of the program committee of the Joint Conference of ASCM 2009 and MACIS 2009 (Fukuoka, Japan, December 14–17, 2009).

M. Safey El Din was a member of the program committee of the 12th International Workshop on Computer Algebra in Scientific Computing (Tsakhkadzor, Armenia, September 6–12, 2010) and is a member of the program committees of the 36th International Symposium on Symbolic and Algebraic Computation (San Jose, USA, June 8–11, 2011) and the 13th International Workshop on Computer Algebra in Scientific Computing (Kassel, Germany, September 5–9, 2011).

D. Wang was a member of the program committees of:

Technical Session at ICCSA 2011 on Symbolic Computing for Dynamic Geometry (Santander, Spain, June 20–23, 2011),

International Conference on Algebraic and Numeric Biology (Hagenberg, Austria, July 31 – August 2, 2010),

8th International Workshop on Automated Deduction in Geometry (Munich, Germany, July 22–24, 2010),

9th International Conference on Mathematical Knowledge Management (Paris, France, July 8–10, 2010),

10th International Conference on Artificial Intelligence and Symbolic Computation (Paris, France, July 5–6, 2010),

Conference on Symbolic Computation and Its Applications (Maribor, Slovenia, June 30 – July 2, 2010),

7th Asian Workshop on Foundations of Software (Beijing, China, May 14–16, 2010).

Member of the Advisory Program Committee for the 3rd International Congress of Mathematical Software (Kobe, Japan, September 13–17, 2010).

Co-chair of the Track on Symbolic Computation at the 12th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (Timisoara, Romania, September 23–26, 2010).

G. Renault was invited for two weeks in December 2010 by K. Yokoyama at the Mathematical Laboratory (Rikkyo University, Tokyo, Japan).

M. Safey El Din was invited for two weeks in April 2010 by L. Zhi at the Key Laboratory of Mathematics Mechanization (Chinese Academy of Sciences, Beijing, China) and for two weeks in February 2010 by E. Schost at the Department of Computer Science of The University of Western Ontario (London, Canada).

J.-C. Faugère was invited for one week in January 2010 to visit the DSO National Labs in Singapore.

J.-C. Faugère, L. Perret, G. Renault organized the second SCC conference in London.

J.-C. Faugère is a member of the MEGA Advisory Board.

F. Rouillier is a member of the MACIS Steering Committee.

M. Safey El Din is co-organizer (with L. Zhi) of the First International Workshop on Certified and Reliable Computing, to be held in July 2011 at Nanning, China.

J.-C. Faugère was an invited speaker at Mathematical Software – ICMS 2010 in Japan and at ESC 2010 (Early Symmetric Crypto) in Remich (Luxembourg), and gave a series of talks in Singapore (DSO National Labs).

M. Safey El Din was an invited speaker at:

*Minisymposium on Algebraic Geometry and
Optimization*, SIAM Conference on Optimization,
Darmstadt, Germany, 2011.

*SIAM/MSRI Workshop on Hybrid Methodologies for
Symbolic-Numeric Computation*, Berkeley, USA, 2010
.

*SIAM Workshop on Parallel Processing, Special
session on Symbolic Computation*, Seattle, USA, 2010
.

*SMAI MODE 2010*, Minisymposium on Computer
Algebra and Optimization, March 2010, Limoges.

*Journées Nationales du GDR Info-Math*, France,
2010.

D. Wang has organized the following conferences:

General Co-chair of the 4th International Conference on Mathematical Aspects of Computer and Information Sciences (Beijing, China, October 19–21, 2011).

Chair of the Program Committee for the International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (Timisoara, Romania, September 26–29, 2011).

D. Wang was an invited speaker at:

4th International Symposium on Multiagent Systems, Robotics and Cybernetics: Theory and Practice (Baden-Baden, Germany, August 4–5, 2010).

Research Institute for Symbolic Computation, Johannes Kepler University, Linz, Austria (July 13, 2010).

J.-C. Faugère was a member of the EQUIPEX jury (ANR).

J.-C. Faugère was a member of the evaluation committee (AERES) of the Jean Kuntzmann Lab (Grenoble) and of the Institut de Mathématiques de Toulon et du Var. He was also a member of the visiting committee of the University of Limoges (J.-J. Aubert, chairman).

F. Rouillier and J.-C. Faugère are members of the hiring committee in computer science at the Université Pierre et Marie Curie.

J.-C. Faugère and M. Safey El Din are members of the hiring committee in mathematics at the Université de Limoges.

L. Perret was on the Ph.D. committee of Gilles Macariot-Rat (Ph.D. defended at ENS Ulm).

J.-C. Faugère and L. Perret give a course on Polynomial System Solving, Computer Algebra and Applications at the “Master Parisien de Recherche en Informatique” (MPRI).

G. Renault gives a course on Computational Number Theory and Cryptology at the “Master d'Informatique de l'Université Paris 6”.