The main objective of the SALSA project is to solve systems of polynomial equations and inequations. We emphasize algebraic methods, which are more robust and frequently more efficient than purely numerical tools.
Polynomial systems have many applications in various scientific domains, academic as well as industrial. However, much work is still needed to define specifications for the output of the algorithms which are well adapted to the problems.
The variety of these applications implies that our software needs to be robust. In fact, almost all the problems we deal with are highly numerically unstable, and therefore the correctness of the result must be guaranteed.
Thus, a key target is to provide software which is competitive in terms of efficiency while preserving certified outputs. Therefore, we restrict ourselves to algorithms which verify the assumptions made on the input and check the correctness of any random choices made during a computation, without sacrificing efficiency. Theoretical complexity for our algorithms is only a preliminary step of our work, which culminates in efficient implementations designed to solve significant applications.
A consequence of our way of working is that many of our contributions are related to applied topics such as cryptography, error-correcting codes, robotics and signal theory. We have to emphasize that these applied contributions rely on a long-term and global management of the project, with clear and constant objectives leading to deep theoretical advances.
Computer Algebra. Best Poster Award, STOC 2011 (San Jose, USA) – PWE: Polynomial with Errors.
ANR Grants. Two new projects (HPAC and GEOLMI) were accepted (4-year projects).
Maple. Maple 15 release: the contract with Maple was renewed until Dec. 2011.
For polynomial system solving, the mathematical specification of the result of a computation, in particular when the number of solutions is infinite, is itself a difficult problem. Sorting the most frequently asked questions appearing in the applications, one distinguishes several classes of problems which differ either by their mathematical structure or by the meaning one can give to the word "solving".
Some of the following questions have a different meaning in the real case or in the complex case; others are posed only in the real case:
zerodimensional systems (with a finite number of complex solutions, which include the particular case of univariate polynomials): the questions are in general well defined (numerical approximation, number of solutions, etc.) and the mathematical objects handled are relatively simple and well known;
parametric systems: they are generally zerodimensional for almost all values of the parameters. The goal is to characterize the solutions of the system (number of real solutions, existence of a parameterization, etc.) with respect to the values of the parameters.
positive dimensional systems: for a direct application, the first question is the existence of zeros of a particular type (for example real, real positive, or in a finite field). The resolution of such systems can be considered as a black box for the study of more general problems (semialgebraic sets, for example), and the information to be extracted is generally a point on each connected component in the real case.
constructible and semialgebraic sets: as opposed to what occurs numerically, the addition of constraints or inequalities complicates the problem. Even if semialgebraic sets are the basic objects of real geometry, their automatic and effective study remains a major challenge. To date, the state of the art is limited, since only two classes of methods exist:
the Cylindrical Algebraic Decomposition, which basically computes a partition of the ambient space into cells on which the signs of a given set of polynomials are constant;
deformation-based methods, which reduce the problem to the study of algebraic varieties.
The first method is limited in terms of performance (at most 3 or 4 variables) because of its recursive treatment, variable by variable; the second is also limited because of the use of a sophisticated arithmetic (formal infinitesimals).
quantified formulas: deciding efficiently whether a first-order formula is valid is certainly one of the greatest challenges in "effective" real algebraic geometry. However, this problem is relatively well circumscribed, since it can always be rewritten as the conjunction of (supposedly) simpler problems, such as the computation of a point on each connected component of a semialgebraic set.
As explained in some parts of this document, the unicity of the studied mathematical objects does not imply the unicity of the related algorithms. The priorities of our algorithmic work are generally dictated by the applications. Thus, the items above naturally structure the algorithmic part of our research topics.
For each of these goals, our work is to design the most efficient possible algorithms: there is thus a strong correlation between implementations and applications, but a significant part of the work is dedicated to the identification of black boxes allowing a modular approach to the problems. For example, the resolution of zerodimensional systems is a prerequisite for the algorithms dealing with parametric or positive dimensional systems.
An essential class of black boxes developed in the project does not appear directly in the objectives listed above: the "algebraic or complex" resolutions. They are mostly reformulations, more algorithmically usable, of the studied systems. One distinguishes two categories of complementary objects:
ideal representations: from a computational point of view, these are the structures used in the first steps;
variety representations: the algebraic variety, or more generally the constructible or semialgebraic set, is the studied object.
To give a simple example, in
The basic tools that we develop and use to understand, in an automatic way, the algebraic and geometric structures are, on the one hand, Gröbner bases (the best-known object used to represent an ideal without loss of information) and, on the other hand, triangular sets (an effective way to represent varieties).
Let us denote by
The ideal
One of the main properties of a Gröbner basis is to provide an algorithmic method for deciding whether a polynomial belongs to an ideal, through a reduction function denoted "Reduce": if G is a Gröbner basis of an ideal I, then a polynomial f belongs to I if and only if Reduce(f, G) = 0.
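As a toy illustration (using SymPy rather than the team's own software, and a made-up ideal), the membership test provided by a Gröbner basis can be sketched as follows:

```python
# Ideal-membership test via Groebner basis reduction, using SymPy.
# The ideal and the tested polynomials are illustrative choices.
from sympy import symbols, groebner

x, y = symbols('x y')

# Ideal I = <x^2 - 1, y - x>; its variety is {(1, 1), (-1, -1)}.
G = groebner([x**2 - 1, y - x], x, y, order='lex')

# f = y^2 - 1 lies in I, since y^2 - 1 = (y + x)*(y - x) + (x^2 - 1),
# so its reduction modulo G is 0.
f = y**2 - 1
print(G.contains(f))     # membership test: Reduce(f, G) == 0
print(G.reduce(f)[1])    # the remainder itself

# g = x + y does not vanish at (1, 1), hence is not in I.
print(G.contains(x + y))
```

The same test underlies many higher-level operations: deciding equality of ideals, or normal-form computations in the quotient algebra.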
Gröbner bases are computable objects. The most popular method for computing them is Buchberger's algorithm. It has several variants, and it is implemented in most general computer algebra systems, such as Maple or Mathematica. The computation of Gröbner bases using Buchberger's original strategies has to face two kinds of problems:
(A) arbitrary choices: the order in which the computations are done has a dramatic influence on the computation time;
(B) useless computations: the original algorithm spends most of its time computing reductions to 0.
For problem (A), J.C. Faugère proposed the F4 algorithm.
For problem (B), J.C. Faugère proposed a new criterion for detecting useless computations. Under some regularity conditions on the system, it is now proved that the algorithm never performs useless computations.
A new algorithm named
We pay particular attention to Gröbner bases computed for elimination orderings, since they provide a way of "simplifying" the system (an equivalent system with a structured shape). A well-known property is that the zeros of the first non-null polynomial define the Zariski closure (classical closure in the case of complex coefficients) of the projection onto the coordinate space associated with the smallest variables.
Such systems are algorithmically easy to use, for computing numerical approximations of the solutions in the zerodimensional case, or for studying the singularities of the associated variety (triangular minors in the Jacobian matrices).
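A minimal SymPy sketch of this projection property, on a made-up system (the unit circle intersected with a line):

```python
# Elimination with a lexicographic Groebner basis, using SymPy.
# Toy system: the circle x^2 + y^2 = 1 intersected with the line x = y.
from sympy import symbols, groebner, solve

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')  # x > y

# The basis elements not involving x generate the elimination ideal:
# their zeros are the (closure of the) projection onto the y-axis.
elim = [g for g in G.exprs if not g.has(x)]
print(elim)
print(solve(elim[0], y))   # the two projected points y = ±sqrt(2)/2
```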
Triangular sets have a simpler structure but, except if they are linear, algebraic systems cannot in general be rewritten as a single triangular set; one then speaks of a decomposition of the system into several triangular sets.
Lexicographic Gröbner bases vs. triangular sets.
Triangular sets appear under various names in the field of algebraic systems. J.F. Ritt introduced them as characteristic sets for prime ideals in differential algebra. His constructive algebraic tools were adapted by W.T. Wu in the late seventies for geometric applications. The concept of regular chain is adapted for recursive computations in a univariate way.
It provides a membership test and a zero-divisor test for the strongly unmixed-dimensional ideal it defines. Kalkbrener defined regular triangular sets and showed how to decompose algebraic varieties as unions of Zariski closures of the zeros of regular triangular sets. Gallo showed that the principal component of a triangular decomposition can be computed in
D. Lazard contributed to the homogenization of the work completed in this field by proposing a series of specifications and definitions gathering the whole of the former work. Two concepts essential for the use of these sets (regularity and separability) now make it possible both to establish a simple link with the studied varieties and to specify the computed objects precisely.
A remarkable and fundamental property, for the use we make of triangular sets, is that the ideals induced by regular and separable triangular sets are radical and equidimensional. These properties are essential for some of our algorithms. For example, having radical and equidimensional ideals allows us to compute straightforwardly the singular locus of a variety by canceling minors of the right dimension in the Jacobian matrix of the system. This is naturally a basic tool for some algorithms in real algebraic geometry.
In 1993, Wang proposed a method for decomposing any polynomial system into fine triangular systems, which have additional properties, such as the projection property, that may be used for solving parametric systems (see Section ).
Triangular-set-based techniques are efficient for specific problems, but the implementations of direct decompositions into triangular sets do not currently reach the efficiency of Gröbner bases in terms of computable classes of examples. In any case, our team benefits from the progress carried out in this last field, since we currently perform decompositions into regular and separable triangular sets through lexicographic Gröbner basis computations.
A system is zerodimensional if the set of the solutions in an algebraically closed field is finite. In this case, the set of solutions does not depend on the chosen algebraically closed field.
Such a situation can easily be detected on a Gröbner basis for any admissible monomial ordering.
These systems are mathematically special, since one can systematically reduce them to linear algebra problems. More precisely, the algebra
The use of this vector-space structure is well known and is at the origin of one of the best-known algorithms of the field: starting from a Gröbner basis for any ordering, it allows one to deduce a Gröbner basis for any other ordering (in practice, a lexicographic basis, which is very difficult to compute directly). It is also common to certain semi-numerical methods, since it allows one to obtain quite simply (by an eigenvalue computation, for example) numerical approximations of the solutions (this type of algorithm is developed, for example, in the INRIA Galaad project).
Contrary to what is written in some of the literature, the computation of Gröbner bases is not "doubly exponential" for all classes of problems. In the case of zerodimensional systems, it is even proved to be simply exponential in the number of variables, for a degree ordering and for systems without zeros at infinity. Thus, an effective strategy consists in computing a Gröbner basis for a favorable ordering and then deducing, by linear algebra techniques, a Gröbner basis for a lexicographic ordering.
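This degree-then-lex strategy can be sketched with SymPy's FGLM implementation, on a toy zerodimensional system (not one of the team's applications):

```python
# Change of ordering for a zero-dimensional ideal: compute a Groebner
# basis for a degree ordering (cheap), then convert it to a lexicographic
# basis with the FGLM linear-algebra algorithm.  Toy system, via SymPy.
from sympy import symbols, groebner, real_roots

x, y = symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]      # finitely many complex solutions

G_deg = groebner(F, x, y, order='grevlex')
G_lex = G_deg.fglm('lex')           # linear-algebra change of ordering

print(list(G_lex.exprs))
# The last polynomial is univariate in y, as expected for a lex basis
# of a zero-dimensional ideal; its real roots are -2, -1, 1, 2.
print(real_roots(G_lex.exprs[-1]))
```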
The case of zerodimensional systems is also specific for triangular sets. Indeed, in this particular case, we have designed algorithms that compute them efficiently starting from a lexicographic Gröbner basis. Note that, in the case of zerodimensional systems, regular triangular sets are Gröbner bases for a lexicographic ordering.
Many teams work on Gröbner bases, and some use triangular sets in the case of zerodimensional systems, but to our knowledge very few continue the work up to a numerical resolution, and even fewer tackle the specific problem of computing the real roots. In practice, it is illusory to hope to obtain, in a reliable way, a numerical approximation of the solutions straightforwardly from a lexicographic basis or even from a triangular set. This is mainly due to the size of the coefficients in the result (rational numbers).
The use of innovative algorithms for Gröbner basis computations and Rational Univariate Representations (including the "shape position" case) allows zerodimensional solving to be used as a subtask in other algorithms.
When a system is positive dimensional (with an infinite number of complex roots), it is no longer possible to enumerate the solutions. The solving process therefore reduces to decomposing the set of solutions into subsets which have a well-defined geometry. One may perform such a decomposition from an algebraic point of view or from a geometric one, the latter meaning that multiplicities are not taken into account (the structure of the primary components of the ideal is lost).
Although there exist algorithms for both approaches, the algebraic point of view is presently out of reach of practical computations, and we restrict ourselves to geometric decompositions.
When one studies the solutions in an algebraically closed field, the useful decompositions are the equidimensional decomposition (which consists in considering separately the isolated solutions, the curves, the surfaces, ...) and the prime decomposition (which decomposes the variety into irreducible components). In practice, our team works on algorithms for decomposing the system into regular separable triangular sets, which corresponds to a decomposition into equidimensional, but not necessarily irreducible, components. The irreducible components may eventually be obtained by polynomial factorization.
However, in many situations one is looking only for real solutions satisfying some inequalities (
There are general algorithms for such tasks, which rely on Tarski's quantifier elimination. Unfortunately, these problems have a very high complexity, usually doubly exponential in the number of variables or the number of blocks of quantifiers, and these general algorithms are intractable. It follows that the output of a solver should be restricted to a partial description of the topology or of the geometry of the set of solutions, and our research consists in looking for more specific problems, which are interesting for the applications, and which may be solved with a reasonable complexity.
We focus on 2 main problems:
1. computing one point on each connected component of a semialgebraic set;
2. solving systems of equalities and inequalities depending on parameters.
The most widespread algorithm computing sampling points in a semialgebraic set is the Cylindrical Algebraic Decomposition algorithm due to Collins. With slight modifications, this algorithm also solves the problem of Quantifier Elimination. It is based on the recursive elimination of variables, one after another, ensuring nice properties between the components of the studied semialgebraic set and the components of semialgebraic sets defined by the polynomial families obtained by the elimination of variables. It is doubly exponential in the number of variables, and its best implementations are limited to problems in 3 or 4 variables.
Since the end of the eighties, alternative strategies with a single-exponential complexity in the number of variables have been developed. They are based on the progressive construction of the following subroutines:
(a) solving zerodimensional systems: this can be performed by computing a lexicographic Gröbner basis;
(b) computing sampling points in a real hypersurface: after some infinitesimal deformations, this is reduced to problem (a) by computing the critical locus of a polynomial mapping reaching its extrema on each connected component of the real hypersurface;
(c) computing sampling points in a real algebraic variety defined by a polynomial system: this is reduced to problem (b) by considering the sum of squares of the polynomials;
(d) computing sampling points in a semialgebraic set: this is reduced to problem (c) by applying an infinitesimal deformation.
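Step (b) can be illustrated on the simplest possible example, the real circle, using SymPy (a toy instance, not the team's algorithm):

```python
# A minimal instance of the critical point method (step (b)): sample the
# real circle x^2 + y^2 = 1 by the critical points of the projection on x.
from sympy import symbols, solve

x, y = symbols('x y')
f = x**2 + y**2 - 1        # smooth hypersurface, compact real part

# Critical points of (x, y) -> x restricted to f = 0:
# solve f = 0 together with df/dy = 0.
crit = solve([f, f.diff(y)], [x, y])
print(crit)                # the two critical points (±1, 0)
```

The projection reaches its extrema x = ±1 on the single connected component, so the two critical points are sampling points for it.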
On the one hand, the relevance of this approach is based on the fact that its complexity is asymptotically optimal. On the other hand, some important algorithmic developments have been necessary to obtain efficient implementations of subroutines (b) and (c).
During the last years, we focused on providing efficient algorithms for solving problems (b) and (c). The method used relies on finding a polynomial mapping reaching its extrema on each connected component of the studied variety and whose critical locus is zerodimensional. For example, in the case of a smooth hypersurface whose real counterpart is compact, choosing a projection on a line is sufficient. This method is called in the sequel the critical point method. We started by studying problem (b). Even if we showed that our solution may solve new classes of problems, we have chosen to skip the reduction to problem (b), which is now considered as a particular case of problem (c), in order to avoid an artificial growth of degree and the introduction of singularities and infinitesimals.
Putting the critical point method into practice in the general case requires dropping some hypotheses. First, the compactness assumption, which is in fact intimately related to an implicit properness assumption, has to be dropped. Second, algebraic characterizations of critical loci are based on non-degeneracy assumptions on the rank of the Jacobian matrix associated with the studied polynomial system. These hypotheses fail as soon as the system defines a non-radical ideal, a non-equidimensional variety, and/or a non-smooth variety. Our contributions consist in overcoming these obstacles efficiently, and several strategies have been developed.
The properness assumption can be dropped by considering the square of the distance function to a generic point instead of a projection function: indeed, each connected component contains at least a point minimizing this function locally. Performing a radical and equidimensional decomposition of the ideal generated by the studied polynomial system avoids some degeneracies of its associated Jacobian matrix. Finally, the recursive study of nested singular loci handles the case of non-smooth varieties. These algorithmic improvements yield a first algorithm with reasonable practical performance.
Since projection functions are linear while the distance function is quadratic, computing the critical points of projections is easier; thus, we have also investigated their use. A first approach consists in recursively studying the critical locus of projection functions on nested affine subspaces containing coordinate axes, combined with the study of their sets of non-properness. A more efficient approach, avoiding the study of sets of non-properness, is obtained by iteratively considering projections on generic affine subspaces restricted to the studied variety, and fibers over arbitrary points of these subspaces intersected with the critical locus of the corresponding projection. The underlying algorithm is the most efficient we have obtained.
Most of the applications we recently solved (celestial mechanics, cuspidal robots, statistics, etc.) require the study of semialgebraic systems depending on parameters. Although we covered these subjects in an independent way, some general algorithms for the resolution of this type of system can be proposed from these experiments.
The general philosophy consists in studying the generic solutions separately from algebraic subvarieties (which we call from now on discriminant varieties) of dimension lower than that of the semialgebraic set considered. The study of the excluded varieties can be done separately to obtain a complete answer to the problem, or simply neglected if one is interested only in the generic solutions, which is the case in some applications.
We recently proposed a new framework for studying basic constructible (resp. semialgebraic) sets defined as systems of equations and inequations (resp. inequalities) depending on parameters. Let's consider the basic semialgebraic set
and the basic constructible set
where
For any
Given any ideal
for any set
In most applications,
Being able to compute the minimal discriminant variety allows to simplify the problem depending on
Then, being able to describe the connected components of the complement of the discriminant variety in
We currently propose several computational strategies. An a priori decomposition into equidimensional components, given as zeros of radical ideals, simplifies the computation and the use of the discriminant varieties. This preliminary computation is, however, sometimes expensive, so we are developing adaptive solutions where such decompositions are computed on demand. The main progress is that the resulting methods are fast on easy (generic) problems and slower on problems with strong geometric content.
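A one-parameter toy example (not from the applications above) illustrates the idea of a discriminant variety, here computed with SymPy:

```python
# Toy parametric system: x^2 + t*x + 1 = 0 with one parameter t.
# The discriminant variety t^2 - 4 = 0 cuts the parameter line into
# cells on which the number of real solutions is constant.
from sympy import symbols, discriminant, real_roots

x, t = symbols('x t')
p = x**2 + t*x + 1

print(discriminant(p, x))             # t**2 - 4

# Away from the discriminant variety, the root count is constant per cell:
print(len(real_roots(p.subs(t, 3))))  # t > 2      : 2 real roots
print(len(real_roots(p.subs(t, 0))))  # -2 < t < 2 : 0 real roots
```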
The existing implementations of algorithms able to "solve" (to get some information about the roots of) parametric systems all compute, directly or indirectly, discriminant varieties, but none computes optimal objects (strict discriminant varieties). This is the case, for example, of the Cylindrical Algebraic Decomposition adapted to
A fundamental problem in cryptography is to evaluate the security of cryptosystems against the most powerful techniques. To this end, several general methods have been proposed: linear cryptanalysis, differential cryptanalysis, etc. Algebraic cryptanalysis is another general method which permits studying the security of the main public-key and secret-key cryptosystems.
Algebraic cryptanalysis can be described as a general framework that makes it possible to assess the security of a wide range of cryptographic schemes. In fact, the recent proposal and development of algebraic cryptanalysis is now widely considered an important breakthrough in the analysis of cryptographic primitives. It is a powerful technique that potentially applies to a large range of cryptosystems. The basic principle of such a cryptanalysis is to model a cryptographic primitive by a set of algebraic equations. The system of equations is constructed in such a way as to have a correspondence between the solutions of this system and secret information of the cryptographic primitive (for instance, the secret key of an encryption scheme).
Although the principle of algebraic attacks can probably be traced back to the work of Shannon, algebraic cryptanalysis has only recently been investigated as a cryptanalytic tool. To summarize, an algebraic attack is divided into two steps:
Modeling, i.e. representing the cryptosystem as a polynomial system of equations
Solving, i.e. finding the solutions of the polynomial system constructed in Step 1.
Typically, the first step leads to rather “big” algebraic systems (at least several hundred variables for modern block ciphers), so solving such systems is always a challenge. To make the computation efficient, we usually have to study the structural properties of the systems (using symmetries, for instance). In addition, one also has to verify the consistency of the solutions of the algebraic system with respect to the desired solutions of the natural problem. Of course, all these steps must be constantly checked against the natural problem, which in many cases can guide the researcher to an efficient method for solving the algebraic system.
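The two steps can be sketched on a deliberately tiny, made-up example: a 2-bit "key" (k0, k1) constrained by two known relations, modeled over GF(2) and solved with SymPy's Gröbner engine:

```python
# Sketch of the two steps on a toy "cryptosystem" with a 2-bit key.
# Step 1 (modeling): suppose known input/output pairs impose
# k0 XOR k1 = 1 and k0 AND k1 = 0; over GF(2) this gives the equations
# below, together with the field equations k^2 = k.
# Step 2 (solving): compute a lexicographic Groebner basis over GF(2).
from sympy import symbols, groebner

k0, k1 = symbols('k0 k1')
system = [k0 + k1 + 1,     # k0 XOR k1 = 1
          k0*k1,           # k0 AND k1 = 0
          k0**2 + k0,      # field equation
          k1**2 + k1]      # field equation

G = groebner(system, k0, k1, modulus=2, order='lex')
print(list(G.exprs))
# The basis encodes the two candidate keys (k0, k1) in {(0, 1), (1, 0)}.
```

Real instances have thousands of variables and equations, which is precisely the scientific lock discussed below.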
Multivariate cryptography comprises any cryptographic scheme that uses multivariate polynomial systems. The use of such polynomial systems in cryptography dates back to the mid-eighties, and was motivated by the need for alternatives to number-theoretic schemes. Indeed, multivariate systems enjoy low computational requirements and can yield short signatures; moreover, schemes based on the hard problem of solving multivariate equations over a finite field are not affected by the quantum computer threat, whereas it is well known that number-theoretic schemes like RSA, DH, or ECDH are. Multivariate cryptosystems represent a target of choice for algebraic cryptanalysis due to their intrinsic multivariate representation.
By using fast algorithms for computing Gröbner bases, it was possible to break the first HFE challenge (real cryptographic size 80 bits and a symbolic prize of 500 US$) in only two days of CPU time. More precisely, we have used the
The weakness of the systems of equations coming from HFE instances can be explained by the algebraic properties of the secret key (work presented at Crypto 2003 in collaboration with A. Joux). From this study, it is possible to predict the maximal degree occurring in the Gröbner basis computation. This makes it possible to establish precisely the complexity of the Gröbner attack and to compare it with the theoretical bounds. The same kind of technique has since been used for successfully attacking other types of multivariate cryptosystems: IP, 2R,
On the one hand, algebraic techniques have been successfully applied against a number of multivariate schemes and in stream cipher cryptanalysis. On the other hand, the feasibility of algebraic cryptanalysis remains a source of speculation for block ciphers, and an almost unexplored approach for hash functions. The scientific lock is that the corresponding algebraic systems are so huge (thousands of variables and equations) that nobody is able to predict correctly the complexity of solving such polynomial systems. Hence, one goal of the team is ultimately to design and implement a new generation of efficient algebraic cryptanalysis toolkits to be used against block ciphers and hash functions. To achieve this goal, we will investigate non-conventional approaches for modeling these problems.
Applications are fundamental for our research for several reasons.
The first one is that they are the only source of fair tests for the algorithms. In fact, the complexity of the solving process depends very irregularly on the problem itself. Therefore, random tests do not give a right idea of the practical behavior of a program, and complexity analysis, when possible, does not necessarily provide realistic information.
A second reason is that, as noted above, we need real-world problems to determine which specifications of algorithms are really useful. Conversely, it is frequently by solving specific problems through ad hoc methods that we find new algorithms with general impact.
Finally, obtaining successes with problems which are intractable by the other known approaches is the best proof for the quality of our work.
On the other hand, there is a specific difficulty. The problems which may be solved with our methods may be formulated in many different ways, and their usual formulation is rarely well suited to polynomial system solving or to exact computations. Frequently, it is not even clear that a problem is purely algebraic, because researchers and engineers are used to formulating problems in a differential way, or to linearizing them.
Therefore, our software cannot be used as black boxes, and we have to understand the origin of a problem in order to translate it into a form well suited for our solvers.
It follows that many of our results, published or in preparation, are classified in scientific domains which are different from ours, like cryptography, error correcting codes, robotics, signal processing, statistics or biophysics.
The (parallel) manipulators we study are general parallel robots: hexapods are complex mechanisms made up of six (often identical) kinematic chains, a base (a fixed rigid body including six joints, or articulations) and a platform (a mobile rigid body containing six other joints). The design and study of parallel robots require the resolution of direct geometric models (computing the absolute coordinates of the joints of the platform, knowing the position and geometry of the base, the geometry of the platform, and the distances between the joints of the kinematic chains at the base and at the platform) and inverse geometric models (computing the distances between the joints of the kinematic chains at the base and at the platform, knowing the absolute positions of the base and of the platform).
Since the inverse geometric models can easily be solved, we focus on the resolution of the direct geometric models. The study of the direct geometric model is a recurrent activity for several members of the project, and one can say that the progress carried out in this field perfectly illustrates the evolution of the methods for the resolution of algebraic systems. Interest in this subject is old: the first work in which members of the project took part primarily concerned the study of the number of (complex) solutions of the problem. The results were often illustrated by Gröbner basis computations done with the Gb software.
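A planar analogue with made-up dimensions gives the flavor of a direct geometric model: each measured leg length yields one polynomial (circle) equation, and the platform poses are the solutions of the system. A SymPy sketch:

```python
# Planar analogue of a direct geometric model: locate a platform point
# (x, y) knowing two anchor points on the base and the two leg lengths
# (all numerical values are made up for the example).
from sympy import symbols, solve

x, y = symbols('x y')
a1, a2 = (0, 0), (4, 0)    # anchor points on the base
d1, d2 = 5, 5              # measured leg lengths

eqs = [(x - a1[0])**2 + (y - a1[1])**2 - d1**2,
       (x - a2[0])**2 + (y - a2[1])**2 - d2**2]

sols = solve(eqs, [x, y])
print(sols)                # two mirror poses: x = 2, y = ±sqrt(21)
```

The real hexapod problem is the same computation with six sphere equations and the six pose unknowns of the rigid platform, hence the need for efficient algebraic solvers.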
One of the remarkable points of this study is certainly the classification suggested in .
FGb/Gb is a powerful software for computing Gröbner bases; it is written in C/C++ (approximately 250,000 lines, counting the old Gb software).
FGb is a powerful software for computing Gröbner bases. It includes the new generation of algorithms for computing Gröbner bases of polynomial systems (mainly the F4, F5 and FGLM algorithms). It is implemented in C/C++ (approximately 250,000 lines); standalone servers are available on demand. Since 2006, FGb has been dynamically linked with the Maple software (version 11 and higher) and is part of the official distribution of this software.
See also the web page
http://
ACM: I.1.2 Algebraic algorithms
Programming language: C/C++
RAGLib is a Maple library for computing sampling points in semialgebraic sets.
Epsilon is a library of functions implemented in Maple and Java for polynomial elimination and decomposition with (geometric) applications.
We also focused on the interaction of real polynomial system solving with global optimization. Let
Global optimization problems can also be tackled by computing algebraic certificates of positivity through sums of squares decompositions. Let
Let
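One link between real solving and global optimization can be sketched as follows (a toy univariate instance, using SymPy): for a coercive polynomial, the global minimum is attained at a real critical point, so real solving of the gradient system yields the optimum.

```python
# Computing the global minimum of a coercive polynomial through its
# real critical points (toy instance of the link between real solving
# and global optimization).
from sympy import symbols, real_roots

x = symbols('x')
f = x**4 - 2*x**2          # coercive: f -> +oo when |x| -> oo

crit = real_roots(f.diff(x))             # real roots of 4*x**3 - 4*x
values = [f.subs(x, r) for r in crit]
print(min(values))                       # global minimum: -1
```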
Solving multihomogeneous systems, as a wide range of structured algebraic systems occurring frequently in practical problems, is of first importance. Experimentally, solving these systems with Gröbner bases algorithms seems to be easier than solving homogeneous systems of the same degree. Nevertheless, the reasons for this behaviour are not clear. In , we focus on bilinear systems (i.e. bihomogeneous systems where all equations have bidegree
The Goppa Code Distinguishing (GCD) problem consists in distinguishing the matrix of a Goppa code from a random matrix. Up to now, it is widely believed that the GCD problem is a hard decisional problem. In , we present the first technique allowing to distinguish alternant and Goppa codes over any field. Our technique can solve the GCD problem in polynomialtime provided that the codes have rates sufficiently large. The key ingredient is an algebraic characterization of the keyrecovery problem which reduces to the solving of a system of bihomogeneous polynomial equations. The idea is to consider the dimension of the solution space of a linearized system deduced from the algebraic system describing the keyrecovery. It turns out that experimentally this dimension depends on the type of code. Explicit formulas derived from extensive experimentations for the value of the dimension are provided for “generic” random, alternant, and Goppa code over any alphabet. Finally, we give explanations of these formulas in the case of random codes, alternant codes over any field and binary Goppa codes.
The Isomorphism of Polynomials (IP) problem is one of the most fundamental problems in multivariate public key cryptography (MPKC). In , we introduce a new framework to study the counting problem associated with IP. Namely, we present tools from finite geometry for investigating this counting problem. Precisely, we focus on enumerating or estimating the number of isomorphism equivalence classes of homogeneous quadratic polynomial systems. These problems are equivalent to finding the size of the key space of a multivariate cryptosystem and the total number of different multivariate cryptographic schemes, respectively, which might impact the security and the potential capability of MPKC. We also consider their applications in the analysis of a specific multivariate public key cryptosystem. Our results not only answer how many cryptographic schemes can be derived from monomials and how big the key space is for a fixed scheme, but also show that many HFE cryptosystems are equivalent to a Matsumoto-Imai scheme.
The Elliptic Curve Discrete Logarithm Problem (ECDLP) has become the most attractive alternative to factoring for public key cryptography. Whereas subexponential factoring algorithms exist,
solving the ECDLP in general can only be done in exponential time. Provided that a certain heuristic assumption holds, we present in
an index calculus algorithm solving the ECDLP over any binary field
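The index calculus algorithm of the paper is beyond a short sketch, but the exponential-time baseline it aims to beat can be illustrated with the classical baby-step giant-step method, a generic-group algorithm running in time O(sqrt(group order)). The toy instance below uses a multiplicative group modulo a small prime rather than an elliptic curve:

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by baby-step giant-step in O(sqrt(p)) steps."""
    m = isqrt(p) + 1
    table = {}
    e = 1
    for j in range(m):               # baby steps: store g^j -> j
        table.setdefault(e, j)
        e = e * g % p
    factor = pow(g, -m, p)           # g^(-m) mod p (Python 3.8+ modular inverse)
    gamma = h % p
    for i in range(m):               # giant steps: h * g^(-i*m)
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None                      # no solution: h is not a power of g

# toy instance: recover the exponent in a small prime-order setting
p, g = 1009, 11
x = bsgs(g, pow(g, 123, p), p)
assert pow(g, x, p) == pow(g, 123, p)
```

The point of the contribution above is precisely that, for binary-field elliptic curves and under a heuristic assumption, one can do better than such generic sqrt-time algorithms.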
The goal of this contract (including a CIFRE PhD grant) is to combine side-channel attacks (Differential Power Analysis, DPA) with algebraic cryptanalysis.
The new contract CAC ("Computer Algebra and Cryptography") began in October 2009 for a period of 4 years. This project investigates the areas of cryptography and computer algebra, and their influence on the security and integrity of digital data. In CAC, we plan to use basic tools of computer algebra to evaluate the security of cryptographic schemes. CAC will focus on three new challenging applications of algebraic techniques in cryptography, namely block ciphers, hash functions, and factorization with known bits. To this end, we will use Gröbner bases techniques but also lattice tools. In this proposal, we will explore non-conventional approaches to the algebraic cryptanalysis of these problems.
The pervasive ubiquity of parallel architectures and memory hierarchies has led to a new quest for parallel mathematical algorithms and software capable of exploiting the various levels of parallelism: from hardware acceleration technologies (multicore and multiprocessor systems on chip, GPGPU, FPGA) to cluster and global computing platforms. To give greater scope to symbolic and algebraic computing, beyond the optimization of the application itself, the effective use of a large number of resources (memory and specialized computing units) is expected to enhance performance with respect to multi-criteria objectives: time, resource usage, reliability, and even energy consumption. The design and implementation of mathematical algorithms with provable, adaptive and sustainable performance is a major challenge. In this context, this project
The GeoLMI project
ECRYPT II, the European Network of Excellence for Cryptology II, is a 4-year network of excellence funded within the Information & Communication Technologies (ICT) Programme of the European Commission's Seventh Framework Programme (FP7) under contract number ICT-2007-216676. It falls under the action line Secure, dependable and trusted infrastructures. ECRYPT II started on 1 August 2008. Its objective is to continue intensifying the collaboration of European researchers in information security. The ECRYPT II research roadmap is motivated by the changing environment and threat models in which cryptology is deployed, by the gradual erosion of the computational difficulty of the mathematical problems on which cryptology is based, and by the requirements of new applications and cryptographic implementations. Its main objective is to ensure a durable integration of European research in both academia and industry and to maintain and strengthen the European excellence in these areas. In order to reach this goal, 11 leading players have integrated their research capabilities within three virtual labs focusing on symmetric key algorithms (SymLab), public key algorithms and protocols (MAYA), and hardware and software implementations (VAMPIRE). They are joined by more than 20 adjoint members of the network who closely collaborate with the core partners. The team joined the European Network of Excellence for Cryptology ECRYPT II this academic year as an associate member.
Royal Society Project with the Crypto team at Royal Holloway, University of London, UK.
ECCA (Exact/Certified Computation with Algebraic systems) is a LIAMA project (Reliable Software Theme) focusing on polynomial system solving. The partners are INRIA, CNRS, and CAS (Chinese Academy of Sciences). The general objectives of this project are mainly the same as those of SALSA.
The main objective of this project is to study and compute the solutions of nonlinear algebraic systems and their structures and properties with selected target applications using exact or certified computation. The project consists of one main task of basic research on the design and implementation of fundamental algorithms and four tasks of applied research on computational geometry, algebraic cryptanalysis, global optimization, and algebraic biology. It will last for three years (2010–2012) with 300 person-months of workforce. Its consortium is composed of strong research teams from France and China (KLMM, SKLOIS, and LMIB) in the area of solving algebraic systems with applications.
J.C. Faugère is a member of the editorial boards of the journals “Mathematics in Computer Science” (Birkhäuser) and “Cryptography and Communications – Discrete Structures, Boolean Functions and Sequences” (Springer); he is a guest editor for special issues of the Journal of Symbolic Computation (Elsevier) and of “Mathematics in Computer Science” (Birkhäuser).
M. Safey El Din is a member of the editorial board of the Journal of Symbolic Computation (Elsevier).
J.C. Faugère is PC co-chair of the third SCC conference (Santander, 2012).
D. Wang holds the following editorial positions:
Editor-in-Chief and Managing Editor of the journal “Mathematics in Computer Science” (published by Birkhäuser/Springer, Basel).
Executive Associate Editor-in-Chief of the journal “SCIENCE CHINA Information Sciences” (published by Science China Press, Beijing and Springer, Berlin).
Member of the Editorial Boards for the
Journal of Symbolic Computation (published by Academic Press/Elsevier, London),
Frontiers of Computer Science in China (published by Higher Education Press, Beijing and Springer, Berlin),
Texts and Monographs in Symbolic Computation (published by Springer, Wien New York),
Book Series on Mathematics Mechanization (published by Science Press, Beijing),
Book Series on Fundamentals of Information Science and Technology (published by Science Press, Beijing).
Editor for the Book Series in Computational Science (published by Tsinghua University Press, Beijing).
M. Safey El Din was a member of the program committees of the 36th International Symposium on Symbolic and Algebraic Computation (San Jose, USA, June 8–11, 2011) and the 13th International Workshop on Computer Algebra in Scientific Computing (Kassel, Germany, September 5–9, 2011), and is a member of the program committee of the 14th International Workshop on Computer Algebra in Scientific Computing (Maribor, Slovenia, September 3–6, 2012).
D. Wang was a member of the program committee of:
Technical Session at ICCSA 2011 on Symbolic Computing for Dynamic Geometry (Santander, Spain, June 20–23, 2011),
M. Safey El Din was invited for 2 weeks in July 2011 by L. Zhi at the Key Laboratory of Mathematics Mechanization (Chinese Academy of Sciences, Beijing, China), for 1 week at the Department of Computer Science at Aarhus University (Denmark), and for 2 weeks in October 2011 by E. Schost at the Department of Computer Science at The University of Western Ontario (London, Canada). He is a co-organizer of the next National Days of Computer Algebra in 2012.
L. Perret was invited for 2 weeks in 2011 (in July and December) by D. Lin at the SKLOIS (Chinese Academy of Sciences, Beijing, China), and for 1 week (April 2011) at the Stevens Institute (New York, USA) by A. Miasnikov.
J.C. Faugère was invited for 1 week in July 2011 by D. Lin at the SKLOIS (Chinese Academy of Sciences, Beijing, China).
J.C. Faugère was a plenary invited speaker at ECC 2011, the 15th Workshop on Elliptic Curve Cryptography.
J.C. Faugère is a member of the MEGA Advisory Board.
M. Safey El Din is a co-organizer (with L. Zhi) of the First International Workshop on Certified and Reliable Computing, held in July 2011 at Nanning, China, and a co-organizer of the mini-symposia on Algebraic Complexity (with E. Schost) and Algorithms in Real Algebraic Geometry (with H. Hong), held on the occasion of the SIAM Conference on Applications of Algebraic Geometry (Raleigh, Oct. 2011).
M. Safey El Din was an invited speaker at the mini-symposium on Algebraic Geometry and Optimization (SIAM Conference on Optimization) and at the MaGIX conference (LIX, Palaiseau), and gave several talks in the mini-symposia organized during the SIAM Conference on Applications of Algebraic Geometry. He was also invited to give a talk at the joint Mathematics-Computer Science seminar at the University of Western Ontario and gave a talk at the first workshop of the GeoLMI project (Rennes, Nov. 2011).
J.C. Faugère was an invited speaker at the MaGIX conference (LIX, Palaiseau) and in the mini-symposium on Linear Algebra organized during the SIAM Conference on Applications of Algebraic Geometry (Raleigh, USA). He was also invited to give a talk at the joint Mathematics-Computer Science seminar at the University of Aarhus (Denmark).
J.C. Faugère was a member of the evaluation committee (AERES) of the Institut de Mathématiques de Toulon et du Var.
M. Safey El Din is a designated member of the French National Council of the Universities (CNU).
J.C. Faugère is a member of the hiring committees in computer science at the Université Pierre et Marie Curie, the Université de Toulon and the Université Joseph Fourier.
J.C. Faugère and L. Perret give a course on Polynomial System Solving, Computer Algebra and Applications at the “Master Parisien de Recherche en Informatique” (MPRI).
G. Renault gives a course on Computational Number Theory and Cryptology at the Master d'Informatique de l'Université Paris 6.