Section: Overall Objectives
Scientific ground
Basic computable objects and algorithms
The basic computable objects and algorithms we study, use, optimize or develop are among the most classical in computer algebra and are studied by many people around the world: they mainly concern basic computer arithmetic, linear algebra, lattices and polynomial system solving.
Our approach to these basic problems, whose solution is important for the work of the whole team, is three-fold. First, for some selected problems, we propose and develop general algorithms (isolation of real roots of univariate polynomials, parametrizations of the solutions of zero-dimensional polynomial systems, solutions of parametric equations, etc.). Second, for a selection of well-known problems, we propose different computational strategies (for example, the use of approximate arithmetic to speed up the LLL algorithm or root isolators, while still certifying the final result). Last, we propose specialized variants of known algorithms optimized for a given problem (for example, dedicated solvers for degenerate bivariate systems to be used in the computation of the topology of plane curves).
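To make the first item concrete, the following minimal sketch isolates the real roots of a square-free univariate polynomial with rational coefficients using Sturm sequences and exact arithmetic. It is an illustration only, not the team's implementations [77], [66], which rely on far more efficient techniques; all function names are ours.

    from fractions import Fraction

    def polyval(p, x):
        # Horner evaluation; p lists coefficients from highest to lowest degree
        r = Fraction(0)
        for c in p:
            r = r * x + c
        return r

    def polyrem(a, b):
        # remainder of the Euclidean division of a by b
        a = [Fraction(c) for c in a]
        while len(a) >= len(b):
            q = a[0] / b[0]
            for i in range(len(b)):
                a[i] -= q * b[i]
            a.pop(0)
            while a and a[0] == 0:
                a.pop(0)
        return a

    def sturm_chain(p):
        n = len(p) - 1
        chain = [list(map(Fraction, p)),
                 [(n - i) * Fraction(c) for i, c in enumerate(p[:-1])]]  # p, p'
        while True:
            r = polyrem(chain[-2], chain[-1])
            if not r:
                return chain
            chain.append([-c for c in r])

    def variations(chain, x):
        s = [v for v in (polyval(q, x) for q in chain) if v != 0]
        return sum((s[i] > 0) != (s[i + 1] > 0) for i in range(len(s) - 1))

    def isolate(p, lo, hi):
        # returns intervals (a, b] each containing exactly one real root of p,
        # assuming p is square-free and p(lo), p(hi) are nonzero
        chain = sturm_chain(p)
        boxes, work = [], [(Fraction(lo), Fraction(hi))]
        while work:
            a, b = work.pop()
            n = variations(chain, a) - variations(chain, b)  # roots in (a, b]
            if n == 1:
                boxes.append((a, b))
            elif n > 1:
                m = (a + b) / 2
                while polyval(p, m) == 0:   # nudge if we hit a root exactly
                    m = (m + b) / 2
                work += [(a, m), (m, b)]
        return boxes

    # x^3 - 3x + 1 has three real roots, near -1.88, 0.35 and 1.53:
    print(isolate([1, 0, -3, 1], -4, 4))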
In the context of OURAGAN, it is important to avoid reinventing the wheel and to reuse existing objects and algorithms wherever possible; the main effort is focused on finding good formulations/models for their efficient use. However, on demand, we propose implementations at many different levels. For example, in our ongoing work on hybrid strategies for LLL, mixing interval arithmetic and basic linear algebra operations, we replaced our general-purpose reliable multiprecision interval arithmetic package (MPFI (https://gforge.inria.fr/projects/mpfi/)) by a dedicated one and obtained an important speed-up factor.
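As an illustration of the kind of arithmetic involved, here is a toy fixed-precision analogue (MPFI itself is multiprecision and far more complete): intervals whose bounds are rounded outward after every floating-point operation, so that the exact result of the computation on the stored endpoints is always enclosed. The class and its methods are purely illustrative.

    import math

    class Interval:
        # closed interval [lo, hi]; bounds are pushed outward by one ulp
        # (math.nextafter, Python >= 3.9) so the true result stays enclosed
        def __init__(self, lo, hi=None):
            self.lo, self.hi = lo, lo if hi is None else hi

        def __add__(self, other):
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(math.nextafter(min(p), -math.inf),
                            math.nextafter(max(p), math.inf))

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    # the printed interval encloses the exact value of (0.1 + 0.2) * 3.0
    # computed on the stored floating-point endpoints
    print((Interval(0.1) + Interval(0.2)) * Interval(3.0))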
In the activity of OURAGAN, many key objects and algorithms around the resolution of algebraic systems are developed within the team, such as the solution of univariate polynomials with real coefficients [77], [66], rational parameterizations of the solutions of zero-dimensional systems with rational coefficients [76], [34], or discriminant varieties for solving systems depending on parameters [73].
For our studies in number theory and applications to the security of cryptographic systems, our team works on three categories of basic algorithms: discrete logarithm computations [64] (for example to make progress on the computation of class groups in number fields [56]), lattice reduction by means of LLL variants [45], and various linear algebra computations, for example dedicated to almost-sparse matrices [65].
These two directions of development are linked at several levels. For example, working with number fields, in particular finding good representations of them, leads to the same computational problems as working with the roots of polynomial systems by means of triangular systems (towers of number fields) or rational parameterizations (a single number field). Any progress in one direction will probably have direct consequences for almost all the problems we want to tackle.
Several strategies are also shared between these directions, such as the use of approximate arithmetic to speed up certified computations. These can also lead to improvements for a different purpose: for example, computations over the rationals, heavily used in geometry, can often be parallelized by combining computations in finite fields with fast Chinese remaindering and modular evaluation, as sketched below.
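A minimal sketch of this modular pattern (illustrative code, not the team's implementations): the determinant of an integer matrix is computed independently modulo several primes, the modular computations being trivially parallelizable, and the results are recombined by Chinese remaindering with a symmetric lift.

    def det_mod_p(rows, p):
        # Gaussian elimination over Z/pZ (p prime)
        a = [[x % p for x in row] for row in rows]
        n, det = len(a), 1
        for j in range(n):
            piv = next((i for i in range(j, n) if a[i][j]), None)
            if piv is None:
                return 0
            if piv != j:
                a[j], a[piv] = a[piv], a[j]
                det = -det
            det = det * a[j][j] % p
            inv = pow(a[j][j], -1, p)
            for i in range(j + 1, n):
                f = a[i][j] * inv % p
                for k in range(j, n):
                    a[i][k] = (a[i][k] - f * a[j][k]) % p
        return det % p

    def det_by_crt(rows, primes):
        # the product of the primes must exceed 2 * |det|, e.g. twice
        # the Hadamard bound of the matrix
        m, x = 1, 0
        for p in primes:
            r = det_mod_p(rows, p)
            x += m * ((r - x) * pow(m, -1, p) % p)   # CRT lifting step
            m *= p
        return x if x <= m // 2 else x - m           # symmetric lift

    print(det_by_crt([[3, 1], [4, 2]], [5, 7, 11]))  # -> 2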
As a single highlighted example of this sharing of tools and strategies, the use of approximate arithmetic [75] is common to the work on LLL [45] (used in the evaluation of the security of cryptographic systems), the resolution of real-world algebraic systems [66] (used in our applications in robotics and control theory), the computation of signs of trigonometric expressions used in knot theory [12], and the certified evaluation of dilogarithm functions on an algebraic variety for the computation of volumes of representations in our work in topology [52].
Algorithmic Number Theory
The frontiers between computable objects, algorithms (above section), computational number theory and applications to the security of cryptographic systems are very porous. This union of research fields is mainly driven by algorithmic improvements for solving presumably hard problems relevant to cryptography, such as the computation of discrete logarithms, the resolution of hard subset-sum problems, the decoding of random binary codes and the search for close and short vectors in lattices. While the factorization and discrete logarithm problems have a long history in cryptography, recent post-quantum cryptosystems introduce a new variety of presumably hard problems/objects/algorithms with cryptographic relevance: the shortest vector problem (SVP), the closest vector problem (CVP) and the computation of isogenies between elliptic curves, especially in the supersingular case.
Solving the discrete logarithm problem in finite fields is a key question for the security of Diffie-Hellman-based cryptography and has been the focus of a lot of academic research over the past 40 years. It is one of the domains of expertise of the OURAGAN team.
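To fix ideas, the textbook baby-step giant-step method below solves a discrete logarithm in any group in roughly sqrt(p) operations; it is precisely because index-calculus methods such as NFS beat this generic bound in finite fields that key sizes must be chosen with care. This is a classical sketch for illustration, unrelated to the record computations discussed next.

    from math import isqrt

    def bsgs(g, h, p):
        # find x with g**x = h (mod p), if it exists,
        # in O(sqrt(p)) group operations and O(sqrt(p)) memory
        n = isqrt(p - 1) + 1
        baby = {pow(g, j, p): j for j in range(n)}   # baby steps g^j
        step = pow(g, -n, p)                         # giant step g^(-n)
        gamma = h % p
        for i in range(n):
            if gamma in baby:
                return i * n + baby[gamma]           # x = i*n + j
            gamma = gamma * step % p
        return None

    print(bsgs(5, 20, 47))   # -> 37, since 5^37 = 20 (mod 47)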
Members of OURAGAN started working on the topic of discrete logarithms around 1998, with several computation records announced on the NMBRTHRY mailing list. In large characteristic, especially for the case of prime fields, the best current method is the number field sieve (NFS) algorithm. In particular, they published the first NFS-based record computation [63]. Despite huge practical improvements, the algorithm for the prime field case has not really changed since that first record. Around the same time, we also presented a small-characteristic computation record based on simplifications of the Function Field Sieve (FFS) algorithm [62].
In 2006, important changes occurred concerning the FFS and NFS algorithms: while these algorithms previously covered only the extreme cases of constant characteristic and constant extension degree, two papers extended their ranges of applicability to all finite fields. At the same time, this permitted a big simplification of the FFS, removing the need for function fields.
Starting from 2012, new results appeared in small characteristic. Initially based on a simplification of the 2006 results, they quickly blossomed into the Frobenius representation methods, with quasi-polynomial time complexity [28], [64], [57]. Recent progress has also been made in larger characteristic [30], [29], [27], [26].
An interesting side-effect of this research was the need to revisit the key sizes of pairing-based cryptography. This type of cryptography, introduced in 2000 [61], is also a topic of interest for OURAGAN. Recent re-evaluations of the necessary key sizes [26], which make use of an overview of the possible discrete logarithm constructions, are discussed in [25].
The computation of class groups in number fields has strong links with the computation of discrete logarithms or factorizations using the NFS (number field sieve) strategy, which, as the name suggests, is based on the use of number fields. Roughly speaking, the NFS algorithm uses two number fields, and the strategy consists in choosing number fields whose defining polynomials have small coefficients. In class group computations, on the contrary, there is a single number field, which is clearly a simplification, but this field is given as input by some fixed defining polynomial. Obviously, the degree of this polynomial as well as the size of its coefficients influence the complexity of the computations, so that finding other polynomials representing the same class group but with better characteristics (degree or coefficient sizes) is a mathematical problem with direct practical consequences. We proposed a method to address this problem in [56], but many issues remain open.
Computing generators of principal ideals of cyclotomic fields is also strongly related to the computation of class groups in number fields. Ideals in cyclotomic fields are used in a number of recent public-key cryptosystems, and among the difficult problems that ensure the security of these systems is that of finding a small generator of an ideal, if one exists. The case of cyclotomic fields is considered in [33].
We also use computations of class numbers to search for examples and counter-examples related to mathematical conjectures. For example, a study of cyclic cubic fields [26] allowed progress on Greenberg's conjecture (R. Greenberg, "On the Iwasawa invariants of totally real number fields", American J. of Math., vol. 98, 1976, pp. 263-284).
Another classical problem in algorithmic number theory is smoothness testing: given an integer, decide whether all its prime factors are smaller than a given bound. The only subexponential algorithm for this task is H. Lenstra's elliptic curve method. Many of the families of elliptic curves used there were found (according to the authors) by ad hoc methods. We introduced a new point of view which allows one to rapidly produce a finite list of families guaranteed to contain the good families for the elliptic curve method of factorization [31].
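For reference, the naive version of the test is immediate (a sketch by trial division; the whole point of the elliptic curve method is to do much better when the inputs are large):

    def is_smooth(n, bound):
        # True iff every prime factor of the positive integer n is <= bound
        d = 2
        while d * d <= n and d <= bound:
            while n % d == 0:
                n //= d
            d += 1 if d == 2 else 2   # test 2, then odd candidates only
        return n <= bound             # leftover n is 1 or a prime factor

    print(is_smooth(2**5 * 3**4 * 11, 11))   # True:  all factors <= 11
    print(is_smooth(2 * 101, 11))            # False: 101 > 11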
Topology in small dimension
Character varieties
There is a tradition of using computations and software to study and understand the topology of small-dimensional manifolds, going back at least to Thurston's work (and, before him, Riley's pioneering work). The underlying philosophy of these tools is to build combinatorial models of manifolds (for example, the torus is often described as a square with an identification of the sides). In dimensions 2, 3 and 4, this approach is relevant and effective. In the OURAGAN team, we focus on dimension 3, where manifolds are modeled by a finite number of tetrahedra with identification of the faces. The software SnapPy (https://www.math.uic.edu/t3m/SnapPy/) implements this strategy and is regularly used as a starting point in our work. In the same spirit, we can also cite Regina (https://regina-normal.github.io). A specific trait of SnapPy is that it focuses on hyperbolic structures on 3-dimensional manifolds. This setting is the object of a huge amount of theoretical work that was used to speed up computations. For example, some Newton methods were implemented without certification for solving a system of equations, but the theoretical knowledge of the uniqueness of the solution made this implementation efficient enough for the target applications. In recent years, in part under the influence of our team (as part of the CURVE project), more attention has been given to certified computations, and this is now implemented in SnapPy.
This philosophy (modeling manifolds by quite simple combinatorial models in order to compute such complicated objects as representations of the fundamental group) was applied in pioneering work of Falbel [5] when he began to look for another type of geometry on 3-dimensional manifolds (called CR-spherical geometry). From a computational point of view, this change of objective was a jump into the unknown: the theoretical justification for the computations was missing, and the number of variables of the systems was multiplied by four. So instead of a relatively small system that could be tackled by Newton methods and numerical approximations, we had to study relatively big systems (the smallest example having 8 variables of degree 6) with no a priori description of the solutions. This input from OURAGAN was needed and proved to be useful.
Still, the computable objects that emerge from the theoretical study are very often out of the reach of automated computations and have to be handled case by case. A few experts around the world have been tackling this kind of computation (Dunfield, Goerner, Heusener, Porti, Tillmann, Zickert), and the main current achievement is the Ptolemy module (https://www.math.uic.edu/t3m/SnapPy/ptolemy.html) for SnapPy.
From these early computational needs, topology in small dimension has historically been the source of collaborations with the IMJ-PRG laboratory. At the beginning, the goal was essentially to provide computational tools for finding geometric structures on triangulated 3-dimensional manifolds. Triangulated manifolds can be topologically encoded by a collection of tetrahedra with gluing constraints (this can be called a triangulation or a mesh, but it is not an approximation of the manifold by simple structures, rather a combinatorial model). Imposing a geometric structure on this combinatorial object defines a number of constraints that we can translate into an algebraic system, which we then have to solve in order to study geometric structures on the initial manifold, for example by relying on the solutions to study representations of the fundamental group of the manifold. These studies require a large part of the computable objects and algorithms we develop, from algorithms for univariate polynomials to systems depending on parameters. It should be noted that most of the computational work lies in the modeling of the problems [32] (see [4]), which have no chance of being solved by blindly running the most powerful black boxes: we usually deal here with systems that have 24 to 64 variables, depend on 4 to 8 parameters and have degrees exceeding 10 in each variable. With funding from the ANR project Structures Géométriques et Triangulations, the progress we made [48] (see [4]) was much more significant than expected. In particular, we introduced new computable objects with an immediate theoretical meaning (let us say, rather, with an established theoretical link to the usual objects of the domain), namely the so-called deformation variety.
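A tiny, classical instance of this pipeline can be sketched explicitly (using mpmath; a sketch, not the systems described above, which are vastly larger). For the standard two-tetrahedron triangulation of the figure-eight knot complement, Thurston's edge consistency equation z(z-1)w(w-1) = 1, together with the completeness condition, forces both tetrahedron shapes to satisfy z^2 - z + 1 = 0, i.e. z = exp(i*pi/3), and the hyperbolic volume follows by evaluating Lobachevsky functions at the solution.

    import mpmath as mp

    mp.mp.dps = 30
    z = mp.exp(mp.j * mp.pi / 3)          # common shape of the two tetrahedra
    assert abs(z**2 - z + 1) < mp.mpf(10)**-25
    assert abs(z*(z - 1) * z*(z - 1) - 1) < mp.mpf(10)**-25  # edge equation, w = z

    def lobachevsky(theta):
        # Lobachevsky function: Lambda(theta) = Cl_2(2*theta) / 2,
        # with Cl_2 the Clausen function (mpmath's clsin with s = 2)
        return mp.clsin(2, 2 * theta) / 2

    def ideal_tet_volume(z):
        # Milnor's formula for the volume of the ideal tetrahedron of shape z
        return (lobachevsky(mp.arg(z))
                + lobachevsky(mp.arg(1 / (1 - z)))
                + lobachevsky(mp.arg((z - 1) / z)))

    print(2 * ideal_tet_volume(z))   # 2.0298832128... , volume of the complement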
Recent developments around the Mahler measure [24] have led to the study of new computable objects at the crossroads of geometry and number theory.
Knot theory
Knot theory is a wide area of mathematics. We are interested in polynomial representations of long knots, that is to say polynomial embeddings R -> R^3. Every knot admits a polynomial representation, and natural questions are to determine explicit parameterizations and minimal-degree parameterizations. Conversely, we are interested in determining the knot defined by a given smooth polynomial embedding R -> R^3. These questions involve real algebraic curves. Two-bridge knots (or rational knots) receive particular attention because they are much easier to handle. The first 26 knots (except 8_5) are two-bridge knots. It is proved that every knot is a Chebyshev knot [67], that is to say, can be parameterized by a Chebyshev curve x = T_a(t), y = T_b(t), z = T_c(t + phi), where T_n is the n-th Chebyshev polynomial of the first kind. Chebyshev knots are polynomial analogues of the Lissajous knots that have been studied by Jones, Hoste, Lamm...
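For illustration, sampling such a Chebyshev curve is straightforward (a sketch; the value of phi below is an arbitrary illustrative choice, whereas the certified determination of which parameters (a, b, c, phi) yield which knot is precisely the hard part discussed next):

    import numpy as np

    def chebyshev(n, t):
        # T_n via the recurrence T_0 = 1, T_1 = t, T_{n+1} = 2 t T_n - T_{n-1}
        a, b = np.ones_like(t), t
        if n == 0:
            return a
        for _ in range(n - 1):
            a, b = b, 2 * t * b - a
        return b

    def chebyshev_curve(a, b, c, phi, samples=2000):
        # sample the space curve x = T_a(t), y = T_b(t), z = T_c(t + phi)
        t = np.linspace(-1.2, 1.2, samples)
        return chebyshev(a, t), chebyshev(b, t), chebyshev(c, t + phi)

    # C(3, 4, 5, phi) realizes the trefoil for suitable phi (cf. [67])
    x, y, z = chebyshev_curve(3, 4, 5, 0.2)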
Our activity in knot theory is a bridge between our work in computational geometry (topology and drawing of real space curves) and our work on topology in small dimension (manifolds defined as knot complements). It was first established that any knot can be parameterized by Chebyshev polynomials; we then studied the properties of harmonic knots [69], which opened the way to effective computations. We were able to give an exhaustive, minimal and certified list of Chebyshev parameterizations of the first rational knots, using blind computations [70]. We also proposed the identification of Chebyshev knot diagrams ([12]) by developing new certified algorithms for computing trigonometric expressions [71], which was also the subject of Tran Cuong's PhD thesis at UPMC [78]. These works share many tools with our action in visualization and computational geometry.
We made use of Chebyshev polynomials as well as Fibonacci polynomials, which are families of orthogonal polynomials. Considering the Alexander-Conway polynomials as continuant polynomials in the Fibonacci basis, we were able to give a partial answer to Hoste's conjecture on the roots of Alexander polynomials of alternating knots [68].
We study the lexicographic degree of two-bridge knots, that is to say the minimal (multi)degree of a polynomial representation of an N-crossing two-bridge knot. We show that this degree is (3, b, c) with b + c = 3N. We have determined the lexicographic degree of the first 362 two-bridge knots with 12 crossings or fewer [39]. Minimal degrees are available (https://webusers.imj-prg.fr/~pierre-vincent.koseleff/knots/2bk-lexdeg.html). These results make use of the braid-theoretical approach developed by Y. Orevkov to study real plane curves and the use of real pseudoholomorphic curves ([2]), as well as of slide isotopies on trigonal diagrams, namely those that never increase the number of crossings [38].
Visualization and Computational Geometry
The drawing of algebraic curves and surfaces is a critical action in OURAGAN, since it is a key ingredient in numerous developments. For example, a certified plot of a discriminant variety may be the only admissible answer to an engineering problem requiring the resolution of a parametric algebraic system: this variety (and the connected components of its complement) defines a partition of the parameter space into regions above which the solutions are numerically stable and topologically simple.
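In the simplest possible setting, one equation and two parameters, this reduces to the classical discriminant; the following sympy sketch shows how the induced partition of the parameter space is used (a toy illustration, not the discriminant variety machinery of [73]):

    import sympy as sp

    x, b, c = sp.symbols('x b c')
    disc = sp.discriminant(x**2 + b*x + c, x)    # -> b**2 - 4*c

    # The curve b^2 - 4c = 0 splits the (b, c)-plane into two open regions;
    # above each region the number of real roots is constant, so one sample
    # point per region suffices.
    for sample in ({b: 0, c: -1}, {b: 0, c: 1}):
        n = sp.Poly((x**2 + b*x + c).subs(sample), x).count_roots()
        print(sample, '->', n, 'real roots')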
For our action in algorithmic geometry, we are associated with the GAMBLE EPI (Inria Nancy - Grand Est) with the aim of developing computational techniques for the study, plotting and topology computation of real algebraic curves and surfaces. This work involves the development of effective methods for the resolution of algebraic systems with 2 or 3 variables (see [1] for example), which are the basic engines for computing the topology [74], [43] or for plotting.