The main scientific objective of the VEGAS research team is to contribute to the development of effective geometric computing dedicated to non-trivial geometric objects. Its main tasks include the study and development of new algorithms for manipulating geometric objects, the experimental evaluation of these algorithms, the production of high-quality software, and the application of such algorithms and implementations to research domains that handle large amounts of geometric data, notably solid modeling and computer graphics.
Computational geometry has traditionally treated linear objects like line segments and polygons in the plane, and point sets and polytopes in three-dimensional space, occasionally (and more recently) venturing into the world of non-linear curves such as circles and ellipses. The methodological experience and the know-how accumulated over the last thirty years have been enormous.
For many applications, particularly in the fields of computer graphics and solid modeling, it is necessary to manipulate more general objects such as curves and surfaces given in either implicit or parametric form. Typically, such objects are handled by approximating them with simple primitives such as triangles. This approach is extremely important and underlies almost all industrial software in use today. It does, however, have some disadvantages. Substituting a tessellation for the exact geometry may introduce spurious numerical errors (the famous gap between the wing and the body of the aircraft), and thousands, if not hundreds of thousands, of triangles may be needed to represent the object adequately. Moreover, the curved objects that we consider are not necessarily everyday three-dimensional objects, but also abstract mathematical objects that are not linear, that may live in high-dimensional space, and whose geometry we do not control. For example, the set of lines in 3D (at the core of visibility issues) that are tangent to three polyhedra spans a piecewise ruled quadratic surface, and the lines tangent to a sphere correspond, in projective five-dimensional space, to the intersection of two quadratic hypersurfaces.
Effectiveness is a key word of our research project. By requiring our algorithms to be effective, we imply that the algorithms should be robust, efficient, and versatile. By robust we mean algorithms that do not crash on degenerate inputs and always output topologically consistent data. By efficient we mean algorithms that run reasonably quickly on realistic data where performance is ascertained both experimentally and theoretically. Finally, by versatile we mean algorithms that work for classes of objects that are general enough to cover realistic situations and that account for the exact geometry of the objects, in particular when they are curved.
We are interested in the application of our work to virtual prototyping, which refers to the many steps required for the creation of a realistic virtual representation from a CAD/CAM model.
When designing an automobile, detailed physical mockups of the interior are built to study the design and evaluate human factors and ergonomic issues. These hand-made prototypes are costly, time consuming, and difficult to modify. To shorten the design cycle and improve interactivity and reliability, realistic rendering and immersive virtual reality provide an effective alternative. A virtual prototype can replace a physical mockup for the analysis of such design aspects as visibility of instruments and mirrors, reachability and accessibility, and aesthetics and appeal.
Virtual prototyping encompasses most of our work on effective geometric computing. In particular, our work on 3D visibility should have fruitful applications in this domain. As already explained, meshing objects of the scene along the main discontinuities of the visibility function can have a dramatic impact on the realism of the simulations.
Solid modeling, i.e., the computer representation and manipulation of 3D shapes, has historically developed somewhat in parallel to computational geometry. Both communities are concerned with geometric algorithms and deal with many of the same issues. But while the computational geometry community has been mathematically inclined and essentially concerned with linear objects, solid modeling has traditionally had closer ties to industry and has been more concerned with curved surfaces.
Clearly, there is considerable potential for interaction between the two fields. Standing somewhere in the middle, our project has a lot to offer. Among the geometric questions related to solid modeling that are of interest to us, let us mention: the description of geometric shapes, the representation of solids, the conversion between different representations, data structures for the graphical rendering of models, and the robustness of geometric computations.
We work in collaboration with CIRTES on rapid prototyping. CIRTES, a company based in Saint-Dié-des-Vosges, has designed a technique called Stratoconception, in which the model is sliced into layers that are individually machined and then assembled.
When the model is complex, for example an art sculpture, some parts of the model may be inaccessible to the milling machine. These inaccessible regions are sanded out by hand in a post-processing phase, which is very costly in time and resources. We work on minimizing the amount of work required in this last phase by improving the algorithmic techniques for decomposing the model into layers, that is, by finding a direction of slicing and a position of the first layer.
QI stands for “Quadrics Intersection”. QI is the first exact, robust, efficient and usable implementation of an algorithm for parameterizing the intersection of two arbitrary quadrics, given in implicit form, with integer coefficients. This implementation is based on the parameterization method described in , and represents the first complete and robust solution to what is perhaps the most basic problem of solid modeling by implicit curved surfaces.
QI is written in C++ and builds upon the LiDIA computational number theory library bundled with the GMP multi-precision integer arithmetic . QI can routinely compute parameterizations of quadrics having coefficients with up to 50 digits in less than 100 milliseconds on an average PC; see for detailed benchmarks.
Our implementation consists of roughly 18,000 lines of source code. QI has been registered at the Agence pour la Protection des Programmes (APP). It is distributed under the Inria license, free for non-commercial use, and will be distributed under the QPL license in the next release. The implementation can also be queried via a web interface .
Since its official first release in June 2004, QI has been downloaded six times a month on average and it has been included in the geometric library EXACUS developed at the Max-Planck-Institut für Informatik (Saarbrücken, Germany). QI is also used in a broad range of applications; for instance, it is used in photochemistry for studying the interactions between potential energy surfaces, in computer vision for computing the image of conics seen by a catadioptric camera with a paraboloidal mirror, and in mathematics for computing flows of hypersurfaces of revolution based on constant-volume average curvature.
Isotop is a Maple software package for computing the topology of an algebraic plane curve, that is, for computing an arrangement of polylines isotopic to the input curve. This problem is a key step in computing arrangements of algebraic curves and also has applications to curve plotting. This software has been developed since 2007 in collaboration with F. Rouillier from Inria Paris - Rocquencourt. It is based on the method described in , which incorporates several improvements over previous methods. In particular, our approach does not require generic position.
Isotop is registered at the APP (June 15th, 2011) with reference IDDN.FR.001.240007.000.S.P.2011.000.10000. This version is competitive with other implementations (such as AlciX and Insulate, developed at MPII Saarbrücken, Germany, and top, developed at Santander Univ., Spain). It performs similarly for small-degree curves and performs significantly better for higher degrees, in particular when the curves are not in generic position.
We are currently working on an improved version integrating our new bivariate polynomial solver.
Born as a European project, CGAL (http://
In computational geometry, many problems lead to standard, though difficult, algebraic questions such as computing the real roots of a system of equations, computing the sign of a polynomial at the roots of a system, or determining the dimension of a set of solutions. We want to make state-of-the-art algebraic software more accessible to the computational geometry community, in particular through the computational geometry library CGAL. Along this line, we contributed a model of the Univariate Algebraic Kernel concept for algebraic computations (see Sections 8.2.2 and 8.4). This CGAL package improves, for instance, the efficiency of the computation of arrangements of polynomial functions in CGAL . We are currently developing a model of the Bivariate Algebraic Kernel based on a new bivariate polynomial solver.
The library fast_polynomial
This software is focused on fast online computation, multivariate evaluation, modularity, and efficiency.
Fast online computation. The library is optimized for the evaluation of a polynomial on several point arguments given one after the other. The main motivation is numerical path tracking of algebraic curves, where a given polynomial criterion must be evaluated several thousands of times on different values arising along the path.
Multivariate evaluation. The library provides specialized fast evaluation of multivariate polynomials with several schemes, specialized for different types such as mpz big ints, boost intervals with hardware precision, mpfi intervals with any given precision, etc.
Modularity. The evaluation scheme can be easily changed and adapted to the user needs. Moreover, the code is designed to easily extend the library with specialization over new C++ objects.
Efficiency. The library uses several tools and methods to achieve high efficiency. First, the code uses templates, so that after the compilation of a polynomial for a specific type, the evaluation performance is equivalent to low-level evaluation. Locality is also taken into account: the memory footprint is minimized, such that an evaluation using the classical Horner scheme will use
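To illustrate the locality argument above, here is a minimal Python sketch of the classical Horner scheme (the library itself works on compiled C++ templates, but the access pattern is the same): the coefficients are read exactly once, in order, with a single running accumulator.

```python
def horner(coeffs, x):
    """Evaluate a polynomial with Horner's scheme.

    coeffs lists the coefficients from the leading term down to the
    constant term, so the evaluation is a single linear pass over
    memory with one accumulator -- the source of its good locality.
    """
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # prints 5
```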
Average-case analysis of data-structures or algorithms is commonly used in computational geometry when the more classical worst-case analysis is deemed overly pessimistic. Since these analyses are often intricate, the models of random geometric data that can be handled are often simplistic and far from "realistic inputs".
Complexity analysis of random geometric structures made simpler. In a joint work with Olivier Devillers and Marc Glisse (Inria Geometrica), we presented a new simple scheme for the analysis of geometric structures. While this scheme only produces results up to a polylog factor, it is much simpler to apply than the classical techniques and therefore succeeds in analyzing new input distributions related to smoothed complexity analysis. We illustrated our method on two classical structures: convex hulls and Delaunay triangulations. Specifically, we gave short and elementary proofs of the classical results that
Monotonicity of the number of facets of random polytopes. We also proved a result on the size of the convex hull
Worst-case silhouette size of random polytopes. Finally, we studied from a probabilistic point of view the size of the silhouette of a polyhedron. While the silhouette size of a polyhedron with
We continued working this year on the problem of embedding geometric objects on a grid of
We considered the problem of computing shortest paths having curvature at most one almost everywhere and visiting a sequence of
A standard way to approximate the distance between any two vertices
We analyzed the stretch factor
A set of points is said to be universal if it supports a crossing-free drawing of any planar graph. For a planar graph with
We also considered the setting in which graphs are drawn with curved edges. We proved that, surprisingly, there exists a universal set of
In the context of our algorithm Isotop for computing the topology of algebraic curves , we work on the problem of solving a system of two bivariate polynomials. We focus on the problem of computing a Rational Univariate Representation (RUR) of the solutions, that is, roughly speaking, a univariate polynomial and two rational functions which map the roots of the polynomial to the two coordinates of the solutions of the system.
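To make the notion of a RUR concrete, here is a toy illustration of our own making (not taken from the paper): for the hypothetical system {x² + y² − 2 = 0, x − y = 0}, whose real solutions are (1, 1) and (−1, −1), the linear form t = x separates the solutions, and a RUR consists of the univariate polynomial f(t) = t² − 1 together with the rational maps x(t) = t and y(t) = t.

```python
# Toy RUR for the illustrative system {x^2 + y^2 - 2 = 0, x - y = 0}.
# The separating form t = x yields f(t) = t^2 - 1 with maps x(t) = t,
# y(t) = t; the roots of f encode the solutions of the system.

def f(t):
    return t * t - 1  # univariate polynomial of the RUR

def x_of(t):
    return t  # rational map from a root of f to the x-coordinate

def y_of(t):
    return t  # rational map from a root of f to the y-coordinate

roots = [-1.0, 1.0]  # real roots of f (found by inspection here)
solutions = [(x_of(t), y_of(t)) for t in roots]

# Each recovered pair satisfies both input equations.
for (x, y) in solutions:
    assert abs(x * x + y * y - 2) < 1e-12 and abs(x - y) < 1e-12
print(solutions)  # prints [(-1.0, -1.0), (1.0, 1.0)]
```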
Separating linear forms. We first presented an algorithm for computing a separating linear form of a system of bivariate polynomials with integer coefficients, that is, a linear combination of the variables that takes different values when evaluated at distinct (complex) solutions of the system. In other words, a separating linear form defines a shear of the coordinate system that sends the algebraic system into generic position, in the sense that no two distinct solutions are vertically aligned. The computation of such linear forms is at the core of most algorithms that solve algebraic systems by computing rational parameterizations of the solutions; moreover, the computation of a separating linear form is the bottleneck of these algorithms in terms of worst-case bit complexity.
Given two bivariate polynomials of total degree at most
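The separating condition itself is elementary, and the following toy numerical sketch illustrates it on floating-point approximations of the solutions (the actual algorithm is symbolic, works with exact integer arithmetic, and comes with worst-case bit-complexity bounds; the function names here are our own):

```python
def is_separating(a, solutions, tol=1e-9):
    """Check whether the linear form x + a*y takes pairwise distinct
    values on the given solutions (pairs of coordinates)."""
    values = [x + a * y for (x, y) in solutions]
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if abs(values[i] - values[j]) < tol:
                return False
    return True

def find_separating_form(solutions):
    """Return the smallest non-negative integer a such that x + a*y
    separates the solutions.  For n solutions, at most n*(n-1)/2
    candidate values of a can fail, so the loop terminates."""
    a = 0
    while not is_separating(a, solutions):
        a += 1
    return a

# Two vertically aligned solutions: x alone does not separate them,
# but the sheared form x + y does.
print(find_separating_form([(1.0, 1.0), (1.0, -1.0)]))  # prints 1
```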
Solving bivariate systems & RURs. Given such a separating linear form, we also presented an algorithm for computing a RUR with worst-case bit complexity in
This work is done in collaboration with Fabrice Rouillier (project-team Ouragan at Inria Paris-Rocquencourt).
We addressed the problem of finding the reflection point on a quadric mirror surface of a light ray emanating from a 3D point source
Evaluating a polynomial can be done with different evaluation schemes. The Horner scheme, for example, allows one to evaluate a polynomial of degree
The best way to handle these cases is to use divide-and-conquer algorithms to keep a complexity linear in the degree, up to logarithmic factors. State-of-the-art algorithms split at the highest pure power of 2 less than or equal to
We developed the library fast_polynomial to explore different divide-and-conquer schemes and observed notably that splitting at
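A minimal sketch of such a divide-and-conquer evaluation, simplified to split at the middle rather than at a power of 2, and recomputing powers of x instead of caching them as a production implementation would:

```python
def eval_dc(coeffs, x):
    """Divide-and-conquer evaluation of p(x) = lo(x) + x^k * hi(x),
    splitting at k = len(coeffs) // 2.  Coefficients are ordered from
    the constant term up to the leading term.  A tuned implementation
    would split at powers of 2 and reuse precomputed powers of x."""
    n = len(coeffs)
    if n == 1:
        return coeffs[0]
    if n == 2:
        return coeffs[0] + coeffs[1] * x
    k = n // 2
    return eval_dc(coeffs[:k], x) + x ** k * eval_dc(coeffs[k:], x)

# -1 + 2x - 6x^2 + 2x^3 evaluated at x = 3
print(eval_dc([-1, 2, -6, 2], 3))  # prints 5
```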
In a joint work with Jiří Matoušek, Pavel Paták, Zuzana Safernová, and Martin Tancer (Charles University, Prague, Czech Republic), we worked on computing simplified inclusion-exclusion formulas. Let
In a joint work with Éric Colin de Verdière (CNRS-ENS) and Grégory Ginot (IMJ-UPMC), we worked on applications of algebraic topology to combinatorial geometry, and more precisely on extending classical results on nerve complexes. The nerve complex of a family is an abstract simplicial complex that encodes its intersection patterns. Nerves are widely used in computational geometry and topology, in particular in reconstruction problems where one aims at inferring the geometry of an object from a point sample while guaranteeing that the topology is correct. Indeed, the nerve theorem ensures that the nerve of a family of geometric objects has the same “topology” (formally: homotopy type) as the union of the objects whenever they form a “good cover”, that is, when any subset of the objects has an empty or contractible intersection. We relaxed this “good cover” condition to allow for families of non-connected sets. We defined an analogue of the nerve, called the multinerve, that is suitable for general acyclic families, and we proved that this combinatorial structure enjoys an analogue of the nerve theorem. Using the multinerve, we could derive a new topological Helly-type theorem for acyclic families that generalizes previous results of Amenta, Kalai and Meshulam, and Matoušek. We finally used this new Helly-type theorem to (re)prove, in a unified way, bounds on transversal Helly numbers in geometric transversal theory. This article was submitted to the journal Advances in Mathematics in 2012; it was accepted in 2013 and will appear in 2014 .
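For a finite family of finite sets, the nerve can be computed by brute force; the sketch below illustrates the definition only (not the techniques of the paper) by enumerating the index subsets with a common element:

```python
from itertools import combinations

def nerve(sets):
    """Return the nerve of a family of finite sets as the list of index
    tuples whose corresponding sets have a non-empty common
    intersection.  Brute-force illustration of the definition."""
    n = len(sets)
    simplices = []
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            common = set.intersection(*(sets[i] for i in idx))
            if common:
                simplices.append(idx)
    return simplices

# Three "intervals" sharing consecutive endpoints: consecutive pairs
# intersect, so the nerve is a path on three vertices.
print(nerve([{1, 2}, {2, 3}, {3, 4}]))
# prints [(0,), (1,), (2,), (0, 1), (1, 2)]
```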
In a joint work with Otfried Cheong (KAIST, South Korea) and Cyril Nicaud (Univ. Marne-La-Vallée), we studied two problems of the following flavor: how large can a family of combinatorial objects defined on a finite set be if its number of distinct “projections” on any small subset is bounded? We first considered set systems, where the “projection” is the standard notion of trace, and for which we generalized Sauer's Lemma on the size of set systems with bounded VC-dimension. We then studied families of permutations, where the “projection” corresponds to the notion of containment used in the study of permutations with excluded patterns, and for which we delineated the main growth rates ensured by projection conditions. One of our motivations for considering these questions is the “geometric permutation problem” in geometric transversal theory, a question that has been open for two decades. This work was submitted to the European Journal of Combinatorics in 2012 and published in 2013 .
The white ANR grant PRESAGE brings together computational geometers (from the VEGAS and GEOMETRICA projects of Inria) and probabilistic geometers (from Universities of Rouen, Orléans and Poitiers) to tackle new probabilistic geometry problems arising from the design and analysis of geometric algorithms and data structures. We focus on properties of discrete structures induced by or underlying random continuous geometric objects.
This is a four-year project, with a total budget of 400 k€, that started on Dec. 31st, 2011. It is coordinated by Xavier Goaoc (VEGAS).
The objective of the young-researcher ANR grant SingCAST is to intertwine further symbolic/numeric approaches to compute efficiently solution sets of polynomial systems with topological and geometrical guarantees in singular cases. We focus on two applications: the visualization of algebraic curves and surfaces and the mechanical design of robots.
After identifying classes of problems with restricted types of singularities, we plan to develop dedicated symbolic-numerical methods that take advantage of the structure of the associated polynomial systems that cannot be handled by purely symbolical or numerical methods. Thus we plan to extend the class of manipulators that can be analyzed, and the class of algebraic curves and surfaces that can be visualized with certification.
This is a 3.5-year project, with a total budget of 100 k€, that will start on March 1st, 2014. It is coordinated by Guillaume Moroz.
Nuno Gonçalves, University of Coimbra (Portugal), visited the VEGAS project for 1 week in January.
William J. Lenhart, Williams College (USA), visited the VEGAS project for 2 weeks in May.
Subject: Common tangents to ellipsoids in
Date: from Apr. 2013 until July 2013.
Institution: University of Athens, Greece.
Subject: Study with computer algebra system of a conjecture relating the width of a convex polygon with the width of its inscribed triangles.
Date: from Apr. 2013 until Aug. 2013.
Institution: Telecom Nancy, Université de Lorraine.
Subject: Topology of planar singular curves defined as the resultant of two trivariate polynomials.
Date: from Apr. 2013 until Aug. 2013.
Institution: Halle-Wittenberg University, Germany.
Program and Paper Committee:
Sylvain Lazard: Program committee of the European Workshop on Computational Geometry (EuroCG'13).
Editorial responsibilities:
Sylvain Petitjean: Editor of Graphical Models (Elsevier).
Workshop organizations:
Sylvain Lazard co-organized with S. Whitesides (University of Victoria) the 12th Inria - McGill - Victoria Workshop on Computational Geometry.
Marc Pouget co-organized the Journées Informatiques et Géométrie
Guillaume Moroz organized the session Calcul formel et numérique at the Rencontres Arithmétiques de l'Informatique Mathématique.
Other responsibilities:
Sylvain Lazard: Head of the Inria Nancy-Grand Est PhD and Post-doc hiring committee (since 2009). Member of the Bureau du Département Informatique de Formation Doctorale of the École Doctorale IAEM (since 2009). “Chargé de formation par la recherche” for Inria Nancy-Grand Est.
Laurent Dupont: Member of Commission Pédagogique Nationale Infocom/SRC (since 2011). Member of Commission Information Scientifique (Inria/Loria).
Xavier Goaoc: Chair of the Inria COST-GTRI committee (2011 – August 2013).
Guillaume Moroz: Vice delegate of the Commission des Utilisateurs des Moyens Informatiques pour la Recherche.
Sylvain Petitjean: Director of the Inria Nancy Grand-Est. Member of Inria's Executive committee.
Marc Pouget: Member of the CGAL Editorial Board (since 2008).
Licence: Laurent Dupont, Systèmes de Gestion de Bases de Données Avancé, 40h, L3, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont, Concepts et Outils Internet, 40h, L1, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont, Programmation Objet et Évènementielle, 40h, L2, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont, Rich Internet Applications, 40h, L2, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont and Yacine Bouzidi, Programmation de Sites Web Dynamiques, 70h, L2, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont, Algorithmique, 80h, L1, Université de Lorraine (IUT Charlemagne).
Licence: Laurent Dupont, Programmation Objet, 40h, L1, Université de Lorraine (IUT Charlemagne).
Master: Marc Pouget, Introduction à la géométrie algorithmique, 10.5h, M2, École Nationale Supérieure de Géologie, France.
Doctorat: Marc Pouget, Postdoctoral Summer: Convex hulls and point location, 15h, IMPA, Rio de Janeiro, Brazil.
Licence: Sylvain Lazard, Algorithms and Complexity, 25h, L3, Université de Lorraine.
Licence: Yacine Bouzidi, Certification informatique et internet, 54h, L1, Université de Lorraine.
Licence: Yacine Bouzidi, Langage orienté objet, Java, 24h, L3, Université de Lorraine.
Licence: Yacine Bouzidi, Langage d'interrogation des bases de données, 20h, L3, Université de Lorraine.
PhD in progress: Yacine Bouzidi, Résolution de systèmes bivariés et topologie de courbes planes, Oct. 2010, Sylvain Lazard and Marc Pouget.
Guillaume Moroz: Member of the organizing committee of the Olympiades académiques de mathématiques.