ALICE is one of the six teams in the Algorithms, Computation, Geometry and Image group in INRIA Nancy Grand-Est.

ALICE is a project-team in Computer Graphics. The fundamental aspects of this domain concern the interaction of *light* with the *geometry* of the objects. The lighting problem consists in designing accurate and efficient *numerical simulation* methods for the light transport equation. The geometrical problem consists in developing new solutions to *transform and optimize geometric representations*. Our original approach to both issues is to restate the problems in terms of *numerical optimization*. We try to develop solutions that are *provably correct*, *numerically stable* and *scalable*.

By provably correct, we mean that some properties/invariants of the initial object need to be preserved by our solutions.

By numerically stable, we mean that our solutions need to be resistant to the degeneracies often encountered in industrial data sets.

By scalable, we mean that our solutions need to be applicable to data sets of industrial size.

To reach these goals, our approach consists in transforming the physical or geometric problem into a numerical optimization problem, studying the properties of the objective function and designing efficient minimization algorithms. To properly construct these discretizations, we use the formalism of finite element modeling, geometry and topology. We are also interested in fundamental concepts that were recently introduced into the geometry processing community, such as discrete exterior calculus, spectral geometry processing and theory of sampling.

The main applications of our results concern scientific visualization. We develop cooperations with researchers and industrial partners, who experiment with applications of our general solutions in various domains, including CAD, industrial design, oil exploration and plasma physics. Our solutions are distributed in both open-source software (Graphite, OpenNL, CGAL) and industrial software (Gocad, DVIZ).

B. Lévy received the Inria young researcher award.

N. Cherpeau received an award of merit for his paper "Stochastic simulation of fault networks from 2D seismic lines", awarded by the SEG (Society of Exploration Geophysicists).

Computer Graphics is a quickly evolving domain of research. In the last few years, both acquisition techniques (e.g., laser range scanners) and computer graphics hardware (the so-called GPUs, for Graphics Processing Units) have made considerable advances. However, as shown in Figure , despite these advances, fundamental problems still remain open. For instance, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. To design efficient solutions for these difficult problems, ALICE studies two fundamental issues in Computer Graphics:

the representation of the objects, i.e., their geometry and physical properties;

the interaction between these objects and light.

Historically, these two issues have been studied by independent research communities. However, we think that they share a common theoretical basis. For instance, multi-resolution analysis and wavelets were mathematical tools used by both communities. We develop a new approach, which consists in studying the geometry and lighting from the *numerical analysis* point of view. In our approach, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. This type of formulation leads to more efficient algorithms. Our long-term research goal is to find a formulation that permits a unified treatment of geometry and illumination over this geometry.

Geometry processing recently emerged (in the middle of the 90's) as a promising strategy to solve the geometric modeling problems encountered when manipulating meshes composed of hundreds of millions of elements. Since a mesh may be considered to be a *sampling* of a surface - in other words, a *signal* - the *digital signal processing* formalism was a natural theoretical background for this subdomain. Researchers of this domain then studied different aspects of this formalism applied to geometric modeling.

Although many advances have been made in the geometry processing area, important problems still remain open. Even if shape acquisition and filtering are much easier than 30 years ago, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. For this reason, automatic methods to convert those large meshes into higher-level representations are necessary. However, these automatic methods do not exist yet. For instance, the pioneer Henri Gouraud often mentions in his talks that the *data acquisition* problem is still open. Malcolm Sabin, another pioneer of the “Computer Aided Geometric Design” and “Subdivision” approaches, mentioned during several conferences of the domain that constructing the optimum control mesh of a subdivision surface so as to approximate a given surface is still an open problem. More generally, converting a mesh model into a higher-level representation, consisting of a set of equations, is a difficult problem for which no satisfying solution has been proposed. This is one of the long-term goals of international initiatives, such as the AIMShape European network of excellence.

Motivated by gridding applications for finite element modeling in oil and gas exploration, in the frame of the Gocad project, we started studying geometry processing in the late 90's and contributed to this area at the early stages of its development. We developed the LSCM method (Least Squares Conformal Maps) in cooperation with Alias Wavefront. This method has become the de-facto standard in automatic unwrapping, and was adopted by several 3D modeling packages (including Maya and Blender). We experimented with various applications of the method, including normal mapping, mesh completion and light simulation.

However, classical mesh parameterization requires partitioning the considered object into a set of topological disks. For this reason, we designed a new method (Periodic Global Parameterization) that generates a continuous set of coordinates over the object. We also showed the applicability of this method by proposing the first algorithm that converts a scanned mesh into a Spline surface automatically. Both algorithms are demonstrated in Figure .

We are still not fully satisfied with these results, since the method remains quite complicated. We think that a deeper understanding of the underlying theory is likely to lead to both efficient and simple methods. For this reason, last year we studied several ways of discretizing partial differential equations on meshes, including Finite Element Modeling and Discrete Exterior Calculus. This year, we also explored Spectral Geometry Processing and Sampling Theory (more on this below).

Numerical simulation of light means solving for light intensity in the “Rendering Equation”, an integral equation modeling energy transfers (or light *intensity* transfers). The Rendering Equation was first formalized by Kajiya, and is given by:

I(x, x') = g(x, x') [ ε(x, x') + ∫_S ρ(x, x', x'') I(x', x'') dx'' ]

where I(x, x') denotes the intensity of light transported from point x' to point x, g is the geometry/visibility term, ε the emitted intensity, ρ the scattering term, and S the set of all surface points of the scene.

In addition, these methods are challenged by increasingly complex materials (see Figure ), which need to be taken into account in the simulation. The simple diffuse Lambert law has been replaced with much more complex reflection models. The goal is to create synthetic images that no longer have a synthetic aspect, in particular when human characters are considered.

One of the difficulties is finding efficient ways of evaluating the visibility term. This is typically a Computational Geometry problem, i.e., a matter of finding the right combinatorial data structure (the *visibility complex*), studying its complexity and deriving algorithms to construct it. To deal with this issue, several teams (including VEGAS, ARTIS and REVES) study the visibility complex.

The other terms of the Rendering Equation cannot be solved analytically in general. Many different numerical resolution methods have been used. The main difficulty of the discipline is that each time a new physical effect needs to be simulated, the numerical resolution methods have to be adapted. In the worst case, it is even necessary to design a new ad-hoc numerical resolution method. For instance, in Monte-Carlo based solvers and in recent Photon-Mapping based methods, several sampling maps are used, one for each effect (one map for the diffuse part of lighting, another for caustics, etc.). As a consequence, the discipline becomes a collection of (sometimes mutually exclusive) techniques, where each technique can only simulate a specific lighting effect.
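The Monte-Carlo strategy mentioned above can be illustrated on the simplest possible case: estimating the irradiance integral on a diffuse (Lambertian) surface under constant incoming radiance. The sketch below is purely illustrative (function names are ours, not from any solver discussed here); the exact value of the integral is π, which the estimator approaches as the number of samples grows.

```python
import math
import random

def diffuse_irradiance_mc(n_samples, seed=0):
    """Monte-Carlo estimate of E = ∫_hemisphere L_i cos(theta) dω
    with constant incoming radiance L_i = 1 (exact value: pi).
    Directions are drawn uniformly over the hemisphere, where
    cos(theta) is uniform in [0, 1] and the pdf is 1 / (2*pi);
    each sample is therefore weighted by cos(theta) / pdf."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        cos_theta = rng.random()          # uniform hemisphere sampling
        total += cos_theta * (2.0 * math.pi)  # divide by pdf = 1/(2*pi)
    return total / n_samples
```

Each additional lighting effect (caustics, subsurface scattering, ...) requires its own sampling strategy of this kind, which is precisely the fragmentation described above.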

The other difficulty is the classical problem of satisfying two somewhat contradictory objectives at the same time. On the one hand, we want to simulate complex physical phenomena (subsurface scattering, polarization, interference, etc.), responsible for subtle lighting effects. On the other hand, we want to visualize the result of the simulation in real time.

We first experimented with finite-element methods in parameter space, and developed the *Virtual Mesh* approach and a parallel solution mechanism for the associated hierarchical finite element formulation. The initial method was dedicated to scenes composed of quadrics. We combined this method with our geometry processing methods to improve the visualization.

One of our goals is now to design new representations of lighting coupled with the geometric representation. These representations of lighting need to be general enough to be easily extended when multiple physical phenomena are to be simulated. Moreover, we want to be able to use these representations of lighting in the frame of real-time visualization. Our original approach to these problems consists in finding efficient function bases to represent the geometry and the physical attributes of the objects. We first experimented with this approach on the problem of image vectorization. We think that our dynamic function basis formulation is likely to lead to efficient light simulation algorithms. Its originality is that the so-defined optimization algorithm solves for approximation and sampling together. Developing such an algorithm is the main goal of our ERC GoodShape project.

After having introduced the *geometry processing* and *light simulation* scientific domains, we now present the principles that we use to design a common mathematical framework that can be applied to both domains. Early approaches to geometry processing and light simulation were driven by a Signal Processing approach. In other words, the solution of the problem is obtained after applying a *filtering scheme* multiple times. This is for instance the case of the mesh smoothing operator defined by Taubin in his pioneering work. Recent approaches still inherit from this background. Even if the general trend moves toward Numerical Analysis, much work in geometry processing still studies the coefficients of the gradient of the objective function *one by one*. This intrinsically refers to *descent* methods (e.g., Gauss-Seidel), which are not the most efficient, and do not converge in general when applied to meshes larger than a certain size.

In the approach we develop in the ALICE project-team, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. As a consequence, studying the properties of the minimum is easier: the minimizer of a multivariate function can be characterized more easily than the limit of multiple applications of a smoothing operator. This simple remark makes it possible to derive properties (existence and uniqueness of the minimum, injectivity of a parameterization, and independence from the mesh).

Besides helping to characterize the solution, restating the geometric problem as a numerical optimization problem has another benefit. It makes it possible to design efficient numerical optimization methods, instead of the iterative relaxations used in classic methods.
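This contrast between iterative relaxation and direct numerical optimization can be sketched on a deliberately tiny example: smoothing a 1D chain of values with fixed endpoints (a toy stand-in for a mesh, with hypothetical function names). Minimizing the quadratic energy E(x) = Σ (x[i+1] - x[i])² yields a sparse linear system that can be solved in one shot, whereas a Gauss-Seidel descent touches the coefficients one by one and needs many sweeps to converge.

```python
import numpy as np

def smooth_direct(left, right, n_interior):
    """Minimize E(x) = sum_i (x[i+1] - x[i])^2 with fixed endpoints.
    Setting dE/dx = 0 gives a tridiagonal system A x = b,
    solved here directly (the 'numerical optimization' view)."""
    A = (2.0 * np.eye(n_interior)
         - np.eye(n_interior, k=1) - np.eye(n_interior, k=-1))
    b = np.zeros(n_interior)
    b[0], b[-1] = left, right
    return np.linalg.solve(A, b)

def smooth_gauss_seidel(left, right, n_interior, n_sweeps):
    """Same energy, minimized coefficient by coefficient (a descent
    method): each unknown is repeatedly replaced by the average of
    its neighbors. Convergence degrades as the problem grows."""
    x = np.zeros(n_interior + 2)
    x[0], x[-1] = left, right
    for _ in range(n_sweeps):
        for i in range(1, n_interior + 1):
            x[i] = 0.5 * (x[i - 1] + x[i + 1])
    return x[1:-1]
```

On a mesh with millions of vertices the direct (or preconditioned iterative) solution of the sparse system remains practical, while coefficient-wise descent stalls, which is the point made above.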

Richard Feynman (Nobel Prize in Physics) mentions in his lectures that physical models are a “smoothed” version of reality. The global behavior and interaction of multiple particles are captured by physical entities of a larger scale. According to Feynman, the striking similarities between the equations governing various physical phenomena (e.g., Navier-Stokes in fluid dynamics and Maxwell in electromagnetism) are an illusion that comes from the way the phenomena are modeled and represented by “smoothed” larger-scale values (i.e., *fluxes* in the case of fluids and electromagnetism). Note that those larger-scale values do not necessarily correspond directly to a physical intuition; they can reside in a more abstract “computational” space. For instance, representing lighting by the coefficients of a finite element is a first step in this direction. More generally, our approach consists in trying to get rid of the limits imposed by the classic view of the existing solution mechanisms. The traditional approaches are based on an intuition driven by the laws of physics. Instead of trying to mimic the physical process, we try to restate the problem as an abstract numerical computation problem, to which more sophisticated methods can be applied (a plane flies like a bird, but it does not flap its wings). We try to consider the problem from a computational point of view, and focus on the link between the numerical simulation process and the properties of the solution of the Rendering Equation. Note also that the numerical computation problems yielded by our approach lie in a high-dimensional space (millions of variables). To ensure that our solutions scale up to scientific and industrial data from the real world, our strategy is to always use the best formalism and the best tools. The formalisms comprise Finite Element theory, differential geometry and topology; the tools comprise recent hardware, such as GPUs (Graphics Processing Units), with the associated highly parallel algorithms. To implement our strategy, we develop algorithmic, software and hardware architectures, and distribute these solutions in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Besides developing new solutions for geometry processing and numerical light simulation, we aim at applying these solutions to real-size scientific and industrial problems. In this context, scientific visualization is our main application domain. With the advances in acquisition techniques, the size of the data sets to be processed increases faster than Moore's law, and represents a scientific and technical challenge. To ensure that our processing and visualization algorithms scale up, we develop a combination of algorithmic, software and hardware architectures. Namely, we are interested in hierarchical function bases, and in parallel computation on GPUs (Graphics Processing Units).

Our developments in parallel processing and GPU programming permit our geometry processing and light simulation solutions to scale up, and to handle real-scale data from other research and industry domains. The following applications are developed within the MIS (Modelization, Interaction, Simulation) and AOC (Analysis, Optimization and Control) programs, which are supported by the “Contrat de Plan État-Région Lorraine”.

This application domain is led by the Gocad consortium, created by Prof. Mallet and now headed by Guillaume Caumon. The consortium involves 48 universities and most of the major oil and gas companies. ALICE contributes to Gocad with numerical geometry and visualization algorithms for oil and gas engineering. The currently explored domains are the construction of complex and dynamic structural models, the exploration of extremely large seismic volumes, and drilling evaluation and planning. The solutions that we develop are transferred to the industry through Earth Decision Sciences. Several Ph.D. students are co-advised by researchers in GOCAD and ALICE.

Graphite is a research platform for computer graphics, 3D modeling and numerical geometry. It comprises all the main research results of our “geometry processing” group. Data structures for cellular complexes, parameterization, multi-resolution analysis and numerical optimization are the main features of the software. Graphite has been publicly available since October 2003, and has been hosted on Inria GForge since September 2008 (1000 downloads in two months). Graphite is one of the common software platforms used in the frame of the European Network of Excellence AIMShape.

Micromegas is a 3D modeler, developed as a plugin of Graphite, dedicated to molecular biology. Micromegas is developed in cooperation with the Fourmentin-Guilbert foundation. Biologists need simple spatial modeling tools to help understand the role of objects' relative positions in the functioning of the cell. In this context, we offer a tool for easy DNA modeling. The tool generates DNA along a Bézier curve, open or closed, allows fine-tuning of atom positions and, most importantly, exports to PDB.
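The geometric backbone of such a tool can be sketched in a few lines: evaluating a cubic Bézier curve with de Casteljau's algorithm, then sampling points along it, the kind of scaffold on which base pairs could be positioned. This is only an illustrative sketch (function names are ours), not Micromegas code.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]
    using de Casteljau's algorithm; points are (x, y, z) tuples."""
    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)

def sample_curve(p0, p1, p2, p3, n):
    """Place n evenly spaced parameter samples along the curve."""
    return [cubic_bezier(p0, p1, p2, p3, i / (n - 1)) for i in range(n)]
```

A real DNA generator would additionally space the samples by arc length and attach a local frame (tangent, normal) at each sample to orient the base pairs.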

OpenNL is a standalone library for numerical optimization, especially well suited to mesh processing. The API is inspired by the graphics API OpenGL, which makes the learning curve gentle for computer graphics practitioners. The included demo program implements our LSCM mesh unwrapping method. It was integrated into Blender by Brecht Van Lommel and others to create automatic texture mapping methods. OpenNL is extended with two specialized modules:

CGAL parameterization package: this software library, developed in cooperation with Pierre Alliez and Laurent Saboret, is a CGALpackage for mesh parameterization.

Concurrent Number Cruncher: this software library extends OpenNL with parallel computing on the GPU, implemented using the CUDA API.
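To convey the flavor of an OpenGL-inspired immediate-mode interface for numerical solvers, here is a hypothetical Python analogue (this is NOT the actual OpenNL API; all class and method names are invented for illustration): rows of a least-squares system are declared one coefficient at a time between begin/end calls, then the system is minimized.

```python
import numpy as np

class LeastSquaresSystem:
    """Hypothetical immediate-mode builder for ||A x - b||^2:
    rows are assembled coefficient by coefficient, mimicking the
    begin/end style of OpenGL, then solved in the least-squares sense."""
    def __init__(self, n_variables):
        self.n = n_variables
        self.rows, self.rhs = [], []
    def begin_row(self):
        self._row, self._b = np.zeros(self.n), 0.0
    def coefficient(self, i, a):
        self._row[i] += a          # accumulate coefficient a * x_i
    def right_hand_side(self, b):
        self._b = b
    def end_row(self):
        self.rows.append(self._row)
        self.rhs.append(self._b)
    def solve(self):
        A, b = np.array(self.rows), np.array(self.rhs)
        return np.linalg.lstsq(A, b, rcond=None)[0]
```

The appeal of this style is that mesh-processing code can emit equations while traversing the mesh, without ever manipulating sparse matrix storage explicitly.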

Intersurf is a plugin of the VMD (Visual Molecular Dynamics) software. VMD is developed by the Theoretical and Computational Biophysics Group at the Beckman Institute, University of Illinois. The Intersurf plugin has been released with the official version of VMD since release 1.8.3. It provides surfaces representing the interaction between two groups of atoms, and colors can be added to represent the interaction forces between these groups. We plan to include in this package the new results obtained this year in molecular surface visualization by Matthieu Chavent.

Gocad is a 3D modeler dedicated to geosciences. It was developed by a consortium headed by Jean-Laurent Mallet at the Nancy School of Geology. Gocad is now commercialized by Earth Decision Sciences (formerly T-Surf), a company which was initially a start-up of the project-team. Gocad is used by all major oil companies (Total-Fina-Elf, ChevronTexaco, Petrobras, etc.), and has become a de facto standard in geo-modeling. Luc Buatois's work on GPU-based numerical solvers is now integrated into Gocad's grid generation software SKUA.

LibSL is a simple library for graphics. Sylvain Lefebvre continued development of the LibSL graphics library (under the CeCILL-C licence, filed at the APP). LibSL is a toolbox for rapid prototyping of computer graphics algorithms under both OpenGL and DirectX 9/10, on Windows and Linux. The library is actively used in both the REVES / INRIA Sophia-Antipolis and ALICE / INRIA Nancy Grand-Est teams.

**Sampling:** In the frame of the GoodShape project, we continued developing new techniques for sampling shapes optimally, based on the notion of Centroidal Voronoi Tessellations. We developed a new technique for computing clipped 3D Voronoi diagrams, suitable for volumetric meshing, and an accelerated GPU-based centroidal Voronoi tessellation. We developed an optimization technique to suppress obtuse triangles. We also studied periodic boundary conditions, suitable for some numerical simulations.
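The basic principle behind Centroidal Voronoi Tessellations can be sketched with Lloyd's algorithm on the unit square, approximating Voronoi cells by nearest-seed assignment over a dense point sample. This is only an illustrative 2D sketch (our own simplification), not the clipped-3D or GPU algorithms mentioned above.

```python
import numpy as np

def lloyd_cvt(n_seeds, n_iters, seed=0):
    """Approximate a Centroidal Voronoi Tessellation of the unit
    square. Each Lloyd iteration assigns every sample point to its
    nearest seed (a discrete Voronoi cell), then moves each seed to
    the centroid of its cell; iterating decreases the CVT energy."""
    rng = np.random.default_rng(seed)
    sample = rng.random((20000, 2))     # dense sample of the domain
    seeds = rng.random((n_seeds, 2))
    for _ in range(n_iters):
        d = ((sample[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
        owner = d.argmin(axis=1)        # nearest-seed assignment
        for k in range(n_seeds):
            cell = sample[owner == k]
            if len(cell):
                seeds[k] = cell.mean(axis=0)
    return seeds

def cvt_energy(seeds, sample):
    """Mean squared distance from sample points to nearest seed."""
    d = ((sample[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()
```

Lloyd's algorithm is exactly the kind of descent that our optimization-based formulations aim to accelerate, e.g., with quasi-Newton solvers and GPU evaluation of the energy.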

**Geometric modeling and computational geometry:** Dobrina Boltcheva joined the team, with her expertise on simplicial homology. Vincent Nivoliers published L-system based knot insertion rules that he developed during his master's at GIPSA-Lab. We also studied furthest polygon Voronoi diagrams.

We gave an invited course on geometry processing at SIGGRAPH Asia.

**Foundations of Computer Graphics:** We developed new algorithms and data structures for spatial caching and hashing on the GPU. We continued our work on noise generation based on Gabor convolution and proposed several improvements (see Figure ). We also studied the fundamental problem of interpolating functions and developed an algorithm based on optimal transport theory.

**Applications:** We developed an algorithm to change the lighting in images that contain trees (with REVES, see Figure ).

**Molecular visualization:** We continued the development of our Micromegas software and developed several improvements (see Figure ).

**Geo-modeling and geo-visualization:** In the frame of our partnership with the Gocad consortium, we developed an evaluation of multi-valued data depiction techniques. We also developed several meshing tools dedicated to the numerical simulation of oil exploitation (see Figure ).

**Reverse engineering:** We developed methods to convert mesh surfaces and point sets into parametric splines, using either a global parameterization technique or an approximation of the surface-to-surface distance based on Voronoi diagrams (see Figure ).

The Ph.D. theses of Romain Merland, Nicolas Cherpeau and Jeanne Pellerin are funded by the Gocad consortium.

**Modeling and rendering with distance functions**: This project is a collaboration between the ALICE / INRIA Nancy Grand-Est team and the Computer Graphics group of the Karlsruhe Institute of Technology (KIT). It is funded by INRIA Nancy Grand-Est for a 12-month period (COLORS grant) and serves as a first step in what we hope will become a continued collaboration between our teams.

Title: Similar Cities

Principal Investigator: Sylvain Lefebvre (INRIA ALICE)

Participants: INRIA Nancy, CSTB, Allegorithmic

See also: GoodShape

Abstract: Similar Cities aims at enhancing the visual appearance of virtual cities, using procedural methods. Our key insight is to replace the numerous textures used to faithfully render large virtual cities by procedural equivalents. These procedural textures are thousands of times smaller but can still be quickly generated whenever required by the rendering engine. Our every-day tools for this research are procedural texture generators, texture synthesis by example, texture streaming algorithms and image processing tools.

Title: Physigraphics

Principal Investigator: Rhaleb Zayer (INRIA ALICE)

Instrument: ANR “chaire d'excellence” grant

See also: Physigraphics

Abstract: Physigrafix is a research effort geared towards bridging the gap between acquisition and modeling in the context of deformable objects. The project will proceed on two complementary tracks. The first is the acquisition and tracking of deformable models; the second is the mathematical modeling of the captured deformation behavior. The central idea is to rely on the exhibited physics to drive the mathematical model; in this way, problems commonly encountered in simulation modeling can be avoided in the first place. This research is motivated by real-world applications and, in a broad scope, touches upon disciplines such as virtual medicine, manufacturing and the feature-film industry.

Title: Morpho

Coordinator: Edmond Boyer (INRIA MORPHEO)

Participants: LJK/INRIA Grenoble, INRIA Nancy/LORIA, GIPSA-Lab

See also: Morpho

Abstract: Morpho aims at designing new technologies for the measurement and analysis of dynamic surface evolutions using visual data. The interest arises in several application domains where temporal surface deformations need to be captured and analyzed. This includes human body analyses but also extends to other deforming objects, sails for instance. Potential applications with human bodies are nevertheless numerous and important, from the identification of pathologies to the design of new prostheses. The project therefore focuses on human body shapes and their motions, and on how to characterize them through new biometric models for analysis purposes.

Title: Moditere

Coordinator: C. Gentil (LIRIS)

Participants: LIRIS Lyon, LE2I Dijon, LORIA/INRIA Nancy, PEP (Pôle Européen de Plasturgie d'Oyonnax).

Abstract: Moditere aims at developing new 3D modeling tools that extend the editing capabilities of classical CAD/CAM representations (Splines) to new geometries, such as fractal objects.

Title: Numerical Geometric Abstractions: from bits to equations

Type: IDEAS

Instrument: ERC Starting Grant

Duration: August 2008 - July 2013

Coordinator: INRIA (France)

See also: GoodShape

Abstract: GOODSHAPE involves several fundamental aspects of 3D modeling and computer graphics. GOODSHAPE takes a new approach to the classic, essential problem of sampling, i.e., the digital representation of objects in a computer. This new approach proposes to simultaneously consider the problem of approximating the solution of a partial differential equation and the optimal sampling problem. The proposed approach, based on the theory of numerical optimization, is likely to lead to new algorithms that are more efficient than existing methods. Possible applications are envisioned in reverse engineering and oil exploration.

In the frame of the GOODSHAPE project, we cooperate with Hong Kong University on Centroidal Voronoi Tessellations and their applications. Researchers and students from Nancy and Hong Kong visit each other on a regular basis. This year (2011), we had several common publications on optimal sampling, centroidal Voronoi tessellations and their variations.

We continued our cooperation with Gustavo Patow (researcher) and Ismael Garcia (Ph.D. student) of Girona University, Spain, on the topic of data structures for spatial caching on the GPU. This year, we published a common article at ACM SIGGRAPH Asia / ACM Transactions on Graphics.

Sylvain Lefebvre was a member of the paper committee of ACM SIGGRAPH and co-chair of the short papers program of the Eurographics Symposium on Rendering.

Bruno Lévy was a member of the paper committees of SIAM/ACM GPM, IEEE SMI, NORDIA, Pacific Graphics and SIBGRAPI.

Members of the team gave several technical demonstrations at the “Palais de la découverte” (between Sept. 26 and Dec. 4).

Vincent Nivoliers gave demonstrations at the “Fête de la science” and several talks in schools.

Bruno Lévy gave an invited course at ACM Siggraph Asia 2011 (with Richard Hao Zhang / Simon Fraser University).

Bruno Lévy gave an invited talk at International Symposium on Voronoi Diagrams (ISVD 2011).

D. Sokolov teaches “Modèles de perception et raisonnement” (M1), “Infographie” (M1), “Géométrie et représentation dans l'espace” (L2+L3), “Logique et modèles de calcul” (M1), “Synthèse d'images 3D” (M1) and “Unité Bureautique et Communication électronique” (L1).

D. Boltcheva teaches “Images numériques” at the IUT of Saint-Dié.

V. Nivoliers teaches AP2 (algorithmics and programming) at UHP (L1): basics of algorithmics and programming (OCaml), practicals; and MI1 (mathematics for computer science) at Esial (first year): introduction to recursion and induction, Boolean functions, and the basics of language theory.

S. Lefebvre and B. Lévy teach geometric modeling and computer graphics at ENSG (School of Geology - INPL).

S. Lefebvre participates in “Computer Graphics” at École Centrale de Paris (organized by G. Drettakis).