ALICE is one of the four teams in the Image Geometry and Computation group in INRIA Nancy Grand-Est.

ALICE is a project-team in Computer Graphics. The fundamental aspects of this domain concern the interaction of *light* with the *geometry* of the objects. The lighting problem consists in designing accurate and efficient *numerical simulation* methods for the light transport equation. The geometrical problem consists in developing new solutions to *transform and optimize geometric representations*. Our original approach to both issues is to restate the problems in terms of *numerical optimization*. We try to develop solutions that are *provably correct*, *numerically stable* and *scalable*.

By provably correct, we mean that some properties/invariants of the initial object need to be preserved by our solutions.

By numerically stable, we mean that our solutions need to be resistant to the degeneracies often encountered in industrial data sets.

By scalable, we mean that our solutions need to be applicable to data sets of industrial size.

To reach these goals, our approach consists in transforming the physical or geometric problem into a numerical optimization problem, studying the properties of the objective function, and designing efficient minimization algorithms. To properly construct the required discretizations, we use the formalisms of finite element modeling, geometry and topology. We are also interested in fundamental concepts that were recently introduced into the geometry processing community, such as discrete exterior calculus, spectral geometry processing and the theory of sampling.
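As a toy illustration of this strategy (a hypothetical sketch, not one of our actual solvers), consider denoising a 1D polyline: instead of applying a smoothing filter repeatedly, we write the smoothness and data-fitting goals as a single quadratic objective and jump directly to its minimizer by solving one linear system.

```python
import numpy as np

# Hypothetical example: smooth a noisy 1D signal d by minimizing
#   E(x) = ||x - d||^2 + lam * ||D x||^2
# where D is the finite-difference matrix. The minimizer solves the
# normal equations (I + lam * D^T D) x = d -- one sparse linear solve,
# not an iterative filtering loop.
def smooth(d, lam=10.0):
    n = len(d)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, d)

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, np.pi, 50)) + 0.1 * rng.standard_normal(50)
x = smooth(noisy)
```

Studying the objective directly also characterizes the result: one can show the minimizer always has lower finite-difference energy than the input data, a property that is much harder to read off a filtering loop.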

The main applications of our results concern scientific visualization. We develop cooperations with researchers and people from industry, who experiment with applications of our general solutions in various domains, including CAD, industrial design, oil exploration and plasma physics. Our solutions are distributed in both open-source software (Graphite, OpenNL, CGAL) and industrial software (Gocad, DVIZ).

Our software Graphite was awarded the third prize in the scientific software category and the special prize of the jury for the most innovative project at the “Trophées du Libre” free software contest.

Computer Graphics is a quickly evolving domain of research. In the last few years, both acquisition techniques (e.g., range laser scanners) and computer graphics hardware (the so-called GPUs, for Graphics Processing Units) have made considerable advances. However, as shown in Figure , despite these advances, fundamental problems still remain open. For instance, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. To design efficient solutions for these difficult problems, ALICE studies two fundamental issues in Computer Graphics:

the representation of the objects, i.e., their geometry and physical properties;

the interaction between these objects and light.

Historically, these two issues have been studied by independent research communities. However, we think that they share a common theoretical basis. For instance, multi-resolution analysis and wavelets were mathematical tools used by both communities. We develop a new approach, which consists in studying geometry and lighting from the *numerical analysis* point of view. In our approach, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. This type of formulation leads to algorithms that are more efficient. Our long-term research goal is to find a formulation that permits a unified treatment of geometry and illumination over this geometry.

Geometry processing recently emerged (in the middle of the 90's) as a promising strategy to solve the geometric modeling problems encountered when manipulating meshes composed of hundreds of millions of elements. Since a mesh may be considered to be a *sampling* of a surface - in other words a *signal* - the *digital signal processing* formalism was a natural theoretic background for this subdomain (see e.g., ). Researchers of this domain then studied different aspects of this formalism applied to geometric modeling.
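The signal-processing viewpoint can be made concrete with a minimal sketch (a hypothetical example, using a simple graph Laplacian filter in the spirit of Taubin's smoothing): vertex positions are treated as a signal on the mesh graph, and smoothing is repeated low-pass filtering.

```python
import numpy as np

def graph_laplacian(edges, n):
    # combinatorial Laplacian of the mesh graph
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

def low_pass(signal, edges, lam=0.3, steps=25):
    # repeated filtering x <- (I - lam * L) x attenuates high frequencies
    L = graph_laplacian(edges, len(signal))
    x = np.asarray(signal, dtype=float)
    for _ in range(steps):
        x = x - lam * (L @ x)
    return x

# a noisy height signal on a closed polyline (cycle graph)
n = 32
edges = [(i, (i + 1) % n) for i in range(n)]
rng = np.random.default_rng(1)
noisy = np.sin(2 * np.pi * np.arange(n) / n) + 0.2 * rng.standard_normal(n)
smoothed = low_pass(noisy, edges)
```

Each filtering pass damps the high-frequency (noisy) components of the signal while preserving its mean, which is exactly the low-pass behavior the DSP formalism predicts.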

Although many advances have been made in the geometry processing area, important problems still remain open. Even if shape acquisition and filtering are much easier than 30 years ago, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. For this reason, automatic methods to convert those large meshes into higher-level representations are necessary. However, these automatic methods do not exist yet. For instance, the pioneer Henri Gouraud often mentions in his talks that the *data acquisition* problem is still open. Malcolm Sabin, another pioneer of the “Computer Aided Geometric Design” and “Subdivision” approaches, mentioned during several conferences of the domain that constructing the optimum control mesh of a subdivision surface so as to approximate a given surface is still an open problem. More generally, converting a mesh model into a higher-level representation, consisting of a set of equations, is a difficult problem for which no satisfying solutions have been proposed. This is one of the long-term goals of international initiatives, such as the AIM@SHAPE European network of excellence.

Motivated by gridding applications for finite element modeling for oil and gas exploration, in the frame of the Gocad project, we started studying geometry processing in the late 90's and contributed to this area at the early stages of its development. We developed the LSCM method (Least Squares Conformal Maps) in cooperation with Alias Wavefront . This method has become the de-facto standard in automatic unwrapping, and was adopted by several 3D modeling packages (including Maya and Blender). We experimented with various applications of the method, including normal mapping, mesh completion and light simulation .

However, classical mesh parameterization requires partitioning the considered object into a set of topological disks. For this reason, we designed a new method (Periodic Global Parameterization) that generates a continuous set of coordinates over the object . We also showed the applicability of this method by proposing the first algorithm that converts a scanned mesh into a Spline surface automatically . Both algorithms are demonstrated in Figure .

We are still not fully satisfied with these results, since the method remains quite complicated. We think that a deeper understanding of the underlying theory is likely to lead to both efficient and simple methods. For this reason, last year we studied several ways of discretizing partial differential equations on meshes, including Finite Element Modeling and Discrete Exterior Calculus. This year, we also explored Spectral Geometry Processing and Sampling Theory (more on this below).

Numerical simulation of light means solving for light intensity in the “Rendering Equation”, an integral equation modeling energy transfers (or light *intensity* transfers). The Rendering Equation was first formalized by Kajiya , and is given by:

L(x, ω) = L_e(x, ω) + ∫_S ρ(x, ω, ω') G(x, y) V(x, y) L(y, ω') dA_y

where L is the radiance, L_e the emitted radiance, ρ the reflectance of the surface, G the geometric term, and V the visibility term between the points x and y.

In addition, these methods are challenged by more and more complex materials (see Figure ), which need to be taken into account in the simulation. The simple diffuse Lambert law has been replaced by much more complex reflection models. The goal is to create synthetic images that no longer have a synthetic aspect, in particular when human characters are considered.

One of the difficulties is finding efficient ways of evaluating the visibility term. This is typically a Computational Geometry problem, i.e., a matter of finding the right combinatorial data structure (the *visibility complex*), studying its complexity and deriving algorithms to construct it. To deal with this issue, several teams (including VEGAS, ARTIS and REVES) study the visibility complex.

The other terms of the Rendering Equation cannot be solved analytically in general. Many different numerical resolution methods have been used. The main difficulty of the discipline is that each time a new physical effect must be simulated, the numerical resolution methods need to be adapted. In the worst case, it is even necessary to design a new ad-hoc numerical resolution method. For instance, in Monte-Carlo based solvers and in recent Photon-Mapping based methods, several sampling maps are used, one for each effect (one map is used for the diffuse part of lighting, another map is used for caustics, etc.). As a consequence, the discipline becomes a collection of (sometimes mutually exclusive) techniques, where each of these techniques can only simulate a specific lighting effect.
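As a minimal, hypothetical illustration of the Monte-Carlo approach, the following sketch estimates the irradiance integral E = ∫ L_i cos(θ) dω at a surface point, for the toy case of constant incoming radiance over the hemisphere, where the exact answer is π·L_i.

```python
import numpy as np

# Toy Monte Carlo estimator for the irradiance integral at a point with
# a diffuse surface, assuming constant incoming radiance L_i over the
# hemisphere (a hypothetical scene, not a full rendering-equation solver).
# Exact value of E = integral of L_i * cos(theta) over the hemisphere
# is pi * L_i, which the estimator should approach.
def irradiance_mc(L_i=1.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # uniform hemisphere sampling: pdf = 1 / (2*pi), cos(theta) uniform in [0, 1)
    cos_theta = rng.random(n)
    return float(np.mean(L_i * cos_theta * 2.0 * np.pi))

estimate = irradiance_mc()  # close to pi
```

This also shows why each new effect needs new machinery: the sampling density above is tailored to one integrand, and a different effect (caustics, subsurface scattering) would call for a different sampling map.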

The other difficulty is the classical problem of satisfying two somewhat antinomic objectives at the same time. On the one hand, we want to simulate complex physical phenomena (subsurface scattering, polarization, interferences, etc.), responsible for subtle lighting effects. On the other hand, we want to visualize the result of the simulation in real-time.

We first experimented with finite-element methods in parameter space, and developed the *Virtual Mesh* approach and a parallel solution mechanism for the associated hierarchical finite element formulation. The initial method was dedicated to scenes composed of quadrics. We combined this method with our geometry processing methods to improve the visualization .

One of our goals is now to design new representations of lighting coupled with the geometric representation. These representations of lighting need to be general enough to be easily extended when multiple physical phenomena should be simulated. Moreover, we want to be able to use these representations of lighting in the frame of real-time visualization. Our original approach to these problems consists in finding efficient function bases to represent the geometry and the physical attributes of the objects. We first experimented with this approach on the problem of image vectorization . We think that our dynamic function basis formulation is likely to lead to efficient light simulation algorithms. The originality is that the so-defined optimization algorithm solves for approximation and sampling all together. Developing such an algorithm is the main goal of our ERC GoodShape project.

After having introduced the *geometry processing* and *light simulation* scientific domains, we now present the principles that we use to design a common mathematical framework that can be applied to both domains. Early approaches to geometry processing and light simulation were driven by a Signal Processing approach. In other words, the solution of the problem is obtained after applying a *filtering scheme* multiple times. This is for instance the case of the mesh smoothing operator defined by Taubin in his pioneering work . Recent approaches still inherit from this background. Even if the general trend moves to Numerical Analysis, much work in geometry processing still studies the coefficients of the gradient of the objective function *one by one*. This intrinsically refers to *descent* methods (e.g., Gauss-Seidel), which are not the most efficient, and do not converge in general when applied to meshes larger than a certain size (in practice, the limit appears to be around 10^{4} facets).

In the approach we develop in the ALICE project-team, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. As a consequence, studying the properties of the minimum is easier: the minimizer of a multivariate function can be more easily characterized than the limit of multiple applications of a smoothing operator. This simple remark makes it possible to derive properties (existence and uniqueness of the minimum, injectivity of a parameterization, and independence from the mesh).

Besides helping to characterize the solution, restating the geometric problem as a numerical optimization problem has another benefit. It makes it possible to design efficient numerical optimization methods, instead of the iterative relaxations used in classic methods.

Richard Feynman (Nobel Prize in physics) mentions in his lectures that physical models are a “smoothed” version of reality. The global behavior and interaction of multiple particles is captured by physical entities of a larger scale. According to Feynman, the striking similarity between equations governing various physical phenomena (e.g., Navier-Stokes in fluid dynamics and Maxwell in electromagnetism) is an illusion that comes from the way the phenomena are modeled and represented by “smoothed” larger-scale values (i.e., *fluxes* in the case of fluids and electromagnetism). Note that those larger-scale values do not necessarily correspond directly to a physical intuition; they can reside in a more abstract “computational” space. For instance, representing lighting by the coefficients of a finite element is a first step in this direction. More generally, our approach consists in trying to get rid of the limits imposed by the classic view of the existing solution mechanisms. The traditional approaches are based on an intuition driven by the laws of physics. Instead of trying to mimic the physical process, we try to restate the problem as an abstract numerical computation problem, on which more sophisticated methods can be applied (a plane flies like a bird, but it does not flap its wings). We try to consider the problem from a computational point of view, and focus on the link between the numerical simulation process and the properties of the solution of the Rendering Equation. Note also that the numerical computation problems yielded by our approach lie in a high-dimensional space (millions of variables). To ensure that our solutions scale up to scientific and industrial data from the real world, our strategy is to always use the best formalism and the best tool. The best formalisms comprise Finite Element theory, differential geometry and topology, and the best tools comprise recent hardware, such as GPUs (Graphics Processing Units), with the associated highly parallel algorithms. To implement our strategy, we develop algorithmic, software and hardware architectures, and distribute these solutions in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Besides developing new solutions for geometry processing and numerical light simulation, we aim at applying these solutions to real-size scientific and industrial problems. In this context, scientific visualization is our main application domain. With the advances in acquisition techniques, the size of the data sets to be processed increases faster than Moore's law, and represents a scientific and technical challenge. To ensure that our processing and visualization algorithms scale up, we develop a combination of algorithmic, software and hardware architectures. Namely, we are interested in hierarchical function bases, and in parallel computation on GPUs (Graphics Processing Units).

Our developments in parallel processing and GPU programming permit our geometry processing and light simulation solutions to scale up and handle real-scale data from other research and industry domains. The following applications are developed within the MIS (Modelization, Interaction, Simulation) and AOC (Analysis, Optimization and Control) programs, which are supported by the “Contrat de Plan État-Région Lorraine”.

This application domain is led by the Gocad consortium, created by Prof. Mallet and now headed by Guillaume Caumon. The consortium involves 48 universities and most of the major oil and gas companies. ALICE contributes to Gocad with numerical geometry and visualization algorithms for oil and gas engineering. The currently explored domains are the construction of complex and dynamic structural models, the exploration of extremely large seismic volumes, and drilling evaluation and planning. The solutions that we develop are transferred to the industry through Earth Decision Sciences. Several Ph.D. students were co-advised by researchers in GOCAD and ALICE, such as Laurent Castanié (defended in 2006, on novel visualization methods, published in IEEE Visualization ), Luc Buatois (defended this year, on high-performance numerical solvers on Graphics Processing Units), and more recently (last year) Thomas Viard, on the visualization of data with uncertainties.

Graphite is a research platform for computer graphics, 3D modeling and numerical geometry. It comprises all the main research results of our “geometry processing” group. Data structures for cellular complexes, parameterization, multi-resolution analysis and numerical optimization are the main features of the software. Graphite has been publicly available since October 2003. It has been hosted by the Inria GForge since September 2008 (1000 downloads in two months). Graphite is one of the common software platforms used in the frame of the European Network of Excellence AIM@SHAPE.

OpenNL is a standalone library for numerical optimization, especially well-suited to mesh processing. The API is inspired by the graphics API OpenGL, which makes the learning curve easy for computer graphics practitioners. The included demo program implements our LSCM mesh unwrapping method. It was integrated into Blender by Brecht Van Lommel and others to create automatic texture mapping methods. More recently, they implemented our ABF++ method (developed in cooperation with the University of British Columbia). It will shortly include the more recent linear ABF, which we developed in cooperation with Rhaleb Zayer (who was at that time with the Max Planck Institute for Informatics). Our mesh unwrapping algorithms have now become the de-facto standard for mesh unwrapping in several industrial mesh modeling packages (including Maya, Silo, Catia). OpenNL is extended with two specialized modules:

CGAL parameterization package: this software library, developed in cooperation with Pierre Alliez and Laurent Saboret, is a CGAL package for mesh parameterization. It includes a special, generic version of OpenNL, compatible with CGAL's genericity requirements.

Concurrent Number Cruncher: this software library extends OpenNL with parallel computing on the GPU, implemented using the CUDA API.

This year, we merged the GPU solver Concurrent Number Cruncher with the main software trunk of OpenNL, to have a single API for both solvers. We also extended the GPU solver to use the new functionalities of GPUs, which now support floating point numbers in double precision.

Intersurf is a plugin of the VMD (Visual Molecular Dynamics) software. VMD is developed by the Theoretical and Computational Biophysics Group at the Beckman Institute at the University of Illinois. The Intersurf plugin has been released with the official version of VMD since the 1.8.3 release. It provides surfaces representing the interaction between two groups of atoms, and colors can be added to represent interaction forces between these groups of atoms. We plan to include in this package the new results obtained this year in molecular surface visualization by Matthieu Chavent.

Gocad is a 3D modeler dedicated to geosciences. It was developed by a consortium headed by Jean-Laurent Mallet at the Nancy School of Geology. Gocad is now commercialized by Earth Decision Sciences (formerly T-Surf), a company which was initially a start-up of the project-team. Gocad is used by all major oil companies (Total-Fina-Elf, ChevronTexaco, Petrobras, etc.), and has become a de facto standard in geo-modeling. Luc Buatois's work on GPU-based numerical solvers is now integrated in Gocad's grid generation software SKUA.

LibSL is a Simple Library for graphics. Sylvain Lefebvre continued development of the LibSL graphics library (under the CeCILL-C licence, filed at the APP). LibSL is a toolbox for rapid prototyping of computer graphics algorithms, under both OpenGL and DirectX 9/10, on Windows and Linux. The library is actively used in both the REVES / INRIA Sophia-Antipolis and the ALICE / INRIA Nancy Grand-Est teams.

We continued our work on Geometry Processing with the strategy of considering all three levels of abstraction in parallel, namely *formalization* (specification using functional analysis and topology), *discretization* (relations between the continuous problem and discretized linear models), and finally *implementation* (how to implement efficient solvers for these linear problems using modern hardware). This year's realizations for these three levels of abstraction are described in the following three paragraphs.

Many algorithms in texture synthesis, non-photorealistic rendering (hatching), or re-meshing require defining the orientation of some features (texture, hatches or edges) at each point of a surface. This is also the case for the quad-remeshing algorithms that we developed ( and ). In early works, tangent vector (or tensor) fields were used to define the orientation of these features. Extrapolating and smoothing such fields is usually performed by minimizing an energy composed of a smoothness term and of a data fitting term. Those approaches make it possible to smooth existing fields (such as curvature directions) and to interactively introduce directional constraints, but they fail to control the topology of the resulting field.

We have developed an algorithm that lets the direction field emerge naturally from the direction extrapolation and smoothing (as with previous approaches), but that controls the singularities. The idea here is to restate the objective function so that the optimization algorithm does not try to minimize the part of the field curvature that is due to the Gaussian curvature of the surface. Some results are shown in Figure . We published this work in ACM Transactions on Graphics .
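The optimization viewpoint behind such field smoothing can be sketched in a toy setting (an illustrative example only; the published method additionally discounts the part of the curvature due to the surface's Gaussian curvature): a 4-symmetry direction field is represented by the vectors (cos 4t, sin 4t), and a quadratic smoothness energy is minimized with some directions pinned as constraints.

```python
import numpy as np

# Hypothetical sketch: smooth a 4-symmetry direction field on a graph.
# Each direction t is encoded as the vector (cos 4t, sin 4t), so that
# directions equal modulo pi/2 get the same representation; smoothing
# becomes a single sparse linear solve with Dirichlet constraints.
def smooth_field(edges, n, fixed):
    L = np.zeros((n, n))                      # graph Laplacian
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    rep = np.zeros((n, 2))
    for i, t in fixed.items():
        rep[i] = [np.cos(4.0 * t), np.sin(4.0 * t)]
    free = [i for i in range(n) if i not in fixed]
    pinned = list(fixed)
    # minimize sum over edges of |rep_i - rep_j|^2 with pinned values:
    # solve L_ff rep_f = -L_fp rep_p for the free rows
    rep[free] = np.linalg.solve(L[np.ix_(free, free)],
                                -L[np.ix_(free, pinned)] @ rep[pinned])
    return np.arctan2(rep[:, 1], rep[:, 0]) / 4.0

# a 3-vertex path with both endpoint directions constrained
angles = smooth_field([(0, 1), (1, 2)], 3, {0: 0.0, 2: np.pi / 8})
```

On this path, the free middle direction lands exactly halfway between the two constraints, which is what the quadratic energy predicts; controlling singularities requires the more elaborate objective of the published method.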

We developed the “Mesh Matrix Methods” formalism, a new way of designing geometry processing tools based on the idea of replacing complicated mesh data structures (half-edges, etc.) with sparse matrices. We show how mesh traversal, finite element matrices and subdivision can be efficiently implemented in terms of sparse matrix operations. Our formalism is currently used to teach digital geometry processing at Magdeburg University, Germany. Our implementation in MATLAB, together with the algorithmic description, will be released as open-source software.
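The idea can be sketched in a few lines (an illustrative toy, using dense arrays where the real formalism uses sparse matrices): the triangle/vertex incidence matrix alone answers valence and adjacency queries by pure matrix algebra, with no pointer-based data structure.

```python
import numpy as np

def incidence_matrix(triangles, n_vertices):
    # M[t, v] = 1 iff vertex v belongs to triangle t
    M = np.zeros((len(triangles), n_vertices))
    for t, tri in enumerate(triangles):
        for v in tri:
            M[t, v] = 1.0
    return M

# two triangles sharing the edge (1, 2)
tris = [(0, 1, 2), (1, 2, 3)]
M = incidence_matrix(tris, 4)

valence = M.sum(axis=0)   # number of triangles incident to each vertex
A = M.T @ M               # A[i, j] = number of triangles containing both i and j
```

Here the off-diagonal pattern of M^T M encodes vertex-vertex adjacency (an entry of 2 marks an interior edge shared by two triangles), and its diagonal recovers the valences; traversals become matrix-vector products.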

We continued our work on the efficient implementation of numerical solvers on the GPU, using the new functionalities of GPUs, which now support floating point numbers in double precision. We compared different implementations for representing sparse matrices, including the CRS (Compressed Row Storage) representation. A new Ph.D. thesis will start in 2010 (Thomas Jost), co-advised with Sylvain Contassot (ALGORILLE project). The goal is to experiment with the possibility of implementing a sparse direct solver on the GPU.
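For reference, the CRS layout mentioned above stores only the nonzero entries plus one offset per row; a minimal CPU-side sketch (illustrative only; the actual solver runs this kernel on the GPU):

```python
import numpy as np

# Compressed Row Storage (CRS): nonzero values, their column indices,
# and per-row offsets into those arrays.
def to_crs(A):
    values, col_ind, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0.0:
                values.append(float(a)); col_ind.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_ind), np.array(row_ptr)

def crs_matvec(values, col_ind, row_ptr, x):
    # the sparse matrix-vector product that iterative solvers are built on
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_ind[k]]
    return y

A = np.array([[4.0, 0.0, 1.0], [0.0, 3.0, 0.0], [1.0, 0.0, 2.0]])
vals, cols, ptr = to_crs(A)
y = crs_matvec(vals, cols, ptr, np.array([1.0, 2.0, 3.0]))
```

The per-row loop structure is what makes the product easy to parallelize: on the GPU, one thread (or one warp) can process each row independently.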

In the frame of our GoodShape project, we study geometry processing problems with the specific point of view of computing an optimal function basis. To reach this goal, we explore different strategies, and revisit them with the formalism of numerical optimization. As a means of computing an efficient function basis, we study Centroidal Voronoi Tessellations and spectral methods, as described in the following two paragraphs. The so-computed function bases will be used as the fundamental tool for the new light simulation methods that we are trying to develop (see below).

Optimization techniques for faster Centroidal Voronoi Tessellation (CVT): CVT is an essential tool in many scientific fields that can be used to compute the optimal sampling of a given signal. In Figure , we show a CVT adapted to a background density function, computed by the algorithm mentioned below. For large-scale problems, the popular Lloyd relaxation is not fast enough to reach a local minimum, due to its linear convergence rate. Our previous work shows that a limited-memory quasi-Newton method (for instance, L-BFGS) is a better choice, which preserves the sparsity and simplicity of our CVT program. We published our efficient CVT algorithm in ACM Transactions on Graphics .
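For intuition, here is plain Lloyd relaxation in the simplest possible setting (1D, uniform density; a toy sketch, not our L-BFGS solver): each site moves to the centroid of its Voronoi cell, and the fixed point is the equispaced configuration.

```python
import numpy as np

# Lloyd relaxation for a 1D CVT on [0, 1] with uniform density.
# Voronoi cell boundaries are the midpoints between consecutive sites;
# with uniform density, each cell's centroid is the cell's midpoint.
def lloyd_1d(sites, iters=2000):
    s = np.sort(np.asarray(sites, dtype=float))
    for _ in range(iters):
        bounds = np.concatenate(([0.0], (s[:-1] + s[1:]) / 2.0, [1.0]))
        s = (bounds[:-1] + bounds[1:]) / 2.0   # move sites to cell centroids
    return s

# converges to the equispaced configuration (1/8, 3/8, 5/8, 7/8)
sites = lloyd_1d([0.01, 0.02, 0.5, 0.9])
```

Note how many iterations even this tiny instance needs: Lloyd's linear convergence is precisely the bottleneck that motivates replacing it with a quasi-Newton (L-BFGS) minimizer of the CVT energy.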

An important application of CVT is isotropic remeshing of 3D models. We developed an efficient algorithm to compute the restricted Voronoi diagram in 3D, i.e., the intersection between a 3D Voronoi diagram and a polygonal mesh embedded in 3D. Our algorithm uses two graph traversals in parallel . A symbolic encoding of vertex configurations allows for numerical optimization within the Newton framework. As a result, meshes of high quality (near-equilateral triangles) can be obtained. We also developed a generalization of the algorithm to sample a 3D volume with line segments and graphs. The main application is fitting a skeleton to a 3D model .

We are currently working on extending this framework in two different ways: (1) we introduce a new objective function (L_{p}-CVT) that approximates the metric. Minimizing this new objective function makes it possible to generate quad-dominant and hex-dominant meshes. (2) We study the problem of anisotropic remeshing from the point of view of embedding the Riemannian manifold defined by the domain to be meshed and its anisotropy into a higher-dimensional space, using Nash's embedding theorem, then meshing this higher-dimensional object isotropically, and finally re-projecting into 3D space.

We continued our research program, started in 2006, on spectral geometry processing methods. We developed a shell-based approach for mesh deformation and editing (Figure ). The approach can also take advantage of a modal analysis of the surface models and of a partitioning approach for efficiently solving the arising eigenvalue problem. This is joint work with Alexander Belyaev (Heriot-Watt University, Edinburgh, Scotland, UK), Jens Kerber and Art Tevs (MPI Informatik, Saarbrücken, Germany) . We also developed an intuitive artistic tool which allows compressing the depth range of a given scene without compromising the visual quality of surface features. The presented algorithm allows for real-time computation, thanks to our implementation on graphics hardware. Hence, besides the interactive design of still results, our method offers the possibility of generating animated bas-reliefs.
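The spectral viewpoint rests on the eigenfunctions of the Laplace operator. On a toy mesh (a closed polyline, i.e., a cycle graph; an illustrative sketch), the Laplacian eigenvectors are exactly the discrete Fourier modes, which is what makes them a natural low-frequency basis for smooth deformation and editing.

```python
import numpy as np

def cycle_laplacian(n):
    # combinatorial Laplacian of a closed polyline with n vertices
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[i, (i - 1) % n] -= 1.0
    return L

n = 16
eigvals, eigvecs = np.linalg.eigh(cycle_laplacian(n))
# eigvals[0] ~ 0 with a constant eigenvector; the following eigenvectors
# are sampled sines/cosines -- the "low frequencies" of the mesh
```

The eigenvalues here are 2 - 2 cos(2πk/n), so truncating to the smallest ones keeps only the smooth modes; modal analysis of a surface mesh generalizes exactly this construction.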

We presented a course on Spectral Mesh Processing at SIGGRAPH Asia .

Light simulation is a very active topic in the computer graphics community. In the frame of his Ph.D. (started in Oct. 2008), Vincent Nivoliers studies a dynamic basis formulation of the problem. Among the methods used to obtain satisfactory results, radiosity aims at finding an approximate solution to the general light equation problem. The formulation of this problem fits well into the dynamic function basis framework, which could be used to quickly find both a good sampling of the scene and the best approximation on this sampling. This method would avoid the use of discontinuity meshing, and provide a light solution without requiring hierarchical sampling. The problem of the illumination of a scene can be translated into an integral equation. The general solution of this equation cannot be computed in closed form; therefore, the usual method is to restrict the problem to a specific function space which both approximates the general L^{2} function space of the solution, and has a simple basis on which to project. Most approaches to the problem use hierarchical function bases, to refine the solution where needed and to compute large-scale interactions with fewer coefficients. In the dynamic function basis formalism, the function basis changes during the optimization step to fit the solution and enhance the accuracy of the approximation. We experimented with the Dynamic Function Basis framework in two different settings: for image approximation, and for sampling direct lighting in the presence of shadows. In this latter configuration, the results are encouraging. The samples are aligned along the direction of the gradient of illumination, and shadow boundaries are well captured.
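A toy 1D version of this adaptive-basis idea (a hypothetical sketch, not the actual light-simulation code): approximate a signal by piecewise-constant basis functions and let the breakpoints adapt by repeatedly splitting the segment with the largest residual error; for a signal with a sharp "shadow boundary", the breakpoints cluster around the discontinuity.

```python
import numpy as np

# Greedy adaptive refinement of a piecewise-constant basis: at each
# step, split the segment whose constant fit has the largest squared
# error. (Segments share their endpoint samples; fine for a sketch.)
def adaptive_segments(xs, f, n_segments):
    def sq_error(seg):
        a, b = seg
        m = (xs >= a) & (xs <= b)
        return float(np.sum((f[m] - f[m].mean()) ** 2)) if m.any() else 0.0
    segs = [(xs[0], xs[-1])]
    while len(segs) < n_segments:
        worst = max(segs, key=sq_error)
        a, b = worst
        segs.remove(worst)
        segs += [(a, (a + b) / 2.0), ((a + b) / 2.0, b)]
    return sorted(segs)

xs = np.linspace(0.0, 1.0, 201)
f = np.where(xs < 0.7, 0.0, 1.0)     # a hard "shadow boundary" at x = 0.7
segs = adaptive_segments(xs, f, 8)
```

With only eight segments, the breakpoints concentrate around x = 0.7 while the smooth regions keep coarse cells: the basis adapts to the solution instead of being fixed a priori.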

We developed new texturing tools (material space texturing) in cooperation with Greg Turk (Georgia Tech). Many objects have patterns that vary in appearance at different surface locations. We say that these are differences in materials, and we present a material-space approach for interactively designing such textures. At the heart of our approach is a new method to pre-calculate and use a 3D texture tile that is periodic in the spatial dimensions (s,t) and that also has a material axis along which the materials change smoothly. Given two textures and their feature masks, our algorithm produces such a tile in two steps. The first step resolves the feature morphing by a level-set advection approach, improved to ensure convergence. The second step performs the texture synthesis at each slice in material space, constrained by the morphed feature masks. With such tiles, our system lets a user interactively place and edit textures on a surface, and in particular allows the user to specify which material appears at given positions on the object. Additional operations include changing the scale and orientation of the texture. We support these operations by using a global surface parameterization that is closely related to quad re-meshing. Re-parameterization is performed on the fly whenever the user's constraints are modified. We published this result in Computer Graphics Forum .

Matthaus Chajdas was an intern within the REVES team from April to September 2009. He was supervised by Sylvain Lefebvre and worked on an algorithm to help modelers texture large virtual environments. Modelers typically manually select a texture from a database of materials for each and every surface of a virtual environment. Our algorithm automatically propagates user input throughout the entire environment as the user is applying textures to it. After choosing textures for only a small subset of the surfaces, the entire scene is textured. This work was accepted at I3D 2010.

Boundary Controlled IFS is a new layer of control over traditional IFS (Iterated Function Systems), allowing a wide variety of shapes to be created. Recently, we have demonstrated how subdivision surfaces with extraordinary points may be generated by means of Boundary Controlled Iterated Function Systems, as well as how we may go beyond the traditional subdivision schemes. Thanks to this method, any subdivision scheme may be analyzed, and it is even possible to construct new ones. Indeed, having programmed a single tool, it is possible to implement any subdivision scheme (3D as well as 2D) just by feeding the tool with simple text files representing control graphs.

We currently study the relations between L-systems and subdivision, with Cedric Gérot, Viktor Ostromoukhov and Nicolas Stuart. The objective of this research project is to use L-systems to define subdivision algorithms. Based on blossoming theory, it is possible to insert control points on a spline curve without changing its shape. Subdivision schemes are a particular case of this insertion, where many new control points are inserted at predefined positions. This enables the precomputation of the knot insertion algorithm to produce a subdivision mask. Irregular subdivision gets rid of the predefined-position constraint, but prevents the construction of the subdivision mask, therefore reducing the amount of precomputation. We use L-systems to guide the knot insertions, which enables the definition of non-uniform subdivision schemes while still being able to precompute a subdivision mask.
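The classic example of such a precomputed mask is Chaikin's corner-cutting scheme, which arises from knot insertion on uniform quadratic B-splines: each old edge yields two new control points with the fixed mask (3/4, 1/4) and (1/4, 3/4) (a standard textbook scheme, shown here as a sketch).

```python
import numpy as np

def chaikin(points, steps=1):
    # one subdivision step: every edge (p, q) produces the two points
    # 3/4 p + 1/4 q and 1/4 p + 3/4 q (the precomputed subdivision mask)
    p = np.asarray(points, dtype=float)
    for _ in range(steps):
        q = []
        for i in range(len(p) - 1):
            q.append(0.75 * p[i] + 0.25 * p[i + 1])
            q.append(0.25 * p[i] + 0.75 * p[i + 1])
        p = np.array(q)
    return p

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
refined = chaikin(ctrl, steps=3)
```

Because the insertion positions are predefined (uniform knots), the whole step reduces to this fixed mask; irregular or L-system-guided insertion is precisely what breaks this uniformity, which is what the project above addresses.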

We prove that the robber can evade (that is, stay at least unit distance from) at least  cops patrolling an n × n continuous square region, that a robber can always evade a single cop patrolling a square with side length 4 or larger, and that a single cop on patrol can always capture the robber in a square with side length smaller than  (with E.M. Reingold, submitted to Computational Geometry: Theory and Applications).

Given a set of n elements, each of which is colored one of c ≥ 2 colors, we have to determine an element of the plurality (most frequently occurring) color by pairwise equal/unequal color comparisons of elements. We derive lower bounds on the expected number of color comparisons when the c^n colorings are equally probable. We prove a general lower bound of [bound omitted in the source] for c ≥ 2, and stronger particular bounds of [bounds omitted in the source] for c = 3, c = 4, c = 5, c = 6, c = 7, and c = 8 (with E.M. Reingold).
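The comparison model can be illustrated by a naive strategy (our sketch, not the algorithm analyzed in the paper): the algorithm sees elements only through equal/unequal color queries, and we count the queries it spends.

```python
# Illustrative sketch of the comparison model: find an element of the
# plurality color using only pairwise equal/unequal color comparisons,
# and count the comparisons spent.

def plurality(colors):
    """Return (index of an element of the plurality color, comparisons used)."""
    comparisons = 0
    reps, counts = [], []   # one representative index per color class seen
    for i in range(len(colors)):
        for k, r in enumerate(reps):
            comparisons += 1                  # one equal/unequal query
            if colors[i] == colors[r]:
                counts[k] += 1
                break
        else:                                 # no match: new color class
            reps.append(i)
            counts.append(1)
    best = max(range(len(reps)), key=lambda k: counts[k])
    return reps[best], comparisons
```

This naive strategy may spend on the order of c·n comparisons; the lower bounds above quantify how much of that cost is unavoidable on average.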

The company Earth Decision Sciences (formerly T-Surf) develops and commercializes the Gocad modeler. Gocad is a 3D modeler dedicated to geosciences. The company was initially created as a start-up of the National School of Geology and of members of the ISA and ALICE project-teams. It now has 200 employees in seven countries (France, United States, Brazil, Dubai, Canada, ...). It was recently acquired by the Paradigm company.

The Scalable Graphics company was created in January 2007 by Xavier Cavin (an ALICE researcher on secondment). Its objective is to provide high-performance visualization solutions based on graphics PC clusters. The DViz software builds on industrialized results from the research on high-performance visualization done in ALICE.

The Gocad software is developed in the context of a consortium that comprises more than forty universities and thirty oil and gas companies around the world. This software is dedicated to modeling and visualizing the underground. ALICE studies the mathematical aspects of geo-modeling, and develops efficient numerical algorithms to solve the underlying optimization problems. The cooperation is formalized by several co-advised Ph.D. theses (Laurent Castanié, Luc Buatois, Thomas Viard) and by courses on numerical optimization given by ALICE researchers at the school of geology. Guillaume Caumon (head of the Gocad consortium) is an external collaborator of the ALICE project-team.

In the frame of the AOC program (Analysis, Optimization and Control) of the CPER ("Contrat de Plan État-Région Lorraine"), we participate in the "swimmer" action, coordinated by Marius Tucsnak (CORIDA project-team). The goal of this action is to simulate and visualize the complex fluid-solid interactions caused by a swimming fish. In 2009, we started to mutualize software development between ALICE and CORIDA, and developed common tools in the OpenNL library. The sparse matrix data structure from OpenNL is now used by CORIDA's MATLAB software. We are now working on improving the efficiency of the sparse matrix-vector product using parallel GPU implementations. The goal is to obtain efficient implementations of preconditioners for Navier-Stokes simulation.
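The kernel in question can be sketched as follows (a plain illustrative version, not the OpenNL or GPU code): a sparse matrix-vector product over a compressed sparse row (CSR) layout, a storage scheme that parallelizes naturally with one row per GPU thread.

```python
# Illustrative sketch: sparse matrix-vector product y = A @ x with A stored
# in CSR (compressed sparse row) form. The outer loop over rows is the part
# that maps to one GPU thread per row in a parallel implementation.

def csr_matvec(row_ptr, col_idx, values, x):
    """row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [1, 0, 3]] in CSR form:
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values  = [4.0, 1.0, 2.0, 1.0, 3.0]
print(csr_matvec(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 4.0]
```

Only the nonzeros are stored and touched, which is what makes this product the workhorse of iterative solvers and preconditioners such as those mentioned above.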

The ANR SIMILAR-CITIES project started in late January 2009. It is a joint project between INRIA, CSTB and the Allegorithmic company, on the theme of procedurally approximating urban textures.

In this project, the textures and images applied to geometric models are computed rather than stored. The goal is to provide procedural representations with low memory and authoring costs for massive urban environments (virtual cities), in order to increase immersion and realism during interactive exploration of such scenes.

The project is progressing well and the first work packages have been completed, including the creation by our partner Allegorithmic of a database of textures. Two meetings have been held: the kickoff meeting in Sophia-Antipolis and a second meeting in Clermont-Ferrand. A server was installed at INRIA Sophia-Antipolis by SEMIR for the project, and the intellectual property agreement, prepared by the legal department of INRIA, is currently being signed by all partners.

The project hired Anass Lasram as a Ph.D. student. Anass started in October in the ALICE team of INRIA Nancy (which Sylvain Lefebvre has joined). We are currently working on an algorithm for the fast synthesis of structural textures, with a particular focus on urban imagery. The main application is to give a more realistic appearance to the many anonymous buildings of large virtual cities, which are typically only crudely modeled. A key advantage of our approach is the compactness of the results. We are preparing a submission of this work to SIGGRAPH 2010. The project involves Sylvain Lefebvre, Samuel Hornus (ALICE/INRIA Nancy) and Anass Lasram (ALICE/INRIA Nancy).

The scientific objective of this proposal is to develop new deformation models in which the underlying mathematics (the basis functions) is adaptively learned from acquisition, and thus has an inherently clear physical meaning. In this way, the simulation stays on par with the real deformation behavior. To address this goal, the PhysiGrafix project consists of (1) systematic tracking and reconstruction of a coarse representation of a captured multi-view video deformation sequence; (2) problem reduction by encoding the physics in relevant deformation modes and eliminating irrelevant parameters (e.g. rigid body modes); and (3) adaptation to refined reconstructions, as well as to new footage of the same model or of similar models. This research is motivated by real-world applications, and in a broad scope touches upon disciplines such as virtual medicine, manufacturing and the feature film industry.
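Item (2) can be illustrated by a small sketch (hypothetical, not the PhysiGrafix code): deformation "basis functions" extracted as the dominant mode of a mean-centered snapshot matrix, here with a simple power iteration standing in for a full SVD.

```python
# Hypothetical sketch of model reduction from data: extract the principal
# deformation mode of a set of captured displacement snapshots. Power
# iteration on X^T X stands in for a full SVD; all names are illustrative.

def dominant_mode(snapshots, iters=200):
    """snapshots: list of displacement vectors (one per captured frame).
    Returns a unit vector approximating the dominant deformation mode."""
    dim = len(snapshots[0])
    # Mean-center to remove the average configuration (cf. eliminating
    # irrelevant parameters such as rigid modes, here only the translation).
    mean = [sum(s[j] for s in snapshots) / len(snapshots) for j in range(dim)]
    X = [[s[j] - mean[j] for j in range(dim)] for s in snapshots]
    v = [1.0] * dim
    for _ in range(iters):
        # v <- normalize(X^T (X v)) converges to the top right-singular vector.
        w = [sum(row[j] * v[j] for j in range(dim)) for row in X]        # X v
        v = [sum(w[i] * X[i][j] for i in range(len(X))) for j in range(dim)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return v
```

Keeping only the few dominant modes reduces the simulation to a handful of physically meaningful parameters, which is the reduction step described above.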

We work in cooperation with the Gocad group. The Ph.D. theses of N. Cherpeau, R. Merland and T. Viard are co-advised by ENSG/Gocad (Nancy School of Geology) and ALICE. The goals are to develop new tools to visualize uncertainties (T. Viard), a modeling framework for complex geological objects with faults (N. Cherpeau), and 3D meshing tools for flow simulation (R. Merland).

L. Alonso is secretary of the national AGOS association of INRIA.

Project GoodShape (Numerical Geometric Abstraction: from bits to equations), funded by the European Research Council, involves several fundamental aspects of 3D modeling and computer graphics. GOODSHAPE takes a new approach to the classic, essential problem of sampling, i.e. the digital representation of objects in a computer. This new approach simultaneously considers the problem of approximating the solution of a partial differential equation and the optimal sampling problem. Based on the theory of numerical optimization, the proposed approach is likely to lead to new algorithms that are more efficient than existing methods. Applications are envisioned in reverse engineering and oil exploration.

In the frame of the GOODSHAPE project, we cooperate with Hong Kong University on Centroidal Voronoi Tessellations and their applications. Researchers and students from Nancy and Hong Kong visit each other on a regular basis.
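The core iteration behind Centroidal Voronoi Tessellations can be sketched in one dimension (an illustrative toy, not the project's solver): Lloyd's algorithm repeatedly moves each site to the centroid of its Voronoi cell, and a CVT is a fixed point of this process.

```python
# Illustrative 1D sketch of Lloyd's algorithm for Centroidal Voronoi
# Tessellations: assign samples to their nearest site (Voronoi cells),
# then move each site to its cell's centroid, and repeat.

def lloyd_1d(sites, samples, iters=50):
    """Relax 'sites' toward a CVT of the point set 'samples'."""
    for _ in range(iters):
        cells = [[] for _ in sites]
        for x in samples:
            nearest = min(range(len(sites)), key=lambda i: abs(x - sites[i]))
            cells[nearest].append(x)
        # Centroid update; an empty cell keeps its site unchanged.
        sites = [sum(c) / len(c) if c else s for c, s in zip(cells, sites)]
    return sorted(sites)

samples = [i / 1000.0 for i in range(1001)]   # dense uniform sampling of [0, 1]
print(lloyd_1d([0.01, 0.02, 0.9], samples))   # relaxes toward 1/6, 1/2, 5/6
```

For a uniform density on [0, 1] with three sites, the CVT is the evenly spread configuration 1/6, 1/2, 5/6, which is the uniform-sampling behavior exploited in the applications above.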

Sylvain Lefebvre started a collaboration with Gustavo Patow (researcher) and Ismael Garcia (Ph.D. student) of Girona University, Spain, on the topic of dynamic TileTrees, a method to progressively texture objects as they appear or are modified on screen. Ismael Garcia is currently visiting the ALICE team at INRIA Nancy, which S. Lefebvre has joined.

Roman Luchin, Associate Professor, St. Petersburg State University, 1-3 June 2009

Malcolm Sabin, Numerical Geometry Ltd (UK) 19-20 October 2009

Gustavo Patow, Girona University, 16-17 November 2009

D. Sokolov teaches "Modèles de perception et raisonnement" (M1), "Infographie" (M1), "Géométrie et représentation dans l'espace" (L2+L3), "Logique et modèles de calcul" (M1), "Synthèse d'images 3D" (M1), and "Unité Bureautique et Communication électronique" (L1).

V. Nivoliers teaches AP2 (algorithmics and programming; UHP, L1): basics of algorithmics and programming in OCaml, practicals; and MI1 (mathematics for computer science; ESIAL, first year): introduction to recursion and induction, boolean functions, and the basics of language theory.

B. Lévy teaches "Modélisation géométrique" (M2), "Visualisation" (M2), and "Numerical Algorithms" (ENSG, School of Geology, INPL).

S. Lefebvre served on the Eurographics Rendering Symposium papers committee and on the Symposium on Interactive 3D Graphics and Games committee.

B. Lévy was member of the program committee of Eurographics, IEEE SMI, SIAM/ACM GPM, ACM/EG SGP, IEEE VIS.

Members of the team attended ACM SIGGRAPH, EUROGRAPHICS, ACM/EG SGP.

Sylvain Lefebvre gave invited talks at the 2009 Italian EUROGRAPHICS chapter (EG-IT), the Mathematics and Image Analysis 2009 workshop (MIA) and the Paris chapter of ACM SIGGRAPH (15/12/2009).

Sylvain Lefebvre presented the Gabor noise work at a seminar at INRIA Grenoble (June 8, 2009) and during a talk at the GT-Rendu on October 2nd, 2009. He also presented his research program at the CP of INRIA Nancy Grand-Est on May 4, 2009.

Vincent Nivoliers presented our software Graphite at RMLL 2009 (International Open Source Software Meeting, Nantes, 7-11 July 2009);

Vincent Nivoliers participated in "A Day with a Scientist" (28 January, 31 March), and organized visits for a group of young students;

Vincent Nivoliers ran several demos and gave several talks during the Science Fair (18-22 November 2009).