ALICE is a project in Computer Graphics. The fundamental aspects of this domain concern the interaction of *light* with the *geometry* of the objects. The lighting problem consists in designing accurate and efficient *numerical simulation* methods for the light transport equation. The geometrical problem consists in developing new solutions to *transform and optimize geometric representations*. Our original approach to both issues is to restate the problems in terms of *numerical optimization*. We try to develop solutions that are *provably correct*, *numerically stable* and *scalable*.

By provably correct, we mean that some properties/invariants of the initial object need to be preserved by our solutions.

By numerically stable, we mean that our solutions need to be resistant to the degeneracies often encountered in industrial data sets.

By scalable, we mean that our solutions need to be applicable to data sets of industrial size.

To reach these goals, our approach consists in transforming the physical or geometric problem into a numerical optimization problem, studying the properties of the objective function and designing efficient minimization algorithms.
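As a minimal illustration of this philosophy (a toy sketch, not one of the project's actual solvers): smoothing a noisy 1-D polyline can be restated as minimizing the quadratic objective E(x) = ||x - d||^2 + lam * sum_i (x_{i+1} - x_i)^2, whose minimizer is obtained by a single direct tridiagonal solve instead of repeated filtering passes. All names below are illustrative.

```python
import random

# Restating smoothing as optimization: minimize
#   E(x) = sum_i (x_i - d_i)^2 + lam * sum_i (x_{i+1} - x_i)^2
# Setting dE/dx = 0 gives the tridiagonal system (I + lam * L) x = d,
# where L is the graph Laplacian of a path; we solve it directly
# with the Thomas algorithm.

def smooth(d, lam):
    n = len(d)
    # Diagonal of I + lam * L (endpoints have a single neighbor).
    diag = [1.0 + lam * (1.0 if i in (0, n - 1) else 2.0) for i in range(n)]
    off = -lam  # constant sub- and super-diagonal
    # Forward elimination.
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = off / diag[0]
    dp[0] = d[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * cp[i - 1]
        cp[i] = off / m
        dp[i] = (d[i] - off * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def variation(x):
    # Total squared variation: the smoothness term of E.
    return sum((a - b) ** 2 for a, b in zip(x[1:], x[:-1]))

rng = random.Random(0)
noisy = [i / 99.0 + rng.uniform(-0.1, 0.1) for i in range(100)]
smoothed = smooth(noisy, lam=10.0)
print(variation(noisy), variation(smoothed))  # the second is much smaller
```

The point of the reformulation is visible in the code: the minimizer is characterized by one linear system, so its properties (existence, uniqueness) follow from the matrix being symmetric positive definite.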

The main applications of our results concern Scientific Visualization. We develop cooperations with researchers and industrial partners, who experiment with applications of our general solutions in various domains, including CAD, industrial design, oil exploration and plasma physics. Our solutions are distributed in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Computer Graphics is a quickly evolving domain of research. In the last few years, both acquisition techniques (e.g. range laser scanners) and computer graphics hardware (the so-called GPUs, for Graphics Processing Units) have made considerable advances. However, as shown in Figure , despite these advances, fundamental problems still remain open. For instance, a scanned mesh composed of 30 million triangles cannot be used directly in real-time visualization or complex numerical simulation. To design efficient solutions for these difficult problems, ALICE studies two fundamental issues in Computer Graphics:

the representation of the objects, i.e. their geometry and physical properties;

the interaction between these objects and light.

Historically, these two issues have been studied by independent research communities, in isolation. However, we think that they share a common theoretical basis. For instance, multi-resolution and wavelets were mathematical tools used by both communities. We develop a new approach that consists in studying the geometry and lighting from the *numerical analysis* point of view. In our approach, Geometry Processing and Light Simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. Our long-term research goal is to find a formulation that permits a unified treatment of geometry and illumination over this geometry.

Geometry Processing recently appeared (in the middle of the 90's) as a promising avenue to solve the geometric modeling problems encountered when manipulating meshes composed of millions of elements. Since a mesh may be considered to be a *sampling* of a surface - in other words a *signal* - the *digital signal processing* formalism was a natural theoretical background for this discipline (see e.g. ). The discipline then studied different aspects of this formalism applied to geometric modeling.

Although many advances have been made in the Geometry Processing area, important problems still remain open. Even if shape acquisition and filtering is much easier than 30 years ago, a scanned mesh composed of 30 million triangles cannot be used directly in real-time visualization or complex numerical simulation. For this reason, automatic methods to convert those large meshes into higher-level representations are necessary. However, these automatic methods do not exist yet. For instance, the pioneer Henri Gouraud often mentions in his talks that the *data acquisition* problem is still open. Malcolm Sabin, another pioneer of the ``Computer Aided Geometric Design'' and ``Subdivision'' approaches, mentioned during several conferences of the domain that constructing the optimum control mesh of a subdivision surface so as to approximate a given surface is still an open problem. More generally, converting a mesh model into a higher-level representation, consisting of a set of equations, is a difficult problem for which no satisfying solution has been proposed. This is one of the long-term goals of international initiatives, such as the AIMShape European Network of Excellence.

Motivated by gridding applications for finite element modeling for oil and gas exploration, in the frame of the Gocad project, we started studying Geometry Processing in the late 90's and contributed to this area at the early stages of its development. We then developed new algorithms to add interactivity to the method. To improve both the robustness and the flexibility of the method, we then studied a new algorithm to minimize the conformal energy, based on the Cauchy-Riemann equation. As a result, we developed the LSCM method (Least Squares Conformal Maps) in cooperation with Alias Wavefront. We experimented with various applications of the method, including normal mapping, mesh completion, fairing and light simulation.

Numerical simulation of light means solving for light intensity in the ``Rendering Equation'', an integral equation modeling energy transfers (or light *intensity* transfers). The Rendering Equation was first formalized by Kajiya, and is given by:
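In its standard (Kajiya, 1986) form, the equation reads

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, \cos\theta_i \, d\omega_i

where L_o is the outgoing radiance at point x in direction \omega_o, L_e the emitted radiance, f_r the bidirectional reflectance distribution function (BRDF), L_i the incoming radiance, \theta_i the angle between \omega_i and the surface normal, and the integral runs over the hemisphere \Omega above x.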

Computing global illumination (i.e., solving for intensity in Equation ) in general environments is a challenging task. Global illumination may be considered in terms of computing the interactions between the *lighting signal* and the *geometric signal* (i.e., the scene). These interactions occur at various *scales*. This issue belongs to the same class of problems encountered by Geometry Processing, described in the previous section. As a consequence, the *signal processing* family of approaches is again a well-suited formalism. As such, the *multi-scale* approach is a natural choice, which dramatically improves performance. Environments composed of a large number of primitives, such as highly tessellated models, show a high variability of these scales (see Figure ).

In addition, these methods are challenged with more and more complex materials that need to be taken into account in the simulation. The simple diffuse Lambert law has been replaced with much more complex reflection models. The goal is to create synthetic images that no longer have a synthetic aspect, in particular when human characters are considered.

One of the difficulties is finding efficient ways of evaluating the visibility term. This is typically a Computational Geometry problem, i.e., a matter of finding the right combinatorial data structure (the *visibility complex*), studying its complexity and deriving algorithms to construct it. To deal with this issue, several teams (including VEGAS, ARTIS and REVES) study the visibility complex.

The other terms of the Rendering Equation cannot be solved analytically in general. Many different numerical resolution methods have been used. The main difficulty of the discipline is that each time a new physical effect should be simulated, the numerical resolution methods need to be adapted. In the worst case, it is even necessary to design a new ad-hoc numerical resolution method. For instance, in Monte-Carlo based solvers, several sampling maps are used, one for each effect (a map is used for the diffuse part of lighting, another map is used for caustics, etc.). As a consequence, the discipline becomes a collection of (sometimes mutually exclusive) techniques, where each of these techniques can only simulate a specific lighting effect.
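As a toy illustration of why each effect tends to get its own sampling map (a hedged sketch, not the project's solver): estimating the hemispherical cosine integral, whose exact value is pi, with a generic uniform sampler versus a sampler tailored to the integrand. The function names and sample counts are illustrative only.

```python
import math
import random

# Estimate I = integral over the hemisphere of cos(theta) d_omega = pi
# with two Monte-Carlo strategies.

def uniform_hemisphere_estimate(n, rng):
    # Uniform sampling: pdf = 1 / (2*pi); estimator = cos(theta) / pdf.
    # For a uniform direction on the hemisphere, cos(theta) is uniform on [0, 1].
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()
        total += cos_theta * 2.0 * math.pi
    return total / n

def cosine_weighted_estimate(n, rng):
    # Importance sampling tailored to the integrand: pdf = cos(theta) / pi,
    # so each sample contributes cos(theta) / pdf = pi exactly (zero variance).
    total = 0.0
    for _ in range(n):
        cos_theta = math.sqrt(rng.random())
        total += cos_theta / (cos_theta / math.pi)
    return total / n

rng = random.Random(0)
est_uniform = uniform_hemisphere_estimate(100_000, rng)
est_cosine = cosine_weighted_estimate(1_000, rng)
print(est_uniform, est_cosine)  # both close to pi
```

The tailored sampler is dramatically better here, but it is specific to this one integrand; a caustic or a glossy reflection needs a different map, which is exactly the fragmentation described above.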

The other difficulty is to satisfy two somewhat antinomic objectives at the same time. On the one hand, we want to simulate complex physical phenomena (subsurface scattering, polarization, interferences, etc.), responsible for subtle lighting effects. On the other hand, we want to visualize the result of the simulation in real-time.

We first experimented with finite-element methods in parameter space, and developed the *Virtual Mesh* approach and a parallel solution mechanism for the associated hierarchical finite element formulation. The initial method was dedicated to scenes composed of quadrics. We combined this method with our Geometry Processing methods to improve the visualization.

One of our goals is now to design new representations of lighting coupled with the geometric representation. These representations of lighting need to be general enough so as to be easily extended when multiple physical phenomena should be simulated. Moreover, we want to be able to use these representations of lighting in the frame of real-time visualization. Our original approach to these problems consists in finding efficient function bases to represent the geometry and the physical attributes of the objects.

Early approaches to Geometry Processing and Light Simulation were driven by a Signal Processing approach. In other words, the solution of the problem is obtained after applying a *filtering scheme* multiple times. This is for instance the case of the mesh smoothing operator defined by Taubin in his pioneering work. Recent approaches still inherit from this background. Even if the general trend moves to Numerical Analysis, much work in Geometry Processing still studies the coefficients of the gradient of the objective function *one by one*. This intrinsically refers to *descent* methods (e.g. Gauss-Seidel), which are not the most efficient, and do not converge in general when applied to meshes larger than a certain size (30,000 facets).
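The contrast can be sketched on a miniature of such systems (a hypothetical toy, not the team's code): a coefficient-by-coefficient Gauss-Seidel relaxation versus a direct solve of the same 1-D Laplacian system A x = b, with A = tridiag(-1, 2, -1).

```python
# Gauss-Seidel relaxation vs. a direct (Thomas) solve on A x = b,
# where A = tridiag(-1, 2, -1): the relaxation stalls on smooth error
# modes while the direct solve reaches machine precision at once.

def residual_norm(x, b):
    n = len(x)
    s = 0.0
    for i in range(n):
        ax = 2.0 * x[i]
        if i > 0:
            ax -= x[i - 1]
        if i < n - 1:
            ax -= x[i + 1]
        s += (b[i] - ax) ** 2
    return s ** 0.5

def gauss_seidel(b, sweeps):
    # Update each coefficient in turn, using the latest neighbor values.
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = b[i]
            if i > 0:
                s += x[i - 1]
            if i < n - 1:
                s += x[i + 1]
            x[i] = s / 2.0
    return x

def direct_tridiagonal(b):
    # Thomas algorithm specialized to tridiag(-1, 2, -1).
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = -0.5
    dp[0] = b[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (b[i] + dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

b = [1.0] * 50
gs = gauss_seidel(b, 100)
exact = direct_tridiagonal(b)
print(residual_norm(gs, b), residual_norm(exact, b))
```

Even on this tiny 50-unknown system, 100 Gauss-Seidel sweeps leave a visible residual; at mesh scale (millions of unknowns) the gap widens accordingly.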

In the approach that we develop in the ALICE project, Geometry Processing and Light Simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. As a consequence, studying the properties of the minimum is easier: the minimizer of a multivariate function can be more easily characterized than the limit of multiple applications of a smoothing operator. This simple remark makes it possible to derive properties (existence and uniqueness of the minimum, injectivity of a parameterization, and independence from the mesh).

Besides helping to characterize the solution, restating the geometric problem as a numerical optimization problem has another benefit. It makes it possible to design efficient numerical optimization methods, instead of the iterative relaxations used in classic methods.

Richard Feynman (Nobel Prize in physics) mentions in his lectures that physical models are a ``smoothed'' version of reality. The global behavior and interaction of multiple particles are captured by physical entities of a larger scale. According to Feynman, the striking similarities between equations governing various physical phenomena (e.g. Navier-Stokes in fluid dynamics and Maxwell in electromagnetism) are an illusion that comes from the way the phenomena are modeled and represented by ``smoothed'' larger-scale values (i.e., *fluxes* in the case of fluids and electromagnetism). Note that those larger-scale values do not necessarily directly correspond to a physical intuition; they can reside in a more abstract ``computational'' space. For instance, representing lighting by the coefficients of a finite element is a first step in this direction. More generally, our approach consists in trying to get rid of the limits imposed by the classic view of the existing solution mechanisms, which come from a physical intuition. Instead of trying to mimic the physical process, we try to restate the problem as an abstract numerical computation problem, to which more sophisticated methods can be applied (a plane flies like a bird, but it does not flap its wings). We try to consider the problem from a computational point of view, and focus on the link between the numerical simulation process and the properties of the solution of the Rendering Equation. Note also that the numerical computation problems yielded by our approach reside in a high-dimensional space (millions of variables). To ensure that our solutions scale up to scientific and industrial data from the real world, we develop algorithmic, software and hardware architectures, and distribute these solutions in both open-source software (Graphite) and industrial software (Gocad).

Besides developing new solutions for Geometry Processing and Numerical Light Simulation, we aim at applying those solutions to real-size scientific and industrial problems. In this context, Scientific Visualization is our main application domain. With the advances in acquisition techniques, the size of the data sets to be processed increases faster than Moore's law, and represents a scientific and technical challenge. To ensure that our processing and visualization algorithms scale up, we develop a combination of algorithmic, software and hardware architectures. These developments started initially with our implementation of a parallel solver for the radiosity equation on the multi-processor Origin 3000 and the associated visualization solutions. With the evolution of computer hardware, clusters have appeared as an interesting alternative to multi-processor mainframes. We recently developed a visualization cluster based on this type of architecture.

These developments permit our Geometry Processing and Light Simulation solutions to scale up, and handle real-scale data from other research and industry domains. The following applications are developed within the CRVHP Program (Calcul Réseau Visualisation Haute Performance - High Performance Computing, Network, and Visualization). This program includes more than twenty Research Institutions and Industrial Companies and is supported by the ``Contrat de Plan État-Région Lorraine''.

This application domain is led by the Gocad consortium, created by Prof. Mallet. The consortium involves 48 universities and most of the major oil and gas companies. ALICE contributes to Gocad with numerical geometry and visualization algorithms for oil and gas engineering. The currently explored domains are the construction of complex and dynamic structural models, the exploration of extremely large seismic volumes, and drilling evaluation and planning. The solutions that we develop are transferred to the industry with Earth Decision Sciences.

The computation of turbulent thermal diffusivities in fusion plasmas is of prime importance since the energy confinement time is determined by these transport coefficients. An original approach is developed to study trapped ion instability. A Vlasov code is used to determine the behavior of the instability near the threshold and compare with analytical solutions of the Vlasov equation. Some interesting features which appear in the nonlinear regime are explored thanks to a specialized module of the Graphite library (in cooperation with LPMI-CNRS and CEA).

Protein docking is a fundamental biological process that links two proteins. This link is typically defined by an interaction between two large zones of the protein boundaries. Visualizing such an interface is useful to understand the process thanks to 3D protein structures, to estimate the quality of docking simulation results, and to classify interactions in order to predict docking affinity between classes of interacting zones. Our developments take place in the VMD software (in cooperation with eDAM and the Beckmann Institute at the University of Illinois).

Computed images and immersive visualization systems are used to design and evaluate virtual products in the aircraft and car industry. In this application, the CAD models used are extremely large and the images have to be computed by an accurate physically-based simulation process. Other developments are being experimented with in the car industry, on an application of distant visualization and immersive virtual reality (in cooperation with Renault).

Graphite is a research platform for computer graphics, 3D modeling and numerical geometry. It comprises all the main research results of our ``Geometry Processing'' group. Data structures for cellular complexes, parameterization, multi-resolution analysis and numerical optimization are the main features of the software. Graphite has been publicly available since October 2003, and is now used by researchers from Geometrica (INRIA Sophia Antipolis), Artis (INRIA Grenoble), LSIIT (Strasbourg), Technion (Israel), Stanford University (United States), Harvard University (United States), University of British Columbia (Canada), and MIT (United States). Graphite is one of the common software platforms used in the frame of the European Network of Excellence AIMShape.

DVIZ is a library dedicated to distributed visualization. The development of DViz started in September 2002 and serves as the basis for some of our work in Scientific Visualization. It allows applications to run on graphics clusters with optimal parallel performance. A startup headed by Xavier Cavin starts in January 2007 to transfer these results to the industry.

Intersurf is a plugin of the VMD (Visual Molecular Dynamics) software. VMD is developed by the Theoretical and Computational Biophysics Group at the Beckmann Institute at the University of Illinois. The Intersurf plugin has been released with the official version of VMD since the 1.8.3 release. It provides surfaces representing the interaction between two groups of atoms, and colors can be added to represent interaction forces between these groups of atoms.

Gocad is a 3D modeler dedicated to geosciences. It was developed by a consortium headed by Jean-Laurent Mallet, in the Nancy School of Geology. Gocad is now commercialized by Earth Decision Sciences (formerly T-Surf), a company which was initially a start-up of the project. Gocad is used by all major oil companies (Total-Fina-Elf, ChevronTexaco, Petrobras, etc.), and has become a de facto standard in geo-modeling. Laurent Castanié's work (CIFRE Earth Decision Sciences) was successfully integrated in the VolumeExplorer plugin of Gocad.

Candela is a library dedicated to light simulation. Candela-VR is an extension of Candela which makes it possible to display the result of a simulation in environments with several CPUs and GPUs.

We continue our research program on numerical optimization with dynamic function bases that we proposed last year. We consider the problem of the numerical approximation of the solutions of Partial Differential Equations (or integro-differential equations in the case of global illumination). In its most general form, this class of problems can be expressed by the equation Lf = g, where L is a linear operator, f is the unknown function, and the function g is the right-hand side. The classic Finite Element formulation (Galerkin) projects this equation onto a linear function basis. In our setting, the approximation of the solution is also represented in a function basis, but all those basis functions depend on an additional *unknown* vector of parameters p. For instance, suppose that the function f is a bivariate piecewise linear function, defined on the faces of a Delaunay triangulation. This setting corresponds to exactly one basis function per vertex k of the triangulation, and the vector of parameters p then corresponds to all the coordinates (x_{k}, y_{k}) at all the vertices of the triangulation. The function f is then given by a linear combination of these basis functions. We will first explore the problem of minimizing the residual F = ||Lf - g||^2:

The main difficulty comes from the non-linear dependencies introduced by the additional vector of parameters p. The other difficulty is that in the general case, the expression of the energy functional F depends on the value of the parameters p (F is piecewise defined). To compute the fixed points of F, we have designed a general framework, based on Newton's algorithm:
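As a purely illustrative sketch of the Newton step at the heart of such a framework (the real objective F couples basis coefficients with vertex positions; here a scalar toy objective stands in for it, and all names are hypothetical):

```python
import math

# Toy stand-in for the energy functional F(p): a smooth non-convex scalar
# objective, F(p) = (p^2 - 2)^2, whose minimizers are p = +/- sqrt(2).
# The framework applies the same iteration to the full parameter vector,
# with the gradient and Hessian of the piecewise-defined functional.

def grad(p):
    # F'(p) = 4 p (p^2 - 2)
    return 4.0 * p * (p * p - 2.0)

def hess(p):
    # F''(p) = 12 p^2 - 8
    return 12.0 * p * p - 8.0

def newton_minimize(p, iters=20):
    # Newton's method on the gradient: solve F'(p) = 0.
    for _ in range(iters):
        p = p - grad(p) / hess(p)
    return p

p_star = newton_minimize(1.5)
print(p_star)  # converges quadratically to sqrt(2) ~ 1.4142135
```

In the full framework, each iteration additionally re-evaluates the piecewise definition of F (e.g. the Delaunay connectivity) before taking the Newton step.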

In practice, we will instantiate this general framework into different algorithms. We can now imagine a research program to instantiate our general framework in increasingly complex settings, which we have started studying in cooperation with our partners in AIMShape and the ARC Georep:

1D: Laplace equation, Poisson equation;

1D + t: Heat equation;

2D, L = Id: a function approximation problem. Applications presented below concern image vectorization;

2D + t: fluid simulation with Navier-Stokes;

3D, L = Id: Mesh-to-Spline conversion; this result is presented below;

3D: light simulation; optimizing for the parameters p corresponds to a *numerical approach* to computing the visibility complex;

3D + t: dynamic simulations.

The following two subsections show our new results obtained in year 2006 in the case (2D, L = Id), which corresponds to image vectorization, and in the case (3D, L = Id), which corresponds to Mesh-to-Spline conversion.

We have first considered the case (2D, L = Id), which corresponds to an image approximation problem. Based on this formalism, we have developed a new method to convert a bitmap image into a vector image, which we published in the Eurographics Symposium on Rendering. The method is illustrated in Figure . The algorithm first computes from the initial image a saliency image (Figure -B) using the ``compass'' filter. Then, a triangulation adapted to this saliency image is generated (Figure -C), using our highly efficient generic rasterizer algorithm. From this triangulation, homogeneous zones are detected, using a clustering algorithm steered by a robust estimator. The boundaries of these regions are then approximated by cubic Splines. The final result is shown in Figure -D.

We have worked on the problem of automatically converting a triangulated mesh surface into a Spline surface. This problem setting corresponds to the case (3D, L = Id) in the list above. Wan-Chiu Li defended his Ph.D. thesis on this topic. We have published our Periodic Global Parameterization algorithm in ACM Transactions on Graphics and proposed an application to the automatic conversion of triangulated surfaces into Spline surfaces, which we presented at the Eurographics/ACM Symposium on Geometry Processing. The method is illustrated in Figure . Figure -A shows the initial triangulated surface. We first compute a global smooth parameterization of this surface using our Periodic Global Parameterization algorithm, as shown in Figure -B. From this parameterization, we extract an initial control mesh (Figure -C) by contouring the parameters. Then, validity constraints dictated by the target Spline representation are enforced in the N-sided polygons of the control mesh (Figure -D). Finally, standard regularized least-squares fitting is applied to the parameterized control mesh. Figure -E shows the final result, displayed in the T-Spline plugin of the Maya software.

Another possibility to find efficient function bases is to compute the eigenfunctions of a symmetric operator. For instance, one may consider the eigenfunctions of the Laplace operator (or of Laplace-Beltrami, its generalization to curved surfaces). This was used in to compute surface quadrangulations. In our case, we plan to use this approach with a continuous-setting formulation to define an efficient function basis. The eigenfunctions are defined by Δφ = λφ, where Δ denotes the Laplace operator, φ an eigenfunction and λ the associated eigenvalue. Since Δ is symmetric, its eigenfunctions are orthogonal, i.e. two eigenfunctions φ_{1}, φ_{2} associated with distinct eigenvalues λ_{1}, λ_{2} respectively satisfy <φ_{1}, φ_{2}> = 0, where <., .> denotes the L^2 inner product over the surface.

We presented in our invited talk at IEEE Shape Modeling International a review of spectral methods in geometry processing, together with a method to compute these eigenfunctions (that we call *manifold harmonics*) for small objects, and some ideas of possible applications. Our method is based on a FEM (Finite Element Modeling) approach, applied to the P1 function basis defined on the mesh. To simplify the computations, we use the classical result <Δf, g> = -<∇f, ∇g> (for a closed integration domain). We currently study the link between our formulation and exterior calculus. We also design new algorithms to efficiently solve the numerical problem, based on spectral transforms and multiresolution.
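A hedged discrete analogue of these harmonics (a toy check, not the team's FEM solver): on the simplest closed manifold, a cycle of n vertices, the eigenfunctions of the combinatorial Laplacian are exactly the sampled Fourier modes, and both the eigenfunction property and the orthogonality of distinct modes can be verified directly.

```python
import math

# Discrete manifold harmonics on a cycle with n vertices: the Laplacian
#   (L v)[j] = 2 v[j] - v[j-1] - v[j+1]   (indices mod n)
# has sampled Fourier modes as eigenfunctions, with
#   lambda_k = 2 - 2 cos(2 pi k / n).

n = 16

def laplacian(v):
    return [2.0 * v[j] - v[j - 1] - v[(j + 1) % n] for j in range(n)]

def mode(k):
    return [math.cos(2.0 * math.pi * k * j / n) for j in range(n)]

def eigenvalue(k):
    return 2.0 - 2.0 * math.cos(2.0 * math.pi * k / n)

v1, v2 = mode(1), mode(2)

# Eigenfunction property: L v = lambda v, checked entrywise.
err1 = max(abs(lv - eigenvalue(1) * x) for lv, x in zip(laplacian(v1), v1))
# Orthogonality of eigenfunctions with distinct eigenvalues.
dot12 = sum(a * b for a, b in zip(v1, v2))
print(err1, dot12)  # both ~ 0
```

On the cycle these modes recover the discrete Fourier basis; the continuous formulation above plays the same role on an arbitrary curved surface.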

Last year, we developed FEM-based light simulations for large tessellated models, based on a combination of the results of the former ISA project and our Geometry Processing tools. However, the underlying FEM radiosity method suffers from two major limitations:

it is limited to diffuse phenomena, and cannot account for subtle shading effects (e.g. complex Bidirectional Reflectance Distribution Functions, caustics, subsurface scattering, etc.);

it requires a perfectly ``clean'' representation of the geometry. For instance, applying it to a tessellated CAD model requires a great deal of model preparation (i.e. hole filling, elimination of duplicated surfaces, re-meshing, etc.)

To overcome both limitations, our strategy is now to follow the current trend of the discipline, by considering other families of methods, that can make use of the computational power offered by graphics hardware (GPUs). Our originality is to apply the Geometry Processing formalism to the physical attributes involved in light simulation. More specifically, we think that finding efficient function bases to represent those attributes is of paramount importance, for both the offline simulation process and real-time rendering.

For instance, the eigenfunctions of the Laplace-Beltrami operator on a surface, mentioned in Section (the *spectral methods* subsection), define a Fourier basis on this surface that can be used to represent any smooth function defined over it. Applying this approach to the unit circle recovers the classical Fourier basis. When considering a unit square, this defines the cosine transform, used for instance in the JPEG standard. On a sphere, this defines spherical harmonics, widely used in lighting computations. For an arbitrary manifold, this defines a similar function basis, well adapted to the surface. For this reason, we refer to this function basis as *manifold harmonics*. We show in Figure -A,B,C some signal-processing transforms applied to the normals of the surface, and the resulting lighting. A low-pass filter ( -B) simulates subsurface scattering, whereas a high-pass filter ( -C) augments the details of the shading. Our signal-processing approach to shading makes it possible to express a form of subsurface scattering and exaggerated shading in a unified framework.

In the frame of the ARC Georep, we develop efficient visualization methods for finely tessellated models, in cooperation with X. Decoret and L. Baboud (ARTIS). Figure shows the idea of the method: the tessellated model is first transformed into a set of height fields (Figure -A), which can be efficiently displayed by the ray-tracing algorithm developed by ARTIS (Figure -B).

We also continue to develop light simulation and rendering methods for CAD models, in the frame of our cooperation with Renault. For instance, we developed some tools to efficiently display soft shadows (based on state-of-the-art real-time rendering technology), as shown in Figure -Right.

Sort-last parallel rendering is an efficient technique to visualize huge datasets on COTS clusters. The dataset is subdivided and distributed across the cluster nodes. For every frame, each node renders a full resolution image of its data using its local GPU, and the images are composited together using a parallel image compositing algorithm. We developed a highly efficient hardware and software architecture last year. We further analyzed and optimized our system in the article that we presented at the Eurographics Symposium on Parallel Graphics and Visualization. We now achieve interactive visualization (20 frames per second) for gigantic polygonal meshes, such as the Boeing model (350 million primitives), shown in Figure -Left. Our visualization cluster infrastructure can also be used to drive extreme high-definition display devices, such as the Reality Center available at INRIA Lorraine, or the new immersive system that we currently develop in the frame of the current and next CPER (Contrat de Plan Etat Region) (see also Section ). Figure -Right shows a photograph of a 1:1 mockup of the system, which we built in our preliminary studies.

Another issue that we tackled is the visualization of gigantic structured grids (several hundreds of Gigabytes), such as those used in oil exploration. To deal with this volume of data, we developed a distributed cache system on a PC cluster combined with our DVIZ software. Our approach keeps each node informed of the global state of the system using an all-to-all communication method overlapped with the rendering and compositing steps. We presented this approach at the IEEE Visualization Conference (special issue of the IEEE Transactions on Visualization and Computer Graphics). Using our system, it is possible to interactively explore a volume of 200 Gigabytes with a probe of 1 Gigabyte. This was one of the topics of Laurent Castanié's Ph.D., done in cooperation with the Gocad consortium and the Earth Decision Sciences/Paradigm company.

Isosurfaces are an efficient way of exploring 3D scalar fields. Intuitively, they correspond to the 3D version of the iso-pressure curves displayed in meteorological maps. For structured hexahedral grids, the standard algorithm (Marching Cubes) associated with additional data structures (e.g. interval trees) provides an efficient way of displaying isosurfaces. For homogeneous unstructured grids, that is to say grids composed of a single type of element (e.g. tetrahedra), it is easy to generalize the Marching Cubes algorithm. For more general grids, such as those encountered in CFD (Computational Fluid Dynamics), it is more difficult to use pre-computed tables since arbitrary cells may be encountered. To deal with this issue, we developed different algorithms, making use of the GPU to accelerate computations, which we presented at the International Symposium on Visual Computing. Our approach uses textures to overcome the limitations imposed on the number of vertex attributes. Interactive visualization is then possible for the fully unstructured grids used in oil exploration to simulate the production of a well.
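The cell-classification step shared by these algorithms can be sketched in a toy 2-D analogue of Marching Cubes (the field, grid and function names below are illustrative, not the team's GPU implementation): a cell contributes to the iso-contour only if the iso-value separates its corner values, which is also the property interval trees index to skip the other cells.

```python
# Toy 2-D analogue of the Marching Cubes classification step: keep only
# the grid cells whose corner-value range [min, max] brackets the
# iso-value. Interval trees index cells by this range to skip the rest.

def field(x, y):
    return x * x + y * y  # iso-contours of this field are circles

def active_cells(nx, ny, lo, hi, iso):
    hx = (hi - lo) / nx
    hy = (hi - lo) / ny
    cells = []
    for i in range(nx):
        for j in range(ny):
            corners = [field(lo + (i + di) * hx, lo + (j + dj) * hy)
                       for di in (0, 1) for dj in (0, 1)]
            # The contour crosses the cell iff iso separates the corners.
            if min(corners) < iso <= max(corners):
                cells.append((i, j))
    return cells

# Circle of radius 0.5 on a 20x20 grid over [-1, 1]^2.
cells = active_cells(20, 20, -1.0, 1.0, iso=0.25)
print(len(cells))  # only the ring of cells crossed by the circle
```

For structured grids this classification (plus a 256-entry case table in 3D) is all that is needed; the difficulty described above is that arbitrary CFD cells admit no such fixed table.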

We worked on the problem of accurately displaying the singularities of a vector field defined on a surface. These singularities correspond to the zones where the vector field vanishes and has no defined direction. A singularity is characterized by its index, i.e. the number of times the vector field winds when circulating along a curve that encloses the singularity. Existing visualization methods for vector fields on triangulated surfaces use a linear interpolation in the triangles, and are therefore tightly coupled with the mesh discretization. Moreover, using these approaches, only a restricted class of singularities can be represented. For this reason, we developed a new approach that can accurately represent and display a singularity of arbitrary index (see Figure ). We presented our approach at the IEEE Visualization Conference (special issue of the IEEE Transactions on Visualization and Computer Graphics). The approach is also explained in the Ph.D. thesis of Wan-Chiu Li.
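The index definition above can be sketched numerically in the planar case (a hedged toy, independent of the paper's surface-based representation): sum the wrapped angle increments of the field along a closed loop around the candidate singularity and divide by 2π.

```python
import math

# Estimate the index of a planar vector field at a point: accumulate the
# wrapped change of the field's direction angle along a small circle
# around the point, then divide the total by 2*pi.

def index_at(v, cx, cy, r=0.5, samples=256):
    total = 0.0
    prev = None
    for i in range(samples + 1):
        t = 2.0 * math.pi * i / samples
        vx, vy = v(cx + r * math.cos(t), cy + r * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            # Wrap each increment to (-pi, pi] so that only the field's
            # rotation is counted, not the branch cut of atan2.
            while d > math.pi:
                d -= 2.0 * math.pi
            while d <= -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

source = lambda x, y: (x, y)        # radial source: index +1
saddle = lambda x, y: (x, -y)       # saddle: index -1
uniform = lambda x, y: (1.0, 0.0)   # no singularity enclosed: index 0
print(index_at(source, 0, 0), index_at(saddle, 0, 0), index_at(uniform, 0, 0))
# -> 1 -1 0
```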

The company Earth Decision Sciences (formerly T-Surf) develops and commercializes the modeler Gocad. Gocad is a 3D modeler dedicated to geosciences. This company was initially created as a start-up by the National School of Geology and members of the ISA project. It now has 100 employees in 7 countries (France, United States, Brazil, Dubai, Canada, etc.). It was recently acquired by the Paradigm company.

The DVIZ company starts in January 2007. Its objective is to provide high-performance visualization solutions, based on graphics PC clusters. The DViz software is based on industrialized research results from the research on high performance visualization done in ALICE.

Our proposal *Geometric Intelligence* on Geometry Processing was selected by Microsoft Research Cambridge (in the frame of the Microsoft call for proposals *tools for advancing science*). Aurélien Martinet (currently a Ph.D. student in ARTIS) starts a post-doc on this theme at the beginning of 2007.

The Gocad software is developed in the context of a consortium that includes more than forty universities and thirty oil and gas companies around the world. This software is dedicated to modeling and visualizing the underground.

We have signed Non-Disclosure Agreements with ATI and NVidia. We experiment with their new APIs to implement high-performance GPGPU computations, i.e. using the graphics board as a high-performance numerical computation engine (Luc Buatois).

The Ph.D. thesis of Gregory Lecot is co-funded by Renault and the CNRS. The work concerns geometry processing and real-time visualization tools for highly detailed CAD models of automobiles.

The CRVHP program, led by Xavier Cavin and Jean-Claude Paul, is supported by the CPER (``Contrat de Plan État-Région Lorraine''). It includes more than twenty academic institutions and industrial companies. Resources allocated to the program include two SGI Origin3000 machines, a ``Reality Center'' immersive environment, a computing cluster and three graphics clusters, all interconnected by high-bandwidth networks. In the scope of the next CPER (2007-2013), we have worked on a project for an extreme high-definition immersive visualization system (more details are given in Section ).

The Data Masses ACI (2004-2006, 36 months) studies new ways of handling large geometrical databases. ALICE's contribution concerns the parameterization of large triangulated surfaces and the segmentation of geometric objects based on an approximation of the curvature tensor.

We coordinate the ARC Georep (2005-2006, 24 months), which aims at designing new solutions to convert a raw representation of a 3D object into a higher-level representation. The main objective is to show the feasibility of this approach by applying it to a real Computer Graphics problem, i.e. to demonstrate the full pipeline: acquisition, geometry processing, application. To reach this goal, this ARC connects participants with skills in various disciplines (3D acquisition: LSIIT; 3D reconstruction: MOVI; Geometry Processing: ALICE, UBC; Numerical Analysis: GRAAL; Computer Graphics: ARTIS, ALICE). We extended this ARC to new participants at MPII (Hans-Peter Seidel and Alexander Belyaev).

As a follow-up to previous cooperation projects (including our ACI Geogrid, coordinated by J.-C. Paul), we work in cooperation with the Gocad group. The Ph.D. theses of L. Castanié and L. Buatois are co-advised by the ENSG/Gocad (Nancy School of Geology) and ALICE. The recent results of L. Castanié et al. were integrated in the Volume Explorer plugin of the Gocad software, tested by different industrial partners (including ARAMCO).

L. Alonso is secretary of the national AGOS association of INRIA.

The AIM@SHAPE European project intends to design geometric modeling techniques that improve the management of semantic information. The 3D modeling and computer graphics research domains require more and more expertise in various areas (differential geometry, numerical algorithms, combinatorial data structures, computer graphics hardware, etc.). Achieving significant advances requires mastering all these fundamental domains, which takes at least 10 man-years for each aspect. In other words, reinventing the wheel can be a dramatic waste of time. This Network of Excellence (NoE) aims at sharing the expertise of European research groups in this area. To better share knowledge and know-how, we proposed to develop within the network the notion of the DSW (Digital Shape Workbench), i.e. a set of common integrated research platforms (CGAL: computational geometry library; Graphite: numerical geometry workbench; Synapse: numerical algorithms). We expect significant new fundamental results as the outcome of this strategy.

GeorgiaTech Lorraine (UMI CNRS 2958) is the European platform of GeorgiaTech, with joint education (with French engineering schools) and research (with CNRS) programs. In this framework, there is currently a project of creating a joint lab between GeorgiaTech and Loria/INRIA Lorraine. In the frame of this joint-lab project, we studied the possibilities of cooperation and identified a strong complementarity between Greg Turk's group and the ALICE project. Greg Turk visited ALICE (Feb. 13), then N. Ray, W.-C. Li, B. Vallet and B. Lévy visited GeorgiaTech (Oct. 10-27) for a joint research project on texture synthesis. The early results of this cooperation seem very promising to us. We plan to strengthen this cooperation in the future.

Jan. 9-10: Jean-Marc Hasenfratz (INRIA-ARTIS)

Feb. 3: Laurent Chodorge (CEA)

Feb. 13,14: Greg Turk (Georgia Tech)

March 28: Edmond Boyer (INRIA Movi and UJF) and Olivier Genevaux (LSIIT Strasbourg)

May 29-30: Sylvain Paris (MIT)

May 31: Bruno Stefanizzi (ATI)

June 26: Alain Gonzalez (PSA group, virtual reality dept.)

Aug. 28: Andy Keane and Joerg Crall (NVidia)

Aug. 30: J.-E. Etienne, P. Ehanno and C. Raptis (Renault)

Sept. 28: Helmut Schaeben (Freiberg University)

Oct. 2: Henry Bensler (VolksWagen)

Oct. 9-16: Maxime Boucher (McGill)

G. Lecot teaches ``Computer Systems Architecture'' (IUT Verdun);

N. Ray teaches ``Analysis and Design of Information Systems'', ``Computer Architecture'' and ``Algorithms'' (IUT Charlemagne), and ``Networking'' (IUT Verdun) (as an ATER, Sept. 05 - Sept. 06);

B. Vallet teaches ``Java Programming'' (Nancy School of Mines);

B. Lévy teaches ``History of Computer Sciences'' and ``Numerical Algorithms'' at the ENSG (School of Geology - INPL)

B. Lévy was a member of the program committee of SMI'06, SPM'06

B. Lévy was a member of the ``concours CR'' committee in Nancy.

X. Cavin is a member of the ``comité des utilisateurs des moyens informatiques'' (COMIN)

B. Lévy is a member of the ``commission de spécialistes'' of the Computer Science department (section 27) in Strasbourg.

Members of the team attended Siggraph 06, Visualization 06, SGP 06, EGSR 06, SMI 06, SPM 06, AFIG 06;

B. Lévy gave invited talks at SPM 06 (June 6-8, Cardiff, UK), SMI 06 (June 14-16, Matsushima, Japan) and AFIG 06 (Nov. 20-23, Bordeaux);

Jan. 12-13: X. Cavin, C. Mion and A. Filbois visit EDF and the CEA-DAM in the frame of the ``pôle de compétitivité'' SYSTEM@TIC.

July 10: ALICE visits MPII Saarbrücken in the frame of our ARC GEOREP.

Sept. 14: B. Lévy, X. Cavin and C. Mion visit Henry Bensler (VolksWagen).

Oct. 17-18: W.-C. Li, B. Lévy (ALICE) and L. Baboud (ARTIS) give a presentation of the ARC GEOREP at the INRIA ARC meeting.

Oct. 17-27: N. Ray (17-27), W.-C. Li, B. Vallet and B. Lévy (20-27) visit Greg Turk (GeorgiaTech).

Dec. 5-6: B. Lévy visits Raphaëlle Chaine (LIRIS) (committee member of Rémi Allègre's Ph.D. jury).

Dec. 6-7: L. Buatois attends the NVidia Compute Training Class (Santa Clara, USA).

DVIZ was awarded the National Innovative Startup prize.

Xgl (Unix X11 server architecture layered on top of OpenGL) plans to use our VTM (Vector Texture Mapping) method for font rendering (more details on the Xgl site).

Blender (open-source 3D modeler) uses our ABF++ unwrapping algorithm.

We presented an article about our research on image vectorization in the French magazine ``Pixel Création Numérique''.

Jan. 24: NVidia and PNY launch the GeForce Quadro 4500 with our light simulation software at INRIA Lorraine.

June 6-9: 26th Gocad Meeting, demonstrations of our DVIZ parallel visualization platform.

Aug. 30: J.-E. Etienne, P. Ehanno and C. Raptis (Renault) visit us; demonstrations of our geometry processing tools (Graphite).

Sept. 14: Bruno Lévy, Xavier Cavin and Christophe Mion visit Henry Bensler (VolksWagen); demonstrations of our geometry processing tools (Graphite).

Oct. 1-4: Participation in the SEG conference on the Earth Decision Sciences/Paradigm booth; demos of DVIZ.

Oct. 24-26: Visit to Hewlett-Packard Grenoble (SVA group); demos of DVIZ.

Dec. 12: 20th anniversary of INRIA Lorraine; demos of Graphite and DVIZ.