ALICE is a project-team in Computer Graphics. The fundamental aspects of this domain concern the interaction of *light* with the *geometry* of the objects. The lighting problem consists in designing accurate and efficient *numerical simulation* methods for the light transport equation. The geometrical problem consists in developing new solutions to *transform and optimize geometric representations*. Our original approach to both issues is to restate the problems in terms of *numerical optimization*. We try to develop solutions that are *provably correct*, *numerically stable* and *scalable*.

By provably correct, we mean that some properties/invariants of the initial object need to be preserved by our solutions.

By numerically stable, we mean that our solutions need to be resistant to the degeneracies often encountered in industrial data sets.

By scalable, we mean that our solutions need to be applicable to data sets of industrial size.

To reach these goals, our approach consists in transforming the physical or geometric problem into a numerical optimization problem, studying the properties of the objective function and designing efficient minimization algorithms. To properly construct these discretizations, we use the formalism of finite element modeling, geometry and topology. We are also interested in recently-introduced research topics such as discrete exterior calculus.

The main applications of our results concern scientific visualization. We develop cooperations with researchers and people from the industry, who experiment with applications of our general solutions in various domains, including CAD, industrial design, oil exploration and plasma physics. Our solutions are distributed in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Xavier Cavin created the Scalable Graphics company in January 2007, with Christophe Mion and Thibault Neiger. This company markets the DVIZ software, a middleware for graphics PC clusters that includes the latest progress of ALICE in the high-performance visualization research axis. For instance, this software makes it possible to visualize the Boeing 777 database, composed of more than 250 million primitives, in real time, without requiring any pre-processing.

Luc Buatois obtained a best paper award at the HPCC conference (High Performance Computing and Communications) for his paper “Concurrent Number Cruncher: an Efficient Sparse Linear Solver on the GPU” .

Rodrigo Toledo obtained a best paper award at the SIBGRAPI conference, for his paper “Geometry Textures” .

Bruno Lévy obtained a best course notes award at the SIGGRAPH conference, for his course notes “Geometric Modeling Based on Polygonal Meshes” (course organized by Mario Botsch and Mark Pauly - ETH Zurich) .

Computer Graphics is a quickly evolving domain of research. These last few years, both acquisition techniques (e.g., laser range scanners) and computer graphics hardware (the so-called GPUs, for Graphics Processing Units) have made considerable advances. However, as shown in Figure , despite these advances, fundamental problems still remain open. For instance, a scanned mesh composed of 30 million triangles cannot be used directly in real-time visualization or complex numerical simulation. To design efficient solutions for these difficult problems, ALICE studies two fundamental issues in Computer Graphics:

the representation of the objects, i.e., their geometry and physical properties;

the interaction between these objects and light.

Historically, these two issues have been studied by independent research communities, in isolation. However, we think that they share a common theoretical basis. For instance, multi-resolution and wavelets were mathematical tools used by both communities. We develop a new approach, which consists in studying geometry and lighting from the *numerical analysis* point of view. In our approach, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. This type of formulation leads to more efficient algorithms. Our long-term research goal is to find a formulation that permits a unified treatment of geometry and of the illumination over this geometry.

Geometry processing recently appeared (in the middle of the 90's) as a promising avenue to solve the geometric modeling problems encountered when manipulating meshes composed of millions of elements. Since a mesh may be considered to be a *sampling* of a surface - in other words a *signal* - the *digital signal processing* formalism was a natural theoretical background for this subdomain (see e.g., ). Researchers of this subdomain then studied different aspects of this formalism applied to geometric modeling.
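As a toy illustration of this signal-processing view (our own sketch, with hypothetical function names, not code from the works cited here), the umbrella (uniform Laplacian) operator acts as a low-pass filter on vertex positions:

```python
import numpy as np

def laplacian_smooth(points, neighbors, lam=0.5, iterations=10):
    """Low-pass filter a mesh 'signal' with the uniform umbrella operator.
    points: (n, d) array of vertex positions; neighbors: list of index lists."""
    points = points.astype(float).copy()
    for _ in range(iterations):
        # Each vertex moves toward the average of its neighbors.
        lap = np.array([points[nb].mean(axis=0) - p
                        for p, nb in zip(points, neighbors)])
        points += lam * lap
    return points

# Toy "mesh": a noisy circle sampled as a closed polyline.
n = 100
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
noisy = np.stack([np.cos(t), np.sin(t)], axis=1) + rng.normal(0.0, 0.05, (n, 2))
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
smoothed = laplacian_smooth(noisy, nbrs)
```

Repeated application removes the high-frequency noise, which is exactly the filtering behavior the signal processing formalism predicts (and also explains the shrinkage artifacts this family of methods must compensate for).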

Although many advances have been made in the geometry processing area, important problems still remain open. Even if shape acquisition and filtering is much easier than 30 years ago, a scanned mesh composed of 30 million triangles cannot be used directly in real-time visualization or complex numerical simulation. For this reason, automatic methods to convert those large meshes into higher-level representations are necessary. However, these automatic methods do not exist yet. For instance, the pioneer Henri Gouraud often mentions in his talks that the *data acquisition* problem is still open. Malcolm Sabin, another pioneer of the “Computer Aided Geometric Design” and “Subdivision” approaches, mentioned during several conferences of the domain that constructing the optimum control mesh of a subdivision surface so as to approximate a given surface is still an open problem. More generally, converting a mesh model into a higher-level representation, consisting of a set of equations, is a difficult problem for which no satisfying solution has been proposed. This is one of the long-term goals of international initiatives, such as the AIMShape European network of excellence.

Motivated by gridding applications for finite element modeling for oil and gas exploration, in the frame of the Gocad project, we started studying geometry processing in the late 90's and contributed to this area at the early stages of its development. We developed the LSCM method (Least Squares Conformal Maps) in cooperation with Alias Wavefront . This method has become the de-facto standard in automatic unwrapping, and was adopted by several 3D modeling packages (including Maya and Blender). We experimented with various applications of the method, comprising normal mapping, mesh completion and light simulation .

However, classical mesh parameterization requires partitioning the considered object into a set of topological disks. For this reason, last year (2006), we designed a new method (Periodic Global Parameterization) that generates a continuous set of coordinates over the object . We also showed the applicability of this method by proposing the first algorithm that converts a scanned mesh into a Spline surface automatically .

We are still not fully satisfied with these results, since the method remains quite complicated. We think that a deeper understanding of the underlying theory is likely to lead to both efficient and simple methods. For this reason, this year (2007), we studied different ways of discretizing partial differential equations on meshes, including Finite Element Modeling and Discrete Exterior Calculus.

Numerical simulation of light means solving for light intensity in the “Rendering Equation”, an integral equation modeling energy transfers (or light *intensity* transfers). The Rendering Equation was first formalized by Kajiya , and is given by:
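In Kajiya's original formulation, the intensity I(x, x') of light passing from point x' to point x satisfies:

```latex
I(x, x') \;=\; g(x, x')\left[\,\varepsilon(x, x') \;+\; \int_{S} \rho(x, x', x'')\, I(x', x'')\, dx''\right]
```

where g(x, x') is the geometry (visibility) term, ε(x, x') is the intensity emitted from x' toward x, ρ(x, x', x'') is the reflectance term, and the integral runs over all surface points x'' of the scene.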

Computing global illumination (i.e., solving for intensity in Equation ) in general environments is a challenging task. Global illumination may be considered in terms of computing the interactions between the *lighting signal* and the *geometric signal* (i.e., the scene). These interactions occur at various *scales*. This issue belongs to the same class of problems encountered by geometry processing, described in the previous section. As a consequence, the *signal processing* family of approaches is again a well-suited formalism. As such, the *multi-scale* approach is a natural choice, which dramatically improves performance. Environments composed of a large number of primitives, such as highly tessellated models, show a high variability of these scales (see Figure ).

In addition, these methods are challenged with more and more complex materials which need to be taken into account in the simulation. The simple diffuse Lambert law has been replaced with much more complex reflection models. The goal is to create synthetic images that no longer have a synthetic aspect, in particular when human characters are considered.

One of the difficulties is finding efficient ways of evaluating the visibility term. This is typically a Computational Geometry problem, i.e., a matter of finding the right combinatorial data structure (the *visibility complex*), studying its complexity and deriving algorithms to construct it. To deal with this issue, several teams (including VEGAS, ARTIS and REVES) study the visibility complex.

The other terms of the Rendering Equation cannot be solved analytically in general. Many different numerical resolution methods have been used. The main difficulty of the discipline is that each time a new physical effect needs to be simulated, the numerical resolution methods have to be adapted. In the worst case, it is even necessary to design a new ad-hoc numerical resolution method. For instance, in Monte-Carlo based solvers, several sampling maps are used, one for each effect (one map for the diffuse part of lighting, another map for caustics, etc.). As a consequence, the discipline becomes a collection of (sometimes mutually exclusive) techniques, where each technique can only simulate a specific lighting effect.
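A minimal illustration of why one sampling map per effect matters (a toy example of ours, not the solvers discussed above): estimating the irradiance integral over the hemisphere, whose exact value is π, with two different sampling densities.

```python
import numpy as np

# Estimate E = ∫ cosθ dω over the hemisphere (exact value: π)
# with two different "sampling maps".
rng = np.random.default_rng(1)
N = 100_000

# Map 1: uniform sampling of the hemisphere, pdf = 1/(2π).
# For a direction uniform over the hemisphere, cosθ is uniform in [0, 1].
cos_u = rng.random(N)
est_uniform = np.mean(cos_u / (1.0 / (2.0 * np.pi)))

# Map 2: cosine-weighted sampling, pdf = cosθ/π, matched to the integrand.
# Each sample contributes cosθ / (cosθ/π) = π exactly: zero variance.
cos_c = np.sqrt(1.0 - rng.random(N))   # cosθ = sqrt(u) samples pdf ∝ cosθ
est_cosine = np.mean(cos_c / (cos_c / np.pi))

print(est_uniform, est_cosine)  # both ≈ π; the matched map has no noise
```

A map matched to the diffuse term is useless for caustics, and vice versa, which is exactly how the discipline ends up with one technique per effect.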

The other difficulty is to satisfy two somewhat antinomic objectives at the same time. On the one hand, we want to simulate complex physical phenomena (subsurface scattering, polarization, interferences, etc.), responsible for subtle lighting effects. On the other hand, we want to visualize the result of the simulation in real-time.

We first experimented with finite-element methods in parameter space, and developed the *Virtual Mesh* approach and a parallel solution mechanism for the associated hierarchical finite element formulation. The initial method was dedicated to scenes composed of quadrics. We combined this method with our geometry processing methods to improve the visualization .

One of our goals is now to design new representations of lighting coupled with the geometric representation. These representations of lighting need to be general enough to be easily extended when multiple physical phenomena have to be simulated. Moreover, we want to be able to use these representations of lighting in the frame of real-time visualization. Our original approach to these problems consists in finding efficient function bases to represent the geometry and the physical attributes of the objects. We first experimented with this approach on the problem of image vectorization . We think that our dynamic function basis formulation is likely to lead to efficient light simulation algorithms. The originality is that the so-defined optimization algorithm solves for approximation and sampling all together.

After having introduced the *geometry processing* and *light simulation* scientific domains, we now present the principles that we use to design a common mathematical framework that can be applied to both domains. Early approaches to geometry processing and light simulation were driven by a Signal Processing approach. In other words, the solution of the problem is obtained after applying a *filtering scheme* multiple times. This is for instance the case of the mesh smoothing operator defined by Taubin in his pioneering work . Recent approaches still inherit from this background. Even if the general trend moves towards Numerical Analysis, much work in geometry processing still studies the coefficients of the gradient of the objective function *one by one*. This intrinsically refers to *descent* methods (e.g., Gauss-Seidel), which are not the most efficient, and do not converge in general when applied to meshes larger than a certain size (in practice, the limit appears to be around 30,000 facets).

In the approach that we develop in the ALICE project-team, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. As a consequence, studying the properties of the minimum is easier: the minimizer of a multivariate function can be more easily characterized than the limit of multiple applications of a smoothing operator. This simple remark makes it possible to derive properties (existence and uniqueness of the minimum, injectivity of a parameterization, and independence to the mesh).

Besides helping to characterize the solution, restating the geometric problem as a numerical optimization problem has another benefit. It makes it possible to design efficient numerical optimization methods, instead of the iterative relaxations used in classic methods.

Richard Feynman (Nobel Prize in physics) mentions in his lectures that physical models are a “smoothed” version of reality. The global behavior and interaction of multiple particles are captured by physical entities of a larger scale. According to Feynman, the striking similarities between equations governing various physical phenomena (e.g., Navier-Stokes in fluid dynamics and Maxwell in electromagnetism) are an illusion that comes from the way the phenomena are modeled and represented by “smoothed” larger-scale values (i.e., *fluxes* in the case of fluids and electromagnetism). Note that those larger-scale values do not necessarily correspond directly to a physical intuition; they can reside in a more abstract “computational” space. For instance, representing lighting by the coefficients of a finite element is a first step in this direction. More generally, our approach consists in trying to get rid of the limits imposed by the classic view of the existing solution mechanisms. The traditional approaches are based on an intuition driven by the laws of physics. Instead of trying to mimic the physical process, we try to restate the problem as an abstract numerical computation problem, on which more sophisticated methods can be applied (a plane flies like a bird, but it does not flap its wings). We try to consider the problem from a computational point of view, and focus on the link between the numerical simulation process and the properties of the solution of the Rendering Equation. Note also that the numerical computation problems yielded by our approach lie in a high-dimensional space (millions of variables). To ensure that our solutions scale up to scientific and industrial data from the real world, our strategy is to always use the best formalism and the best tools. The formalisms comprise Finite Element theory, differential geometry and topology, and the tools comprise recent hardware, such as GPUs (Graphics Processing Units), with the associated highly parallel algorithms. This was especially true this year (2007), with the advent of discrete exterior calculus in the geometry processing community, and the fast development of GPUs and of new APIs to program them (CUDA). To implement our strategy, we develop algorithmic, software and hardware architectures, and distribute these solutions in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Besides developing new solutions for geometry processing and numerical light simulation, we aim at applying these solutions to real-size scientific and industrial problems. In this context, scientific visualization is our main applications domain. With the advances in acquisition techniques, the size of the data sets to be processed increases faster than Moore's law, and represents a scientific and technical challenge. To ensure that our processing and visualization algorithms scale-up, we develop a combination of algorithmic, software and hardware architectures. Namely, we are interested in hierarchical function bases, and in parallel computation on GPUs (graphic processing units).

Our developments in parallel processing and GPU programming permit our geometry processing and light simulation solutions to scale up, and handle real-scale data from other research and industry domains. The following applications are developed within the MIS (Modelization, Interaction, Simulation) and AOC (Analysis, Optimization and Control) programs, which are supported by the “Contrat de Plan État-Région Lorraine”.

This application domain is led by the Gocad consortium, created by Prof. Mallet and now headed by Guillaume Caumon. The consortium involves 48 universities and most of the major oil and gas companies. ALICE contributes to Gocad with numerical geometry and visualization algorithms for oil and gas engineering. The currently explored domains are the construction of complex and dynamic structural models, the exploration of extremely large seismic volumes, and drilling evaluation and planning. The solutions that we develop are transferred to the industry through Earth Decision Sciences. The Ph.D. thesis of Laurent Castanié, co-advised by Gocad and ALICE and defended last year, led to novel visualization methods, published in IEEE Visualization . We continue the cooperation with the Ph.D. thesis of Luc Buatois, on high-performance numerical solvers on Graphics Processing Units. Results were published in the high-performance computing conference (second best student paper award). This year, a new co-advised Ph.D. started (Thomas Viard), on the visualization of data with uncertainties.

Protein docking is a fundamental biological process that links two proteins. This link is typically defined by interactions between two large zones of the protein boundaries. Visualizing the interfaces where these interactions take place is useful to understand the process thanks to 3D protein structures, to estimate the quality of docking simulation results, and to classify interactions in order to predict docking affinity between classes of interacting zones. Our developments take place in the VMD software (in cooperation with ORPAILLEUR and the Beckmann Institute at University of Illinois). More recently, in the frame of his Ph.D., Matthieu Chavent studied new means of visualizing molecular surfaces, which play an important role in better understanding the nano-scale mechanisms of life.

Computed images and immersive visualization systems are used to design and evaluate virtual products in the aircraft and car industry. In this application, the CAD models used are extremely large and the images have to be computed from an accurate physically-based simulation process. Therefore, constructing the underlying representations from the data is difficult and often done manually. In a certain sense, one can say that the goal is to construct an *abstraction* of the geometry. Once the geometry is abstracted, re-instancing it into alternate representations is made easier. We are currently exploring the fundamental aspects of this point of view. Thus, we considered this year (2007) the more general problem of constructing a *dynamic function basis* attached to the geometric model. This abstracted form makes the meaningful parameters (or the “control knobs”) appear.

Graphite is a research platform for computer graphics, 3D modeling and numerical geometry. It comprises all the main research results of our “geometry processing” group. Data structures for cellular complexes, parameterization, multi-resolution analysis and numerical optimization are the main features of the software. Graphite has been publicly available since October 2003, and is now used by researchers from Geometrica (INRIA - Méditerranée), Artis (INRIA Rhône-Alpes), LSIIT (Strasbourg), Technion (Israel), Stanford University (United States), Harvard University (United States), University of British Columbia (Canada) and MIT (United States). Graphite is one of the common software platforms used in the frame of the European Network of Excellence AIMShape.

This year (2007), we developed a new module for manipulating volumetric meshes in Graphite, and meshing the interior of a triangulated surface. The module comprises the following tools (demonstrated in Figure ):

Mesh pre-processing tools: we developed various hole-filling strategies. Olivier Génevaux also implemented an efficient version of the “Pliant Remeshing” algorithm .

Data structures for volumetric meshes: we developed a flexible version of our mesh data structures , together with an efficient attribute management system to dynamically attach properties to them.

Volumetric meshing: we interfaced several volumetric meshers, including CGAL and Tetmesh.

OpenNL is a standalone library for numerical optimization, especially well-suited to mesh processing. The API is inspired by the graphics API OpenGL, which makes the learning curve easy for graphics people. The included demo program implements our LSCM mesh unwrapping method. It was integrated in Blender by Brecht Van Lommel and others to create automatic texture mapping methods. More recently, they implemented our ABF++ method (developed in cooperation with the University of British Columbia). It will shortly include the more recent linear ABF , that we developed in cooperation with Rhaleb Zayer (Max Planck Institute for Informatics). Our mesh unwrapping algorithms have now become the de-facto standard for mesh unwrapping in several industrial mesh modeling packages (including Maya, Silo and Catia).

CGAL parameterization package: this software library, developed in cooperation with Pierre Alliez and Laurent Saboret, is a CGAL package for mesh parameterization. It includes a special, generic version of OpenNL, compatible with CGAL's genericity requirements.

DVIZ is a library dedicated to distributed visualization. The development of DViz started in September 2002, and the library serves as the basis for some of our work in scientific visualization. It allows applications to run on graphics clusters with optimal parallel performance. A startup headed by Xavier Cavin was created in January 2007 to transfer these results to the industry.

Intersurf is a plugin of the VMD (Visual Molecular Dynamics) software. VMD is developed by the Theoretical and Computational Biophysics Group at the Beckmann Institute at the University of Illinois. The Intersurf plugin has been released with the official version of VMD since the 1.8.3 release. It provides surfaces representing the interaction between two groups of atoms, and colors can be added to represent the interaction forces between these groups. We plan to include in this package the new results obtained this year in molecular surface visualization by Matthieu Chavent.

Gocad is a 3D modeler dedicated to geosciences. It was developed by a consortium headed by Jean-Laurent Mallet, in the Nancy School of Geology. Gocad is now commercialized by Earth Decision Sciences (formerly T-Surf), a company which was initially a start-up of the project-team. Gocad is used by all major oil companies (Total-Fina-Elf, ChevronTexaco, Petrobras, etc.), and has become a de-facto standard in geo-modeling. Last year, Laurent Castanié's work (CIFRE Earth Decision Sciences, defended in 2006) was successfully integrated in the VolumeExplorer plugin of Gocad. Luc Buatois's work on GPU-based numerical solvers will be integrated in Gocad's flow simulator.

Candela is a library dedicated to light simulation. Candela-VR is an extension of Candela which makes it possible to display the result of a simulation in environments with several CPUs and GPUs.

In the field of mesh parameterization, the impact of angular and boundary distortion on parameterization quality has brought forward the need for robust and efficient free-boundary angle-preserving methods. One of the most prominent approaches in this direction is Angle Based Flattening (ABF), which directly formulates the problem as a constrained nonlinear optimization in terms of angles. Since the original formulation of ABF, a steady research effort has been dedicated to improving its efficiency. As such, we proposed in 2004 the ABF++ method , based on numerical methods (sequential linear quadratic programming) and algebraic transforms of the initial problem (Schur complement). This dramatically reduced the dimension of the linear systems to be solved (from 4nf + 2nv to 2nv, where nf denotes the number of facets and nv denotes the number of vertices). These transforms were combined with a multigrid optimization framework to further improve performance. However, the ABF++ method still requires multiple iterations of a non-linear solver.

Our new linear ABF is based on a fundamental remark, made by Rhaleb Zayer (MPII Saarbrücken): as for any well-posed numerical problem, the solution is generally an approximation of the underlying mathematical equations. As a follow-up of our ARC GEOREP, we developed Rhaleb Zayer's idea, leading to a linear approximation of the initial ABF equations. The time-space complexity and the accuracy of the solution are to a great extent affected by the kind of approximation used. In this work we reformulate the problem based on the notion of error of estimation. A careful manipulation of the resulting equations yields for the first time a linear version of angle-based parameterization. The error induced by this linearization is quadratic in terms of the error in angles, and the validity of the approximation is further supported by numerical results. Besides the performance speedup, the simplicity of the current setup makes re-implementation and reproduction of our results straightforward. An example is shown in Figure . The so-computed texture coordinates can be used for appearance-preserving simplification. It is then possible to replace a 300K-facet model with one of only 3500 facets, and preserve the details by applying a normal map.

Quad remeshing is an important problem in geometry processing. Put simply, the problem consists in transforming triangles into squares. In more detail, the idea is to transform a triangulated mesh (e.g., obtained by scanning a real object) into a quad-dominant control mesh (e.g., a Catmull-Clark control mesh). This problem is extremely difficult. Our Periodic Global Parameterization is the first automatic algorithm that generates such a control mesh, adapted to the curvature of the shape. We were also able to use our method to convert scanned meshes into parametric surfaces . However, we are still not fully satisfied with these results, since the method remains quite complicated. Spectral methods seem to be a promising approach, as shown in . We started last year (2006) to study this formalism , and established interesting relations between those eigenfunctions and a generalization of the Fourier transform on manifolds.

This year (2007), we worked on the discretization of the eigenvalue problem. We studied the “Shape DNA” method , and developed two discretizations, one based on Finite Element Modeling, and the other one based on Discrete Exterior Calculus. As shown in Figure , this provides a better understanding of discrete Laplacians, and makes it possible to design mesh filtering algorithms that are independent of the initial mesh of the object. We also developed efficient algorithms to solve the high-dimensional eigenvalue problems. Based on algebraic transforms (shift-invert spectral transform), we developed a band-by-band computation algorithm that can compute Laplacian eigenfunctions for meshes with millions of vertices (article accepted pending minor revisions).
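A minimal sketch of the shift-invert idea (ours, using the combinatorial Laplacian of a closed polyline in place of the FEM/DEC discretizations used in the actual work): each factorization of L - σI resolves the band of the spectrum around σ, so sweeping σ yields the eigenfunctions band by band.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Combinatorial Laplacian of a cycle with n vertices (a closed polyline).
n = 1000
i = np.arange(n)
ones = np.ones(n)
L = sp.diags([2.0 * ones], [0]) \
    - sp.csr_matrix((ones, (i, (i + 1) % n)), shape=(n, n)) \
    - sp.csr_matrix((ones, (i, (i - 1) % n)), shape=(n, n))

def band(L, sigma, k=10):
    # Shift-invert: the eigenvalues of (L - sigma I)^-1 are largest for the
    # eigenvalues of L closest to sigma, so each call resolves one band.
    vals, vecs = spla.eigsh(L, k=k, sigma=sigma, which='LM')
    order = np.argsort(vals)
    return vals[order], vecs[:, order]

low, _ = band(L, sigma=-1e-3)  # the band around 0: the smoothest functions
print(low[:3])
```

For the cycle, the exact spectrum is 2 - 2cos(2πk/n): the constant eigenfunction at 0 followed by doubly-degenerate harmonics, the discrete analogue of the Fourier basis mentioned above.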

A wide class of geometry processing and PDE resolution methods needs to solve a linear system, where the matrix pattern of non-zero coefficients is dictated by the connectivity matrix of the mesh. These numerical computations are so demanding on computational resources that the ever-growing power of GPUs makes them more and more valuable for handling such tasks. This is even more true with the appearance of new APIs (CTM from AMD-ATI and CUDA from NVIDIA) which give a direct access to the highly-multithreaded computational resources and associated memory bandwidth of GPUs; CUDA even provides a BLAS implementation but only for dense matrices (CuBLAS).

However, previously implemented GPU linear solvers are restricted to specific types of matrices (e.g., dense or band matrices), or use non-optimal compressed row storage strategies. By combining recent GPU programming techniques (and new GPU dedicated APIs) with supercomputing strategies (namely block compressed row storage and register blocking), we developed the CNC (Concurrent Number Cruncher) , a new sparse, general-purpose linear solver based on the Jacobi-Preconditioned Conjugate Gradient algorithm. To our knowledge, this is the first general purpose sparse linear solver for Graphics Processing Units. We successfully applied it to geometry processing problems (mesh fairing and mesh parameterization).
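The underlying iteration is standard; a plain CPU sketch of Jacobi-preconditioned conjugate gradient (a reference version of ours, not the CNC's GPU implementation with block compressed row storage and register blocking) looks like this:

```python
import numpy as np
import scipy.sparse as sp

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient. The preconditioner
    M = diag(A) is attractive on GPUs because applying M^-1 is a
    trivially parallel element-wise product."""
    inv_diag = 1.0 / A.diagonal()
    x = np.zeros_like(b)
    r = b - A @ x
    z = inv_diag * r          # z = M^-1 r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p            # the sparse matrix-vector product dominates
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: an SPD system with the banded sparsity pattern of a 1D mesh.
n = 500
main = 4.0 + np.arange(n) % 3
A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1], format='csr')
b = np.ones(n)
x = jacobi_pcg(A, b)
```

On the GPU, each of these steps (sparse matrix-vector product, dot products, element-wise updates) maps to a parallel kernel, which is why the storage layout of the sparse matrix dominates performance.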

As shown in Figure , our CNC outperforms leading-edge CPU implementations of the same algorithms by a factor of up to 6 for significant sizes of linear systems.

For highly-detailed geometry, a triangle mesh is not the most memory-efficient representation. In the frame of his Ph.D. , Rodrigo Toledo developed algorithms to convert a scanned mesh into a set of height fields and to display them on the GPU . Figure shows the different phases of the algorithm. We first use Alliez et al.'s Variational Shape Approximation to partition the model, then we generate a set of overlapping charts (to minimize boundary artifacts). Finally, we iteratively subdivide the charts that cannot be projected on a plane. By analogy with “geometry images”, we call the so-constructed representation a “geometry texture”. Rodrigo Toledo obtained the best paper award for this paper at the SIBGRAPI conference.

The ALICE team has developed several algorithms to generate texture coordinates , , , that have now become standards. However, texture coordinates are only one aspect of texturing. More importantly, creating the texture itself is a tedious and difficult task that requires much user interaction. For this reason, we started a cooperation with Greg Turk's team (GeorgiaTech), a leading expert in the domain of texture synthesis. We developed a new representation of texture (that we call “material space”) that can represent materials which smoothly vary over a surface. We also developed algorithms to generate a tileable texture, and an interactive parameterization method to apply those textures onto the model. The result is shown in Figure . From a set of small photographs (called “exemplars”), our algorithm generates a tileable texture, with an additional “material coordinate”. This material coordinate allows continuous morphing from one material to the other.

The quad-remeshing algorithms that we developed last year ( and ) generate quadrilateral elements aligned with a given guidance vector field. This guidance vector field can be obtained from an estimate of the principal directions of curvature of the surface. However, specific situations may require using alternative vector fields, possibly designed by the user. Moreover, many other algorithms in computer graphics and geometry processing use two orthogonal smooth direction fields (unit tangent vector fields) defined over a surface. For instance, these direction fields are used in texture synthesis, in geometry processing or in non-photorealistic rendering to distribute and orient elements on the surface. Such direction fields can be designed in fundamentally different ways, according to the symmetry requested: inverting a direction or swapping two directions may be allowed or not.

Despite the advances realized in the last few years in the domain of geometry processing, a unified formalism is still lacking for the mathematical object that characterizes these generalized direction fields. As a consequence, existing direction field design algorithms are constrained to use non-optimal local relaxation procedures.

We developed a formalization of what we call an N-symmetry direction field, a generalization of classical direction fields. We gave a new definition of their singularities that explains how they relate to the topology of the surface. Namely, we wrote an accessible proof of the Poincaré-Hopf theorem in the case of N-symmetry direction fields on 2-manifolds. Based on this theorem, we explained how to control the topology of N-symmetry direction fields on meshes. We demonstrated the validity and robustness of this formalism by deriving a highly efficient algorithm to design a smooth field interpolating user-defined singularities and directions.
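The topological constraint behind this result can be stated concisely; the following is a sketch of the generalized Poincaré-Hopf theorem in this setting (the notation is ours, not taken verbatim from the paper):

```latex
% For an N-symmetry direction field on a closed surface S with
% singularities s_1, ..., s_k, each singularity carries an index
% that is a multiple of 1/N, and the indices are globally
% constrained by the Euler characteristic of the surface:
\[
  I(s_i) = \frac{k_i}{N}, \quad k_i \in \mathbb{Z},
  \qquad
  \sum_{i=1}^{k} I(s_i) = \chi(S).
\]
```

For example, a 4-symmetry (cross) field on a sphere, where χ(S) = 2, must carry singularities whose indices sum to 2, such as eight index-1/4 singularities placed like the corners of a cube.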

As shown in Figure , industrial models can comprise a large number of tubular shapes. This is especially true in the domain of oil and gas: a tanker, a refinery or an oil platform each contains a very large number of tubes and pipes. Rendering these pipes is traditionally done by first converting them into mesh models (Figure , left). This causes jagged silhouettes (a), incorrect intersections (b), and artifacts where the discretizations do not line up (c). In the frame of his Ph.D., Rodrigo Toledo developed methods to recover the equation of the geometry and to render it directly (Figure , right).

To display these surfaces in real time, we developed new ray-casting algorithms. The ray-casting of implicit surfaces on the GPU has been explored in the last few years. However, until recently, it was restricted to second-degree surfaces (quadrics). We present an iterative solution to ray-cast cubics and quartics on the GPU. Our solution targets efficient implementation, obtaining interactive rendering for thousands of surfaces per frame. We have given special attention to torus rendering, since the torus is a useful shape in many CAD models. We have tested four different iterative methods, including a novel one.
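To give an idea of the iterative approach (an illustrative CPU sketch of ours, not the GPU implementation described above), the following casts a ray against a torus, a quartic surface, by applying Newton's method to the implicit equation restricted to the ray:

```python
import math

def torus_f_and_grad(p, R, r):
    """Implicit torus (axis z, major radius R, tube radius r):
    F(x,y,z) = (x^2+y^2+z^2+R^2-r^2)^2 - 4 R^2 (x^2+y^2).
    Returns F(p) and its gradient."""
    x, y, z = p
    s = x * x + y * y + z * z + R * R - r * r
    f = s * s - 4.0 * R * R * (x * x + y * y)
    gx = 4.0 * x * s - 8.0 * R * R * x
    gy = 4.0 * y * s - 8.0 * R * R * y
    gz = 4.0 * z * s
    return f, (gx, gy, gz)

def ray_cast_torus(origin, direction, R, r, t0=0.0, iters=50, tol=1e-9):
    """Newton iteration on g(t) = F(origin + t*direction), using
    g'(t) = grad F . direction. Returns the hit parameter t, or None."""
    t = t0
    for _ in range(iters):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        f, g = torus_f_and_grad(p, R, r)
        df = sum(gi * di for gi, di in zip(g, direction))
        if abs(df) < 1e-12:
            return None          # tangent ray or stalled iteration
        t_next = t - f / df
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    return None
```

For instance, a ray shot from (2, 0, -5) along +z toward a torus with R = 2 and r = 0.5 first hits the tube at t = 4.5. On the GPU, the same iteration would run per fragment, with bounding geometry supplying the starting value t0.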

Molecules can be roughly described as a union of spheres (that correspond to the individual atoms). The boundary of this union of spheres is called the van der Waals surface. However, the function of complex molecules depends on more elaborate geometrical objects, deduced from this union of spheres, called molecular surfaces. The most popular definition of molecular surfaces, due to Connolly, models the electric field generated by the interaction of the molecule and a solvent. A Connolly surface is defined to be the set of contact points of a sphere that rolls on the van der Waals surface. More recently, Edelsbrunner introduced the so-called “skin surfaces”, which are smoother than Connolly surfaces. To visualize a molecular surface, the standard approach consists in triangulating it and sending the triangles to the graphics board. However, generating the triangulated surface is a time-consuming process. Moreover, depending on the number of triangles used, the quality of the generated image may be unsatisfactory. More importantly, guaranteeing that the topology of the triangulated surface faithfully corresponds to the initial one is a difficult theoretical problem.

For these reasons, our goal is to use the equation of the skin surface directly, and to visualize a pixel-accurate version using the GPU (Graphics Processing Unit). Some early results of Matthieu Chavent are shown in Figure -A. The equation of a skin surface is a piecewise quadric surface. The pieces are shown in different colors in Figure -B. The pieces correspond to the cells of a computational geometry data structure, called the mixed complex, that may be thought of as a linear interpolation between a Delaunay triangulation and a Voronoi diagram (see Figure -C).
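Since each piece of a skin surface is a quadric, per-pixel ray casting only needs the closed-form solution of a quadratic equation. A minimal sketch (our own illustration, not Matthieu Chavent's code), with the quadric given as a symmetric 4×4 matrix Q acting on homogeneous coordinates:

```python
import math

def ray_quadric(Q, origin, direction):
    """Intersect the ray origin + t*direction with the quadric
    p^T Q p = 0 (p in homogeneous coordinates). Substituting the
    ray into the equation gives a t^2 + b t + c = 0.
    Assumes the direction is not asymptotic to the quadric (a != 0)."""
    o = list(origin) + [1.0]      # homogeneous point
    d = list(direction) + [0.0]   # homogeneous direction
    def quad(u, v):
        return sum(u[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))
    a, b, c = quad(d, d), 2.0 * quad(o, d), quad(o, o)
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None               # the ray misses the quadric
    sq = math.sqrt(disc)
    return (-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)
```

For the unit sphere, Q = diag(1, 1, 1, -1), and a ray from (0, 0, -5) along +z yields the two roots t = 4 and t = 6. In a full renderer, each root would additionally be clipped against the corresponding cell of the mixed complex to select the right quadric piece.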

The company Earth Decision Sciences (formerly T-Surf) develops and commercializes the modeler Gocad, a 3D modeler dedicated to geosciences. This company was initially created as a start-up of the National School of Geology and members of the ISA and ALICE project-teams. It now has 200 employees in 7 countries (France, United States, Brazil, Dubai, Canada, etc.). It was recently acquired by the Paradigm company.

The ScalableGraphics company started in January 2007. Its objective is to provide high-performance visualization solutions based on graphics PC clusters. The DViz software is based on industrialized results of the high-performance visualization research done in ALICE.

Our proposal *Geometric Intelligence* on Geometry Processing was selected by Microsoft Research Cambridge (in the frame of the Microsoft call for proposals *tools for advancing science*). Aurélien Martinet did a post-doc on this theme. He studied the possibility of integrating automatic instantiation in high-performance visualization tools, and studied a variational approximation algorithm based on higher-order geometric primitives.

The Gocad software is developed in the context of a consortium that includes more than forty universities and thirty oil and gas companies around the world. This software is dedicated to modeling and visualizing the underground. ALICE studies the mathematical aspects of geo-modeling, and develops efficient numerical algorithms to solve the underlying optimization problems. The cooperation is formalized by several co-advised Ph.D. theses (Laurent Castanié, Luc Buatois, Thomas Viard), and by courses on numerical optimization given by ALICE researchers in the school of geology. Guillaume Caumon (head of the Gocad consortium) is an external collaborator of project-team ALICE.

We have signed Non-Disclosure Agreements with ATI and NVidia. We experiment with their new APIs to implement high-performance GPGPU computations, i.e., using the graphics board as a high-performance numerical computation engine (Luc Buatois).

In the frame of the MIS program (Modeling, Interaction and Simulation) of the CPER (“Contrat de Plan État-Région Lorraine”), we coordinate the MOVIS action, with participants from ALICE, ScalableGraphics, ORPAILLEUR, and Gocad. The goal of this action is to design new algorithms for modeling and visualizing both industrial and manufactured objects. In 2007, regarding visualization, we developed algorithms for visualizing molecular surfaces, industrial structures and detailed objects. Regarding modeling, we developed algorithms to efficiently solve linear systems on GPUs.

In the frame of the AOC program (Analysis, Optimization and Control) of the CPER (“Contrat de Plan État-Région Lorraine”), we participate in the “swimmer” action, coordinated by Marius Tucsnak (CORIDA project-team). The goal of this action is to simulate and visualize the complex fluid-solid interactions caused by a swimming fish. In 2007, we designed a new software library for extending MATLAB. This library, currently under development, will allow the user to easily implement finite element solvers for coupled fluid-solid dynamics. Our final goal is to validate our approach by implementing a 3D Navier-Stokes solver with solid-fluid interactions.
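As a toy illustration of the kind of machinery such a finite element library automates (a generic textbook example of ours, unrelated to the actual MATLAB code), here is a linear-element solver for the 1D Poisson problem -u'' = 1 with u(0) = u(1) = 0:

```python
def fem_poisson_1d(n):
    """Solve -u'' = 1 on [0,1], u(0)=u(1)=0, with n linear elements.
    Assembles the tridiagonal stiffness matrix (stencil [-1, 2, -1]/h)
    and the load vector f_i = h (integral of 1 against each hat
    function), then solves with the Thomas algorithm.
    Returns the nodal values, boundary nodes included."""
    h = 1.0 / n
    m = n - 1                      # number of interior nodes
    a = [-1.0 / h] * m             # sub-diagonal (a[0] unused)
    b = [2.0 / h] * m              # diagonal
    c = [-1.0 / h] * m             # super-diagonal
    f = [h] * m                    # load vector
    # Thomas algorithm: forward elimination...
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        f[i] -= w * f[i - 1]
    # ...then back substitution.
    u = [0.0] * m
    u[-1] = f[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (f[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]       # re-attach boundary values
```

For this problem, linear elements with exact load integration are nodally exact: the solution is u(x) = x(1-x)/2, so with n = 4 elements the midpoint node carries u(0.5) = 0.125.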

As a follow-up to previous cooperation projects (including our ACI Geogrid, coordinated by J.-C. Paul), we work in cooperation with the Gocad group. The Ph.D. theses of L. Buatois and T. Viard are co-advised by the ENSG/Gocad (Nancy School of Geology) and ALICE.

L. Alonso is secretary of the national AGOS association of INRIA.

The AIMShape European project (FP6) intends to design geometric modeling techniques that improve the management of semantic information. The 3D modeling and computer graphics research domains require more and more expertise in various areas (differential geometry, numerical algorithms, combinatorial data structures, computer graphics hardware, etc.). Achieving significant advances requires mastering all these fundamental domains, each of which represents at least 10 man-years of effort. In other words, reinventing the wheel can be a dramatic waste of time. This Network of Excellence (NoE) aims at sharing the expertise of European research groups in this area. To better share knowledge and know-how, we proposed to develop within the network the notion of a DSW (Digital Shape Workbench), i.e., a set of common integrated research platforms (CGAL: computational geometry library; Graphite: numerical geometry workbench; Synaps: numerical algorithms). We expect significant new fundamental results as the outcome of this strategy.

GeorgiaTech Lorraine (UMI CNRS 2958) is the European platform of GeorgiaTech, with joint education (with French engineering schools) and research (with CNRS) programs. In this framework, there is currently a project to create a joint lab between GeorgiaTech and Loria/INRIA Lorraine. In the frame of this joint lab project, we studied the possibilities of cooperation and identified a strong complementarity between Greg Turk's group and project-team ALICE. We started a cooperation last year on texture synthesis (see the results section).

February 5-6: Jarek Rossignac (GeorgiaTech) visited us and gave a talk on shape morphing. We also discussed the possible use of conformal geometry (analytic complex functions) for shape morphing.

L. Buatois teaches “C2I” (basics of computer science) at Nancy 2 University.

B. Lévy teaches “History of Computer Science” and “Numerical Algorithms” at the ENSG (School of Geology - INPL).

B. Lévy was program co-chair of the ACM Symposium on Solid and Physical Modeling 2007.

B. Lévy was a member of the program committees of IEEE Visualization 2007, ACM Symposium on Geometry Processing 2007, IEEE Shape Modeling International 2007, Pacific Graphics 2007 and NASAGEM07.

B. Lévy is a member of the “commission de spécialistes” of the Computer Science department (section 27) in Strasbourg.

Members of the team attended Siggraph 07, SGP 07, SMI 07, SPM 07, ICIAM 07.

Jun. 18-21: L. Buatois, N. Ray and B. Vallet gave three talks at the Gocad meeting.

Jul. 16-20: B. Lévy participated in two minisymposia at ICIAM 07 (International Congress on Industrial and Applied Mathematics): a symposium on Laplacian eigenfunctions (with Naoki Saito) and a symposium on shape classification (with Michela Spagnuolo).

Aug. 1-4: B. Lévy, N. Ray and B. Vallet visited Greg Turk (GeorgiaTech, USA): common work on scientific visualization and texture synthesis, and preparation of workplans for further cooperation.

Sept. 11: O. Génevaux and B. Lévy visited Edmond Boyer (INRIA Rhône-Alpes); we discussed a cooperation on eigenfunctions (as a follow-up of our ARC GEOREP).

Sept. 10: B. Lévy visited Raphaëlle Chaine (LIRIS) (committee member for the Ph.D. defense of Mohamed Mousa).

Sept. 17: B. Lévy, N. Ray and B. Vallet visited Hans-Peter Seidel (MPII, Saarbrücken) (B. Lévy was a committee member for the Ph.D. defense of Rhaleb Zayer).

Dec. 3-4: B. Lévy gave a scientific session at GDC-Lyon 2007 (Game Developer Conference).

Sept. 28 - Nov. 04: “Émoi de l'image” exhibition, Salle Poirel, Nancy: demonstrations of our software (Graphite and DViz) and a public lecture (Oct. 3, 2007).

Oct. 6: “Fête de la science”, B. Lévy gave a public lecture on geometry processing.

Dec. 10-11: B. Lévy, N. Ray and B. Vallet showed demonstrations of spectral geometry processing at the 40th anniversary celebration of INRIA, in Lille.