Computer-generated images are ubiquitous in our everyday life. Such images are the result of a process that has barely changed over the years: the optical phenomena due to the propagation of light in a 3D environment are simulated, taking into account how light is scattered according to the shape and material characteristics of objects. The intersection of optics (for the underlying laws of physics) and computer science (for its modeling and computational-efficiency aspects) provides a unique opportunity to tighten the links between these domains, first to improve the image generation process (computer graphics, optics, and virtual reality) and then to develop new acquisition and display technologies (optics, mixed reality, and machine vision).
Most of the time, light, shape, and matter properties are studied, acquired, and modeled separately, relying on realistic or stylized rendering processes to combine them into final pixel colors. Such modularity, inherited from classical physics, has the practical advantage of allowing the same models to be reused in various contexts. However, independent developments lead to unoptimized pipelines and difficult-to-control solutions, since it is often not clear which part of the expected result is caused by which property. Indeed, the most efficient solutions are most often the ones that blur the frontiers between light, shape, and matter to yield specialized and optimized pipelines, as in real-time applications (like Bidirectional Texture Functions and Light-Field rendering). Keeping these three properties separated may lead to other problems. For instance:
Measured materials are too detailed to be usable in rendering systems, and data-reduction techniques have to be developed, leading to an inefficient transfer between real and digital worlds;
It is currently extremely challenging (if not impossible) to directly control or manipulate the interactions between light, shape, and matter. Accurate lighting processes may create solutions that do not fulfill users' expectations;
Artists can spend hours or even days modeling highly complex surfaces whose details will not be visible due to inappropriate use of certain light sources or reflection properties.
Most traditional applications target human observers. Depending on how deeply we take into account the specificities of each user, the requirements on representations and algorithms may differ.
With the evolution of measurement and display technologies that go beyond conventional images (e.g., as illustrated in the Figure: High-Dynamic-Range imaging, stereo displays or new display technologies, and physical fabrication), the frontiers between real and virtual worlds are vanishing. In this context, a sensor combined with computational capabilities may also be considered as another kind of observer. Creating separate models for light, shape, and matter for such an extended range of applications and observers is often inefficient and sometimes produces unexpected results. Pertinent solutions must be able to take into account the properties of the observer (human or machine) and the goals of the application.
The main goal of the MANAO project is to study phenomena resulting from the interactions between the three components that describe light propagation and scattering in a 3D environment: light, shape, and matter. Improving knowledge about these phenomena facilitates the adaptation of the developed digital, numerical, and analytic models to specific contexts. This leads to the development of new analysis tools, new representations, and new instruments for acquisition, visualization, and display.
To reach this goal, we have to first increase our understanding of the different phenomena resulting from the interactions between light, shape, and matter. For this purpose, we consider how they are captured or perceived by the final observer, taking into account the relative influence of each of the three components. Examples include but are not limited to:
The manipulation of light to reveal reflective or geometric properties, as mastered by professional photographers;
The modification of material characteristics or lighting conditions to better understand shape features, for instance to decipher archaeological artifacts;
The large influence of shape on the captured variation of shading, and thus on the perception of material properties.
Based on the acquired knowledge of the influence of each of the components, we aim at developing new models that combine two or three of them. Examples include the modeling of Bidirectional Texture Functions (BTFs), which encode in a unique representation the effects of parallax, multiple light reflections, and shadows, without requiring the reflective properties and the meso-scale geometric details to be stored separately; or Light-Fields, which are used to render 3D scenes by storing only the result of the interactions between light, shape, and matter, both in complex real environments and in simulated ones.
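For concreteness, a BTF is commonly written as a six-dimensional function of surface position and of the incoming and outgoing directions (standard notation, given here only to illustrate the dimensionality that such combined representations must handle):

\[
\mathrm{BTF}(u, v, \theta_i, \phi_i, \theta_o, \phi_o),
\]

where (u, v) is the position on the surface and (θ_i, φ_i), (θ_o, φ_o) are the spherical coordinates of the light and view directions. Parallax, inter-reflections, and shadows are baked into this single function rather than modeled separately.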
One of the strengths of MANAO is that we interconnect computer graphics and optics. On one side, the laws of physics are required to create images, but they may be bent to increase either performance or user control: this is one of the key advantages of the computer graphics approach, and it is worth noticing that what is not possible in the real world may be possible in a digital one. On the other side, the introduced approximations may help to better comprehend the physical interactions of light, shape, and matter.
The MANAO project specifically aims at considering information transfer, first from the real world to the virtual world (acquisition and creation), then from computers to observers (visualization and display). For this purpose, we use a broader definition of what an observer is: it may be a human user or a physical sensor equipped with processing capabilities. Sensors and their characteristics must be taken into account in the same way as we take into account the human visual system in computer graphics. Similarly, computational capabilities may be compared to the cognitive capabilities of human users. Some characteristics are common to all observers, such as the scale of the observed phenomena. Others are more specific to a subset of observers. For this purpose, we have identified two classes of applications.
Physical systems. Given our close relationships with researchers in optics, one novelty of our approach is to extend the range of possible observers to physical sensors in order to work on domains such as simulation, mixed reality, and testing. Capturing, processing, and visualizing complex data is now more and more accessible to everyone, leading to the possible convergence of real and virtual worlds through visual signals. This signal is traditionally captured by cameras. It is now possible to augment them by projecting (e.g., the infrared laser of the Microsoft Kinect) and capturing (e.g., GPS localization) other signals that are outside the visible range. This supplemental information replaces values traditionally extracted from standard images and thus lowers the requirements in computational power. Since the captured images are the result of the interactions between light, shape, and matter, the approaches and the improved knowledge from MANAO help in designing the interactive acquisition and rendering technologies that are required to merge the real and virtual worlds. With the resulting unified systems (optical and digital), the transfer of pertinent information is favored and inefficient conversions are avoided, leading to new uses in interactive computer graphics applications, like augmented reality, displays, and computational photography.
Interactive visualization. This direction includes domains such as scientific illustration and visualization, artistic or plausible rendering, and 3D modeling. In all these cases, the observer, a human, takes part in the process, justifying once more our focus on real-time methods. When targeting average users, the characteristics as well as the limitations of the human visual system should be taken into account: in particular, it is known that some configurations of light, shape, and matter have masking and facilitation effects on visual perception. For specialized applications (such as archaeology), the expertise of the final user and the constraints of 3D user interfaces lead to new uses and dedicated solutions for models and algorithms.
The MANAO project aims at studying, acquiring, modeling, and rendering the interactions between the three components that are light, shape, and matter, from the viewpoint of an observer. As detailed at greater length in the next section, this work follows two principles: first, we consider that these three components do not have strict frontiers when considering their impact on the final observers; second, we work not only in computer graphics, but also at the intersection of computer graphics and optics, exploring the mutual benefits that the two domains may provide. It is thus intrinsically a transdisciplinary project (as illustrated in the Figure) and we expect results in both domains.
Thus, the proposed team-project aims at establishing a close collaboration between computer graphics (e.g., 3D modeling, geometry processing, shading techniques, vector graphics, and GPU programming) and optics (e.g., design of optical instruments, and theories of light propagation).
The following examples illustrate the strengths of such a partnership.
First, in addition to the simpler radiative transfer equations commonly used in computer graphics, research in the latter will be based on state-of-the-art understanding of light propagation and scattering in real environments.
Furthermore, research will rely on appropriate instrumentation expertise for the measurement and display of the different phenomena.
Reciprocally, research in optics may benefit from the expertise of computer graphics scientists in efficient processing to investigate interactive simulation, visualization, and design.
Furthermore, new systems may be developed by unifying optical and digital processing capabilities.
Currently, the scientific background of most of the team members is related to computer graphics and computer vision.
A large part of their work has focused on simulating and analyzing optical phenomena, as well as on acquiring and visualizing them.
Combined with the close collaboration with the optics laboratory LP2N, this background provides the expertise required to carry out the project's transdisciplinary program.
At the boundaries of the MANAO project lie issues in human and machine vision. We have to deal with the former whenever a human observer is taken into account. On one side, computational models of human vision are likely to guide the design of our algorithms. On the other side, the study of interactions between light, shape, and matter may shed some light on the understanding of visual perception. The same kinds of connections are expected with machine vision. On the one hand, traditional computational methods for acquisition (such as photogrammetry) are going to be part of our toolbox. On the other hand, new display technologies (such as the ones used for augmented reality) are likely to benefit from our integrated approach and systems. In the MANAO project, we are mostly users of results from human vision. When required, some experimentation might be done in collaboration with experts from this domain, as with the European PRISM project. For machine vision, given the tight collaboration between optical and digital systems, research will be carried out inside the MANAO project.
Analysis and modeling rely on tools from applied mathematics such as differential and projective geometry, multi-scale models, frequency analysis or differential analysis, linear and non-linear approximation techniques, stochastic and deterministic integration, and linear algebra. We not only rely on classical tools, but also investigate and adapt recent techniques (e.g., improvements in approximation techniques), focusing on their ability to run on modern hardware: the development of our own tools (such as Eigen, see Section) is essential to control their performance and their ability to be integrated into real-time solutions or into new instruments.
The MANAO project is organized around four research axes that cover the large range of expertise of its members and associated members. We briefly introduce these four axes in this section. More details, as well as their mutual influences (illustrated in the Figure), will be given in the following sections.
Axis 1 is the theoretical foundation of the project. Its main goal is to increase the understanding of light, shape, and matter interactions by combining expertise from different domains: optics and human/machine vision for the analysis and computer graphics for the simulation aspect. The goal of our analyses is to identify the different layers/phenomena that compose the observed signal. In a second step, the development of physical simulations and numerical models of these identified phenomena is a way to validate the pertinence of the proposed decompositions.
In Axis 2, the final observers are mainly physical sensors. Our goal is thus the development of new acquisition and display technologies that combine optical and digital processes in order to achieve fast transfers between the real and digital worlds, and thus to increase the convergence of these two worlds.
Axes 3 and 4 focus on two aspects of computer graphics: rendering, visualization, and illustration in Axis 3, and editing and modeling (content creation) in Axis 4. In these two axes, the final observers are mainly human users, either generic or expert ones (e.g., archaeologists, computer graphics artists).
Challenge: Definition and understanding of phenomena resulting from interactions between light, shape, and matter as seen from an observer point of view.
Results: Theoretical tools and numerical models for analyzing and simulating the observed optical phenomena.
To reach the goals of the MANAO project, we need to increase our understanding of how light, shape, and matter act together in synergy, and of how the resulting signal is finally observed. For this purpose, we need to identify the different phenomena that may be captured by the targeted observers. This is the main objective of this research axis, and it is achieved through three approaches: the simulation of interactions between light, shape, and matter, their analysis, and the development of new numerical models. The resulting improved knowledge is a foundation for the research done in the three other axes, and the simulation tools together with the numerical models serve the development of the joint optical/digital systems of Axis 2 and their validation.
One of the main and earliest goals of computer graphics is to faithfully reproduce the real world, focusing mainly on light transport. Compared to researchers in physics, researchers in computer graphics rely on a subset of the physical laws (mostly radiative transfer and geometric optics), and their main concern is to make efficient use of the limited available computational resources while developing algorithms that are as fast as possible. For this purpose, a large set of theoretical as well as computational tools has been introduced to take maximum advantage of hardware specificities. These tools are often dedicated to specific phenomena (e.g., direct or indirect lighting, color bleeding, shadows, caustics), and an efficiency-driven approach needs such a classification of light paths in order to develop tailored strategies. For instance, starting from simple direct lighting, more complex phenomena have been progressively introduced: first diffuse indirect illumination, then more generic inter-reflections, and volumetric scattering. Thanks to this search for efficiency and this classification, researchers in computer graphics have developed a now recognized expertise in the fast simulation of light propagation. Based on finite elements (radiosity techniques) or on unbiased Monte Carlo integration schemes (ray tracing, particle tracing, ...), the resulting algorithms and their combinations are now sufficiently accurate to be used back in physical simulations. The MANAO project will continue the search for efficient and accurate simulation techniques, extending it from computer graphics to optics. Thanks to the close collaboration with researchers from optics, new phenomena beyond radiative transfer and geometric optics will be explored.
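All of these simulation strategies can be viewed as estimators of the same quantity: the rendering equation, the standard formulation of light transport under geometric optics, which we recall here for reference:

\[
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i,
\]

where L_o is the outgoing radiance at surface point x in direction ω_o, L_e the emitted radiance, f_r the BRDF, L_i the incoming radiance, and n the surface normal. Radiosity and Monte Carlo methods differ only in how they discretize or sample this integral.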
The search for algorithmic efficiency and accuracy has to be conducted in parallel with the development of numerical models. The goal of visual fidelity (generalized, in the project, to accuracy from an observer's point of view) combined with the goal of efficiency leads to the development of alternative representations. For instance, classical finite-element techniques compute only basis coefficients for each discretization element: the discretization density required to obtain detailed spatial variations, and thus visual fidelity, would be too large and too computationally expensive. Examples include textures, for decorrelating surface details from surface geometry, and high-order wavelets, for a multi-scale representation of lighting. The numerical complexity explodes when considering directional properties of light transport such as radiance (watts per square meter and per steradian, W·m⁻²·sr⁻¹).
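As a reminder, radiance is defined as flux per unit projected area and per unit solid angle, which is what makes its full representation high-dimensional:

\[
L = \frac{\mathrm{d}^2 \Phi}{\mathrm{d}A \cos\theta \, \mathrm{d}\omega} \quad \left[\mathrm{W\,m^{-2}\,sr^{-1}}\right],
\]

where Φ is the radiant flux, dA the surface area element, θ the angle between the direction of interest and the surface normal, and dω the solid angle element. Representing it accurately requires sampling both position and direction.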
Before being able to simulate or represent the different observed phenomena, we need to define and describe them. To understand the difference between an observed phenomenon and the classical light, shape, and matter decomposition, consider the example of a highlight. Its shape, as observed by a human user or a sensor, results from the interaction of these three components and can be simulated this way. However, this does not provide any intuitive understanding of their relative influence on the final shape: an artist will directly describe the resulting shape, and not each of the three properties. We thus want to decompose the observed signal into models, one per scale, that can be easily understood, represented, and manipulated. For this purpose, we rely on the analysis of the resulting interaction of light, shape, and matter as observed by a human or a physical sensor. We first consider this analysis from an optical point of view, trying to identify the different phenomena and their scales according to their mathematical properties (e.g., differential and frequency analysis). Such an approach has led us to exhibit the influence of surface flows (depth and normal gradients) on the deformation of lighting patterns (see Figure ). For a human observer, this corresponds to a recent trend in computer graphics that takes the human visual system into account both to evaluate the results and to guide the simulations.
Challenge: Convergence of optical and digital systems to blend real and virtual worlds.
Results: Instruments to acquire real world, to display virtual world, and to make both of them interact.
In this axis, we investigate unified acquisition and display systems, that is, systems which combine optical instruments with digital processing. From the digital world to the real one, we investigate new display approaches. We consider projection systems and surfaces for personal use, virtual reality, and augmented reality. From the real world to the digital one, we favor direct measurements of the parameters of models and representations, using (new) optical systems unless digitization is required. The resulting systems have to acquire the different phenomena described in Axis 1 and to display them in an efficient manner. By efficient, we mean that we want to shorten the path between the real and virtual worlds by increasing the data bandwidth between the real (analog) and the virtual (digital) worlds, and by reducing the latency for real-time interactions (we have to prevent unnecessary conversions and to reduce processing time). To reach this goal, the systems have to be designed as a whole, not by a simple concatenation of optical systems and digital processes, nor by considering each component independently.
To increase data bandwidth, one solution is to further parallelize the physical systems, for instance by multiplying the number of simultaneous acquisitions (e.g., simultaneous images from multiple viewpoints). Similarly, increasing the number of viewpoints is a way toward the creation of full 3D displays. However, full acquisition or display of 3D real environments theoretically requires a continuous field of viewpoints, leading to huge data sizes. Despite the common belief that the increase in computational power will fill the gap, when it comes to visual or physical realism, doubling the processing power may only lead users to ask for four times more accuracy, increasing the data size as well. To reach the best performance, a trade-off has to be found between the amount of data required to represent reality accurately and the amount of required processing. This trade-off may be achieved using compressive sensing, a recent trend from the applied mathematics community that provides tools to accurately reconstruct a signal from a small set of measurements, assuming that it is sparse in some transform domain.
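In its canonical basis-pursuit form (standard notation from the compressive sensing literature, not a formulation specific to any of our systems), the reconstruction reads:

\[
\min_{\mathbf{c}} \|\mathbf{c}\|_1 \quad \text{subject to} \quad \mathbf{y} = A \Psi \mathbf{c},
\]

where y is the vector of m measurements, A the m × n measurement matrix (with m ≪ n), Ψ the transform in which the signal x = Ψc is sparse, and c its coefficient vector. The ℓ1 norm promotes sparse solutions while keeping the problem convex, which is what allows far fewer measurements than the nominal signal dimension.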
We prefer to achieve this goal while avoiding as much as possible the classical approach where acquisition is followed by a fitting step: this generally requires a large number of measurements, and the fitting itself may consume a substantial amount of memory and preprocessing time.
By preventing unnecessary conversions through fitting techniques, such an approach increases the speed and reduces the data transfer, for acquisition as well as for display.
One of the best recent examples is the work of Cossairt et al. The whole system is designed around a unique representation of the energy field issued from (or leaving) a 3D object, either virtual or real: the Light-Field. A Light-Field encodes the light emitted in any direction from any position on an object. It is acquired thanks to a lens array that leads to the capture of, and projection from, multiple simultaneous viewpoints, and this unique representation is used for all the steps of the system. Lens arrays, parallax barriers, and coded apertures are among the key technologies for developing such acquisition and display devices (e.g., Light-Field cameras such as the Lytro).
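In the common two-plane parametrization (standard notation, not specific to the system above), a ray is indexed by its intersections with two parallel planes, making the Light-Field a 4D function:

\[
L = L(u, v, s, t),
\]

where (u, v) and (s, t) are the ray's coordinates on the two planes (in a lens-array camera, roughly the position on the main lens and on the micro-lens grid). The 4D nature of this representation explains both its expressive power and its heavy storage cost.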
These are only some examples of what we investigate. We also consider the following approaches to develop new unified systems. First, similar to (and based on) the analysis goal of Axis 1, we have to take into account as much as possible the characteristics of the measurement setup; for instance, when fitting cannot be avoided, integrating these characteristics may improve both the processing efficiency and the accuracy. Second, we have to integrate signals from multiple sensors (such as GPS, accelerometers, ...) to avoid some computations. Finally, the experience of the group in surface modeling helps in the design of optical surfaces for light sources or head-mounted displays.
Challenge: How to offer the most legible signal to the final observer in real-time?
Results: High-level shading primitives, expressive rendering techniques for object depiction, real-time realistic rendering algorithms
The main goal of this axis is to offer the final observer, in this case mostly a human user, the most legible signal in real time. Thanks to the analysis and decomposition into the different phenomena resulting from the interactions between light, shape, and matter (Axis 1), and to the study of their perception, we can convey essential information in the most pertinent way. Here, the word pertinent can take various forms depending on the application.
In the context of scientific illustration and visualization, we are primarily interested in tools to convey shape or material characteristics of objects in animated 3D scenes. Expressive rendering techniques (see Figure c,d) provide means for users to depict such features with their own style. To introduce our approach, we detail it from a shape-depiction point of view, a domain where we have acquired recognized expertise. Prior work in this area mostly focused on stylization primitives to achieve line-based rendering or stylized shading, with various levels of abstraction. A clear representation of important 3D object features remains a major challenge for better shape depiction, stylization, and abstraction purposes. Most existing representations provide only local properties (e.g., curvature) and thus lack a characterization of broader shape features. To overcome this limitation, we are developing higher-level descriptions of shape with increased robustness to sparsity, noise, and outliers. This is achieved in close collaboration with Axis 1 through the use of higher-order local fitting methods, multi-scale analysis, and global regularization techniques. In order not to neglect the observer and the material characteristics of the objects, we couple this approach with an analysis of the appearance model. To our knowledge, this approach has not been considered yet. This research direction is at the heart of the MANAO project and has a strong connection with the analysis we plan to conduct in Axis 1. Material characteristics are usually considered at the light-ray level, but an understanding of higher-level primitives (like the shape of highlights and their motion) would help us produce more legible renderings and would permit novel stylizations; for instance, no method is able today to create stylized renderings that follow the motion of highlights or shadows. We also believe such tools play a fundamental role for geometry processing purposes (such as shape matching, reassembly, simplification), as well as for the editing purposes discussed in Axis 4.
In the context of real-time photorealistic rendering (see Figure a,b), the challenge is to compute the most plausible images with minimal effort. During the last decade, a lot of work has been devoted to designing approximate but real-time rendering algorithms for complex lighting phenomena such as soft shadows, motion blur, depth of field, reflections, refractions, and inter-reflections. For most of these effects, it is becoming harder to discover fundamentally new and faster methods. On the other hand, we believe that significant speed-ups can still be achieved through a more clever use of the massively parallel architectures of current and upcoming hardware, and/or through a more clever tuning of current algorithms. Regarding the second aspect in particular, we note that most of the proposed algorithms depend on several parameters that trade speed against quality. Significant speed-ups could thus be achieved by identifying effects that would be masked or facilitated, and by devoting the appropriate computational resources to the rendering. Indeed, the algorithm parameters controlling the quality-versus-speed trade-off are numerous, without a direct mapping between their values and their effects. Moreover, their ideal values vary over space and time, and to be effective such an auto-tuning mechanism has to be extremely fast, so that its cost is largely compensated by its gain. We believe that our work on the analysis of appearance, as in Axis 1, could be beneficial for this purpose too.
Realistic and real-time rendering is closely related to Axis 2: real-time rendering is a requirement for closing the loop between the real and digital worlds. We thus have to develop algorithms and rendering primitives that allow the integration of acquired data into real-time techniques. We also have to ensure that these real-time techniques work with new display systems. For instance, stereo displays, and more generally multi-view displays, are based on the multiplication of simultaneous images. Brute-force solutions consist of an independent rendering pipeline for each viewpoint; a more energy-efficient solution would take advantage of the computations that can be factorized across viewpoints. Another example is rendering techniques based on image processing, such as our work on augmented reality: independent image processing for each viewpoint may disturb the feeling of depth by introducing inconsistent information into each image. Finally, more dedicated displays will require new rendering pipelines.
Challenge: Editing and modeling appearance using drawing- or sculpting-like tools through high level representations.
Results: High-level primitives and hybrid representations for appearance and shape.
During the last decade, the domain of computer graphics has seen tremendous improvements in image quality, both for 2D applications and 3D engines. This is mainly due to the availability of an ever-increasing amount of shape detail and of sophisticated appearance effects, including complex lighting environments. Unfortunately, with such a growth in visual richness, even so-called vectorial representations (e.g., subdivision surfaces, Bézier curves, gradient meshes, etc.) become very dense and unmanageable for the end user, who has to deal with a huge mass of control points, color labels, and other parameters. This is becoming a major challenge and calls for novel representations. This axis is thus complementary to Axis 3: here, the focus is on the development of primitives that are easy to use for modeling and editing.
More specifically, we plan to investigate vectorial representations that would be amenable to the production of rich shapes with a minimal set of primitives and/or parameters. To this end, we plan to build upon our insights on dynamic local reconstruction techniques and implicit surfaces. When working in 3D, an interesting approach to producing detailed shapes is procedural geometry generation: many natural phenomena, like waves or clouds, may be modeled using a combination of procedural functions. Turning such functions into triangle meshes (the main rendering primitive of GPUs) is a tedious process that becomes unnecessary with an adapted vectorial shape representation, where one could directly turn procedural functions into implicit geometric primitives. Since we want to prevent unnecessary conversions in the whole pipeline (here, between the modeling and rendering steps), we will also consider hybrid representations mixing meshes and implicit representations. Such research has to be conducted while considering the associated editing tools as well as performance issues: it is indeed important to keep real-time performance (cf. Axis 2) throughout the interaction loop, from user inputs to display, via editing and rendering operations. Finally, it would be interesting to add semantic information to 2D or 3D geometric representations. Semantic geometry appears to be particularly useful for many applications, such as the design of more efficient manipulation and animation tools, automatic simplification and abstraction, or even automatic indexing and searching. This constitutes a complementary but longer-term research direction.
In the MANAO project, we want to investigate representations beyond the classical light, shape, and matter decomposition. In particular, we want to directly control the appearance of objects in both 2D and 3D applications: this is a core topic of computer graphics. When working with 2D vector graphics, digital artists must carefully set up color gradients and textures; examples range from the creation of 2D logos to the photorealistic imitation of object materials. Classic vector primitives quickly become impractical for creating the illusion of complex materials and illuminations, and as a result an increasing amount of time and skill is required. This holds for still images only: for animations, vector graphics are used merely to create legible appearances composed of simple lines and color gradients. There is thus a need for more complex primitives that can accommodate complex reflection or texture patterns while keeping the ease of use of vector graphics. For instance, instead of drawing color gradients directly, it is more advantageous to draw flow lines that represent local surface concavities and convexities. Going through such an intermediate structure then allows simple material gradients and textures to be deformed in a coherent way (see Figure ) and animated all at once. The manipulation of 3D object materials also raises important issues. Most existing material models are tailored to faithfully reproduce physical behaviors, not to be easily controllable by artists. Therefore, artists learn to tweak model parameters to satisfy the needs of a particular shading appearance, which can quickly become cumbersome as the complexity of a 3D scene increases. We believe that an alternative approach is required, whereby the material appearance of an object in a typical lighting environment is directly input (e.g., painted or drawn) and adapted to match a plausible material behavior. This way, artists will be able to create their own appearance (e.g., by using our shading primitives) and replicate it in novel illumination environments and on novel 3D models. For this purpose, we will rely on the decompositions and tools issued from Axis 1.
Our search for a better understanding of appearance has reached some milestones this year. First, our studies have shown that Bidirectional Reflectance Distribution Functions (BRDFs) exhibit some meaningful statistics. These statistics help in intuitively designing MatCaps (a shorthand for "Material Capture"), which are often used by artists as a simple and efficient way to design appearance. Our studies have also shown that current BRDF models are limited, and we are exploring new models and parametrizations. It is worth noting that we are integrating all this research into a common library named ALTA.
"Notable article in computing in 2014", from ACM ThinkLoud Computing Reviews http://
The ALTA Library
Keywords: Statistical analysis - Fitting - Measures
Functional Description
ALTA is a multi-platform software library to analyze, fit, and understand Bidirectional Reflectance Distribution Functions (BRDFs). It provides a set of command-line tools to fit measured data to analytical forms, as well as tools to understand models and data.
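Fitting, in this setting, typically amounts to a non-linear least-squares problem (a generic formulation given here for illustration, since ALTA supports several fitters and error metrics):

\[
\mathbf{p}^* = \arg\min_{\mathbf{p}} \sum_{k=1}^{N} \left( \rho(\omega_i^k, \omega_o^k; \mathbf{p}) - \tilde{\rho}_k \right)^2,
\]

where ρ is an analytical BRDF model with parameter vector p, and the ρ̃_k are the N measured samples at the light/view configurations (ω_i^k, ω_o^k).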
In 2015, we continued the development of ALTA and added different unit and integration tests to reach a new milestone with our first Beta version.
Participants: Laurent Belcour, Romain Pacanowski, Xavier Granier and Pascal Barla
Partner: LP2N (CNRS - UMR 5298)
Contact: Romain Pacanowski
Scientific Description
Geometric skinning techniques are very popular in industry for their high performance, but they fail to mimic realistic deformations. With elastic implicit skinning, the skin stretches automatically (without skinning weights) and the vertex distribution is more pleasing. Our approach is also more robust than standard implicit skinning; for instance, it supports a larger range of joint angles.
This software has been ported as a plugin for the Modo software (The Foundry) in collaboration with Toulouse Tech Transfer. This plugin has been bought by The Foundry, which maintains and sells it.
Participants: Rodolphe Vaillant, Loïc Barthe, Florian Canezin, Gaël Guennebaud, Marie-Paule Cani, Damien Rohmer, Brian Wyvill, Olivier Gourmel and Mathias Paulin
Partners: Université de Bordeaux - CNRS - INP Bordeaux - Université de Toulouse - Institut Polytechnique de Grenoble - Ecole Supérieure de Chimie Physique Electronique de Lyon
Contact: Gaël Guennebaud
Functional Description
Eigen is an efficient and versatile C++ mathematical template library for linear algebra and related algorithms. In particular, it provides fixed- and dynamic-size matrices and vectors, matrix decompositions (LU, LLT, LDLT, QR, eigenvalue solvers, etc.), sparse matrices with iterative and direct solvers, some basic geometry features (transformations, quaternions, axis-angles, Euler angles, hyperplanes, lines, etc.), some non-linear solvers, automatic differentiation, etc. Thanks to expression templates, Eigen provides a very powerful and easy-to-use API. Explicit vectorization is performed for the SSE, AltiVec, and ARM NEON instruction sets, with graceful fallback to non-vectorized code. Expression templates make it possible to perform global expression optimizations and to remove unnecessary temporary objects.
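As a minimal illustration of the API style (a toy example written for this report, not taken from the library's documentation), the following program solves a small dense linear system with one of the decompositions listed above:

    #include <iostream>
    #include <Eigen/Dense>

    int main() {
      // A small symmetric positive-definite system A x = b.
      Eigen::Matrix3d A;
      A <<  4, -1,  0,
           -1,  4, -1,
            0, -1,  4;
      Eigen::Vector3d b(1.0, 2.0, 3.0);

      // Cholesky (LLT) decomposition, suitable for SPD matrices.
      Eigen::Vector3d x = A.llt().solve(b);

      std::cout << "x = " << x.transpose() << "\n"
                << "residual = " << (A * x - b).norm() << std::endl;
    }

The arithmetic expressions (A * x - b, etc.) are evaluated through expression templates, so the whole residual computation compiles down to a single fused loop without temporaries.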
In 2015, we released four revisions of the 3.2 branch, and the beta-1 of the next 3.3 version.
Participant: Gaël Guennebaud
Contact: Gaël Guennebaud
Keywords: OpenGL - GLSL - HDR/LDR Viewer
Functional Description
HDRSee is an OpenGL/GLSL application that displays High-Dynamic-Range (HDR) and Low-Dynamic-Range (LDR) images. It is based on several libraries (e.g., glut; see below for the full dependencies). To display HDR images, HDRSee implements a few tone-mapping operators. Moreover, it is designed with a plugin mechanism that lets developers add their own tone-mapping operators as easily as possible. All tone-mapping operations are performed on graphics hardware through pixel-shader operations. The GUI currently used is nvWidgets.
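As an example of what such an operator computes, Reinhard's classic global operator (given here as an illustration; the operators actually shipped with HDRSee may differ) compresses each HDR luminance value into displayable range:

\[
L_d = \frac{L_w}{1 + L_w},
\]

where L_w is the (suitably scaled) world luminance of a pixel and L_d ∈ [0, 1) its display luminance. Being a pure per-pixel function, such an operator maps directly onto a pixel shader.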
Participants: Romain Pacanowski, Xavier Granier.
Partner: LP2N (CNRS - UMR 5298)
Contact: Romain Pacanowski
Keywords: HDR merging - Radiometric calibration - HDR tone mapping
Functional Description
The pfstools package is a set of command-line programs for reading, writing, manipulating, and viewing high-dynamic-range (HDR) images and video frames. All programs in the package exchange data using a simple generic high-dynamic-range image format, pfs, and they use unix pipes to pass data between programs and to construct complex image-processing operations.
pfstools comes with a library for reading and writing pfs files. The library can be used for writing custom applications that integrate with the existing pfstools programs. It also offers good integration with high-level mathematical programming languages such as MATLAB or GNU Octave: pfstools can be used as an extension of MATLAB or Octave for reading and writing HDR images, or simply to store large matrices efficiently. The pfstools package integrates existing high-dynamic-range image formats by providing a simple data format that can be used to exchange data between applications. It is accompanied by the pfscalibration and pfstmo packages.
Participants: Rafal Mantiuk, Ivo Ihrke
Contact: Ivo Ihrke
Keyword: HDR Viewer
Functional Description
Shiver is a scientific HDR image viewer with a convenient GUI. It features fast OpenGL display and zoom capabilities, the comparison of several images in different tabs, LDR, HDR, and RAW support through a plugin architecture, and more.
In addition, Shiver is an image-processing program providing the ability to execute algorithms, programmed as plugins, on one or more images. Different frontends, such as the command line or a Qt-based graphical user interface, are available; depending on the frontend, different workflows are possible. The console frontend can be used when no X11 server is available or when a large number of images has to be processed. The Qt GUI allows for intuitive work and for testing processing plugins; it offers, for example, pixel picking and a convenient way to compare different processed images.
Available Shiver plugins implement, e.g., the CalTag system for automatically detecting checkerboard corners in camera calibration images.
Participant: Ivo Ihrke
Contact: Ivo Ihrke
Keyword: Matlab optical raytracing toolbox
Functional Description
The purpose of the Maori project is to provide a simple, extensible optical raytracing library in Matlab that incorporates some modern concepts from computer graphics. In particular, it features scene-graph integration, a shader model, and CSG objects, and it uses non-sequential raytracing by default. The goal is to provide a simple-to-use 3D system. In contrast to most commercial systems, 2D rotationally symmetric systems are treated as special cases of the 3D setting.
Participant: Ivo Ihrke
Contact: Ivo Ihrke
Keywords: Expressive rendering - Multi-scale analysis - Material appearance - Vector graphics - 2D animation
Functional Description
Patate is a header-only C++/CUDA library for graphics applications. It provides a collection of computer graphics techniques that incorporate the latest innovations from Inria research teams working in the field. It strives for efficiency and ease of use by focusing on low-level core operators and key algorithms, organized in modules, each tackling a specific set of issues. The central goal of the library is to drastically reduce the time and effort required to turn a research paper into a ready-to-use solution, for both commercial and academic purposes.
The library is still in its infancy and we are actively working on it to include the latest of our published research techniques. Modules will be dealing with graphics domains as varied as multi-scale analysis, material appearance, vector graphics, expressive rendering and 2D animation.
Participants: Gaël Guennebaud, Pascal Barla, Simon Boyé, Gautier Ciaudo and Nicolas Mellado
Contact: Gaël Guennebaud
Functional Description
The Radiance Scaling technique has received some interest in the archaeology community, in particular for enhancing details in carved stones. For this reason, we have made it available as a plugin for the open-source software MeshLab.
Participants: Romain Vergne, Olivier Dumas and Pascal Barla
Contact: Pascal Barla
Opaque materials are represented in computer graphics by Bidirectional Reflectance Distribution Functions (BRDFs), which are 4D functions of the light and view directions. Dealing with such a high dimensionality is problematic for the modeling and rendering of material appearance. The choice of a BRDF parametrization greatly simplifies this task by identifying the axes along which most variations occur in common opaque materials. The 4D parametrization of Rusinkiewicz is classically used in graphics, in particular because of its direct connection to micro-facet theory. Alternative parametrizations by Neumann et al. and Stark et al. have been proposed, but they are restricted to 2D and hence to a restricted class of materials.
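For reference, Rusinkiewicz's parametrization is built around the halfway vector between the light direction ω_i and the view direction ω_o:

\[
\mathbf{h} = \frac{\omega_i + \omega_o}{\left\| \omega_i + \omega_o \right\|},
\]

and re-expresses the BRDF as ρ(θ_h, φ_h, θ_d, φ_d), where (θ_h, φ_h) are the spherical coordinates of h and (θ_d, φ_d) those of ω_i in a frame aligned with h. Specular peaks, which concentrate around θ_h ≈ 0, become axis-aligned in these coordinates, which is what makes the parametrization convenient for micro-facet models and measured data alike.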
We have extended the work of Neumann et al. and Stark et al. into a pair of 4D BRDF parametrizations with explicit changes of variables. Revealing some of their mathematical properties and their relationships to Rusinkiewicz's parametrization allows us to better understand their benefits and drawbacks for representing measured BRDFs. Our preliminary study suggests that the alternative parametrization inspired by Stark et al. is superior and should thus be considered in future work involving BRDFs.
Finding an appropriate BRDF model, with meaningful physical parameters, that can accurately represent measured data remains a challenging task. We show that two different physical phenomena are present in measured reflectance: reflection and diffraction. Taking both into account, we present a reflectance model that is compact and a very good approximation of measured reflectance (cf. Figure ). Designers can act on the model parameters, related to surface properties, to create new materials.
On the one hand, a BRDF is a complex 4D function, which should obey reciprocity and energy conservation. On the other hand, when computing the radiance reaching the eye from a surface point, the view direction is held fixed; in this respect, we are only interested in a 2D BRDF slice that acts as a filter on the local environment lighting. Our goal is to understand the statistical properties of such a filter as a function of viewing elevation. To this end, we have conducted a study of measured BRDFs in which we computed statistical moments for each viewing angle. We show that some moments are correlated across dimensions and orders, while others are close to zero and may safely be discarded. Our study opens the way to novel applications such as moment-based manipulation of measured BRDFs, material estimation, and image-based material editing. It also puts empirical and physically-based material models in a new perspective, by revealing their effect as view-dependent filters.
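In a generic notation (the published study may use a different projection or normalization), the view-dependent slice and its moments can be written as:

\[
f_{\omega_o}(\omega_i) = \rho(\omega_i, \omega_o), \qquad
\mu_n(\omega_o) = \frac{\int_{\Omega} \theta_i^{\,n} \, f_{\omega_o}(\omega_i)\, \mathrm{d}\omega_i}{\int_{\Omega} f_{\omega_o}(\omega_i)\, \mathrm{d}\omega_i},
\]

i.e., the normalized slice is treated as a distribution over incoming directions, and its moments (mean, variance, skewness, ...) are tracked as the viewing elevation varies.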
Realistic images can be rendered by simulating light transport with Monte Carlo methods. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on light fields are attractive due to their ability to faithfully capture both far-field and near-field effects, and due to the possibility of acquiring them directly. Since acquired light sources have arbitrary frequency content and possibly high dimensionality (4D), using such light sources for realistic rendering leads to performance problems. We have investigated how to balance the accuracy of the representation and the efficiency of the simulation (cf. Figure ). The work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. This foundational work has led to new sampling techniques for physically-based rendering with light-field light sources. The results show that physically accurate rendering with realistic light sources can be achieved in real time.
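Sample quality matters because the variance of the standard importance-sampling estimator (textbook form, recalled here for context) is governed by how well the sampling density matches the integrand:

\[
I = \int f(x)\, \mathrm{d}x \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f(x_k)}{p(x_k)}, \qquad x_k \sim p,
\]

so the closer the density p of the generated samples is to being proportional to the light source's emission f, the fewer samples are needed for a low-noise, unbiased result.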
The aberrations of an optical system can be described in terms of the wave aberrations, defined as the departure from the ideal spherical wavefront; or the ray aberrations, which are in turn the deviations from the paraxial ray intersections measured in the image plane. The classical connection between the two descriptions is an approximation, the error of which has, so far, not been quantified analytically.
We derive exact analytical equations for computing the wavefront surface, the aberrated ray directions, and the transverse ray aberrations in terms of the wave aberrations (also known as the Optical Path Difference, OPD) and the reference sphere. We introduce precise conditions for a function to be an OPD function, show that every such function has an associated wavefront, and study the error arising from the classical approximation. We establish strict conditions for the error to be small, and we illustrate our results with numerical simulations. Our results show that large numerical apertures and OPD functions with strong gradients yield larger approximation errors.
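The classical approximation in question relates the transverse ray aberrations to the gradient of the wave aberration over the pupil (up to sign and normalization conventions, which vary across textbooks):

\[
\varepsilon_x \approx -\frac{R}{n'} \frac{\partial W}{\partial x_p}, \qquad
\varepsilon_y \approx -\frac{R}{n'} \frac{\partial W}{\partial y_p},
\]

where W(x_p, y_p) is the OPD over pupil coordinates (x_p, y_p), R the radius of the reference sphere, and n' the refractive index of image space. Our exact equations quantify when this first-order relation breaks down.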
We explore the use of inexpensive consumer light-field camera technology for the purpose of light-field microscopy. Our experiments are based on the Lytro (first-generation) camera. Unfortunately, the optical systems of the Lytro and those of microscopes are not compatible, leading to a loss of light-field information due to angular and spatial vignetting when directly recording microscopic pictures. We therefore consider an adaptation of the Lytro optical system. We demonstrate that using the Lytro directly as an ocular replacement leads to unacceptable spatial vignetting, but we also found a setting that allows the use of the Lytro camera in a virtual imaging mode, which prevents the information loss to a large extent. We analyze this new virtual imaging mode and use it in two different setups for implementing light-field microscopy with a Lytro camera. As a practical result, we show that the camera can be used for low-magnification work, as is common, e.g., in quality control and surface characterization (cf. Figure ). We achieve a maximum spatial resolution of about 6.25 micrometers, albeit at a limited SNR for the side views.
In sculpting software, MatCaps are often used by artists as a simple and efficient way to design appearance. Similar to LitSpheres, they convey material appearance through a single image of a sphere, which can easily be transferred to an individual 3D object. Their main purpose is to capture plausible material appearance without having to specify lighting and material separately. However, this also restricts their usability, since material and lighting cannot later be modified independently: manipulations as simple as rotating the lighting with respect to the view are not possible. We show how to decompose a MatCap into a new representation that permits dynamic appearance manipulation. We consider that the material of the depicted sphere acts as a filter in the image, and we introduce an algorithm that estimates a few relevant filter parameters interactively. We show that these parameters are sufficient to convert the input MatCap into our new representation, which enables real-time appearance manipulation through simple image re-filtering operations. This includes lighting rotations, the painting of additional reflections, material variations, selective color changes, and silhouette effects that mimic Fresnel or asperity scattering (cf. Figure ).
In collaboration with Technicolor, we developed a method to generate procedural models with global structures, such as growing plants, on existing surfaces at interactive rates. Our approach extends shape grammars to enable context-sensitive procedural generation on the GPU. To this end, we unified the representation of external contexts as texture maps, which can be spatially varying parameters controlling the grammar expansion through very fast texture fetches (e.g., a density map). External contexts also include the shape of the underlying surface itself, which we represent as a texture atlas of geometry images. Extrusion along the surface is then performed by a marching rule working in texture space using indirection pointers. We also introduce a lightweight deformation mechanism for the generated geometry that maintains C1 continuity between the terminal primitives while taking into account the shape and trajectory variations. Our method is entirely implemented on the GPU, and it allows highly detailed models to be generated dynamically on surfaces at interactive rates (cf. Figure ). Finally, by combining marching rules and generic contexts, users can easily guide the growing process by directly painting on the surface, with live feedback on the generated model. This provides user-friendly editing in production environments.
Computing Boolean operations (Booleans) of 3D polyhedra/meshes is a basic and essential task in many domains, such as computational geometry, computer-aided design, and constructive solid geometry. Booleans are challenging to compute when dealing with meshes, because of topological changes, geometric degeneracies, etc. Most prior art techniques either suffer from robustness issues, deal with a restricted class of input/output meshes, or provide only approximate results.
We overcome these limitations with an exact and robust approach that operates on general surface meshes (closed and orientable). Our method is based on a few geometric and topological predicates that make it possible to handle all input/output cases considered degenerate in existing solutions, such as voids and non-manifold, disconnected, or unbounded meshes, and to robustly deal with special input configurations. Our experiments showed that this more general approach is also more robust and more efficient than Maya's implementation.
During this work, we also developed a complete benchmark intended to validate Boolean algorithms under relevant and challenging scenarios, and we successfully validated both our algorithm and our implementation with it.
Moving least squares (MLS) surface approximation is a popular tool for the processing and reconstruction of non-structured and noisy point clouds. We introduce a new variant that improves the approximation quality when the underlying surface is assumed to be locally developable, which is often the case for point clouds coming from the acquisition of manufactured objects. Our approach follows Levin's classical MLS procedure: the point cloud is locally approximated by a bivariate quadratic polynomial height field defined in a local tangent frame. The a priori developability knowledge is introduced by constraining the fitted polynomials to have zero Gaussian curvature, leading to the fitting of so-called parabolic cylinders. When the local developability assumption cannot be made unambiguously, our fitted parabolic cylinders seamlessly degenerate to linear approximations. We show that our novel MLS kernel reconstructs surfaces that are more locally developable than those of previous MLS methods, while remaining faithful to the data.
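In a generic notation (ours, for illustration), the fitted height field and the developability constraint read:

\[
h(u,v) = a u^2 + b u v + c v^2 + d u + e v + f, \qquad 4ac - b^2 = 0,
\]

where the second condition forces the determinant of the (constant) Hessian of h to vanish: the Gaussian curvature of the height field is then zero everywhere, and the quadric degenerates to a parabolic cylinder (or, when a = b = c = 0, to a plane).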
CIFRE PhD contract with Technicolor 2 (2014-2018)
Participants: A. Dufay, X. Granier, and R. Pacanowski
For this project, we aim at providing interactive previsualization of complex lighting with a smooth transition to the final solution.
Currently, the characterization and display of the real world are limited to techniques focusing on a subset of the necessary physical phenomena. A lot of work has been done to acquire geometric properties; however, acquiring the geometry of an object with complex reflection properties or dynamic behavior is still a challenge. Similarly, the characterization of a material is limited to uniform objects when the material is complex, or to diffuse materials when one is interested in its spatial variations.
To reach full interaction between the real and virtual worlds (augmented reality, mixed reality), it is necessary to acquire the real world in all its aspects (spatial, spectral, temporal) and to render it in all these dimensions. To achieve this goal, a number of theoretical and practical tools will be developed, around mixed-reality solutions and a theoretical framework that supports the entire project.
MANAO
Leader G. Guennebaud
This project aims at the development of novel representations for the efficient rendering and manipulation of highly detailed shapes in a multi-resolution context.
MAVERICK, REVES
Leader N. Holzschuch (MAVERICK)
The project ALTA aims at analyzing the light transport equations and at using the resulting representations and algorithms for more efficient computation. We target lighting simulations, either off-line, high-quality simulations or interactive simulations.
POTIOC, MANAO, LIG-CNRS-UJF, Diotasoft
Leader M. Hachet (POTIOC)
The ISAR project focuses on the design, implementation, and evaluation of new interaction paradigms for spatial augmented reality, and on systematically exploring the design space.
MAVERICK, LP2N-CNRS (MANAO), Musée d'Ethnographie de Bordeaux, OCÉ-Print
Leader N. Holzschuch (MAVERICK)
Local Leader R. Pacanowski (LP2N-CNRS)
Museums operate under conflicting constraints: they have to preserve the artifacts they store while making them available to the public and to researchers. Cultural artifacts are so fragile that simply exposing them to light degrades them. 3D scanning, combined with virtual reality and 3D printing, has been used for the preservation and study of sculptures, but this approach is limited: it acquires the geometry and the color, but not complex material properties. Current 3D printers are also limited in the range of colors they can reproduce. Our goal in this project is to address the entire chain of material acquisition and restitution. Our idea is to scan complex cultural artifacts, such as silk cloths, capturing the full geometry of their materials at the microscopic level, and then to reproduce them for study by the public and by researchers. Reproduction can be done either through 2.5D printing or through virtual-reality displays.
IMB (UPR 5251), LaBRI (UMR 5800), Inria (Centre Bordeaux Sud-Ouest), I2M (new UMR since 2011), IMS (UMR 5218), CEA/DAM
Some members of MANAO participate in the local initiative CPU. As it covers many topics, from fluid mechanics and structural safety to timetable management, network and protocol security, and energy consumption management, numerical technology can impact a whole industrial sector. In order to address problems in the domain of certification or qualification, we want to develop numerical sciences to a level where they can be used as certification tools.
Title: Perceptual Representation of Illumination, Shape and Material
Program: FP7
Duration: January 2013 - December 2016
Coordinator: JUSTUS-LIEBIG-UNIVERSITAET GIESSEN
Partners:
Justus-Liebig-Universitaet Giessen (Germany)
Katholieke Universiteit Leuven (Belgium)
Next Limit Sl (Spain)
Technische Universiteit Delft (Netherlands)
the Chancellor, Masters and Scholars of The University of Cambridge (United Kingdom)
Bilkent Üniversitesi (Turkey)
Universite Paris Descartes (France)
The University of Birmingham (United Kingdom)
Local Leader: Pascal Barla
Visual perception provides us with a richly detailed representation of the surrounding world, enabling us to make subtle judgements of 1) 3D shape, 2) the material properties of objects, and 3) the flow of illumination within a scene. Together, these three factors determine the intensity of a surface in the image. Estimating scene properties is crucial for guiding action and making decisions like whether food is edible. Visual ‘look and feel’ also plays a key role in industrial design, computer graphics and other industries. Despite this, little is known about how we visually estimate the physical properties of objects and illumination. Previous research has mainly focussed on one or two of the three causal factors independently, and from the viewpoint of a specific discipline. By contrast, in PRISM we take an integrative approach, to understand how the brain creates a richly detailed representation of the world by looking at how all three factors interact simultaneously. PRISM is radically interdisciplinary, uniting experts from psychology, neuroscience, computer science and physics to understand both the analysis and synthesis of shape, shading and materials. PRISM is intersectoral by uniting researchers from seven leading Universities and two industrial partners, enabling impact in basic research, technology and the creative industries. Through research projects, cross-discipline visits, and structured Course Modules delivered through local and network-wide training events, we will endow PRISM fellows with an unusually broad overview and the cross-sector skills they need to become future leaders in European research and development. Thus, by delivering early-career training embedded in a cutting-edge research programme, we aim to 1) springboard the next generation of interdisciplinary researchers on perceptual representations of 3D scenes and 2) cement long-term collaborations between sectors to enhance European perception research and its applications.
Partner : KAUST - King Abdullah University of Science & Technology
We propose a new approach for snapshot imaging of time-resolved, non-stationary 3D fluid flows, which we term Rainbow Particle Imaging Velocimetry (RainbowPIV). Using only a single camera, RainbowPIV will be able to track a dense set of particles advected in the flow. This is achieved by illuminating the flow volume with a stack of monochromatic light planes at different wavelengths (a “rainbow”). Particles are tracked in 3D by following both their 2D spatial position and their change in color, which depends on which light plane they traverse.
RainbowPIV will provide dense measurements of 3D velocity vectors, thus yielding a dense representation of the 3D velocity field. This will allow us to accurately image and understand many new types of flow, including turbulent flows and particle trajectories within complex 3D geometries with limited optical access. After the initial exploration stage covered in this proposal, RainbowPIV could find many applications in science and engineering, for example to help understand combustion processes or flow through catalytic converters, between turbine blades, and inside inlet manifolds.
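The depth-from-color principle can be illustrated with a short, hypothetical Python sketch. The plane wavelengths, depths, and function names below are assumed for illustration only; the actual project additionally involves estimating wavelengths from RGB measurements and jointly optimizing dense particle trajectories, which this sketch omits.

```python
import numpy as np

# Hypothetical setup (illustrative values): monochromatic light planes at
# evenly spaced wavelengths, each illuminating one known depth slice.
WAVELENGTHS = np.linspace(450.0, 650.0, 20)  # nm, one per light plane (assumed)
DEPTHS = np.linspace(0.0, 40.0, 20)          # mm, depth of each plane (assumed)

def depth_from_wavelength(measured_nm):
    """Assign a particle to the depth of the nearest light plane,
    given the wavelength estimated from its observed color."""
    return DEPTHS[np.argmin(np.abs(WAVELENGTHS - measured_nm))]

def trajectory_3d(xy_track, wavelength_track):
    """Combine per-frame 2D image positions (from the camera) with
    per-frame wavelength estimates (from the color) into a 3D trajectory."""
    z = np.array([depth_from_wavelength(w) for w in wavelength_track])
    return np.column_stack([np.asarray(xy_track), z])

# Example: a particle drifting across the image while crossing successive
# light planes, i.e. moving in depth as its color shifts.
track = trajectory_3d([(10.0, 5.0), (11.0, 5.2), (12.1, 5.3)],
                      [460.0, 480.0, 500.0])
```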
Organization of Dagstuhl Seminar on “Computational Imaging”
Steering Board Eurographics Workshop on Graphics and Cultural Heritage
CCD 2015, CVPR 2015, Digital Heritage 2015, Eurographics 2016, Expressive 2015 (NPAR-SBIM-CAe), ICCP 2015, ICCV 2015, GMP 2016
ACM Siggraph 2015, ACM Siggraph Asia 2015, CCD 2015, CHI 2015, CVPR 2015, Eurographics 2016, Eurographics Symposium on Rendering 2015, ICCP 2015, ICCV 2015, Pacific Graphics 2015
SIAM Journal on Scientific Computing, OSA Journal of the Optical Society of America A, IEEE Transactions on Pattern Analysis and Machine Intelligence, ACM Transactions on Graphics, IEEE Transactions on Visualization and Computer Graphics, Computer Graphics Forum, The Visual Computer, Graphical Models (GMOD)
Journée de la Recherche en Robotique, Carl Zeiss Inc., RWTH Aachen, KAUST, Bordeaux University, Heidelberg University
FMX 2015 - Conference on Animation, Effects, Games and Transmedia www.
DFG grant proposal, Schloss Dagstuhl seminar proposal
Some members are part of: Committee of the Bordeaux Sud-Ouest Inria Center, Evaluation Committee of Inria, Scientific Council of the Image & Sound team at LaBRI.
The members of our team are involved in teaching computer science at University of Bordeaux, ENSEIRB Engineering School, and Institut d'Optique Graduate School (IOGS). This covers general computer science as well as the following graphics-related topics:
Master : Pierre Bénard and Romain Pacanowski, Photorealistic and Expressive Image Synthesis, 60 HETD, M2, Univ. Bdx, France.
Master : Xavier Granier, Numerical Techniques, 45 HETD, M1, IOGS, France
Master : Xavier Granier, Image Synthesis, 14 HETD, M2, IOGS, France
Master : Gaël Guennebaud, Geometric Modeling, 22 HETD, M2, IOGS, France
Master : Xavier Granier, Romain Pacanowski, Boris Raymond, Brett Ridel, Algorithmic and Object Programming, 60 HETD, M1, IOGS, France
Master : Xavier Granier, Radiometry, 10 HETD, M1, IOGS, France
Master : Xavier Granier, Romain Pacanowski, Colorimetry and Appearance Modeling, 20 HETD, M1, IOGS, France.
Master : Gaël Guennebaud and Pierre Bénard, High-performance 3D Graphics, 60 HETD, M1, Univ. Bdx and IOGS, France.
Master : Pierre Bénard, Virtual Reality, 24 HETD, M2, Univ. Bdx, France.
Master : Ivo Ihrke, Computational Optical Imaging, 30 HETD, M1, IOGS, France
Master : Ivo Ihrke, Introduction to Image Processing, 30 HETD, M1, IOGS, France
Master : Ivo Ihrke, Advanced Display Technology, 12 HETD, M1, IOGS, France
Master : Christophe Schlick, Pierre Bénard, Image Synthesis, 60 HETD, M2, ENSEIRB, France
Licence : Patrick Reuter, Digital Imaging, 36 HETD, L3, Univ. Bdx, France.
Some members are also in charge of fields of study:
Master : Xavier Granier, Optics and Computer Science, M1/M2, IOGS, France.
Licence : Patrick Reuter, Science and Modeling, L2, Univ. Bdx, France.
PhD : Alkhazur Manakov, Calibration and Characterization of Advanced Image-Based Measurement Systems, Saarland University, I. Ihrke
PhD : Boris Raymond, Rendering and manipulation of anisotropic materials, Univ. Bordeaux, P. Barla & G. Guennebaud & X. Granier
PhD : John Restrepo, Plenoptic Imaging and Computational Image Quality Metrics, Univ. Bordeaux, I. Ihrke
PhD : Brett Ridel, Interactive spatial augmented reality, Univ. Bordeaux, P. Reuter & X. Granier
PhD : Carlos Zubiaga Pena, Image-space editing of appearance, Univ. Bordeaux, P. Barla & X. Granier
PhD : Florian Canezin, Implicit Modeling, Univ. Toulouse III, G. Guennebaud & Loïc Barthe
PhD : Mathieu Diawara, Computer-Assisted 2D Animation, Univ. Bordeaux, P. Barla, P. Bénard & X. Granier
PhD : Arthur Dufay, Adaptive high-quality rendering of virtual environments with complex photometry, Univ. Bordeaux, J.-E. Marvie, R. Pacanowski & X. Granier
PhD : Thibaud Lambert, Real-time rendering of highly detailed 3D models, Univ. Bordeaux, G. Guennebaud & P. Bénard
PhD : Loïs Mignard-Debize, Plenoptic function and its application to spatial augmented reality, Univ. Bordeaux, P. Reuter & I. Ihrke
PhD : Antoine Lucat, Appearance Acquisition and Rendering, CNRS (LP2N) & IOGS, R. Pacanowski & X. Granier
February 2015: Journée numérique au Sénat, showroom "Modèles 3D, réalité augmentée et sauvegarde du patrimoine" (3D models, augmented reality, and heritage preservation), http://