2024 Activity Report: Project-Team MAVERICK
RNSR: 201221005J. Research center: Inria Centre at Université Grenoble Alpes
- In partnership with: CNRS, Université Grenoble Alpes
- Team name: Models and Algorithms for Visualization and Rendering
- In collaboration with: Laboratoire Jean Kuntzmann (LJK)
- Domain: Perception, Cognition and Interaction
- Theme: Interaction and visualization
Keywords
Computer Science and Digital Science
- A5.2. Data visualization
- A5.5. Computer graphics
- A5.5.1. Geometrical modeling
- A5.5.2. Rendering
- A5.5.3. Computational photography
- A5.5.4. Animation
Other Research Topics and Application Domains
- B5.5. Materials
- B5.7. 3D printing
- B9.2.2. Cinema, Television
- B9.2.3. Video games
- B9.2.4. Theater
- B9.6.6. Archeology, History
1 Team members, visitors, external collaborators
Research Scientists
- Fabrice Neyret [Team leader, CNRS, Senior Researcher, HDR]
- Nicolas Holzschuch [INRIA, Senior Researcher (60% release for union duties), HDR]
- Cyril Soler [INRIA, Researcher, HDR]
Faculty Members
- Georges-Pierre Bonneau [UGA, Professor, HDR]
- Joëlle Thollot [GRENOBLE INP, Professor, on Inria delegation from Mar 2024, HDR]
- Thibault Tricard [GRENOBLE INP, Associate Professor]
- Romain Vergne [UGA, Associate Professor, on Inria delegation from Sep 2024]
- Romain Vergne [UGA, Associate Professor, until Aug 2024]
Post-Doctoral Fellow
- Nolan Mestres [GRENOBLE INP, Post-Doctoral Fellow, from Jun 2024]
PhD Students
- Ambre Adjevi-Neglokpe [UGA, from Oct 2024]
- Mohamed Amine Farhat [UGA]
- Ana Maria Granizo Hidalgo [UGA, until Nov 2024]
- Pacome Luton [UGA, from Sep 2024]
- Matheo Moinet [UGA]
- Antoine Richermoz [UGA]
- Ran Yu [Inria, Owens Corning, CIFRE]
Technical Staff
- Nolan Mestres [INRIA, Engineer, until May 2024]
Interns and Apprentices
- Ambre Adjevi-Neglokpe [INRIA, Intern, from Apr 2024 until Aug 2024]
- Aurelie Gillot [INRIA, Intern, from Jun 2024 until Aug 2024]
- Pacome Luton [ENS DE LYON, Intern, until Mar 2024]
- Phu Hung Nguyen [INRIA, Intern, until Aug 2024]
- Etienne Renoult [GRENOBLE INP, Intern, until Jun 2024]
Administrative Assistant
- Diane Courtiol [INRIA]
Visiting Scientist
- Arthur Novat [Atelier Pierre Novat]
2 Overall objectives
Computer-generated pictures and videos are now ubiquitous, whether for leisure activities, such as special effects in motion pictures and video games, or for more “serious” activities, such as visualization and simulation.
Maverick was created as a research team in January 2012 and upgraded to a full research project in January 2014. We deal with image synthesis methods. We place ourselves at the end of the image production pipeline, where pictures are generated and displayed (see Figure 1). We accept many possible inputs: datasets, video streams, pictures and photographs, (animated) geometry from a virtual world... We produce pictures and videos as output.
These pictures will be viewed by humans, and we consider this fact an important point of our research strategy, as it provides the benchmark for evaluating our results: the pictures and animations produced must be able to convey their message to the viewer. The actual message depends on the specific application: data visualization, exploring virtual worlds, designing paintings and drawings... Our vision is that all these applications share common research problems: ensuring that the important features are perceived, avoiding clutter and aliasing, efficient internal data representation, etc.
Computer Graphics, and especially Maverick, is at the crossroads of fundamental research and industrial applications. We both address the constraints and needs of applicative users and target long-term research issues such as sampling and filtering.

Figure 1: Position of the Maverick team in the graphics pipeline.
The Maverick project-team aims at producing representations and algorithms for efficient, high-quality computer generation of pictures and animations through the study of four research problems:
- Computer Visualization, where we take as input a large localized dataset and represent it in a way that will let an observer understand its key properties.
- Expressive Rendering, where we create an artistic representation of a virtual world.
- Illumination Simulation, where our focus is modelling the interaction of light with the objects in the scene.
- Complex Scenes, where our focus is rendering and modelling highly complex scenes.
The heart of Maverick is understanding what makes a picture useful, powerful and interesting for the user, and designing algorithms to create these pictures.
We will address these research problems through three interconnected approaches:
- working on the impact of pictures, by conducting perceptual studies, measuring and removing artefacts and discontinuities, evaluating the user response to pictures and algorithms.
- developing representations for data, through abstraction, stylization and simplification.
- developing new methods for predicting the properties of a picture (e.g. frequency content, variations) and adapting our image-generation algorithm to these properties.
A fundamental element of the Maverick project-team is that the research problems and the scientific approaches are all cross-connected. Research on the impact of pictures is of interest in three different research problems: Computer Visualization, Expressive rendering and Illumination Simulation. Similarly, our research on Illumination simulation will gather contributions from all three scientific approaches: impact, representations and prediction.
3 Research program
The Maverick project-team aims at producing representations and algorithms for efficient, high-quality computer generation of pictures and animations through the study of four research problems:
- Computer Visualization where we take as input a large localized dataset and represent it in a way that will let an observer understand its key properties. Visualization can be used for data analysis, for the results of a simulation, for medical imaging data...
- Expressive Rendering, where we create an artistic representation of a virtual world. Expressive rendering corresponds to the generation of drawings or paintings of a virtual scene, but also to some areas of computational photography, where the picture is simplified in specific areas to focus the attention.
- Illumination Simulation, where we model the interaction of light with the objects in the scene, resulting in a photorealistic picture of the scene. Research includes improving the quality and photorealism of pictures, including more complex effects such as depth of field or motion blur. We are also working on accelerating the computations, both for real-time photorealistic rendering and for offline, high-quality rendering.
- Complex Scenes, where we generate, manage, animate and render highly complex scenes, such as natural scenes with forests, rivers and oceans, but also large datasets for visualization. We are especially interested in interactive visualization of complex scenes, with all the associated challenges in terms of processing and memory bandwidth.
The fundamental research interest of Maverick is first, understanding what makes a picture useful, powerful and interesting for the user, and second designing algorithms to create and improve these pictures.
3.1 Research approaches
We will address these research problems through three interconnected research approaches:
Picture Impact
Our first research axis deals with the impact pictures have on the viewer, and how we can improve this impact. Our research here will target:
- evaluating user response: we need to evaluate how viewers respond to the pictures and animations generated by our algorithms, through user studies, either asking viewers what they perceive in a picture or measuring how their body reacts (eye tracking, position tracking).
- removing artefacts and discontinuities: temporal and spatial discontinuities perturb viewer attention, distracting the viewer from the main message. These discontinuities occur during the picture creation process; finding and removing them is a difficult process.
Data Representation
The data we receive as input for picture generation is often unsuitable for interactive high-quality rendering: too many details, no spatial organization... Similarly the pictures we produce or get as input for other algorithms may contain superfluous details.
One of our goals is to develop new data representations, adapted to our requirements for rendering. This includes fast access to the relevant information, but also access to the specific hierarchical level of information needed: we want to organize the data in hierarchical levels, pre-filter it so that sampling at a given level also gives information about the underlying levels. Our research for this axis includes filtering, data abstraction, simplification and stylization.
The input data can be of any kind: geometric data, such as the model of an object, scientific data before visualization, pictures and photographs. It can be time-dependent or not; time-dependent data bring an additional level of challenge on the algorithm for fast updates.
Prediction and simulation
Our algorithms for generating pictures require computations: sampling, integration, simulation... These computations can be optimized if we already know the characteristics of the final picture. Our recent research has shown that it is possible to predict the local characteristics of a picture by studying the phenomena involved: the local complexity, the spatial variations, their orientation...
Our goal is to develop new techniques for predicting the properties of a picture, and to adapt our image-generation algorithms to these properties, for example by sampling less in areas of low variation.
Our research problems and approaches are all cross-connected. Research on the impact of pictures is of interest for three different research problems: Computer Visualization, Expressive rendering and Illumination Simulation. Similarly, our research on Illumination simulation will use all three research approaches: impact, representations and prediction.
3.2 Cross-cutting research issues
Beyond the connections between our problems and research approaches, we are interested in several issues, which are present throughout all our research:
- Sampling: a ubiquitous process occurring in all our application domains, whether photorealistic rendering (e.g. photon mapping), expressive rendering (e.g. brush strokes), texturing or fluid simulation (Lagrangian methods). When sampling and reconstructing a signal for picture generation, we have to ensure both coherence and homogeneity. By coherence, we mean not introducing spatial or temporal discontinuities in the reconstructed signal. By homogeneity, we mean that samples should be placed regularly in space and time. For a time-dependent signal, these requirements conflict with each other, opening new areas of research. (A small illustration of the homogeneity/coherence tension is sketched after this list.)
- Filtering: another ubiquitous process, occurring in all our application domains, whether in realistic rendering (e.g. for integrating height fields, normals, material properties), expressive rendering (e.g. for simplifying strokes) or texturing (through non-linearities and discontinuities). It is especially relevant when replacing a signal or data with a lower-resolution version (for hierarchical representation); this involves filtering the data with a reconstruction kernel representing the transition between levels.
- Performance and scalability: common requirements for all our applications. We want our algorithms to be usable, which implies that they can be used on large and complex scenes, placing great importance on scalability. For some applications, we target interactive and real-time use, with an update frequency between 10 Hz and 120 Hz.
- Coherence and continuity: in space and time, a common requirement of realistic as well as expressive models, which must be ensured despite contradictory requirements. We want to avoid flickering and aliasing.
- Animation: our input data is likely to be time-varying (e.g. animated geometry, physical simulation, time-dependent datasets). A common requirement for all our algorithms and data representations is that they must be compatible with animated data (fast updates for data structures, low-latency algorithms...).
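As an illustration of these notions, the following minimal sketch (our own generic example, not team code) generates stratified “jittered” samples: the strata provide homogeneity, the random offsets avoid aliasing, and re-drawing the offsets at every frame is precisely what would break temporal coherence.

```cpp
// Stratified "jittered" sampling of the unit square: one random sample per
// cell. Homogeneity comes from the strata; randomness avoids aliasing.
#include <cstdio>
#include <random>
#include <vector>

struct Sample { float x, y; };

std::vector<Sample> jitteredSamples(int nx, int ny, unsigned seed) {
    std::mt19937 rng(seed);  // fixed seed: keeps the pattern coherent in time
    std::uniform_real_distribution<float> u(0.f, 1.f);
    std::vector<Sample> samples;
    samples.reserve(nx * ny);
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            samples.push_back({(i + u(rng)) / nx, (j + u(rng)) / ny});
    return samples;
}

int main() {
    for (const Sample& p : jitteredSamples(4, 4, 42u))
        std::printf("%.3f %.3f\n", p.x, p.y);
}
```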
3.3 Methodology
Our research is guided by several methodological principles:
- Experimentation: to find solutions and phenomenological models, we use experimentation, performing statistical measurements of how a system behaves. We then extract a model from the experimental data.
- Validation: for each algorithm we develop, we look for experimental validation: measuring the behavior of the algorithm, how it scales, how it improves over the state of the art... We also compare our algorithms to the exact solution. Validation is harder for some of our research domains, but it remains a key principle for us.
- Reducing the complexity of the problem: the equations describing certain behaviors in image synthesis can have a large degree of complexity, precluding computations, especially in real time. This is true for physical simulation of fluids, tree growth, illumination simulation... We look for emerging phenomena and phenomenological models to describe them. Using these, we simplify the theoretical models in a controlled way, to improve user interaction and accelerate the computations.
- Transferring ideas from other domains: Computer Graphics is, by nature, at the interface of many research domains: physics for the behavior of light, applied mathematics for numerical simulation, biology, algorithmics... We import tools from all these domains, and keep looking for new tools and ideas.
- Developing new fundamental tools: in situations where specific tools are required for a problem, we proceed from a theoretical framework to develop them. These tools may in return have applications in other domains, and we are ready to disseminate them.
- Collaborating with industrial partners: we have long-standing experience of collaboration with industrial partners. These collaborations bring us new problems to solve, with short-term or medium-term transfer opportunities. When we cooperate with these partners, we have to find out what they need, which can be very different from what they want, i.e. their expressed need.
4 Application domains
The natural application domain for our research is the production of digital images, for example for movies and special effects, virtual prototyping, video games... Our research has also been applied to tools for generating and editing images and textures, for example generating textures for maps. Our current application domains are:
- Offline and real-time rendering in movie special effects and video games;
- Virtual prototyping;
- Scientific visualization;
- Content modeling and generation (e.g. generating textures for video games, capturing reflectance properties, etc.);
- Image creation and manipulation.
5 Social and environmental responsibility
While research in the Maverick team generally involves power-hungry computer hardware (e.g. teraflop graphics cards) and heavy computations, the objective of most of our work is to improve the performance of algorithms or to create new methods that obtain results at a lower computational cost.
6 Highlights of the year
6.1 Awards
- Antoine Richermoz – First place at the JFIG Blender Geometry Node contest
7 New software, platforms, open data
7.1 New software
7.1.1 GRATIN
- Keywords: GLSL Shaders, Vector graphics, Texture Synthesis
- Functional Description: Gratin is a node-based compositing software for creating, manipulating and animating 2D and 3D data. It uses an internal directed acyclic multi-graph and provides an intuitive user interface that allows users to quickly design complex prototypes. Gratin has several properties that make it useful for researchers and students. (1) It works in real time: everything is executed on the GPU, using OpenGL, GLSL and/or CUDA. (2) It is easily programmable: users can directly write GLSL scripts inside the interface, or create new C++ plugins that will be loaded as new nodes in the software. (3) All the parameters can be animated using keyframe curves to generate videos and demos. (4) The system makes it easy to exchange nodes, groups of nodes or full pipelines between people.
- Contact: Romain Vergne
- Participants: Pascal Barla, Romain Vergne
- Partner: UJF
7.1.2 HQR
- Name: High Quality Renderer
- Keywords: Lighting simulation, Materials, Plug-in
- Functional Description: HQR is a global lighting simulation platform. It provides algorithms for handling materials, geometry and light sources, accepting various industry formats by default. It also provides algorithms for solving light transport simulation problems, such as photon mapping, Metropolis light transport and bidirectional path tracing. A plugin system allows users to implement their own parts, enabling researchers to test new algorithms for specific tasks.
- Contact: Cyril Soler
- Participant: Cyril Soler
7.1.3 libylm
- Name: LibYLM
- Keyword: Spherical harmonics
- Functional Description: This library implements spherical and zonal harmonics. It provides the means to perform decompositions and manipulate spherical harmonic distributions, and comes with its own viewer to visualize spherical harmonic distributions. (For readers unfamiliar with the basis, a small evaluation sketch is given below.)
- Contact: Cyril Soler
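To make the entry concrete, here is a minimal, library-independent sketch (not LibYLM's API) of the real spherical harmonics basis up to band 2; the zonal harmonics are the m = 0 terms of each band.

```cpp
// Real spherical harmonics up to band l = 2 for a unit direction (x, y, z),
// using the standard normalization constants.
#include <array>
#include <cstdio>

std::array<double, 9> shBasis(double x, double y, double z) {
    return {
        0.282095,                    // l=0, m= 0 (zonal)
        0.488603 * y,                // l=1, m=-1
        0.488603 * z,                // l=1, m= 0 (zonal)
        0.488603 * x,                // l=1, m= 1
        1.092548 * x * y,            // l=2, m=-2
        1.092548 * y * z,            // l=2, m=-1
        0.315392 * (3 * z * z - 1),  // l=2, m= 0 (zonal)
        1.092548 * x * z,            // l=2, m= 1
        0.546274 * (x * x - y * y),  // l=2, m= 2
    };
}

int main() {
    // Coefficients of a spherical distribution f are obtained by integrating
    // f(dir) * shBasis(dir) over the sphere, e.g. with Monte Carlo sampling.
    for (double v : shBasis(0.0, 0.0, 1.0)) std::printf("%g\n", v);
}
```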
7.1.4 ShwarpIt
- Name: ShwarpIt
- Keyword: Warping
- Functional Description: ShwarpIt is a simple mobile app that lets you manipulate the perception of shapes in images. Slide the ShwarpIt slider to the right to make shapes appear rounder, and to the left to make shapes appear flatter. The Scale slider gives you control over the scale of the warping deformation.
- Contact: Georges-Pierre Bonneau
7.1.5 iOS_system
- Keyword: iOS
- Functional Description: From a programmer's point of view, iOS behaves almost like a BSD Unix. Most existing open-source programs can be compiled and run on iPhones and iPads. One key exception is that there is no way to call another program (system() or fork()/exec()). This library fills the gap, providing an emulation of system() and exec() through dynamic libraries, and an emulation of fork() using threads. While threads cannot provide a perfect replacement for fork(), the result is good enough for most usage, and open-source programs can easily be ported to iOS with minimal effort. Examples of software ported using this library include TeX, Python, Lua and llvm/clang. (A conceptual sketch of the dynamic-library trick is given below.)
- Release Contributions: This version makes iOS_system available as a Swift Package, making integration into other projects easier.
- Contact: Nicolas Holzschuch
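As a conceptual illustration only (the real iOS_system code handles argument parsing, I/O redirection and much more, and the file name below is hypothetical), the core trick of emulating system() is to compile the command as a dynamic library that still exports a main symbol, then load and call it:

```cpp
// Minimal sketch of system() emulation on a POSIX-like system: load the
// "command" as a dynamic library and invoke its exported main().
#include <cstdio>
#include <dlfcn.h>

typedef int (*main_fn)(int argc, char** argv);

int emulated_system(const char* dylibPath, int argc, char** argv) {
    void* handle = dlopen(dylibPath, RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return -1; }
    main_fn entry = (main_fn)dlsym(handle, "main");  // command's entry point
    int status = entry ? entry(argc, argv) : -1;
    dlclose(handle);
    return status;
}

int main() {
    char* argv[] = {(char*)"ls", (char*)"/", nullptr};
    return emulated_system("ls.dylib", 2, argv);  // hypothetical library name
}
```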
7.1.6 Carnets for Jupyter
- Keywords: iOS, Python
- Functional Description: Jupyter notebooks are a very convenient tool for prototyping, teaching and research. Combining text, code snippets and the results of code execution, they let users write down ideas, test them and share them. Jupyter notebooks usually require a connection to a distant server, and thus a stable network connection, which is not always possible (e.g. on field trips, or during transport). Carnets runs both the server and the client locally on the iPhone or iPad, allowing users to create, edit and run Jupyter notebooks locally.
- Contact: Nicolas Holzschuch
7.1.7 a-Shell
- Keywords: iOS, Smartphone
- Functional Description: a-Shell is a terminal emulator for iOS. It behaves like a Unix terminal and lets the user run commands. All these commands are executed locally, on the iPhone or iPad. Available commands include standard terminal commands (ls, cp, rm, mkdir, tar, nslookup...) but also programming languages such as Python, Lua, C and C++. TeX is also available. Users familiar with Unix tools can run their favorite commands on their mobile device, on the go, without needing a network connection.
- Contact: Nicolas Holzschuch
7.1.8 X3D TOOLKIT
- Name: X3D Development platform
- Keywords: X3D, Geometric modeling
- Functional Description: X3DToolkit is a library to parse and write X3D files, with support for plugins and extensions.
- Contact: Cyril Soler
- Participants: Gilles Debunne, Yannick Le Goc
7.1.9 GigaVoxels
- Keywords: 3D rendering, Real-time rendering, GPU
- Functional Description: GigaVoxels is a software platform whose goal is the real-time quality rendering of very large and very detailed scenes that could not fit in memory. Its performance permits showing details over deep zooms and walking through very crowded scenes (which are rigid, for the moment). The principle is to represent data on the GPU as a sparse voxel octree whose multiscale voxel bricks are produced on demand, only when necessary and only at the required resolution, and kept in an LRU cache. A user-defined producer spans the CPU and GPU and can load, transform, or procedurally create the data. Another user-defined function is called to shade each voxel according to the user-defined voxel content, so that it is the user's choice to distribute the appearance-making at creation time (for faster rendering) or on the fly (for storageless thin procedural details). Efficient rendering is done using GPU differential cone tracing at the scale corresponding to the 3D MIP-mapping LOD, allowing quality rendering with one single ray per pixel. Data is produced on cache misses, and thus only when visible (accounting for view frustum and occlusion). Soft shadows and depth of field are easily obtained using larger cones, and are indeed cheaper than unblurred rendering. Besides the representation, data management and base rendering algorithms themselves, we also worked on real-time light transport and on quality pre-filtering of complex data. Ongoing research addresses animation. GigaVoxels is currently used for the quality real-time exploration of the detailed galaxy in ANR RTIGE. Most of the work published by Cyril Crassin (et al.) during his PhD (see http://maverick.inria.fr/Members/Cyril.Crassin/ ) is related to GigaVoxels. GigaVoxels is available for Windows and Linux under the BSD-3 licence. (A schematic sketch of the cone-tracing and caching loop is given below.)
- Contact: Fabrice Neyret
- Participants: Cyril Crassin, Eric Heitz, Fabrice Neyret, Jérémy Sinoir, Pascal Guehl, Prashant Goswami
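The following CPU-side sketch (our own illustration with hypothetical types, not the platform's API) shows the core loop: sample the octree at the level whose voxel size matches the cone footprint, and on a cache miss emit a production request instead of stalling.

```cpp
// Schematic GigaVoxels-style cone tracing: LOD follows the cone footprint,
// and missing bricks are requested for production rather than waited on.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec3 { float x, y, z; };
struct Brick { /* pre-filtered voxel payload, omitted */ };

// In a unit cube, octree level l has voxels of size 2^-l (level 0 = root).
int levelForFootprint(float footprint, int maxLevel) {
    int l = (int)std::floor(std::log2(1.0f / std::max(footprint, 1e-6f)));
    return std::clamp(l, 0, maxLevel);
}

void traceCone(Vec3 org, Vec3 dir, float apexAngle, int maxLevel,
               const std::function<const Brick*(int, Vec3)>& cacheLookup,
               const std::function<void(int, Vec3)>& requestProduction) {
    for (float t = 0.01f; t < 1.0f; ) {
        float footprint = 2.0f * t * std::tan(0.5f * apexAngle);
        int level = levelForFootprint(footprint, maxLevel);
        Vec3 p = {org.x + t * dir.x, org.y + t * dir.y, org.z + t * dir.z};
        if (const Brick* b = cacheLookup(level, p)) {
            (void)b;  // accumulate the pre-filtered voxel along the cone (omitted)
        } else {
            requestProduction(level, p);  // brick enters the LRU cache next frame
        }
        t += std::max(footprint, 0.05f);  // step size follows the cone footprint
    }
}

int main() {
    auto lookup  = [](int, Vec3) -> const Brick* { return nullptr; };  // always miss
    auto produce = [](int level, Vec3 p) {
        std::printf("produce level-%d brick near (%.2f, %.2f, %.2f)\n",
                    level, p.x, p.y, p.z);
    };
    traceCone({0.f, 0.f, 0.f}, {0.f, 0.f, 1.f}, 0.01f, 8, lookup, produce);
}
```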
7.1.10 MobiNet
- Keywords: Co-simulation, Education, Programming
- Functional Description: The MobiNet software allows for the creation of simple applications such as video games, virtual physics experiments or pedagogical math illustrations. It relies on an intuitive graphical interface and language which allow the user to program a set of mobile objects (possibly over a network). It is available in the public domain for Linux, Windows and MacOS.
- Contact: Fabrice Neyret
- Participants: Fabrice Neyret, Franck Hétroy, Joëlle Thollot, Samuel Hornus, Sylvain Lefebvre
- Partners: CNRS, LJK, INP Grenoble, Inria, IREM, Cies, GRAVIR
7.1.11 PROLAND
- Name: PROcedural LANDscape
- Keywords: Atmosphere, Masses of data, Realistic rendering, 3D, Real time, Ocean
- Functional Description: The goal of this platform is the real-time quality rendering and editing of large landscapes. All features can work with planet-sized terrains, for all viewpoints from ground to space. Most of the work published by Eric Bruneton and Fabrice Neyret (see http://evasion.inrialpes.fr/Membres/Eric.Bruneton/ ) has been done within Proland and integrated into the main branch. Proland is available under the BSD-3 licence.
- Contact: Fabrice Neyret
- Participants: Antoine Begault, Eric Bruneton, Fabrice Neyret, Guillaume Piolet
7.1.12 LavaCake
- Name: LavaCake
- Keywords: Vulkan, 3D, 3D rendering
- Functional Description: LavaCake is an open-source framework that aims to simplify the use of Vulkan in C++ for rapid prototyping. It provides multiple functions to help manipulate the usual Vulkan structures, such as queues, command buffers, render passes, shader modules, and many more.
- Contact: Thibault Tricard
8 New results
8.1 Real-Time Rendering of Complex Scenes
8.1.1 GigaVoxels 2.0
Participants: Fabrice Neyret, Antoine Richermoz.
During the PhD thesis of Cyril Crassin (2007-2011, now at NVIDIA Research) and afterwards, we developed (and distributed) the GigaVoxels platform (gigavoxels.inria.fr), allowing real-time quality walkthroughs of extremely large and detailed voxel data. It relies heavily on the GPU, with hierarchical data produced and then cached on the fly, on demand, by the ray-marcher whenever a new region becomes visible or is required at a higher resolution. Still, the limited management of concurrent tasks at the time, plus the complex treatment of tile borders at rendering time and the recollection of the information necessary to produce new data, induced far-from-optimal coverage of the full GPU power and many CPU-side synchronizations.

In his PhD, Antoine Richermoz revisits the problem with the availability of new tools: CUDA streams and graphs, sparse textures, ray-tracing cores, plus the ability to relaunch tasks from tasks, etc.
In particular, we developed a new model exploiting dynamic parallelism, where ray-rendering tasks can directly launch voxel-production tasks for missing tiles, and those relaunch the ray tasks once completed. This almost totally suppresses the SM starvation that occurred in the previous model due to synchronization between globally alternating tasks, especially at the end of the frame, where only small rendering and production tasks are emitted. This yielded a publication at High Performance Graphics 2024 6. (A CPU analogy of the scheduling scheme is sketched below.)
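To convey the scheduling idea without CUDA specifics, here is a single-threaded CPU analogy (our own illustration, not the GPU implementation): ray tasks that hit a missing tile enqueue a production task, and the completed production task re-enqueues the suspended ray task, so the worker loop never waits on a global barrier between alternating render and production phases.

```cpp
// CPU analogy of starvation-less render/production scheduling: tasks chain
// into each other through a single queue instead of alternating in phases.
#include <cstdio>
#include <functional>
#include <queue>
#include <set>

int main() {
    std::queue<std::function<void()>> tasks;
    std::set<int> resident;  // tiles already produced (stand-in for the cache)

    std::function<void(int)> renderRay = [&](int tile) {
        if (resident.count(tile)) {
            std::printf("ray marched through tile %d\n", tile);
        } else {
            // Cache miss: chain a production task, which relaunches this ray.
            tasks.push([&, tile] {
                if (resident.insert(tile).second)
                    std::printf("produced tile %d\n", tile);
                tasks.push([&, tile] { renderRay(tile); });
            });
        }
    };

    for (int tile : {3, 1, 3}) tasks.push([&, tile] { renderRay(tile); });
    while (!tasks.empty()) {  // the worker never idles while work remains
        auto task = std::move(tasks.front());
        tasks.pop();
        task();
    }
}
```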
Compared parallelism schemes; compared SM timelines using our viewer; real-time walkthrough of a forest.
Our experiments with hardware sparse textures ran into unexpectedly low performance, typically for allocating blocks. We are preparing a publication detailing our benchmarks over different hardware and software implementations.
8.1.2 Fast rendering of large and detailed procedural worlds
Participants: Fabrice Neyret, Mathéo Moinet.
Large and detailed scenes potentially require a huge amount of data to specify, build, store, load onto the GPU and render. Even the GigaVoxels approach above still requires large storage in GPU memory, and animation would require regenerating the data, invalidating the cache. Conversely, procedural methods describe objects or scenes as mathematical constructions, requiring zero storage and only high-level specification by the user. In particular, FBM noises such as Perlin noise allow an infinite amount of detail, and hypertextures allow enriching SDF-defined shapes with fractal details. But fractal noise is very costly to evaluate, especially over a large range of scales. Worse: with 3D ray-marching, each step in depth counts, so that empty areas in front of and between objects yield prohibitive costs. Sphere-marching techniques allow large steps to skip the voids when the distance to objects (SDF) is known, but no efficient one exists for FBM; moreover, this technique still causes uselessly small steps near silhouettes and behind objects.
In his PhD, Mathéo Moinet revisited previous work on SDF sphere marching to apply it to FBM. Decomposing the fractal noise of FBM allows building an implicit hierarchy of bounding volumes of increasing complexity and cost. Lazy evaluation of this structure allows early termination of the large-to-small fractal sum as soon as we can predict that the zero-density threshold cannot be reached, reducing computational cost. Exploiting this broad-to-detailed evaluation additionally allows much bigger steps when far from objects, rather than the overly conservative small steps everywhere of previous methods. Relying on higher-order Taylor series further improves this, and the gradient allows the marching not to be deceived by silhouette proximity or regions behind objects. Altogether, we obtain a 12-fold speedup compared to the naive approach (still relying on bounding boxes). Note that the method applies to opaque and transparent objects, still or animated. This led to a paper conditionally accepted at Eurographics 2025. A simplified sketch of the early-exit idea is given after the figures below.
Bounding hierarchy allows adapting both the step size and the evaluation cost per step.
Some results of real-time flyovers of large and detailed procedural scenes.
Heatmaps showing ray cost with the naive method vs ours.
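The following sketch is a strong simplification under stated assumptions: a 1D hash-based value noise stands in for Perlin noise, and only the amplitude bound of the remaining octaves is used (the actual method also exploits gradients and higher-order Taylor bounds to enlarge the marching steps). It shows the core early-exit test: stop summing octaves as soon as the remaining fractal amplitude provably cannot move the density across the threshold.

```cpp
// Early-exit FBM classification: octaves are summed coarse-to-fine and the
// loop stops once the remaining amplitude cannot change the outcome.
#include <cmath>
#include <cstdio>

float hashNoise(float x) {  // crude stand-in for Perlin noise, in [-1, 1]
    float s = std::sin(x * 12.9898f) * 43758.5453f;
    return 2.f * (s - std::floor(s)) - 1.f;
}

// Can the FBM density at x reach `threshold`? (gain < 1 assumed)
bool fbmAboveThreshold(float x, float threshold,
                       int octaves, float gain, float lacunarity) {
    float sum = 0.f, amp = 1.f, freq = 1.f;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * hashNoise(freq * x);
        amp *= gain; freq *= lacunarity;
        // All remaining octaves together are bounded by amp / (1 - gain).
        float remaining = amp / (1.f - gain);
        if (sum + remaining < threshold) return false;  // provably empty: stop
        if (sum - remaining >= threshold) return true;  // provably full: stop
    }
    return sum >= threshold;
}

int main() {
    int inside = 0;
    for (float x = 0.f; x < 10.f; x += 0.01f)
        inside += fbmAboveThreshold(x, 0.5f, 8, 0.5f, 2.0f);
    std::printf("%d samples above threshold\n", inside);
}
```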
8.1.3 Embedding complex data in rasterizable proxies
Participants: Pacôme Luton, Fabrice Neyret, Thibault Tricard.
In 2024 we published an article 4 in which we proposed a method to use the rasterizer of the graphics pipeline to process tetrahedra. This allows for real-time rendering of tetrahedra while maintaining optical depth information in the fragment stage. In our article, we proposed to use this approach to compute single-pass transparent rendering (Figure 6), and to accelerate the rendering of volumetric procedural functions by embedding them within tetrahedra (Figure 5). A schematic sketch of interval-restricted marching is given after the figures.

Figure 5: Field of asteroids evaluated with our method. Top left: asteroids shaded using the number of sphere-marching steps before reaching the surface. Top right: rendering of the bounding tetrahedra shaded using the interval length to march. Bottom left: cost of the visible fragment (black: 0 sphere-marching steps, white: 60 steps). Bottom right: sum of the costs of all evaluated fragments (same scale).

Figure 6: Rendering of multiple parts of a tetrahedrized jet engine. Left: assembled view. Right: randomly generated exploded view. Both views can be achieved without reconstructing any acceleration structure. In the exploded view, parts of the mechanism overlap without creating any issues.
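The published method runs in the fragment stage, with the [tEnter, tExit] intervals produced by rasterizing bounding tetrahedra with mesh shaders; the CPU sketch below (hypothetical types, a toy sphere SDF) only illustrates the benefit: the march starts at the proxy's entry point rather than at the camera, and stops at its exit.

```cpp
// Sphere-marching an SDF restricted to the interval covered by a proxy.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 along(Vec3 o, Vec3 d, float t) { return {o.x + t*d.x, o.y + t*d.y, o.z + t*d.z}; }

float sdf(Vec3 p) {  // toy "asteroid": a unit sphere at the origin
    return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.f;
}

// Returns the hit distance, or -1 on miss; `steps` reports the cost.
float marchInterval(Vec3 org, Vec3 dir, float tEnter, float tExit, int& steps) {
    float t = tEnter;
    for (steps = 0; steps < 60 && t < tExit; ++steps) {
        float d = sdf(along(org, dir, t));
        if (d < 1e-3f) return t;  // surface reached
        t += d;                   // classic sphere-marching step
    }
    return -1.f;  // the ray left the tetrahedron without hitting the surface
}

int main() {
    int steps = 0;
    float t = marchInterval({0.f, 0.f, -3.f}, {0.f, 0.f, 1.f},
                            /*tEnter=*/1.5f, /*tExit=*/4.5f, steps);
    std::printf("hit at t = %.3f after %d steps\n", t, steps);
}
```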
Pacôme Luton started his PhD in September 2024 under the supervision of Fabrice Neyret and Thibault Tricard. He is currently working on embedding meshless data (e.g. signed distance fields, voxels) within tetrahedra, in order to bring techniques only available for rasterizable surfaces (animation, textures, etc.) to these meshless data, in real time.
8.2 Light Transport Simulation
8.2.1 Spectral Analysis of the Light Transport Operator
Participants: Cyril Soler.
In 2022 we proved in a SIGGRAPH publication that light transport operators are not compact (which means that the light transport equation does not benefit from the guarantees that normally come with Fredholm equations).
The sensible follow-up is to investigate the structure of the spectrum of light transport operators, as well as the links between eigenelements and path integrals. While compact operators usually have a simple spectrum, non-compact ones may show any kind of structure. A publication is ready and will be submitted to TOG by the end of January. In this paper we establish very original results that take inspiration from the theory of non-self-adjoint operators, the operator formulation of light transport, and Monte Carlo methods for linear operations on large matrices. Figure 7 below shows eigenvalues and eigenfunctions of the local reflectance operator for various materials.

8.2.2 Practical models for the reflectance properties of fiber materials
Participants: Ran Yu, Cyril Soler.
Ran Yu is in the first year of her CIFRE PhD, in collaboration with Owens Corning (Chambéry). She is working on modeling the reflectance of non-woven fiber materials.
Her work involves measuring and predicting the properties of complex materials made of intricate glass-fiber layers, through multiple approaches including explicit computation using path tracing, with the purpose of developing new statistical and empirical reflectance models for these materials.
The first year of the PhD (started in January 2024) has been dedicated to performing and filtering measurements on actual material samples. The upcoming tasks consist in designing a statistical model to predict the reflectance of fibers based on their orientation matrix, diameter, refractive index, etc., and comparing it to measurements of real data (the orientation matrix is illustrated below). Figure 8 below shows measured data for a specific sample.
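For readers unfamiliar with the term, the orientation matrix mentioned above is the second-moment matrix A = E[d dᵀ] of the fiber directions d; its eigenstructure summarizes how aligned a layer is. The snippet below is our own illustration of that standard definition, not the model under development.

```cpp
// Second-moment orientation matrix of a set of unit fiber directions.
#include <cstdio>
#include <vector>

struct Dir { double x, y, z; };  // unit fiber directions

void orientationMatrix(const std::vector<Dir>& dirs, double A[3][3]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) A[i][j] = 0.0;
    for (const Dir& d : dirs) {
        const double v[3] = {d.x, d.y, d.z};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) A[i][j] += v[i] * v[j] / dirs.size();
    }
}

int main() {
    // Fibers mostly aligned with x: A's dominant eigenvector is close to x.
    std::vector<Dir> dirs = {{1, 0, 0}, {0.96, 0.28, 0}, {0.96, -0.28, 0}, {0, 1, 0}};
    double A[3][3];
    orientationMatrix(dirs, A);
    for (int i = 0; i < 3; ++i)
        std::printf("%6.3f %6.3f %6.3f\n", A[i][0], A[i][1], A[i][2]);
}
```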

8.2.3 Computing caustics using Manifold Next-Event Estimation
Participants: Nicolas Holzschuch, Ana Maria Granizo Hidalgo.
By focusing light, specular surfaces in a scene cause bright spots, called caustics. These caustics play an important role in our perception of the scene and in photorealistic simulation of light transport, but they are challenging to compute. They correspond to high-intensity radiance transfer over a small surface area, where a specific path connects to the light source through specular reflections or refractions.
This is what makes caustics difficult to compute: they are caused by a small set of light paths, which have a null or small measure in path space; randomly sampling path space, for example using naive Monte Carlo methods, will likely miss the paths responsible for the caustics, or under-sample them.
Several global illumination algorithms have been developed to efficiently compute caustics; Manifold Next-Event Estimation (MNEE) algorithms are efficient methods to compute caustics caused by direct reflection or refraction on a specular surface. They compute the light paths connecting a receiver point to the light source through this specular interaction by computing the zeroes of an objective function. This computation uses the Newton-Raphson method and requires the derivatives of the objective function.
We developed two improvements for the Manifold Next-Event Estimation algorithm for computing caustics:
- The first introduces a coplanarity constraint to reduce the dimensionality of the search space 3. The specular reflected or refracted vector must follow the Snell-Descartes law: the reflected vector makes the same angle with the normal as the incoming vector. Most scenes used for illumination simulation use triangle meshes with linearly interpolated normals. For these scenes, the Snell-Descartes law translates into a degree-6 polynomial equation, for which there is no closed-form solution, hence the use of the Newton-Raphson algorithm. But the surface normal, the incoming vector and the reflected vector must also be coplanar. For a triangle with interpolated normals, this coplanarity requirement translates into a quadratic equation: all solutions must lie on a conic arc. For the surface, this constraint defines a curve. By restricting the search for solutions to this curve instead of exploring the entire surface, we get solutions much faster. The new algorithm is temporally coherent by nature, making it ideal for animated scenes (see Figure 9).
- The Newton-Raphson method requires computing derivatives for each light path, then applying these derivatives. This makes the code relatively complex, and adding caustics computation to existing code can be difficult. We addressed this issue in a second paper 5, using the Nelder-Mead method to compute the reflected or refracted path for the caustics. The method is more general: it computes all possible paths connecting to the light source instead of just the main path, and it is unbiased. It is slower than the state of the art. (A toy 1D analogue of this specular root-finding problem is sketched after this list.)
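To give the flavor of this root-finding problem, here is a deliberately tiny 1D analogue (our own illustration, not the papers' algorithms): finding the point on a flat mirror that specularly connects a receiver to a light, via Newton-Raphson on the mirror-law objective. MNEE solves the same kind of problem on triangle meshes with interpolated normals, where the derivatives are far more involved; the Nelder-Mead variant avoids them altogether.

```cpp
// Newton-Raphson on the equal-angle (mirror) condition for a flat mirror on
// the x-axis: find x such that the point (x, 0) reflects light L to point P.
#include <cmath>
#include <cstdio>

double objective(double x, double px, double py, double lx, double ly) {
    // Tangential components of the two unit directions must cancel.
    double dp = std::hypot(px - x, py), dl = std::hypot(lx - x, ly);
    return (px - x) / dp + (lx - x) / dl;
}

int main() {
    double px = -1.0, py = 1.0;  // receiver
    double lx =  3.0, ly = 2.0;  // light source
    double x = 0.0;              // initial guess on the mirror
    for (int i = 0; i < 20; ++i) {
        double f = objective(x, px, py, lx, ly);
        if (std::fabs(f) < 1e-12) break;
        // Finite-difference derivative; MNEE uses analytic derivatives.
        double df = (objective(x + 1e-6, px, py, lx, ly) - f) / 1e-6;
        x -= f / df;             // Newton-Raphson update
    }
    // Equal angles give the closed form x = (px*ly + lx*py) / (py + ly).
    std::printf("x = %.6f (closed form: %.6f)\n", x, (px*ly + lx*py) / (py + ly));
}
```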
8.3 Expressive rendering
8.3.1 micmap project
Participants: Nolan Mestre, Arthur Novat, Romain Vergne, Joëlle Thollot.
micmap is a startup creation project following Nolan Mestres' Ph.D. thesis and a long-standing informal collaboration with Arthur Novat (Atelier Novat). The project won the Out of Labs challenge (SATT Linksium) in 2021. It was then funded by the SATT from October 2022 to May 2023, and by Inria Startup Studio from June 1st, 2023 to May 2024. micmap also received a BPI France BFTlabs grant of 120k€ in 2024.
The micmap team is composed of 3 Maverick members (Nolan Mestres, Joelle Thollot, and Romain Vergne) and 2 external collaborators (Yann Boulanger and Arthur Novat). The goal of the project is to create stylized panorama maps (Figure 10) for tourist agencies, mountain operators, and local authorities.
The project is currently in its maturation phase, combining R&D, market and economic studies. The goal is to create the company in September 2025 as a SCOP (cooperative society).
In terms of research, micmap was presented at the ICA 13th Mountain Cartography Workshop 8, and we gave a talk at the 5th edition of the Festival Printemps des cartes 11.

Figure 10: Ski map of 7 Laux, produced by a combination of our 3D software and manual editing by Arthur Novat.
8.3.2 Controllable motion lines generation for an abstracted depiction of 3D motion
Participants: Amine Farhat, Alexandre Bleron, Romain Vergne, Joëlle Thollot.
Motion lines are an essential tool in illustration and animation for conveying and enhancing the perceived movement of an object. In this paper, we propose a user-guided method to generate such lines on top of a rendered 3D scene. Our method is flexible enough to accommodate a variety of line styles, and allows the depiction of complex composite movements in an intuitive way, by granting the user the ability to portray individual motion components using separate sets of lines, as is often done in traditional animation.
Our method takes as input the rendered 3D object and its movement, represented as a hierarchy of transforms varying over time. Users specify a subset of those transforms representing the movement to be depicted, and an anchor point on the object to guide the placement of the lines. From this, our algorithm generates a 3D "ribbon", a parametrized swept surface evolving over time, which acts as a support for rendering motion effects coherent with the movement selected by the user. We show how this ribbon and its parametrization can then be used to draw motion lines with controllable length, placement, density and appearance (see Figure 11). A schematic construction of such a ribbon is sketched after the figure.

Figure 11: Controllable motion lines generation for an abstracted depiction of 3D motion.
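As a purely schematic illustration of the ribbon construction (our own toy example with a hard-coded anchor trajectory and a simplistic side vector, not the paper's transform-hierarchy machinery), the snippet below samples a (u, v) parametrized strip swept by an anchor point, u spanning time and v spanning the ribbon width:

```cpp
// A toy swept "ribbon": extrude an anchor trajectory sideways into a strip
// parametrized by (u, v), on which motion lines could then be placed.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 anchorAt(float t) {  // toy trajectory of the anchor point
    return {std::cos(t), std::sin(t), 0.2f * t};
}

// u in [0, 1] spans time [t0, t1]; v in [-1, 1] spans the ribbon width.
Vec3 ribbon(float u, float v, float t0, float t1, float halfWidth) {
    float t = t0 + u * (t1 - t0);
    Vec3 p = anchorAt(t);
    float dt = 1e-3f;  // finite-difference velocity of the anchor
    Vec3 q = anchorAt(t + dt);
    float vx = (q.x - p.x) / dt, vy = (q.y - p.y) / dt;
    float n = std::hypot(vx, vy) + 1e-6f;
    // Simplistic side direction: perpendicular to the velocity, in the plane.
    float sx = -vy / n, sy = vx / n;
    return {p.x + v * halfWidth * sx, p.y + v * halfWidth * sy, p.z};
}

int main() {
    for (int i = 0; i <= 4; ++i)         // 5 samples along time...
        for (int j = -1; j <= 1; ++j) {  // ...3 across the width
            Vec3 p = ribbon(i / 4.f, (float)j, 0.f, 3.f, 0.1f);
            std::printf("%.3f %.3f %.3f\n", p.x, p.y, p.z);
        }
}
```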
8.3.3 AI guided stylization of 3D scenes
Participants: Ambre Adjevi-Neglokpe, Romain Vergne, Joëlle Thollot, Thibault Tricard.
Ambre Adjevi-Neglokpe was first recruited as an intern working on the stylization of panorama maps using artificial intelligence. During her internship, she produced a thorough literature review and began working on a prototype to create stylized images of panorama maps using a G-buffer as input.
Following her internship, Ambre Adjevi-Neglokpe was recruited as a PhD student with a doctoral fellowship, under the supervision of Romain Vergne, Joëlle Thollot and Thibault Tricard. She is currently working on using AI to guide image stylization methods for real-time applications.
8.4 Geometry and new materials
8.4.1 Computational Design of Laser-Cut Bending-Active Surfaces
Participants: Emmanuel Rodriguez, Georges-Pierre Bonneau, Stefanie Hahmann, Mélina Skouras.
Two years ago we proposed a method to automatically design bending-active structures, made of wood, whose silhouettes at equilibrium match desired target curves. We published this work in the Elsevier journal Computer-Aided Design and obtained the Best Paper Award at SPM 2022. Our approach is based on a parametric pattern, regularly laser-cut into the structure, that allows us to locally modulate the bending stiffness of the material. This work was restricted to reproducing planar input curves. Last year we extended this approach to reproduce arbitrary surfaces. We first divide the surface into multiple ribbons. We then approximate these ribbons by developable surfaces and map them isometrically to the plane, resulting in non-rectangular planar ribbons. We generalized the two-scale approach at the heart of our previous work to compute the geometric dimensions of the parametric pattern cut into the projected planar ribbons. These dimensions are computed in such a way that, once the planar ribbons are loaded by clamped constraints on the position and tangent at both ends, they deform into their original position on the input 3D surface. To illustrate the generality of our approach, we used cardboard instead of MDF wood for this application. This generalization of our previous work to 3D surfaces was published and presented at the Symposium on Computational Fabrication in July 2024 7.

Bending-active structures.
8.4.2 3D sketching in immersive environments
Participants: Georges-Pierre Bonneau, Stefanie Hahmann.
Immersive environments with head-mounted displays (HMD) and hand-held controllers, in either Virtual or Augmented Reality (VR/AR), offer new possibilities for the creation of artistic 3D content. Some of these are exploited by mid-air drawing applications: the user's hand trajectory generates a set of stylized curves or ribbons in space, giving the impression of painting or drawing in 3D. We propose a method to extend this approach to the sketching of surfaces with a VR controller. The idea is to favor shape exploration, offering a tool where the user creates a surface just by painting ribbons. These ribbons are not constrained to form patch boundaries, for example, or to completely cover the shape: they can be very sparse, disordered, overlapping or not, intersecting or not. The shape is computed simultaneously, starting with the first piece of ribbon drawn by the user and continuing to evolve in real time as long as the user keeps sketching. Our method minimizes an energy function based on the projections of the ribbon strokes onto a proxy surface, taking the controller's orientations into account. The current implementation considers elevation surfaces. In addition to many examples, we evaluate the time performance of the dynamic shape modeling with respect to an increasing number of input ribbon strokes. Finally, we collaborated with a professional artist, combining stylized curve drawings in VR with our surface sketching tool. This work was published in the journal Computers and Graphics and presented at SMI 2024 2.

3D sketching in immersive environments.
9 Bilateral contracts and grants with industry
9.1 Bilateral contracts with industry
Participants: Nicolas Holzschuch, Nicolas Guichard, Joëlle Thollot, Romain Vergne, Amine Farhat.
- We have a contract with KDAB France, connected with the PhD thesis of Nicolas Guichard (CIFRE);
- We have a contract with LeftAngle, connected with the PhD thesis of Amine Farhat (CIFRE);
- We have a contract with Owens Corning France, connected with the PhD thesis of Ran Yu (CIFRE).
10 Partnerships and cooperations
10.1 National initiatives
10.1.1 CDTools: Patrimalp
Participants: Nicolas Holzschuch [contact], Romain Vergne.
The cross-disciplinary project Patrimalp (2018-2022) on Cultural Heritage was extended by Univ. Grenoble Alpes under the new funding program “Cross-Disciplinary Tools”, for a period of 36 months (2023-2026).
The main objective and challenge of the CDTools Patrimalp is to develop a cross-disciplinary approach to gain better knowledge of material cultural heritage, in order to ensure its sustainability, valorization and diffusion in society. Carried out by members of UGA laboratories, combining skills in human sciences, geosciences, digital engineering and material sciences, in close connection with stakeholders of heritage and cultural life, curators and restorers, CDTools Patrimalp intends to develop a new interdisciplinary science: Cultural Heritage Science.
10.1.2 Collaboration with University of Edinburgh
Participants: Cyril Soler [contact].
As a follow-up to the work conducted during the ANR CaLiTrOp project, which finished in 2022, we have pursued the spectral analysis of light transport in collaboration with Prof. Kartic Subr at the University of Edinburgh. Prof. Subr was invited to Grenoble for a few days and gave a seminar in October 2023. Cyril Soler was invited to Edinburgh in June 2023.
11 Dissemination
Participants: Georges-Pierre Bonneau, Nicolas Holzschuch, Fabrice Neyret, Cyril Soler, Joëlle Thollot, Thibault Tricard, Romain Vergne.
11.1 Promoting scientific activities
11.1.1 Scientific events: organisation
General chair, scientific chair
- Georges-Pierre Bonneau: Paper co-chair SMI 2024
11.1.2 Scientific events: selection
Member of the conference program committees
- Nicolas Holzschuch: Siggraph Technical Papers, 2024.
- Thibault Tricard : Eurographics Short Papers, 2024.
- Thibault Tricard : Best paper committee member, JFIG 2024
- Fabrice Neyret: High-Performance Graphics 2024
- Georges-Pierre Bonneau: Solid and Physical Modeling 2024
- Georges-Pierre Bonneau: Geometric Modeling and Processing 2024
Reviewer
Maverick faculty members are regular reviewers for most of the major journals and conferences of the domain.
11.1.3 Journal
Member of the editorial boards
- Georges-Pierre Bonneau: editorial board member of Computer Aided Design, Elsevier
11.1.4 Research administration
- Nicolas Holzschuch is an elected member of the Conseil National de l'Enseignement Supérieur et de la Recherche (CNESER) (2019-2027).
- Nicolas Holzschuch is co-responsible (with Anne-Marie Kermarrec of EPFL) of the Inria International Lab (IIL) “Inria-EPFL”.
- Nicolas Holzschuch and Georges-Pierre Bonneau are members of the Habilitation committee of the École Doctorale MSTII of Univ. Grenoble Alpes.
- Georges-Pierre Bonneau is member of the Scientific Council of the GdR IG-RV
- Georges-Pierre Bonneau is in charge of the "Geometrie-Image" department of Laboratoire Jean Kuntzmann
11.2 Teaching - Supervision - Juries
11.2.1 Teaching
Joëlle Thollot and Georges-Pierre Bonneau are both full Professors of Computer Science. Romain Vergne and Thibault Tricard are both Associate Professors in Computer Science. They teach general computer science topics at basic and intermediate levels, and advanced courses in computer graphics, visualization and artificial intelligence at the master level. In 2024, Joëlle Thollot was on Inria delegation for one semester. Romain Vergne had a half-time Inria delegation in 2024 and is on full-time Inria delegation in 2025. Nicolas Holzschuch teaches advanced courses in computer graphics at the master level.
- Licence: Joëlle Thollot, Théorie des langages, 45h, L3, ENSIMAG, France
- Licence: Joëlle Thollot, innovation, 10h, L3, ENSIMAG, France
- Master : Joelle Thollot, English courses using theater, 18h, M1, ENSIMAG, France.
- Master : Nicolas Holzschuch, Computer Graphics II, 18h, M2 MoSIG, France.
- Master : Nicolas Holzschuch, Synthèse d’Images et Animation, 32h, M2, ENSIMAG, France.
- Master: Georges-Pierre Bonneau, Image Synthesis, 23h, M1, Polytech-Grenoble, France
- Master: Georges-Pierre Bonneau, Data Visualization, 40h, M2, Polytech-Grenoble, France
- Master: Georges-Pierre Bonneau, Digital Geometry, 23h, M1, UGA
- Master: Georges-Pierre Bonneau, Information Visualization, 22h, Mastere, ENSIMAG, France.
- Master: Georges-Pierre Bonneau, Scientific Visualization, M2, ENSIMAG, France.
- Master: Georges-Pierre Bonneau, Computer Graphics II, 18h, M2 MoSiG, UGA, France.
- Master : Thibault Tricard, Artificial Intelligence Project, 18h, M2 ENSIMAG, France
- Master : Thibault Tricard, 3D graphics, 30h, M1 ENSIMAG, France
- Master : Thibault Tricard, Intro to AI, 10h, M1 ENSIMAG, France
- Master : Thibault Tricard, Image Synthesis, 12h M1 UGA, France
- Master : Thibault Tricard, 3D graphics, 36h, M1 UGA, France
- Doctorate: Fabrice Neyret, Linear algebra: reminders and intuitions, 16h, INRIA
11.2.2 Supervision
- Romain Vergne and Joëlle Thollot co-supervise the PhD of Amine Farhat.
- Thibault Tricard, Romain Vergne and Joëlle Thollot co-supervise the PhD of Ambre Adjevi-Neglokpe.
- Thibault Tricard, and Fabrice Neyret co-supervise the PhD of Pacôme Luton.
- Fabrice Neyret supervises the PhD of Antoine Richermoz.
- Fabrice Neyret supervises the PhD of Matheo Moinet.
- Cyril Soler supervises the PhD of Ran Yu.
- Nicolas Holzschuch supervises the PhD of Anita Granizo-Hidalgo.
- Georges-Pierre Bonneau co-supervises the PhD of Emmanuel Rodriguez with Stefanie Hahmann and Melina Skouras.
11.2.3 Juries
- Joëlle Thollot was president of the jury for the PhD defense of Emmanuel Rodriguez (Université Grenoble-Alpes), February 21, 2024.
- Joëlle Thollot was examiner for the jury for the PhD defense of Melvin Even (Université de Bordeaux), December 20, 2024.
- Nicolas Holzschuch was president of the jury for the PhD defense of Bastien Doignies (Université de Lyon), November 25, 2024.
- Nicolas Holzschuch was reviewer for the PhD defense of Simon Lucas (Université de Bordeaux), December 6, 2024.
- Georges-Pierre Bonneau was reviewer for the PhD of Pierre Bourquat (Université de Lyon), September 20, 2024.
- Georges-Pierre Bonneau was reviewer for the PhD of David Henri-Garnier (Ecole Polytechnique), May 22, 2024.
11.3 Popularization
11.3.1 Specific official responsibilities in science outreach structures
- Romain Vergne was co-responsible for the Groupe de Travail Rendu (rendering working group) of the French association for Computer Graphics (AFIG).
- Fabrice Neyret presented "Les maths et la physique dans les effets spéciaux et les jeux vidéo" (math and physics in special effects and video games) during the awards ceremony of the Mathematics Olympiads (Concours René Merckhoffer) of the Académie de Grenoble.
11.3.2 Participation in Live events
- Fabrice Neyret presented "Les maths et la physique dans les effets spéciaux et les jeux vidéo" at the Bibliothèque Kateb Yacine, in the Inria co-organized lecture series "La société numérique en question(s)" (a 53-minute replay is available).
11.3.3 Others science outreach relevant activities
- Fabrice Neyret maintains the Shadertoy-Unofficial blog and various shader examples on the Shadertoy site to popularize GPU technologies, and disseminates academic models within the computer graphics, computer science, applied math and physics fields. About 28k pages viewed and 13k unique visitors (93% outside France) in 2023.
- Fabrice Neyret maintains the desmosGraph-Unofficial blog to popularize the use of the interactive grapher DesmosGraph for research, communication and pedagogy. About 17k pages viewed and 11k unique visitors (99% outside France) in 2023.
- Fabrice Neyret maintains the blog chiffres-et-paradoxes (in French) to popularize common errors, misunderstandings and paradoxes about statistics and numerical reasoning. About 17k pages viewed and 9k unique visitors since its creation (15% outside France, the blog being in French), plus viewers via the Facebook and Twitter pages.
- Thibault Tricard is part of the Evaluation Committee of the Graphics Replicability Stamp Initiative (GRSI), a group that evaluates the replicability of computer graphics publications.
12 Scientific production
12.1 Major publications
- 1 A Low-Dimensional Perceptual Space for Intuitive BRDF Editing. EGSR 2021 - Eurographics Symposium on Rendering, DL-only Track, Saarbrücken, Germany, June 2021, pp. 1-13. HAL
12.2 Publications of the year
International journals
- 2 3D sketching in immersive environments: Shape from disordered ribbon strokes. Computers and Graphics, 123, October 2024, 103978. HAL, DOI
- 3 Interactive Rendering of Caustics using Dimension Reduction for Manifold Next-Event Estimation. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 7(1), May 2024, pp. 1-16. HAL, DOI
- 4 Interval Shading: using Mesh Shaders to generate shading intervals for volume rendering. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 7(3), April 2024, pp. 1-11. HAL, DOI
International peer-reviewed conferences
- 5 Computing Manifold Next-Event Estimation without Derivatives using the Nelder-Mead Method. EGSR 2024 - 35th Eurographics Symposium on Rendering, Symposium Track, London, United Kingdom, 2024, pp. 1-9. HAL, DOI
- 6 GigaVoxels DP: Starvation-Less Render and Production for Large and Detailed Volumetric Worlds Walkthrough. HPG 2024 - High Performance Graphics, Proceedings of the ACM on Computer Graphics and Interactive Techniques, 7(3), Denver, United States, 2024, pp. 1-11. HAL
- 7 Designing Bending-Active Freeform Surfaces. SCF 2024 - 9th ACM Symposium on Computational Fabrication, article no. 3, Aarhus, Denmark, July 2024, pp. 1-11. HAL, DOI
Conferences without proceedings
- 8 Faking reality for generating panorama maps. 13th Mountain Cartography Workshop, Zakopane, Poland, 2024, pp. 1-25. HAL
Edition (books, proceedings, special issue of a journal)
Scientific popularization
- 11 De l'atelier Novat au projet micmap: une collaboration entre un artiste et des chercheurs pour la création de cartes 3D digitales, interactives et didactiques (from the Novat studio to the micmap project: a collaboration between an artist and researchers for the creation of digital, interactive and didactic 3D maps). 5e édition du Festival Printemps des cartes, Montmorillon, France, 2024. HAL