
Section: Partnerships and Cooperations

National Initiatives


ANR iSpace&Time

Participants : Fabrice Lamarche [contact] , Carl Jorgensen, Julien Pettré, Marc Christie.

The iSpace&Time project is funded by the ANR and gathers six partners: IGN, Lamea, University of Rennes 1, LICIT (IFSTTAR), Telecom ParisTech and the SENSE laboratory (Orange). The goal of this project is to build a web demonstrator of a 4D Geographic Information System of the city. This portal will integrate technologies such as web 2.0, sensor networks, immersive visualization, animation and simulation. It will provide solutions ranging from simple 4D city visualization to tools for urban development. The main aspects of this project are:

  • Creation of an immersive visualization based on panoramas acquired by a scanning vehicle using hybrid scanning (laser and image).

  • Fusion of heterogeneous data produced by a network of sensors measuring flows of pedestrians, vehicles and other mobile objects.

  • Use of video cameras to measure, in real time, flows of pedestrians and vehicles.

  • Study of the impact of an urban development on mobility by simulating vehicles and pedestrians.

  • Integration of temporal information into the information system for visualization, data mining and simulation purposes.

The MimeTIC team is involved in the pedestrian simulation part of this project. The project started in 2011 and ended in November 2014.
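To illustrate the kind of pedestrian simulation involved, the sketch below advances a set of pedestrians by one step of a simplified social-force model (in the spirit of Helbing's classical formulation). The model, parameter values and data layout are illustrative assumptions for this sketch, not the simulator actually developed in the project.

```python
import math

def step(pedestrians, dt=0.1, v_des=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a simplified social-force model:
    - a driving force relaxes each velocity toward the goal direction,
    - an exponential repulsive force pushes neighbouring pedestrians apart.
    Each pedestrian is a dict with 'pos', 'vel' and 'goal' (2-D tuples)."""
    new_states = []
    for i, p in enumerate(pedestrians):
        px, py = p['pos']; vx, vy = p['vel']; gx, gy = p['goal']
        # Driving force toward the goal at the desired walking speed.
        dx, dy = gx - px, gy - py
        dist = math.hypot(dx, dy) or 1e-9
        fx = (v_des * dx / dist - vx) / tau
        fy = (v_des * dy / dist - vy) / tau
        # Pairwise repulsion from the other pedestrians.
        for j, q in enumerate(pedestrians):
            if i == j:
                continue
            rx, ry = px - q['pos'][0], py - q['pos'][1]
            r = math.hypot(rx, ry) or 1e-9
            mag = A * math.exp(-r / B)
            fx += mag * rx / r
            fy += mag * ry / r
        nvx, nvy = vx + fx * dt, vy + fy * dt
        new_states.append({'pos': (px + nvx * dt, py + nvy * dt),
                           'vel': (nvx, nvy), 'goal': p['goal']})
    return new_states
```

Calling `step` repeatedly yields trajectories in which pedestrians head toward their goals while keeping a distance from one another.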


RePLiCA

Participants : Armel Crétual [contact] , Anthony Sorel, Richard Kulpa.

The goal of the RePLiCA project is to build and test a new rehabilitation program for facial praxia in children with cerebral palsy using an interactive device.

In a classical rehabilitation program, the child tries to reproduce the motion of his/her therapist. The feedback he/she gets relies on the comparison of two modalities: the gesture of the therapist he/she saw a few seconds ago (visual space) and his/her own motion (proprioceptive space). Unfortunately, in addition to motor disorders, these children often have cognitive disorders, among them a difficulty in converting information from one mental space to another.

The principle of our tool is that, during a rehabilitation session, the child observes two avatars simultaneously on the same screen: a virtual therapist performing the gesture to be done, and a second avatar animated from the motion he/she actually performs. To avoid an overly complex motion capture system, the child is filmed by a simple video camera. A first challenge is thus to capture the child's facial motion with enough accuracy. A second one is to provide him/her additional feedback on the gesture quality by comparing it to a database of healthy children of the same age.
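A common way to compare a captured gesture trajectory against reference recordings despite timing differences is dynamic time warping (DTW). The sketch below is a generic illustration of that idea, assuming 1-D feature trajectories (e.g. a mouth-opening measure sampled over time); it is not the actual RePLiCA quality score, which is not specified here.

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D feature
    trajectories. A low distance means the child's gesture matches a
    reference gesture well, even if it is performed faster or slower."""
    n, m = len(seq_a), len(seq_b)
    INF = float('inf')
    # d[i][j]: cost of the best alignment of seq_a[:i] with seq_b[:j].
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a sample of seq_a
                                 d[i][j - 1],      # skip a sample of seq_b
                                 d[i - 1][j - 1])  # match both samples
    return d[n][m]
```

A gesture could then be scored by its smallest DTW distance to the recordings of healthy children of the same age.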

RePLiCA started in January 2012 and will end in July 2015.

ANR JCJC Cinecitta

Participants : Marc Christie [contact] , Cunka Sanokho.

Cinecitta is a 3-year young researcher project funded by the French Research Agency (ANR) and led by Marc Christie. The project started in October 2012 and will end in October 2015.

The main objective of Cinecitta is to propose and evaluate a novel workflow that mixes user interaction through motion-tracked cameras with automated computation for interactive virtual cinematography, so as to better support user creativity. We propose a cinematographic workflow featuring a dynamic collaboration between a creative human filmmaker and an automated virtual camera planner. We expect this process to enhance the filmmaker's creative potential by enabling very rapid exploration of a wide range of viewpoint suggestions, and to enhance the quality and utility of the planner's suggestions by adapting and reacting to the creative choices made by the filmmaker.

This requires three advances in the field. First, the ability to generate relevant viewpoint suggestions following classical cinematic conventions; formalizing these conventions in a computationally efficient and expressive model is a challenging task, since the system must select and propose to the user a relevant subset of viewpoints among millions of possibilities. Second, the ability to analyze data from real movies in order to formalize elements of cinematographic style and genre. Third, the integration of motion-tracked cameras in the workflow: such cameras offer great potential for cinematographic content creation, but since tracking spaces are of limited size, novel interaction metaphors are needed to ease content creation with tracked cameras. Finally, we will gather feedback on our prototype by involving professionals (during dedicated workshops) and will perform user evaluations with students from cinema schools.
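Generating viewpoint suggestions under cinematic conventions can be sketched as enumerating candidate camera placements and filtering them by a rule such as the 180-degree rule (stay on one side of the line of action). The shot-size distances, candidate sampling and filtering below are illustrative assumptions, not Cinecitta's actual planner.

```python
import math

# Canonical shot sizes mapped to camera-subject distances (metres).
# These distance values are illustrative defaults.
SHOT_SIZES = {'close-up': 1.0, 'medium': 3.0, 'long': 8.0}

def suggest_viewpoints(subject, line_of_action, prev_side=1, n_angles=16):
    """Enumerate candidate camera positions on circles around the subject
    (one circle per shot size), keeping only those on the same side of
    the line of action as the previous shot (the 180-degree rule).
    `subject` is (x, y); `line_of_action` is a unit 2-D direction;
    `prev_side` is the sign (+1 or -1) of the previous camera's side."""
    lx, ly = line_of_action
    candidates = []
    for name, dist in SHOT_SIZES.items():
        for k in range(n_angles):
            a = 2 * math.pi * k / n_angles
            cx = subject[0] + dist * math.cos(a)
            cy = subject[1] + dist * math.sin(a)
            # Signed side of the action line via the 2-D cross product.
            side = (cx - subject[0]) * ly - (cy - subject[1]) * lx
            if side * prev_side > 0:
                candidates.append((name, (cx, cy)))
    return candidates
```

A real planner would additionally score each surviving candidate (visibility, framing, continuity with previous shots) before presenting the best ones to the filmmaker.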

ANR Contint Entracte

Participants : Charles Pontonnier, Georges Dumont, Steve Tonneau, Franck Multon, Julien Pettré [contact] , Ana Lucia Cruz Ruiz, Antoine Muller, Anthony Sorel, Nicolas Bideau, Richard Kulpa.

The ANR project ENTRACTE is a collaboration between the Gepetto team at LAAS, Toulouse (head of the project) and the Inria MimeTIC team. The project started in November 2013 and will end in August 2017. The purpose of ENTRACTE is to address the action planning problem, crucial for robots as well as for virtual human avatars, by analyzing human motion at a biomechanical level and by deriving from this analysis bio-inspired motor control laws and bio-inspired paradigms for action planning. Ana-Lucia Cruz-Ruiz was recruited as a PhD student at the start of the project to work on musculoskeletal-based methods for avatar animation. Moreover, Steve Tonneau, a PhD student currently in his third year, is developing bio-inspired posture generators for avatar navigation in encumbered environments.



ADT MAN-IP

Participants : Franck Multon [contact] , Julian Joseph.

The ADT-MAN-IP aims at proposing a common production pipeline for both the MimeTIC and Hybrid teams. This pipeline intends to facilitate the production of populated virtual reality environments.

The pipeline starts with the motion capture of an actor, using devices such as a Vicon system (a product of Oxford Metrics). To this end, we need to design new methods to automatically adapt all motion capture data to an internal skeleton, which can then be used to retarget the motion to various types of skeletons and characters. The purpose is to play this motion capture data on any type of virtual character used in the demos, regardless of its individual skeleton and morphology. The key point here is to make this process as automatic as possible. During the first year, a software toolbox was developed in MotionBuilder (a product of Autodesk) to automate this process. We also developed automatic path-following methods to make virtual humans locomote along a given path in a Unity 3D environment.
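A common baseline for the retargeting step described above is to copy local joint rotations unchanged and to scale the root translation by a limb-length ratio, so that a shorter or taller character neither floats nor sinks. The sketch below illustrates that baseline under an assumed frame layout; it is not necessarily the method implemented in the MAN-IP toolbox.

```python
def retarget(frames, src_leg_len, tgt_leg_len):
    """Naive retargeting from the internal skeleton to a target one:
    local joint rotations are copied as-is and the root translation is
    scaled by the leg-length ratio of the two skeletons.
    Each frame is assumed to be a dict with a 'root_pos' 3-D tuple and a
    'rotations' dict mapping joint names to local rotation values."""
    scale = tgt_leg_len / src_leg_len
    out = []
    for frame in frames:
        root = tuple(c * scale for c in frame['root_pos'])
        out.append({'root_pos': root,
                    'rotations': dict(frame['rotations'])})  # copied unchanged
    return out
```

More elaborate retargeting would additionally correct foot contacts and self-collisions, which simple rotation copying cannot guarantee when morphologies differ strongly.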

The second step in the pipeline is to design a high-level scenario framework to describe a virtual scene and the user's possible interactions with it, so that he/she can directly interact with the story. This work will be performed in 2015.

In this ADT we will also have to connect these two parts into a unique framework that can be used by non-experts in computer animation to design new immersive experiments involving autonomous virtual humans. The resulting framework could consequently be used in the Immersia immersive room for various types of applications.