Section: Partnerships and Cooperations

National Initiatives

ANR Contint: iSpace&Time

Participants: Fabrice Lamarche [contact], Julien Pettré, Marc Christie, Carl Jorgensen.

The iSpace&Time project is funded by the ANR and gathers six partners: IGN, Lamea, University of Rennes 1, LICIT (IFSTTAR), Telecom ParisTech and the SENSE laboratory (Orange). The goal of this project is to build a web-based demonstrator of a 4D Geographic Information System of the city. This portal will integrate technologies such as Web 2.0, sensor networks, immersive visualization, animation and simulation. It will provide solutions ranging from simple 4D city visualization to tools for urban development. The main aspects of this project are:

  • Creation of an immersive visualization based on panoramic images acquired by a scanning vehicle using hybrid scanning (laser and image).

  • Fusion of heterogeneous data produced by a network of sensors, enabling the measurement of flows of pedestrians, vehicles and other mobile objects.

  • Use of video cameras to measure, in real time, flows of pedestrians and vehicles.

  • Study of the impact of an urban development on mobility by simulating vehicles and pedestrians.

  • Integration of temporal information into the information system for visualization, data mining and simulation purposes.

The MimeTIC team is involved in the pedestrian simulation part of this project. The project started in 2011 and will end in 2013.
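To illustrate the kind of computation involved in pedestrian simulation, the following is a deliberately crude sketch in the spirit of social-force models: each agent steers toward its goal while being repelled by nearby agents. All parameter names and values are illustrative assumptions, not the project's actual model.

```python
import math

def step_pedestrians(positions, goals, speed=1.4, repulsion=2.0, dt=0.1):
    """Advance a toy 2D pedestrian simulation by one time step.

    Each pedestrian moves toward its goal at a preferred speed and is
    pushed away from nearby pedestrians by a distance-decaying term
    (a crude social-force-style interaction).
    """
    new_positions = []
    for i, (px, py) in enumerate(positions):
        gx, gy = goals[i]
        # Desired velocity: unit vector toward the goal, scaled by speed.
        dx, dy = gx - px, gy - py
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = speed * dx / dist, speed * dy / dist
        # Repulsion from every other pedestrian, decaying with distance.
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            rx, ry = px - qx, py - qy
            d = math.hypot(rx, ry) or 1e-9
            push = repulsion * math.exp(-d)
            vx += push * rx / d
            vy += push * ry / d
        new_positions.append((px + vx * dt, py + vy * dt))
    return new_positions

# Two pedestrians crossing head-on: both still progress toward their goals.
positions = [(0.0, 0.0), (0.5, 0.0)]
goals = [(10.0, 0.0), (-10.0, 0.0)]
for _ in range(5):
    positions = step_pedestrians(positions, goals)
```

Real simulators for flow measurement and urban development studies are far more elaborate (collision avoidance, calibration against sensor data), but the structure of a per-agent, per-step update is the same.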

ANR Contint: Chrome

Participants: Julien Pettré [julien.pettre@inria.fr], Kevin Jordao, Orianne Siret.

Chrome is a national project funded by the French Research Agency (ANR). The project is led by Julien Pettré, member of MimeTIC. Partners are: the Inria Grenoble IMAGINE team (Remi Ronfard), Golaem SAS (Stephane Donikian) and Archivideo (Francois Gruson). The project was launched in September 2012.

The Chrome project develops new and original techniques to massively populate huge environments. The key idea is to base our approach on the crowd patch paradigm, which populates environments from sets of pre-computed portions of crowd animation. These portions must satisfy specific boundary conditions to be assembled into large scenes. The project also raises the question of visual exploration of these complex scenes: we develop original camera control techniques to explore the most relevant parts of the animations without suffering occlusions due to the constantly moving content. A long-term goal of the project is to populate a large digital mockup of the whole of France (Territoire 3D, provided by Archivideo). Dedicated, efficient human animation techniques are required (Golaem). A strong originality of the project is to address the problem of crowded scene visualization through the scope of virtual camera control (Inria Rennes and Grenoble).
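The boundary-condition constraint on crowd patches can be illustrated with a toy model (the data structures and the simple per-side flow counts below are illustrative assumptions, not the project's actual representation): a patch records how many animated pedestrians enter and exit through each of its sides per animation period, and two patches may only be placed side by side if the flows across the shared edge match, so trajectories can be stitched seamlessly.

```python
# Toy illustration of the crowd patch assembly constraint.
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

class Patch:
    """A pre-computed portion of crowd animation, summarized here by the
    number of pedestrians exiting/entering through each side per period."""
    def __init__(self, name, exits, entries):
        self.name = name        # label for this patch
        self.exits = exits      # dict: side -> pedestrians leaving per period
        self.entries = entries  # dict: side -> pedestrians entering per period

def compatible(a, b, side_of_a):
    """True if patch b can be placed on the given side of patch a:
    everyone leaving a through the shared edge must enter b, and
    everyone leaving b through that edge must enter a."""
    side_of_b = OPPOSITE[side_of_a]
    return (a.exits[side_of_a] == b.entries[side_of_b]
            and b.exits[side_of_b] == a.entries[side_of_a])

# Example: 2 pedestrians leave 'a' eastward and 1 enters it from the east.
a = Patch("a", exits={"east": 2}, entries={"east": 1})
b = Patch("b", exits={"west": 1}, entries={"west": 2})  # matching flows
c = Patch("c", exits={"west": 3}, entries={"west": 0})  # mismatched flows
```

Here `compatible(a, b, "east")` holds while `compatible(a, c, "east")` does not; assembling a large scene amounts to tiling the environment with mutually compatible patches.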


RePLiCA

Participant: Armel Crétual [contact].

The goal of the RePLiCA project is to build and test a new rehabilitation program for facial praxia in children with cerebral palsy, using an interactive device.

In a classical rehabilitation program, the child tries to reproduce the motion of his/her therapist. The feedback he/she gets relies on the comparison of two modalities: the gesture of the therapist seen a few seconds earlier (visual space) and his/her own motion (proprioceptive space). Unfortunately, besides motor troubles, these children often have cognitive troubles, among them a difficulty in converting information from one mental space to another.

The principle of our tool is that, during a rehabilitation session, the child will simultaneously observe two avatars on the same screen: the virtual therapist's avatar, performing the gesture to be done, and a second avatar animated by the motion he/she actually performs. To avoid the use of an overly complex motion capture system, the child will be filmed by a simple video camera. A first challenge is thus to capture the child's facial motion with sufficient accuracy. A second one is to provide him/her with additional feedback on the quality of the gesture, by comparing it to a database of healthy children of the same age.
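As a minimal sketch of what such a quality comparison could look like (the function and the landmark representation below are hypothetical, not the project's actual pipeline), one can score a performed facial gesture against a reference by averaging, over frames, the distance between corresponding tracked landmarks:

```python
import math

def gesture_error(reference, performed):
    """Mean per-frame, per-landmark Euclidean distance between two
    equally sampled landmark trajectories. Each trajectory is a list of
    frames; each frame is a list of (x, y) landmark positions."""
    assert len(reference) == len(performed), "trajectories must be aligned"
    total, count = 0.0, 0
    for ref_frame, perf_frame in zip(reference, performed):
        for (rx, ry), (px, py) in zip(ref_frame, perf_frame):
            total += math.hypot(rx - px, ry - py)
            count += 1
    return total / count
```

An identical performance scores 0, and the score grows with the deviation; a real system would additionally need temporal alignment (the child rarely performs at the therapist's pace) and normalization against the age-matched database mentioned above.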

ANR JCJC: Cinecitta

Participants: Marc Christie [marc.christie@irisa.fr], Cunka Sanokho.

Cinecitta is a 3-year young researcher project funded by the French Research Agency (ANR), led by Marc Christie, that started in October 2012.

The main objective of Cinecitta is to propose and evaluate a novel workflow, mixing user interaction through motion-tracked cameras with automated computation, for interactive virtual cinematography that better supports user creativity. We propose a cinematographic workflow featuring a dynamic collaboration between a creative human filmmaker and an automated virtual camera planner. We expect the process to enhance the filmmaker's creative potential by enabling very rapid exploration of a wide range of viewpoint suggestions, and to enhance the quality and utility of the automated planner's suggestions by adapting and reacting to the creative choices made by the filmmaker.

This requires three advances in the field. First, the ability to generate relevant viewpoint suggestions following classical cinematic conventions; formalizing these conventions in a computationally efficient and expressive model is a challenging task, required to select and propose to the user a relevant subset of viewpoints among millions of possibilities. Second, the ability to analyze data from real movies in order to formalize some elements of cinematographic style and genre. Third, the integration of motion-tracked cameras in the workflow. Motion-tracked cameras represent a great potential for cinematographic content creation; however, given that tracking spaces are of limited size, novel interaction metaphors are needed to ease the process of content creation with tracked cameras. Finally, we will gather feedback on our prototype by involving professionals (during dedicated workshops) and will perform user evaluations with students from cinema schools.
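To give a flavor of how viewpoint suggestions can be ranked against cinematic conventions, here is a deliberately simple sketch (the scoring function, criteria and weights are illustrative assumptions, not Cinecitta's actual model): each candidate viewpoint is scored on whether the subject is visible, how close it sits to a preferred on-screen position, and how close the camera is to a preferred shot distance.

```python
import math

def score_viewpoint(subject_screen_pos, subject_visible, camera_distance,
                    preferred_pos=(0.33, 0.5), preferred_dist=3.0):
    """Score a candidate viewpoint; higher is better.
    Screen positions are normalized to [0, 1] in both axes."""
    if not subject_visible:
        return 0.0  # occluded viewpoints are ruled out outright
    # Penalize deviation from the preferred on-screen placement
    # (e.g. a rule-of-thirds position) and from the preferred distance.
    framing = 1.0 / (1.0 + math.dist(subject_screen_pos, preferred_pos))
    distance = 1.0 / (1.0 + abs(camera_distance - preferred_dist))
    return framing * distance

candidates = [
    ((0.33, 0.5), True, 3.0),   # well framed, at the preferred distance
    ((0.9, 0.9), True, 3.0),    # subject pushed into a screen corner
    ((0.33, 0.5), False, 3.0),  # subject occluded
]
best = max(candidates, key=lambda c: score_viewpoint(*c))
```

The real difficulty, as noted above, lies in making such a model both expressive enough to capture cinematic conventions and efficient enough to filter millions of candidate viewpoints interactively.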