Section: Partnerships and Cooperations

National Initiatives

ANR Contint: iSpace&Time

Participants: Fabrice Lamarche [contact], Julien Pettré, Marc Christie, Carl-Johan Jorgensen.

The iSpace&Time project is funded by the ANR and gathers six partners: IGN, Lamea, University of Rennes 1, LICIT (IFSTTAR), Telecom ParisTech and the SENSE laboratory (Orange). The goal of this project is to build a web demonstrator of a 4D Geographic Information System of the city. This portal will integrate technologies such as Web 2.0, sensor networks, immersive visualization, animation and simulation, and will provide solutions ranging from simple 4D city visualization to tools for urban development. The main aspects of this project are:

  • Creation of an immersive visualization based on panoramas acquired by a scanning vehicle using hybrid scanning (laser and image).

  • Fusion of heterogeneous data issued by a network of sensors, enabling the measurement of flows of pedestrians, vehicles and other mobile objects.

  • Use of video cameras to measure, in real time, flows of pedestrians and vehicles.

  • Study of the impact of an urban development on mobility by simulating vehicles and pedestrians.

  • Integration of temporal information into the information system for visualization, data mining and simulation purposes.

The MimeTIC team is involved in the pedestrian simulation part of this project; a minimal sketch of such a simulation step is given below. The project started in 2011 and will end in 2014.
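As an illustration of the kind of pedestrian simulation involved, here is a minimal sketch of one simulation step in the spirit of microscopic, social-force-like crowd models; the update rule and every parameter value are illustrative assumptions, not the model actually used in the project.

  import numpy as np

  # Illustrative sketch only: parameters and forces are assumptions,
  # not the project's actual pedestrian model.
  def step(pos, vel, goals, dt=0.1, desired_speed=1.4, tau=0.5,
           repulsion=2.0, radius=0.6):
      # pos, vel, goals: (n, 2) arrays. Returns updated (pos, vel).
      to_goal = goals - pos
      dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
      # Driving force: relax each velocity toward the desired velocity.
      force = (desired_speed * to_goal / dist - vel) / tau
      # Pairwise repulsion between pedestrians, decaying with distance.
      diff = pos[:, None, :] - pos[None, :, :]
      d = np.linalg.norm(diff, axis=2) + 1e-9
      np.fill_diagonal(d, np.inf)
      force += (repulsion * np.exp(-d / radius)[:, :, None]
                * diff / d[:, :, None]).sum(axis=1)
      vel = vel + dt * force
      return pos + dt * vel, vel

Calling step repeatedly advances all pedestrians at once; vectorizing over agents in this way is what keeps such microscopic models tractable at the scale of a city district.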

ANR Contint: Chrome

Participants: Julien Pettré [contact], Kevin Jordao, Orianne Siret.

The Chrome project is led by Julien Pettré, member of MimeTIC. Partners are: the Inria Grenoble IMAGINE team (Remi Ronfard), Golaem SAS (Stephane Donikian), and Archivideo (Francois Gruson). The project was launched in September 2012.

The Chrome project develops new and original techniques to massively populate huge environments. The key idea is to base our approach on the crowd patch paradigm, which enables populating environments from sets of pre-computed portions of crowd animation. These portions must satisfy specific boundary conditions so that they can be assembled into large scenes. The question of visual exploration of these complex scenes is also raised in the project: we develop original camera control techniques to explore the most relevant parts of the animations without suffering occlusions due to the constantly moving content. A long-term goal of the project is to enable populating a large digital mockup of the whole of France (Territoire 3D, provided by Archivideo). Dedicated efficient human animation techniques are required (Golaem). A strong originality of the project is to address the problem of crowded scene visualization through the scope of virtual camera control (Inria Rennes and Grenoble).
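To make the patch-assembly idea concrete, here is a minimal sketch assuming a simplified model in which each patch stores periodic trajectories together with the times and positions at which agents cross its boundary; the class and field names are hypothetical, not the project's actual data structures.

  from dataclasses import dataclass, field

  @dataclass
  class CrowdPatch:
      # Illustrative sketch: a square portion of pre-computed,
      # time-periodic crowd animation (hypothetical structure).
      trajectories: list                           # per-agent lists of (t, x, y) samples
      inputs: dict = field(default_factory=dict)   # side -> list of (time, offset) entries
      outputs: dict = field(default_factory=dict)  # side -> list of (time, offset) exits

  OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}

  def compatible(a, b, side, tol=1e-3):
      # Two patches can be placed side by side only if every agent leaving
      # patch a through `side` enters patch b through the opposite side at
      # the same time and the same boundary offset (and vice versa).
      out_a = sorted(a.outputs.get(side, []))
      in_b = sorted(b.inputs.get(OPPOSITE[side], []))
      if len(out_a) != len(in_b):
          return False
      return all(abs(ta - tb) < tol and abs(pa - pb) < tol
                 for (ta, pa), (tb, pb) in zip(out_a, in_b))

Assembling a large scene then amounts to tiling the environment with patches whose shared sides are pairwise compatible, so that agents flow seamlessly from one patch to the next.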

ANR TecSan: RePLiCA

Participant: Armel Crétual [contact].

The goal of the RePLiCA project is to build and test a new rehabilitation program for facial praxia in children with cerebral palsy using an interactive device. RePLiCA started in January 2012 and will end in July 2015.

In a classical rehabilitation program, the child tries to reproduce the motion of his/her therapist. The feedback he/she gets relies on the comparison of different modalities: the gesture of the therapist seen a few seconds earlier (visual space) and his/her own motion (proprioceptive space). Unfortunately, besides motor disorders, these children often have cognitive disorders, among them a difficulty converting information from one mental space to another.

The principle of our tool is that, during a rehabilitation session, the child observes two avatars simultaneously on the same screen: the virtual therapist's avatar, performing the gesture to be done, and a second avatar animated from the motion the child actually performs. To avoid the use of an overly complex motion capture system, the child will be filmed by a simple video camera. A first challenge is thus to capture the child's facial motion with enough accuracy. A second one is to provide him/her with additional feedback on the quality of the gesture by comparing it to a database of healthy children of the same age; a minimal sketch of such a comparison is given below.
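As an illustration of the database-comparison step, here is a minimal sketch assuming the facial motion has already been converted into per-frame 2D landmark arrays and that sequences are time-aligned and of equal length; the metric below is a plain mean landmark distance, which is an assumption rather than the project's actual measure (in practice a temporal alignment such as dynamic time warping would also be needed).

  import numpy as np

  # Illustrative sketch only: the similarity metric is an assumption,
  # not the project's actual gesture-quality measure.
  def gesture_score(child_motion, references):
      # child_motion: array of shape (frames, landmarks, 2); references:
      # same-shape arrays recorded from healthy children of the same age.
      # Returns the distance to the closest reference (lower is better).
      def normalize(m):
          # Remove head position and scale so only the gesture remains.
          centered = m - m.mean(axis=1, keepdims=True)
          return centered / (np.linalg.norm(centered, axis=(1, 2),
                                            keepdims=True) + 1e-9)
      c = normalize(child_motion)
      return min(np.mean(np.linalg.norm(c - normalize(r), axis=2))
                 for r in references)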

ANR JCJC: Cinecitta

Participants: Marc Christie [contact], Cunka Sanokho.

Cinecitta is a 3-year young researcher project funded by the French Research Agency (ANR) and led by Marc Christie. The project started in October 2012 and will end in October 2015.

The main objective of Cinecitta is to propose and evaluate a novel workflow that mixes user interaction with motion-tracked cameras and automated computation for interactive virtual cinematography, in order to better support user creativity. We propose a novel cinematographic workflow that features a dynamic collaboration between a creative human filmmaker and an automated virtual camera planner. We expect the process to enhance the filmmaker's creative potential by enabling very rapid exploration of a wide range of viewpoint suggestions, and to enhance the quality and utility of the automated planner's suggestions by adapting and reacting to the creative choices made by the filmmaker. This requires three advances in the field. First, the ability to generate relevant viewpoint suggestions following classical cinematic conventions; formalizing these conventions in a computationally efficient and expressive model is a challenging task, since the system must select and propose to the user a relevant subset of viewpoints among millions of possibilities. Second, the ability to analyze data from real movies in order to formalize some elements of cinematographic style and genre. Third, the integration of motion-tracked cameras in the workflow: motion-tracked cameras represent a great potential for cinematographic content creation, but given that tracking spaces are of limited size, novel interaction metaphors are needed to ease the process of content creation with tracked cameras. Finally, we will gather feedback on our prototype by involving professionals (during dedicated workshops) and will perform user evaluations with students from cinema schools.
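To illustrate what ranking viewpoint suggestions by cinematic conventions can look like, here is a minimal sketch that scores candidate viewpoints against two textbook conventions (shot size and rule of thirds); the camera representation, the conventions retained and their weighting are illustrative assumptions, not the model developed in the project.

  import math
  from dataclasses import dataclass

  # Illustrative sketch only: a real planner would evaluate many more
  # conventions (visibility, continuity, framing of multiple subjects...).
  @dataclass
  class Viewpoint:
      subject_screen: tuple   # projected subject position in [0, 1]^2
      subject_height: float   # fraction of frame height covered by the subject

  def convention_score(v):
      # Medium-shot convention: the subject should fill about half the frame.
      shot = max(1.0 - abs(v.subject_height - 0.5) * 2.0, 0.0)
      # Rule of thirds: the subject should sit near a thirds intersection.
      thirds = min(math.hypot(v.subject_screen[0] - x, v.subject_screen[1] - y)
                   for x in (1/3, 2/3) for y in (1/3, 2/3))
      return shot + max(1.0 - 3.0 * thirds, 0.0)

  def suggest(candidates, k=5):
      # Keep the k best-ranked viewpoints among many sampled candidates.
      return sorted(candidates, key=convention_score, reverse=True)[:k]

Reducing millions of sampled camera configurations to a handful of well-ranked suggestions is precisely what makes the collaboration with a human filmmaker interactive.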

ANR Contint: ENTRACTE

Participants: Charles Pontonnier [contact], Georges Dumont, Nicolas Bideau, Franck Multon, Julien Pettré, Richard Kulpa, Ana Lucia Cruz Ruiz, Steve Tonneau.

The ANR project ENTRACTE is a collaboration between the Gepetto team at LAAS, Toulouse (head of the project) and the Inria MimeTIC team. The project started in November 2013 and will end in August 2017. The purpose of ENTRACTE is to address the action planning problem, crucial for robots as well as for virtual human avatars, by analyzing human motion at a biomechanical level and by defining, from this analysis, bio-inspired motor control laws and bio-inspired paradigms for action planning. Ana Lucia Cruz Ruiz was recruited as a PhD student at the start of the project to work on musculoskeletal-based methods for avatar animation. Moreover, Steve Tonneau, a PhD student currently entering his third year, is also developing bio-inspired posture generators for avatar navigation in cluttered environments.

ADT: Man-IP

Participant: Franck Multon [contact].

The ADT Man-IP aims at proposing a common production pipeline for the MimeTIC and Hybrid teams. This pipeline is intended to facilitate the production of populated virtual reality environments.

The pipeline starts with the motion capture of an actor, using motion capture devices such as a Vicon system (a product of Oxford Metrics). To do so, we need to design new methods to automatically adapt all motion capture data to an internal skeleton, which can then be reused to retarget the motion to various types of skeletons and characters. The purpose is then to play this motion capture data on any type of virtual character used in the demos, regardless of their individual skeletons and morphologies; a minimal sketch of such a retargeting step is given below. The key point here is to make this process as automatic as possible.
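As an illustration of the retargeting step, here is a minimal sketch assuming the captured motion has already been mapped onto the shared internal skeleton, with frames stored as dictionaries of per-joint local rotations; copying rotations across identically named joints and rescaling the root translation by limb length is one common strategy, and all names below are hypothetical.

  # Illustrative sketch only: skeleton and frame structures are assumed,
  # not the pipeline's actual data model.
  def retarget(frames, source_skeleton, target_skeleton):
      # frames: list of {joint_name: local_rotation} dicts, each with an
      # extra "root_pos" entry. Rotations are copied across identically
      # named joints; the root translation is scaled by the ratio of the
      # characters' leg lengths to keep plausible ground contact.
      scale = target_skeleton.leg_length / source_skeleton.leg_length
      out = []
      for f in frames:
          g = {j: r for j, r in f.items()
               if j != "root_pos" and j in target_skeleton.joints}
          x, y, z = f["root_pos"]
          g["root_pos"] = (x * scale, y * scale, z * scale)
          out.append(g)
      return out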

The second step in the pipeline is to design a high-level scenario framework to describe a virtual scene and the user's possible interactions with it, so that he/she can influence the story directly; a minimal sketch of such a scenario description is given below.
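As an illustration of what such a high-level scenario description could look like, here is a minimal sketch based on an event-driven state machine; the format, the state and event names, and the animation files are purely illustrative assumptions, not the framework actually developed in this ADT.

  # Illustrative sketch only: a hypothetical scenario format, not the
  # ADT's actual framework.
  scenario = {
      "start": "intro",
      "states": {
          # Each state plays an animation and lists user-triggered transitions.
          "intro":    {"play": "crowd_idle.anim", "on_user_approach": "greeting"},
          "greeting": {"play": "wave.anim",       "on_user_leave": "intro",
                       "on_user_speak": "dialogue"},
          "dialogue": {"play": "talk.anim",       "on_timeout": "intro"},
      },
  }

  def next_state(scenario, state, event):
      # Follow the transition labelled by the user's interaction event,
      # staying in the current state if no transition matches.
      return scenario["states"][state].get("on_" + event, state)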

In this ADT, we will also have to connect these two complementary parts into a unique framework that can be used by non-experts in computer animation to design new immersive experiments involving autonomous virtual humans. The resulting framework could consequently be used in the Immersia immersive room for various types of applications.