

Section: Contracts and Grants with Industry

ANR SignCom: Sign-Based Communication Between Real and Virtual Agents

Participants: Franck Multon [contact], Stéphane Donikian.

The SignCom project aims to improve the quality of real-time interaction between humans and virtual agents by exploiting natural communication modalities such as gestures, facial expressions, and gaze direction. Using structured and coded French Sign Language signs, the real and virtual humans are able to converse with each other. This project is funded by the ANR ("Audiovisuel et Multimedia" call, 2007) and is led by VALORIA (University Bretagne Sud). The partners are IRIT-TCI in Toulouse, the M2S Lab at University Rennes 2, the Polymorph company in Rennes, and the Websourd company in Toulouse.

MimeTIC was involved in three main parts:

  • designing a database of motion capture data for French Sign Language gestures, according to a scenario defined by the other partners. This task involves gathering information from various devices so that face, finger, and body motions are captured and combined in a single file. We have developed a specific experimental platform based on the Vicon-MX motion capture system (product of Oxford Metrics), the 5DT glove (product of Fifth Dimension Technologies), and FaceLAB (product of Seeing Machines), together with specific algorithms to coordinate and fuse all the data, which were recorded at various sampling frequencies (see the resampling sketch after this list). This work has been performed in close collaboration with the M2S Lab. The resulting database is used both for gesture recognition and for motion synthesis.

  • developing a dialog manager that uses the information provided by gesture recognition, analyzes the sentence, finds the relevant answer for the user, and then calls the motion synthesis module (see the dialog-loop sketch after this list). This dialog manager is also used to integrate the contributions of the other partners.

  • proposing an innovative gesture recognition method that addresses the intrinsic variability of gestures used in sign language: variability across users and styles, but also variability in space and speed. We have proposed a three-stage machine-learning approach that enables us to recognize more than 90% of the 70 gestures involved in the scenario (see the staged-classification sketch after this list). This work has been performed in close collaboration with the M2S Lab and the State Key Lab of CAD&CG (Zhejiang University, China).
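
The fusion algorithms themselves are not detailed in this report; the sketch below only illustrates, under simple assumptions, how streams recorded at different sampling frequencies (e.g. body, hands, face) can be resampled onto a common timeline before being merged into a single file. The stream names, rates, and linear-interpolation strategy are hypothetical placeholders, not the project's actual pipeline.

```python
"""Minimal sketch: aligning multi-rate capture streams on one timeline.

Assumption (not from the project): each stream is a (timestamps, samples)
pair expressed in a shared clock, and linear interpolation is acceptable.
"""
import numpy as np


def resample_stream(timestamps, samples, target_times):
    """Linearly interpolate one stream onto the target timeline.

    samples has shape (n_frames, n_channels); channels are interpolated
    independently.
    """
    samples = np.asarray(samples, dtype=float)
    return np.column_stack([
        np.interp(target_times, timestamps, samples[:, c])
        for c in range(samples.shape[1])
    ])


def fuse_streams(streams, target_rate=100.0):
    """Resample every named stream onto a common timeline.

    streams: dict mapping a name (e.g. "body", "face") to a
    (timestamps, samples) pair recorded at its own frequency.
    """
    start = max(t[0] for t, _ in streams.values())
    end = min(t[-1] for t, _ in streams.values())
    target_times = np.arange(start, end, 1.0 / target_rate)
    fused = {name: resample_stream(t, x, target_times)
             for name, (t, x) in streams.items()}
    return target_times, fused


if __name__ == "__main__":
    # Toy streams: 120 Hz "body" data and 60 Hz "face" data over one second.
    body = (np.linspace(0, 1, 120), np.random.rand(120, 3))
    face = (np.linspace(0, 1, 60), np.random.rand(60, 2))
    times, aligned = fuse_streams({"body": body, "face": face})
    print(times.shape, aligned["body"].shape, aligned["face"].shape)
```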
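
The dialog-loop sketch below only illustrates the flow described above, recognition output in, sentence analysis, answer lookup, and a call to the synthesis module; the gloss names, the response table, and the synthesize() callback are hypothetical placeholders, not the project's API.

```python
"""Minimal sketch of a dialog manager loop, under assumed interfaces."""
from typing import Callable, Sequence


class DialogManager:
    def __init__(self, responses: dict, synthesize: Callable[[Sequence[str]], None]):
        # responses maps an intent keyword to the gloss sequence to sign back.
        self.responses = responses
        self.synthesize = synthesize

    def interpret(self, glosses: Sequence[str]) -> str:
        """Very crude sentence analysis: pick the first known keyword."""
        for gloss in glosses:
            if gloss in self.responses:
                return gloss
        return "UNKNOWN"

    def handle(self, glosses: Sequence[str]) -> None:
        """Recognition output in, synthesized answer out."""
        intent = self.interpret(glosses)
        answer = self.responses.get(intent, ["SORRY", "NOT-UNDERSTOOD"])
        self.synthesize(answer)


if __name__ == "__main__":
    dm = DialogManager(
        responses={"HELLO": ["HELLO", "WELCOME"]},
        synthesize=lambda glosses: print("signing:", " ".join(glosses)),
    )
    dm.handle(["HELLO", "YOU"])   # -> signing: HELLO WELCOME
    dm.handle(["WEATHER"])        # -> signing: SORRY NOT-UNDERSTOOD
```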
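
The three stages of the recognition approach are not detailed in this report; the staged-classification sketch below is only a generic illustration of how a staged pipeline can factor out the variability mentioned above (spatial normalization, then temporal normalization to remove speed differences, then classification). It is not the project's method, and the nearest-template classifier and toy data are assumptions for illustration.

```python
"""Illustrative three-stage gesture classification (not the project's method):
1) spatial normalization, 2) temporal normalization, 3) classification.
"""
import numpy as np


def normalize_space(traj):
    """Stage 1: remove position and scale differences between signers."""
    traj = traj - traj.mean(axis=0)
    scale = np.linalg.norm(traj, axis=1).max()
    return traj / scale if scale > 0 else traj


def normalize_time(traj, n_frames=50):
    """Stage 2: resample to a fixed length to remove speed differences."""
    src = np.linspace(0.0, 1.0, len(traj))
    dst = np.linspace(0.0, 1.0, n_frames)
    return np.column_stack([np.interp(dst, src, traj[:, c])
                            for c in range(traj.shape[1])])


def classify(traj, templates):
    """Stage 3: nearest-template classification on the normalized gesture."""
    feat = normalize_time(normalize_space(traj)).ravel()
    dists = {label: np.linalg.norm(feat - normalize_time(normalize_space(t)).ravel())
             for label, t in templates.items()}
    return min(dists, key=dists.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    templates = {"HELLO": rng.random((40, 3)), "THANKS": rng.random((60, 3))}
    # A faster, rescaled, shifted copy of HELLO; stages 1-2 remove those effects.
    query = templates["HELLO"][::2] * 1.5 + 0.2
    print(classify(query, templates))  # expected to print HELLO
```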

The SignCom project ends in December 2011.