Section: Partnerships and Cooperations
National Initiatives
ANR ArtSpeech
- Other partners: Gipsa-Lab (Grenoble), IADI (Nancy), LPP (Paris)
- Participants: Ioannis Douros, Yves Laprie, Anastasiia Tsukanova
- Abstract: The objective is to synthesize speech via the numerical simulation of the human speech production processes, i.e., the articulatory, aerodynamic and acoustic aspects. Articulatory data come from MRI and EPGG acquisitions.
ANR JCJC KAMoulox
- Project title: Kernel additive modelling for the unmixing of large audio archives
- Abstract: The objective is to develop theoretical and applied tools to embed audio denoising and separation tools in web-based audio archives. The applicative scenario is the renowned audio archive “Archives du CNRS — Musée de l'Homme”, which gathers recordings dating back to the early 1900s.
PIA2 ISITE LUE
- Abstract: LUE (Lorraine Université d’Excellence) was designed as an “engine” for the development of excellence, stimulating an original dialogue between knowledge fields. Within challenge number 6, “Knowledge engineering”, this project funds the PhD thesis of Ioannis Douros on articulatory modeling.
OLKI LUE
- Project title: Open Language and Knowledge for Citizens, Lorraine Université d’Excellence
- Abstract: The initiative aims at developing new algorithms that improve the automatic understanding of natural language documents, and a federated language resource distribution platform to enable and facilitate the sharing of open resources. This project funds the PhD thesis of Tulika Bose on the detection and classification of hate speech.
E-FRAN METAL
- Project title: Modèles Et Traces au service de l’Apprentissage des Langues (Models and Traces in support of Language Learning)
- Other partners: Interpsy, LISEC, ESPE de Lorraine, D@NTE (Univ. Versailles Saint Quentin), Sailendra SAS, ITOP Education, Rectorat
- Participants: Theo Biasutto-Lervat, Anne Bonneau, Vincent Colotte, Dominique Fohr, Elodie Gauthier, Thomas Girod, Denis Jouvet, Odile Mella, Slim Ouni, Leon Rohrbacher
- Abstract: METAL aims at improving the learning of languages (written and oral) through the development of new tools and the analysis of digital traces associated with students' learning. MULTISPEECH is concerned with the oral language learning aspects.
ANR VOCADOM
- Project acronym: VOCADOM (http://vocadom.imag.fr/)
- Project title: Robust voice command adapted to the user and to the context for ambient assisted living
- Other partners: Inria (Nancy), Univ. Lyon 2 - GREPS, THEORIS (Paris)
- Participants: Dominique Fohr, Md Sahidullah, Sunit Sivasankaran, Emmanuel Vincent
- Abstract: The goal is to design a robust voice control system for smart home applications. MULTISPEECH is responsible for wake-up word detection, overlapping speech separation, and speaker recognition.
ANR JCJC DiSCogs
- Project title: Distant speech communication with heterogeneous unconstrained microphone arrays
- Participants: Nicolas Furnon, Irène Illina, Romain Serizel, Emmanuel Vincent
- Abstract: The objective is to solve fundamental sound processing issues in order to exploit the many devices equipped with microphones that populate our everyday life. The proposed solution is to apply deep learning approaches to recast the problem of synchronizing devices at the signal level as a multi-view learning problem.
ANR DEEP-PRIVACY
- Project title: Distributed, Personalized, Privacy-Preserving Learning for Speech Processing
- Other partners: LIUM (Le Mans), MAGNET (Inria Lille), LIA (Avignon)
- Participants: Pierre Champion, Denis Jouvet, Emmanuel Vincent
- Abstract: The objective is to develop a speech transformation that hides the speaker identity, allowing easier sharing of speech data for training speech recognition models, and to investigate speaker adaptation and distributed training.
ANR ROBOVOX
- Project title: Robust Vocal Identification for Mobile Security Robots
- Participants: Antoine Deleforge, Sandipana Dowerah, Denis Jouvet, Romain Serizel
- Abstract: The aim is to improve the robustness of speaker recognition for a security robot in real environments. Particular attention will be paid to aspects such as ambient noise, reverberation, and short speech utterances.
ANR LEAUDS
- Participants: Mauricio Michel Olvera Zambrano, Romain Serizel, Emmanuel Vincent, and Christophe Cerisara (CNRS - LORIA)
- Abstract: LEAUDS aims to make a leap towards developing machines that understand audio input through breakthroughs in the detection of thousands of audio events from little annotated data, robustness to “out-of-the-lab” conditions, and language-based description of audio scenes. MULTISPEECH is responsible for research on robustness and for bringing expertise on natural language generation.
Inria Project Lab HyAIAI
- Other partners: Inria TAU (Saclay), SEQUEL, MAGNET (Lille), MULTISPEECH, ORPAILLEUR (Nancy)
- Participants: Irène Illina, Emmanuel Vincent, Georgios Zervakis
- Abstract: HyAIAI is about the design of novel, interpretable artificial intelligence methods based on hybrid approaches that combine state-of-the-art numeric models with explainable symbolic models.
ANR BENEPHIDIRE
- Project title: Stuttering: Neurology, Phonetics, Computer Science for Diagnosis and Rehabilitation
- Other partners: LORIA (Nancy), INM (Toulouse), LiLPa (Strasbourg)
- Abstract: This project brings together neurologists, speech-language pathologists, phoneticians, and computer scientists specializing in speech processing to investigate stuttering as a speech impairment and to develop techniques for diagnosis and rehabilitation.
ANR HAIKUS
- Project title: Artificial Intelligence applied to augmented acoustic Scenes
- Abstract: HAIKUS aims to achieve seamless integration of computer-generated immersive audio content into augmented reality (AR) systems. One of the main challenges is the rendering of virtual auditory objects in the presence of source movements, listener movements and/or changing acoustic conditions.
ANR Flash Open Science HARPOCRATES
- Project title: Open data, tools and challenges for speaker anonymization
- Abstract: HARPOCRATES will form a working group that will collect and share the first open datasets and tools in the field of speech privacy, and launch the first open challenge on speech privacy, specifically on the topic of voice de-identification.
ATT Dynalips & ATT Dynalips-2
- Abstract: This is a technology transfer project of our research solution that precisely and automatically synchronizes the mouth movements of a 3D character with speech. We target the 3D animation and video game industries.