

Section: Partnerships and Cooperations

European Initiatives

DREAM

  • Title: Deferred Restructuring of Experience in Autonomous Machines

  • Programme: H2020

  • Duration: January 2015 - December 2018

  • Coordinator: UPMC

  • Partners:

    • Armines (ENSTA ParisTech)

    • University of Edinburgh (Scotland)

    • University of A Coruna (Spain)

    • Vrije Universiteit Amsterdam (Netherlands)

  • Contact: David Filliat

  • Abstract: A holy grail in robotics and artificial intelligence is to design a machine that can accumulate adaptations on developmental time scales of months and years. From infancy through adulthood, such a system must continually consolidate and bootstrap its knowledge, to ensure that the learned knowledge and skills are compositional and organized into meaningful hierarchies. Consolidation of previous experience and knowledge appears to be one of the main purposes of sleep and dreams for humans, which serve to tidy the brain by removing excess information, to recombine concepts to improve information processing, and to consolidate memory. Our approach – Deferred Restructuring of Experience in Autonomous Machines (DREAM) – incorporates sleep- and dream-like processes within a cognitive architecture. This enables an individual robot or groups of robots to consolidate their experience into more useful and generic formats, thus improving their future ability to learn and adapt. DREAM relies on Evolutionary Neurodynamic ensemble methods (Fernando et al., 2012, Frontiers in Comp Neuro; Bellas et al., IEEE-TAMD, 2010) as a unifying principle for discovery, optimization, restructuring and consolidation of knowledge. This new paradigm will make the robot more autonomous in its acquisition, organization and use of knowledge and skills, as long as they comply with the satisfaction of pre-established basic motivations. DREAM will enable robots to cope with the complexity of being an information-processing entity in domains that are open-ended both in terms of space and time. It paves the way for a new generation of robots whose existence and purpose go far beyond the mere execution of dull tasks. http://www.robotsthatdream.eu

Collaborations in European Programs, except FP7 & H2020

IGLU
  • Title: Interactive Grounded Language Understanding (IGLU)

  • Programme: CHIST-ERA

  • Duration: October 2015 - September 2018

  • Coordinator: University of Sherbrooke, Canada

  • Partners:

    • University of Sherbrooke, Canada

    • Inria Bordeaux, France

    • University of Mons, Belgium

    • KTH Royal Institute of Technology, Sweden

    • University of Zaragoza, Spain

    • University of Lille 1, France

    • University of Montreal, Canada

  • Inria contact: Pierre-Yves Oudeyer

  • Abstract: Language is an ability that develops in young children through joint interaction with their caretakers and their physical environment. At this level, human language understanding can be described as interpreting and expressing semantic concepts (e.g. objects, actions and relations) through what can be perceived (or inferred) from the current context in the environment. Previous work in the field of artificial intelligence has failed to address the acquisition of such perceptually grounded knowledge in virtual agents (avatars), mainly because of the lack of physical embodiment (the ability to interact physically) and of dialogue and communication skills (the ability to interact verbally). We believe that robotic agents are more appropriate for this task, and that interaction is such an important aspect of human language learning and understanding that pragmatic knowledge (identifying or conveying intention) must be present to complement semantic knowledge. Through a developmental approach where knowledge grows in complexity while driven by multimodal experience and language interaction with a human, we propose an agent that will incorporate models of dialogues, human emotions and intentions as part of its decision-making process. This will lead to anticipation and reaction based not only on its internal state (own goals and intentions, perception of the environment), but also on the perceived state and intention of the human interactant. This will be made possible through the development of advanced machine learning methods (combining developmental, deep and reinforcement learning) to handle large-scale multimodal inputs, as well as by leveraging state-of-the-art technological components of a language-based dialogue system available within the consortium. Evaluations of learned skills and knowledge will be performed using an integrated architecture in a culinary use case, and novel databases enabling research in grounded human language understanding will be released.
IGLU will gather an interdisciplinary consortium of committed and experienced researchers in machine learning, neuroscience and cognitive sciences, developmental robotics, speech and language technologies, and multimodal/multimedia signal processing. We expect to have key impacts on the development of more interactive and adaptable systems sharing our environment in everyday life. http://iglu-chistera.github.io/