Section: Partnerships and Cooperations

European Initiatives

FP7 & H2020 Projects

3rd HAND
  • Title: Semi-Autonomous 3rd Hand

  • Program: FP7

  • Duration: October 2013 - September 2017

  • Coordinator: Inria

  • Partners:

    • Technische Universität Darmstadt (Germany)

    • Universität Innsbruck (Austria)

    • Universität Stuttgart (Germany)

  • Inria contact: Manuel Lopes

  • Robots have been essential for keeping industrial manufacturing in Europe. Most factories have large numbers of robots in a fixed setup and run a few programs that produce the exact same product hundreds of thousands of times. The only common interaction between the robot and the human worker has become the so-called 'emergency stop button'. As a result, re-programming robots for new or personalized products has become a key bottleneck for keeping manufacturing jobs in Europe; to date, the core requirement has been production in large numbers or at a high price. Robot-based small-series production requires a major breakthrough in robotics: the development of a new class of semi-autonomous robots that can decrease this cost substantially. Such robots need to be aware of the human worker, relieving him of monotonous repetitive tasks while keeping him in the loop where his intelligence makes a substantial difference. In this project, we pursue this breakthrough by developing a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct, even by an untrained layman worker, will allow for efficient knowledge transfer between tasks, and will enable effective collaboration between a human worker and a robotic third hand. The main contributions of this project will be the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to: i) learn cooperative tasks from demonstration (see the sketch below); ii) learn from instruction; and iii) transfer knowledge between tasks and environments. We will demonstrate its efficiency in the collaborative assembly of an IKEA-like shelf where the robot acts as a semi-autonomous third hand.
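
Below is a minimal, hypothetical Python sketch of the "learn cooperative tasks from demonstration" ingredient: a few demonstrated end-effector trajectories are time-normalized and summarized by a per-step mean and variance, giving the robot a nominal motion plus a confidence envelope. The function fit_from_demonstrations and the synthetic data are inventions for this illustration, not the 3rd HAND project's actual method.

    # Illustrative sketch only (not the 3rd HAND method): summarize a few
    # demonstrated trajectories by a time-normalized mean and variance.
    import numpy as np

    def fit_from_demonstrations(demos, n_steps=50):
        """demos: list of (T_i, D) arrays of end-effector positions."""
        resampled = []
        for demo in demos:
            t_src = np.linspace(0.0, 1.0, len(demo))
            t_dst = np.linspace(0.0, 1.0, n_steps)
            # Linearly resample each dimension onto a common time base.
            resampled.append(np.column_stack(
                [np.interp(t_dst, t_src, demo[:, d]) for d in range(demo.shape[1])]))
        stacked = np.stack(resampled)          # (n_demos, n_steps, D)
        return stacked.mean(axis=0), stacked.std(axis=0)

    # Three noisy demonstrations of the same 2-D reaching motion (synthetic).
    rng = np.random.default_rng(0)
    base = np.column_stack([np.linspace(0, 1, 60),
                            np.sin(np.linspace(0, np.pi, 60))])
    demos = [base + 0.02 * rng.standard_normal(base.shape) for _ in range(3)]

    mean_traj, std_traj = fit_from_demonstrations(demos)
    print(mean_traj.shape, std_traj.max())     # nominal path and its variability

One reading of the variance envelope: where demonstrations agree the assistant can act autonomously, and where they diverge it should defer to the worker, mirroring the semi-autonomy trade-off described above.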

DREAM
  • Title: Deferred Restructuring of Experience in Autonomous Machines

  • Program: H2020

  • Duration: January 2015 - December 2018

  • Coordinator: UPMC

  • Partners:

    • Armines (ENSTA ParisTech)

    • Queen Mary University of London (United Kingdom)

    • University of A Coruña (Spain)

    • Vrije Universiteit Amsterdam (Netherlands)

  • Contact: David Filliat

  • Abstract: A holy grail in robotics and artificial intelligence is to design a machine that can accumulate adaptations on developmental time scales of months and years. From infancy through adulthood, such a system must continually consolidate and bootstrap its knowledge, to ensure that the learned knowledge and skills are compositional and organized into meaningful hierarchies. Consolidation of previous experience and knowledge appears to be one of the main purposes of sleep and dreams for humans: they serve to tidy the brain by removing excess information, to recombine concepts to improve information processing, and to consolidate memory. Our approach – Deferred Restructuring of Experience in Autonomous Machines (DREAM) – incorporates sleep- and dream-like processes within a cognitive architecture. This enables an individual robot or groups of robots to consolidate their experience into more useful and generic formats (see the toy sketch below), thus improving their future ability to learn and adapt. DREAM relies on Evolutionary Neurodynamic ensemble methods (Fernando et al., 2012, Frontiers in Computational Neuroscience; Bellas et al., IEEE TAMD, 2010) as a unifying principle for the discovery, optimization, restructuring and consolidation of knowledge. This new paradigm will make the robot more autonomous in its acquisition, organization and use of knowledge and skills, as long as these comply with pre-established basic motivations. DREAM will enable robots to cope with the complexity of being an information-processing entity in domains that are open-ended both in terms of space and time. It paves the way for a new generation of robots whose existence and purpose go far beyond the mere execution of dull tasks. http://www.robotsthatdream.eu
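
The consolidation idea can be caricatured in code: a toy "sleep" phase that tidies a robot's experience buffer by keeping only a small, diverse subset of logged states (greedy farthest-point selection). This is a hypothetical sketch under simple assumptions, not DREAM's Evolutionary Neurodynamic method; the function consolidate and the random data are inventions for illustration.

    # Illustrative sketch only (not DREAM's algorithm): a toy "sleep" phase
    # that consolidates an experience buffer by discarding near-duplicate
    # observations and keeping a small, diverse subset.
    import numpy as np

    def consolidate(experiences, keep=10):
        """experiences: (N, D) array of state vectors logged while 'awake'."""
        exps = np.asarray(experiences, dtype=float)
        kept = [0]                                  # start from the first sample
        dist = np.linalg.norm(exps - exps[0], axis=1)
        while len(kept) < min(keep, len(exps)):
            idx = int(np.argmax(dist))              # most novel remaining sample
            kept.append(idx)
            # Each sample's distance to its nearest kept neighbour.
            dist = np.minimum(dist, np.linalg.norm(exps - exps[idx], axis=1))
        return exps[kept]

    rng = np.random.default_rng(1)
    waking_log = rng.standard_normal((500, 4))      # 500 raw 4-D experiences
    memory = consolidate(waking_log, keep=20)       # compact consolidated memory
    print(memory.shape)                             # (20, 4)

Keeping mutually distant samples is a crude stand-in for "removing excess information while retaining coverage"; the actual project operates on evolved neural controllers rather than raw state vectors.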

Collaborations in European Programs, except FP7 & H2020

IGLU
  • Title: Interactive Grounded Language Understanding (IGLU)

  • Program: CHIST-ERA

  • Duration: October 2015 - September 2018

  • Coordinator: University of Sherbrooke, Canada

  • Partners:

    • University of Sherbrooke, Canada

    • Inria Bordeaux, France

    • University of Mons, Belgium

    • KTH Royal Institute of Technology, Sweden

    • University of Zaragoza, Spain

    • University of Lille 1, France

    • University of Montreal, Canada

  • Inria contact: Pierre-Yves Oudeyer

  • Language is an ability that develops in young children through joint interaction with their caretakers and their physical environment. At this level, human language understanding can be described as interpreting and expressing semantic concepts (e.g. objects, actions and relations) through what can be perceived (or inferred) from the current context in the environment. Previous work in the field of artificial intelligence has failed to address the acquisition of such perceptually grounded knowledge in virtual agents (avatars), mainly because of the lack of physical embodiment (the ability to interact physically) and of dialogue and communication skills (the ability to interact verbally). We believe that robotic agents are more appropriate for this task, and that interaction is such an important aspect of human language learning and understanding that pragmatic knowledge (identifying or conveying intention) must be present to complement semantic knowledge. Through a developmental approach in which knowledge grows in complexity while driven by multimodal experience and language interaction with a human, we propose an agent that will incorporate models of dialogues, human emotions and intentions as part of its decision-making process. This will lead to anticipation and reaction based not only on its internal state (own goals and intentions, perception of the environment), but also on the perceived state and intentions of the human interactant. This will be made possible by the development of advanced machine learning methods (combining developmental, deep and reinforcement learning) to handle large-scale multimodal inputs, and by leveraging the state-of-the-art technological components of a language-based dialogue system available within the consortium. Evaluations of the learned skills and knowledge will be performed using an integrated architecture in a culinary use-case, and novel databases enabling research in grounded human language understanding will be released. IGLU gathers an interdisciplinary consortium of committed and experienced researchers in machine learning, neurosciences and cognitive sciences, developmental robotics, speech and language technologies, and multimodal/multimedia signal processing. We expect key impacts on the development of more interactive and adaptable systems sharing our environment in everyday life. (A toy sketch of cross-situational word grounding follows this abstract.) http://iglu-chistera.github.io/
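
As a toy illustration of grounded language learning, a cross-situational learner can associate words with perceived objects by counting how often they co-occur across interactions; ambiguity resolves as evidence accumulates. This hypothetical Python sketch (the CrossSituationalLearner class and the example utterances are inventions for illustration) is not IGLU's actual system.

    # Illustrative sketch only (not IGLU's system): ground words in
    # perception by counting word/object co-occurrences across scenes.
    from collections import defaultdict

    class CrossSituationalLearner:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, utterance, perceived_objects):
            """One interaction: words heard together with objects seen."""
            for word in utterance.split():
                for obj in perceived_objects:
                    self.counts[word][obj] += 1

        def meaning(self, word):
            """Most likely referent of a word given accumulated evidence."""
            refs = self.counts[word]
            return max(refs, key=refs.get) if refs else None

    learner = CrossSituationalLearner()
    learner.observe("pass the cup", {"cup", "knife"})
    learner.observe("fill the cup", {"cup", "bowl"})
    learner.observe("take the knife", {"knife", "bowl"})
    learner.observe("the knife please", {"knife", "cup"})
    print(learner.meaning("cup"))    # -> 'cup'
    print(learner.meaning("knife"))  # -> 'knife'

Note that function words such as "the" co-occur with everything and stay ambiguous, which is one reason the project also models dialogue, intentions and pragmatics rather than word-object statistics alone.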