
Section: Partnerships and Cooperations

National Initiatives

eTAC: Tangible and Augmented Interfaces for Collaborative Learning:

  • Funding: EFRAN

  • Duration: 2017-2021

  • Coordinator: Université de Lorraine

  • Local coordinator: Martin Hachet

  • Partners: Université de Lorraine, Inria, ESPE, Canopé, OpenEdge

  • The e-TAC project investigates the potential of technologies “beyond the mouse” to promote collaborative learning in a school context. In particular, we explore augmented reality and tangible interfaces, which support active learning and favor social interaction.


ANR Rebel:

  • Duration: 2016-2019

  • Coordinator: Fabien Lotte

  • Funding: ANR Jeune Chercheur Jeune Chercheuse Project

  • Partners: Disabilities and Nervous Systems Laboratory Bordeaux

  • Brain-Computer Interfaces (BCI) are communication systems that enable their users to send commands to computers through brain activity alone. While BCI are very promising for assistive technologies and human-computer interaction (HCI), they are barely used outside laboratories due to their poor reliability. Designing a BCI requires 1) its user to learn to produce distinct brain activity patterns and 2) the machine to recognize these patterns using signal processing. Most research efforts have focused on signal processing; however, user training is just as essential, yet it is only scarcely studied and is based on heuristics that do not satisfy human learning principles. The currently poor reliability of BCI is therefore probably due to suboptimal user training. We propose to create a new generation of BCI that apply human learning principles in their design, to ensure that users can learn high-quality control skills, hence making BCI reliable. This could change HCI in the way BCI have promised but so far failed to do.
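The pattern-recognition step mentioned above (point 2) can be illustrated with a minimal sketch. This is not the project's actual pipeline: the data here are synthetic stand-ins for EEG band-power features, and the classifier is a plain Fisher linear discriminant implemented with NumPy, chosen only because linear discriminants are a common baseline in BCI decoding.

```python
import numpy as np

# Hypothetical illustration: distinguishing two mental-command classes from
# synthetic "band power" features. Real BCIs derive such features from EEG
# signals; here we simply simulate two classes with different means, standing
# in for the distinct brain-activity patterns a user learns to produce.
rng = np.random.default_rng(0)

n_trials, n_features = 100, 4
X0 = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_features))  # class 0
X1 = rng.normal(loc=1.5, scale=1.0, size=(n_trials, n_features))  # class 1
X = np.vstack([X0, X1])
y = np.array([0] * n_trials + [1] * n_trials)

# Fisher's linear discriminant with a shared, lightly regularized covariance:
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
cov = np.cov(X.T) + 1e-6 * np.eye(n_features)
w = np.linalg.solve(cov, mu1 - mu0)   # projection direction
b = -0.5 * w @ (mu0 + mu1)            # decision threshold at the midpoint

pred = (X @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A high accuracy on such cleanly separated synthetic data is expected; the project's point is precisely that real EEG patterns are only this separable when the user has been properly trained to produce them.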


Inria Project Lab BCI-LIFT:

  • Duration: 2015-2018

  • Partners: Inria team Athena (Inria Sophia-Antipolis), Inria team Hybrid (Inria Rennes), Inria team Neurosys (Inria Nancy), LITIS (Université de Rouen), Inria team DEMAR (Inria Sophia-Antipolis), Inria team MINT (Inria Lille), DyCOG (INSERM Lyon)

  • Coordinator: Maureen Clerc (Inria Sophia Antipolis)

  • Local coordinator: Fabien Lotte

  • The aim is to reach the next generation of non-invasive Brain-Computer Interfaces (BCI): BCI that are easier to appropriate, more efficient, and that suit a larger number of people. With usability as our driving objective, we will build non-invasive systems that benefit from advanced signal processing and machine learning methods and from smart interface design, and in which the user immediately receives supportive feedback. What drives this project is the concern that a substantial proportion of human participants is currently categorized as “BCI-illiterate” because of their apparent inability to communicate through BCI. Through this project we aim to make it easier for people to learn to use a BCI, by implementing appropriate machine learning methods and developing user training scenarios.



Inria Project Lab AVATAR:

  • Duration: 2018-2022

  • Partners: Inria project-teams: GraphDeco, Hybrid, Loki, MimeTIC, Morpheo

  • Coordinator: Ludovic Hoyet (Inria Rennes)

  • Local coordinator: Martin Hachet

  • This project aims to design avatars (i.e., users' representations in virtual environments) that are better embodied, more interactive, and more social, by improving the whole avatar pipeline, from acquisition and simulation to the design of novel interaction paradigms and multi-sensory feedback.
