Section: New Results
Machine Learning
In this section, we report on neuronal adaptive mechanisms that we develop at the frontier between Machine Learning and Computational Neuroscience. Our goal is to consider models from Machine Learning and adapt them for integration in a bio-inspired framework.
Concerning the manipulation of temporal sequences, we have proposed an original algorithm for the extraction of sequences from LSTMs, a major class of recurrent neural networks [1]. These sequences are then interpreted as rules representing implicit knowledge within electrical diagrams (cf. § 8.1).
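To fix ideas, the sketch below illustrates the general principle of such an extraction; it is not the actual algorithm of [1], and all names and parameter values are illustrative. Hidden-state trajectories of a (here untrained) LSTM are discretized by clustering, yielding symbol sequences that can then be matched against rule templates.

```python
# Hypothetical sketch: turning LSTM hidden-state trajectories into
# discrete symbol sequences. This is NOT the algorithm of [1], only a
# simplified illustration of the general idea.
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Toy data: a batch of input sequences (batch, time, features).
inputs = torch.randn(8, 20, 4)

# An untrained LSTM, for illustration only; in practice this would be
# the trained network whose implicit knowledge we want to extract.
lstm = torch.nn.LSTM(input_size=4, hidden_size=16, batch_first=True)
with torch.no_grad():
    hidden_states, _ = lstm(inputs)          # (batch, time, hidden)

# Discretize hidden states into a small alphabet of symbols by
# clustering; each cluster index stands for one abstract "state".
flat = hidden_states.reshape(-1, 16).numpy()
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(flat)
symbols = labels.reshape(8, 20)

# Collapse repeated symbols so each row reads as a state sequence,
# which could then be matched against rule templates.
for row in symbols:
    seq = [int(row[0])] + [int(s) for i, s in enumerate(row[1:]) if s != row[i]]
    print(seq)
```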
Concerning our work on reservoir computing, X. Hinaut is collaborating with Michael Spranger (Sony Lab, Tokyo, Japan) on language grounding, adapting Hinaut's previous Reservoir Language Model (RLM) to Spranger's representational system, IRL (Incremental Recruitment Language). He is also collaborating with Hamburg on the use of reservoir models for robotic tasks (cf. § 9.3). In this work, we have shown that the RLM can successfully learn to parse sentences related to home scenarios in fifteen languages [6]. This demonstrates that (1) the learning principle of our model is not limited to a particular language (or to particular sentence structures), and (2) it can deal with various kinds of representations (not only predicates), which enables users to adapt it to their own needs.
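As background for readers unfamiliar with reservoir computing, the following is a minimal echo state network in the generic sense, not the RLM of [6]: a fixed random recurrent reservoir whose states are read out by a linear layer trained with ridge regression, which is the only trained part of the model. The dimensions and data are toy values chosen for the example.

```python
# Minimal echo state network sketch (generic reservoir computing, not
# the RLM of [6]): fixed random recurrent weights, trained readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 3, 100, 2

# Fixed random weights; the recurrent matrix is rescaled to spectral
# radius 0.9 so the reservoir has the echo state property (fading memory).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect reservoir states for one input sequence (time, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

# Toy training data: a random input sequence and random target outputs.
U = rng.uniform(-1, 1, (200, n_in))
Y = rng.uniform(-1, 1, (200, n_out))

X = run_reservoir(U)
# Ridge-regression readout: only these output weights are learned.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

prediction = X @ W_out.T   # (200, n_out) linear readout of the states
```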
Regarding the extraction of features from, and the use of, hierarchical networks such as deep networks, we have considered how to deal with not-so-big data sets and targeted the interpretability of the obtained results, which is a key issue: since deep learning applications are increasingly present in society, it is important that the underlying processes be accessible and understandable to everyone. To address these challenges, we have analyzed how considering prototypes in a rather generalized sense (with respect to the state of the art) allows one to work reasonably with small data sets while providing an interpretable view of the obtained results. Some mathematical interpretations of this proposal have also been discussed. Sensitivity to hyperparameters is a key issue for reproducible deep learning results and has been carefully considered in our methodology. The performance and (even more interesting, in a sense) the limitations of the proposed setup have been explored in detail, under different hyperparameter sets, in a way analogous to how biological experiments are conducted. We obtain a rather simple architecture, easy to explain, which, combined with a standard method, allows us to target both performance and interpretability [4].
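As a rough illustration of the prototype idea (a generic nearest-prototype scheme, not the specific architecture of [4]), consider the sketch below on synthetic data: each class is summarized by a prototype in feature space, and each decision is explained by the distances to the prototypes. All data and parameters are made up for the example.

```python
# Hedged sketch of prototype-based interpretable classification: each
# class is summarized by a prototype (here, the per-class mean), and a
# decision can be explained by the distances to those prototypes.
import numpy as np

rng = np.random.default_rng(0)

# Toy "small data set": 20 samples per class in a 2-D feature space
# (in a setting like [4], features would come from a deep network).
class_means = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])
X = np.vstack([m + rng.normal(scale=0.7, size=(20, 2)) for m in class_means])
y = np.repeat([0, 1, 2], 20)

# Prototypes: per-class mean of the training features.
prototypes = np.array([X[y == c].mean(axis=0) for c in range(3)])

def predict(x):
    """Nearest-prototype decision; the distances themselves make the
    prediction interpretable (how close is x to each class?)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(d)), d

label, distances = predict(np.array([2.5, 1.2]))
print("predicted class:", label, "distances:", np.round(distances, 2))
```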