## Section: New Results

### Neural Networks as dynamical systems

#### A modular architecture for transparent computation in recurrent neural networks

Participants : Giovanni Carmantini [Plymouth University, UK], Peter beim Graben [Humboldt University (Berlin), Germany], Mathieu Desroches [Inria MathNeuro], Serafim Rodrigues [Plymouth University, UK].

Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics on the one hand and symbolic representations and operations on the other is still unclear in traditional eliminative connectionism. We therefore suggest a new perspective on this central issue, which we refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real time and are programmed directly, without network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistic experiments.

This work has been published in Neural Networks and is available as [13].
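The core idea of Gödelization can be illustrated in a few lines (a toy sketch, not the paper's full construction, which handles two-sided dotted sequences and the control of symbolic operations): a one-sided symbol sequence over a `b`-letter alphabet is encoded as a `b`-adic expansion in the unit interval, and the symbolic left shift then acts on Gödel codes as the expanding map x ↦ bx mod 1.

```python
def godelize(seq, b):
    """Gödel-encode a one-sided symbol sequence (digits 0..b-1) as a point in [0,1)."""
    return sum(s * b ** -(k + 1) for k, s in enumerate(seq))

def shift_map(x, b):
    """The symbolic left shift, conjugated to the expanding map x -> b*x mod 1."""
    return (b * x) % 1.0

word = [1, 0, 2, 1]            # a word over a 3-letter alphabet
x = godelize(word, 3)          # 1/3 + 0/9 + 2/27 + 1/81 = 34/81
y = shift_map(x, 3)            # encodes the shifted word [0, 2, 1], i.e. 7/27
```

The commutation `shift_map(godelize(w)) == godelize(w[1:])` is exactly the conjugacy that lets a dynamical system on the interval simulate symbolic dynamics in real time.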

#### Latching dynamics in neural networks with synaptic depression

Participants : Pascal Chossat [Inria MathNeuro], Martin Krupa [Inria MathNeuro], Frédéric Lavigne [Université de Nice - BCL].

Priming is the ability of the brain to activate a target concept more quickly in response to a related stimulus (the prime). Experiments point to the existence of an overlap between the populations of neurons coding for different stimuli. Other experiments show that prime-target relations arise in the process of long term memory formation. The classical modelling paradigm is that long term memories correspond to stable steady states of a Hopfield network with Hebbian connectivity. Experiments show that short term synaptic depression plays an important role in the processing of memories. This leads naturally to a computational model of priming, called latching dynamics: a stable state (prime) can become unstable and the system may converge to another transiently stable steady state (target). Hopfield network models of latching dynamics have been studied by means of numerical simulation; however, the conditions for the existence of this dynamics have not been elucidated. In this work we use a combination of analytic and numerical approaches to confirm that latching dynamics can exist in the context of Hebbian learning; however, it lacks robustness and imposes a number of biologically unrealistic restrictions on the model. In particular, our work shows that the symmetry of the Hebbian rule is not an obstruction to the existence of latching dynamics, although fine tuning of the model parameters is needed.

This work has been submitted for publication and is available as [23].
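The latching mechanism can be sketched with a minimal firing-rate model (illustrative only: the equations, the Tsodyks–Markram-style depression term and all parameter values below are our own assumptions, not those of [23]). Two overlapping Hebbian patterns are stored; short-term depression of the synapses sustaining the active pattern erodes its stability, which may allow a transition towards the overlapping pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping binary memory patterns; the shared neurons (30..39)
# model the prime-target overlap reported in the experiments.
N = 100
xi1 = np.zeros(N); xi1[:40] = 1.0
xi2 = np.zeros(N); xi2[30:70] = 1.0

# Symmetric Hebbian connectivity built from the stored patterns.
J = (np.outer(xi1, xi1) + np.outer(xi2, xi2)) / N
np.fill_diagonal(J, 0.0)

def f(u):
    """Firing-rate nonlinearity (rectified tanh)."""
    return np.tanh(np.maximum(u, 0.0))

# Euler integration of rates x with short-term synaptic depression s:
# active synapses deplete (U * s * x) and slowly recover towards 1.
dt, T = 0.1, 4000
tau_x, tau_d, U = 1.0, 50.0, 0.6
x = xi1 + 0.01 * rng.standard_normal(N)   # start near the prime
s = np.ones(N)
overlaps = []
for _ in range(T):
    u = J @ (s * x)
    x += dt / tau_x * (-x + f(4.0 * u))
    s += dt * ((1.0 - s) / tau_d - U * s * x)
    overlaps.append((x @ xi1 / xi1.sum(), x @ xi2 / xi2.sum()))
```

Tracking the two overlaps over time shows whether the prime destabilises and a latch to the target occurs; consistent with the fragility result above, whether it does depends delicately on the chosen parameters.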

#### On the Hamiltonian structure of large deviations in stochastic hybrid systems

Participants : Paul Bressloff [University of Utah, USA], Olivier Faugeras [Inria MathNeuro].

We present a new derivation of the classical action underlying a large deviation principle (LDP) for a stochastic hybrid system, which couples a piecewise deterministic dynamical system in ${\mathbb{R}}^{d}$ with a time-homogeneous Markov chain on some discrete space $\Gamma $. We assume that the Markov chain on $\Gamma $ is ergodic, and that the discrete dynamics is much faster than the piecewise deterministic dynamics (separation of timescales). Using the Perron-Frobenius theorem and the calculus of variations, we show that the resulting Hamiltonian is given by the Perron eigenvalue of a $\left|\Gamma \right|$-dimensional linear equation. The corresponding linear operator depends on the transition rates of the Markov chain and the nonlinear functions of the piecewise deterministic system. We compare the Hamiltonian to one derived using WKB methods, and show that the latter is a reduction of the former. We also indicate how the analysis can be extended to a multi-scale stochastic process, in which the continuous dynamics is described by a piecewise stochastic differential equation (SDE). Finally, we illustrate the theory by considering applications to conductance-based models of membrane voltage fluctuations in the presence of stochastic ion channels.

This work has been submitted for publication and is available as [22].
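For a concrete instance, here is a hedged numerical sketch of the simplest ion-channel case, a two-state (closed/open) channel (the rates, vector fields and normalisations below are illustrative assumptions, not taken from [22]): the Hamiltonian $H(x,p)$ is the Perron eigenvalue of the tilted matrix $W + p\,\mathrm{diag}(F)$, where $W$ is the generator of the Markov chain and $F_n(x)$ are the piecewise deterministic vector fields.

```python
import numpy as np

def hamiltonian(x, p, alpha, beta):
    """Perron eigenvalue of the p-tilted generator for a two-state channel.

    State 0 (closed) drives dx/dt = F0 = -x; state 1 (open) drives F1 = 1 - x.
    W is the Markov generator (columns sum to zero) with opening rate alpha
    and closing rate beta; H(x, p) is the largest real eigenvalue of
    W + p * diag(F).
    """
    W = np.array([[-alpha, beta],
                  [alpha, -beta]])
    F = np.array([-x, 1.0 - x])
    M = W + p * np.diag(F)
    return np.max(np.linalg.eigvals(M).real)
```

Two standard sanity checks on the construction: at $p=0$ the Perron eigenvalue of the generator is $0$, and the derivative of $H$ in $p$ at $p=0$ recovers the mean-field drift $\sum_n \rho_n F_n(x)$, with $\rho$ the stationary distribution of the chain.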

#### Large Deviations of a Spatially-Stationary Network of Interacting Neurons

Participants : Olivier Faugeras [Inria MathNeuro], James Maclaurin [University of Sydney, Australia].

In this work we determine a process-level Large Deviation Principle (LDP) for a model of interacting neurons indexed by a lattice ${\mathbb{Z}}^{d}$. The neurons are subject to noise, which is modelled as a correlated martingale. The probability law governing the noise is strictly stationary, and we are therefore able to find an LDP for the probability laws ${\Pi}^{n}$ governing the stationary empirical measure ${\widehat{\mu}}^{n}$ generated by the neurons in a cube of length $(2n+1)$. We use this LDP to determine an LDP for the neural network model. The connection weights between the neurons evolve according to a learning rule (neuronal plasticity), and these results are adaptable to a large variety of neural network models. This LDP is of great use in the mathematical modelling of neural networks, because it allows a quantification of the likelihood of the system deviating from its limit, as well as a determination of the direction in which the system is likely to deviate. The work is also of interest because there are nontrivial correlations between the neurons even in the asymptotic limit, and it can thus be seen as a generalisation of traditional mean-field models.

This work has been submitted for publication and is available as [25].
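To fix ideas, the stationary empirical measure can be sketched in dimension $d=1$ (a toy illustration under our own simplifying assumptions: a static binary configuration on a periodised lattice, with each shifted field summarised by its local window at the origin, rather than the full process-level object of [25]).

```python
from collections import Counter

def empirical_measure(omega, n, window=3):
    """Sketch of the stationary empirical measure in d = 1: the average of
    Dirac masses at the configuration shifted to each site of the cube
    V_n = {-n, ..., n}, summarised here by the local window at the origin."""
    counts = Counter()
    L = len(omega)
    for k in range(-n, n + 1):
        # shift the configuration by k and record the local pattern at 0
        pattern = tuple(omega[(k + j) % L] for j in range(window))
        counts[pattern] += 1
    total = 2 * n + 1          # |V_n| = 2n + 1 sites in the cube
    return {pat: c / total for pat, c in counts.items()}
```

As $n$ grows, the LDP quantifies how unlikely it is for this measure to stay far from its stationary limit, and in which direction deviations occur.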

#### The period adding and incrementing bifurcations: from rotation theory to applications

Participants : Albert Granados [Technical University of Denmark, Denmark], Lluís Alsedà [Autonomous University of Barcelona, Spain], Martin Krupa [Inria MathNeuro].

This survey article is concerned with the study of bifurcations of piecewise-smooth maps. We review the literature on circle maps and quasi-contractions and provide paths through this literature to prove sufficient conditions for the occurrence of two types of bifurcation scenarios involving rich dynamics. The first scenario consists of the appearance of periodic orbits whose symbolic sequences and "rotation" numbers follow a Farey tree structure; the periods of the periodic orbits are given by consecutive addition. This is called the *period adding* bifurcation, and its proof relies on results for maps on the circle. In the second scenario, symbolic sequences are obtained by consecutive attachment of a given symbolic block and the periods of periodic orbits are incremented by a constant term. This is called the *period incrementing* bifurcation, and its proof relies on results for maps on the interval. We also discuss the expanding cases, as some of the partial results found in the literature also hold when these maps lose contractiveness. The higher dimensional case is discussed by means of *quasi-contractions*. Finally, we provide applied examples in control theory, power electronics and neuroscience where these results can be used to obtain precise descriptions of the dynamics.

This work has been accepted for publication in SIAM Review and is available as [26].
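A minimal numerical illustration of the period adding scenario (our own toy example, not taken from [26]) uses the contracting piecewise-linear circle map $f(x) = (\lambda x + \mu) \bmod 1$: as $\mu$ varies, the rotation number and the period of the attracting orbit change through a Farey-organised staircase, and the period can be read off by iterating past the transient and detecting the first near-return.

```python
def attractor_period(mu, lam=0.5, burn=2000, max_period=200, tol=1e-9):
    """Period of the attracting orbit of the contracting circle map
    f(x) = (lam * x + mu) mod 1, found by iterating past a transient
    and looking for the first near-return to the starting point."""
    x = 0.1
    for _ in range(burn):          # discard the transient; |f'| = lam < 1
        x = (lam * x + mu) % 1.0
    x0, y = x, x
    for k in range(1, max_period + 1):
        y = (lam * y + mu) % 1.0
        if abs(y - x0) < tol:
            return k
    return None  # no cycle detected up to max_period
```

For $\lambda = 0.5$, $\mu = 0.3$ gives a fixed point (period 1) while $\mu = 0.8$ gives a period-2 orbit wrapping once around the circle; scanning $\mu$ between such windows should reveal the intermediate Farey periods characteristic of period adding.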