Section: New Results

Mean field theory and stochastic processes

Emergence of collective phenomena in a population of neurons

Participants : Benjamin Aymard, Fabien Campillo, Romain Veltz.

In this work, we propose a new model of a biological neural network, combining a two-dimensional integrate-and-fire neuron model with a deterministic model of electrical synapses and a stochastic model of chemical synapses. We describe the dynamics of a population of interacting neurons as a piecewise deterministic Markov process. We prove the weak convergence of the associated empirical process, as the population size tends to infinity, towards a McKean-Vlasov type process, and we describe the associated PDE. We are also interested in the simulation of these dynamics, in particular in comparing “detailed” simulations of a finite population of neurons with a simulation of the infinite-population system. Benjamin Aymard has the appropriate toolkit to attack these questions numerically: the mean field equations he studies are of transport type, for which numerical methods are technically delicate, but they fall squarely within his domain of expertise. His postdoc is funded by the Flagship Human Brain Project.
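To give an idea of what a “detailed” finite-population simulation looks like, here is a minimal Python sketch of a PDMP-type network: a one-dimensional caricature of the integrate-and-fire dynamics, with deterministic electrical (gap-junction) coupling in the drift and stochastic chemical kicks triggered at a potential-dependent rate. The rate function, parameters and reset rule are illustrative choices, not those of the model above.

import numpy as np

rng = np.random.default_rng(0)

N = 1000            # population size
T, dt = 20.0, 1e-3  # horizon and time step
I_ext = 0.5         # constant external input (illustrative)
g_elec = 0.2        # electrical (gap-junction) coupling strength
J_chem = 1.0        # total chemical coupling, shared as J_chem / N per spike
v_reset = 0.0

def rate(v):
    """Spiking intensity as a function of the potential (illustrative choice)."""
    return np.maximum(v, 0.0) ** 2

v = rng.uniform(0.0, 1.0, N)

for step in range(int(T / dt)):
    # deterministic part of the PDMP: leak + input + electrical coupling
    v += dt * (-v + I_ext + g_elec * (v.mean() - v))
    # stochastic chemical synapses: each neuron spikes with probability rate(v)*dt
    spiking = rng.random(N) < rate(v) * dt
    if spiking.any():
        v += J_chem * spiking.sum() / N   # mean-field chemical kick
        v[spiking] = v_reset              # reset the neurons that spiked

# empirical distribution of potentials, to be compared with the mean-field PDE
hist, edges = np.histogram(v, bins=50, density=True)
print("mean potential at time T:", v.mean())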

The latest results are as follows. A first manuscript, concerning the numerical simulation of the mean field model, is in preparation; we have found a new numerical scheme which is positive, semi-implicit and adaptive in time (of order 2). A second manuscript, concerning the existence of a stationary distribution for the mean field limit, is also in preparation.
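The sketch below is not the order-2 adaptive scheme of the manuscript; it is a much simpler first-order illustration of how positivity can be preserved for a transport-type density equation, by combining an explicit upwind discretization of the transport term (positive under a CFL condition) with an implicit treatment of the loss term. The velocity field, loss rate and grid are illustrative.

import numpy as np

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
a = 0.5 + 0.5 * np.sin(2 * np.pi * x)     # transport velocity, chosen >= 0
lam = 1.0 + x                             # loss (firing) rate
rho = np.exp(-100.0 * (x - 0.5) ** 2)     # nonnegative initial density
rho /= rho.sum() * dx

t, T = 0.0, 1.0
while t < T:
    dt = min(0.9 * dx / a.max(), T - t)   # CFL bound keeps the upwind step positive
    flux = a * rho
    transport = -(flux - np.roll(flux, 1)) / dx   # first-order upwind (periodic grid)
    # explicit upwind transport, implicit loss term: both preserve rho >= 0
    rho = (rho + dt * transport) / (1.0 + dt * lam)
    t += dt

print("total mass after transport and loss:", rho.sum() * dx)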

Long time behavior of a mean-field model of interacting neurons

Participants : Quentin Cormier [Inria TOSCA] , Étienne Tanré [Inria TOSCA] , Romain Veltz.

We study the long time behavior of the solution to some McKean-Vlasov stochastic differential equation (SDE) driven by a Poisson process. In neuroscience, this SDE models the asymptotic dynamics of the membrane potential of a spiking neuron in a large network. We prove that, for a small enough interaction parameter, any solution converges to the unique (in this case) invariant measure. To this end, we first obtain global bounds on the jump rate and derive a Volterra-type integral equation satisfied by this rate. We then temporarily replace the interaction part of the equation by a deterministic external quantity (which we call the external current). For a constant current, we obtain convergence to the invariant measure. Using a perturbation method, we extend this result to more general external currents. Finally, we prove the result for the non-linear McKean-Vlasov equation.
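As an illustration of the Volterra/Picard machinery used for the jump rate, here is a generic Python sketch of the Picard iteration for a Volterra equation of the second kind, r(t) = g(t) + ∫_0^t K(t,s) r(s) ds, discretized with the trapezoidal rule. The source g and kernel K below are placeholders, not the quantities derived in the paper.

import numpy as np

T, n = 5.0, 500
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]

g = 1.0 + 0.2 * np.sin(t)                           # illustrative source term
K = 0.3 * np.exp(-np.abs(t[:, None] - t[None, :]))  # illustrative kernel K(t, s)

r = g.copy()
for iteration in range(50):
    integral = np.zeros_like(r)
    for i in range(1, n + 1):
        w = np.full(i + 1, dt)          # trapezoid weights on [0, t_i]
        w[0] = w[-1] = dt / 2.0
        integral[i] = np.sum(w * K[i, : i + 1] * r[: i + 1])
    r_new = g + integral                # one Picard step
    err = np.max(np.abs(r_new - r))
    r = r_new
    if err < 1e-10:
        break

print("Picard iterations used:", iteration + 1, " approximate rate r(T):", r[-1])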

This work has been submitted for publication and is available as [24].

Exponential stability of the stationary distribution of a mean field of spiking neural network

Participants : Audric Drogoul [Thales, France] , Romain Veltz.

In this work, we study the exponential stability of the stationary distribution of a McKean-Vlasov equation of nonlinear hyperbolic type, which was recently derived in [33], [40]. We complement the convergence result proved in [40] using tools from dynamical systems theory. Our proof relies on two principal arguments, in addition to a Picard-like iteration method. First, the linearized semigroup is positive, which allows us to pinpoint precisely the spectrum of the infinitesimal generator. Second, we use a time rescaling argument to transform the original quasilinear equation into another one for which the nonlinear flow is differentiable. Interestingly, this convergence result can be interpreted as the existence of a locally exponentially attracting center manifold for a hyperbolic equation.
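For reference, exponential stability of the stationary distribution can be read as follows, in generic notation (the norm, constants and neighbourhood depend on the functional setting of [23]): there exist M ≥ 1 and ε > 0 such that ‖p(t) − p∞‖ ≤ M e^(−εt) ‖p(0) − p∞‖ for all t ≥ 0, whenever the initial datum p(0) lies in a small enough neighbourhood of the stationary distribution p∞.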

This work has been submitted for publication and is available as [23].

On a toy network of neurons interacting through their dendrites

Participants : Nicolas Fournier [LPSM, Sorbonne Université, Paris] , Étienne Tanré [Inria TOSCA] , Romain Veltz.

Consider a large number n of neurons, each being connected to approximately N other ones, chosen at random. When a neuron spikes, which occurs randomly at some rate depending on its electric potential, its potential is set to a minimum value v_min, and this initiates, after a small delay, two fronts on the (linear) dendrites of all the neurons to which it is connected. Fronts move at constant speed. When two fronts (on the dendrite of the same neuron) collide, they annihilate. When a front hits the soma of a neuron, its potential is increased by a small value w_n. Between jumps, the potentials of the neurons are assumed to drift in [v_min, ∞), according to some well-posed ODE. We prove the existence and uniqueness of a heuristically derived mean-field limit of the system when n, N → ∞ with w_n ≃ N^(-1/2). We make use of some recent versions of the results of Deuschel and Zeitouni [37] concerning the size of the longest increasing subsequence of an i.i.d. collection of points in the plane. We also study, in a very particular case, a slightly different model where the neurons spike when their potential reaches some maximum value v_max, and find an explicit formula for the (heuristic) mean-field limit.
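The combinatorial ingredient borrowed from Deuschel and Zeitouni can be illustrated numerically: for n i.i.d. uniform points in the unit square, the longest chain that is increasing in both coordinates has length close to 2√n. The Python sketch below computes it by sorting on the first coordinate and applying patience sorting to the second; the density and sample size are illustrative only.

import bisect
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pts = rng.random((n, 2))            # i.i.d. uniform points in the unit square
pts = pts[np.argsort(pts[:, 0])]    # sort by x; look for an increasing run of y's

tails = []                          # patience sorting: tails[k] is the smallest
for y in pts[:, 1]:                 # possible last y of an increasing chain of length k+1
    k = bisect.bisect_left(tails, y)
    if k == len(tails):
        tails.append(y)
    else:
        tails[k] = y

print("longest increasing chain:", len(tails), "  2*sqrt(n) =", 2 * np.sqrt(n))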

This work has been submitted for publication and is available as [26].

Mathematical statistical physics applied to neural populations

Participants : Émilie Soret, Olivier Faugeras, Étienne Tanré [Inria, project-team TOSCA, Sophia-Antipolis] .

This project focuses on mean-field descriptions, or thermodynamic limits, of large populations of neurons. We study a system of stochastic differential equations (SDEs) describing the evolution of the membrane potential of each neuron over time, when the synaptic weights are random variables (not assumed to be independent). This setup is well suited to Émilie Soret, who worked on mathematical statistical physics and stochastic processes during her PhD and first postdoc. Her postdoc is funded by the Flagship Human Brain Project. A manuscript is in preparation.
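A minimal Euler-Maruyama sketch of the kind of system considered, namely a network of SDEs whose drift involves random, correlated synaptic weights, is given below. The drift, noise level, weight statistics and firing-rate function are illustrative choices, not the precise model of the manuscript in preparation.

import numpy as np

rng = np.random.default_rng(2)

N = 500
T, dt = 10.0, 1e-3
sigma = 0.3

# correlated Gaussian weights: a shared row component induces the correlation
common = rng.normal(size=(N, 1))
J = (0.8 * common + 0.6 * rng.normal(size=(N, N))) / np.sqrt(N)

def S(v):
    return np.tanh(v)   # sigmoidal firing-rate function (illustrative)

V = rng.normal(size=N)
for _ in range(int(T / dt)):
    drift = -V + J @ S(V)                                   # leak + synaptic input
    V = V + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=N)  # Euler-Maruyama step

print("empirical mean and std of the potentials:", V.mean(), V.std())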