NeuroMathComp focuses on the exploration of the brain from the mathematical and computational perspectives.

We want to unveil the principles that govern the functioning of neurons and assemblies thereof and to use our results to bridge the gap between biological and computational vision.

Our work is quite mathematical but we make heavy use of computers for numerical experiments and simulations. We have close ties with several top groups in biological neuroscience. We are pursuing the idea that the "unreasonable effectiveness of mathematics" can be brought, as it has been in physics, to bear on neuroscience.

Computational neuroscience attempts to build models of neurons at a variety of levels: microscopic, i.e., the single neuron or the minicolumn containing of the order of one hundred neurons; mesoscopic, i.e., the macrocolumn containing of the order of ten thousand to one hundred thousand neurons; and macroscopic, i.e., a cortical area comprising many such macrocolumns.

Modeling such assemblies of neurons and simulating their behavior involves putting together a mixture of the most recent results in neurophysiology with such advanced mathematical methods as dynamical systems theory, bifurcation theory, probability theory, stochastic calculus, theoretical physics and statistics, as well as the use of simulation tools.

We conduct research in the following main areas:

Neural network dynamics

Mean-field approaches

Neural fields

Spike train statistics

Synaptic plasticity

Visual neuroscience

Neuromorphic vision

The study of neural networks is certainly motivated by the long-term goal of understanding how the brain works. But beyond understanding the brain, or even the simpler neural systems of less evolved animals, there is also the desire to exhibit general mechanisms or principles at work in the nervous system. One possible strategy is to propose mathematical models of neural activity at different space and time scales, depending on the type of phenomena under consideration. However, beyond the mere proposal of new models, which can rapidly proliferate, there is also a need to understand some fundamental keys ruling the behaviour of neural networks and, from this, to extract new ideas that can be tested in real experiments. Therefore, a thorough analysis of these models is needed. An efficient approach, developed in our team, consists of analysing neural networks as dynamical systems. This allows us to address several issues. A first, natural issue is to ask about the (generic) dynamics exhibited by the system when control parameters vary. This naturally leads to analysing the bifurcations occurring in the network and identifying which phenomenological parameters control these bifurcations. Another issue concerns the interplay between neuron dynamics and synaptic network structure.
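As a toy illustration of this dynamical-systems viewpoint (a minimal sketch, not one of the team's actual models), a single self-excited rate neuron dx/dt = -x + tanh(w x) undergoes a pitchfork bifurcation at w = 1: below this value the rest state x = 0 is the only attractor, above it two non-zero branches appear. All parameter values are invented for illustration.

```python
import math

def settle(w, x0=0.1, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = -x + tanh(w*x) and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + math.tanh(w * x))
    return x

x_sub = settle(w=0.5)   # below the bifurcation: the rest state x = 0 attracts
x_sup = settle(w=2.0)   # above it: the state settles on a non-zero branch
print(f"w=0.5 -> x={x_sub:.4f};  w=2.0 -> x={x_sup:.4f}")
```

Varying the single control parameter w and watching which attractor the trajectory selects is the simplest instance of the bifurcation analysis described above.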

In this spirit, our team has been able to characterize the generic dynamics exhibited by models such as Integrate-and-Fire models, conductance-based Integrate-and-Fire models, models of epilepsy, the effects of synaptic plasticity, and homeostasis and intrinsic plasticity.
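For readers unfamiliar with the Integrate-and-Fire class of models mentioned here, the following is a minimal leaky Integrate-and-Fire sketch (dimensionless units, invented parameters; the team's conductance-based variants are considerably richer): the membrane potential relaxes toward the input current and is reset after crossing a threshold, so a subthreshold drive produces no spikes while a suprathreshold drive produces regular firing.

```python
def lif_spike_times(i_ext, v_th=1.0, v_reset=0.0, tau=0.02, dt=1e-4, t_max=0.5):
    """Euler-integrate dv/dt = (-v + i_ext)/tau with threshold-and-reset."""
    v, times = 0.0, []
    for k in range(int(t_max / dt)):
        v += dt * (-v + i_ext) / tau
        if v >= v_th:
            times.append(k * dt)   # record the spike time
            v = v_reset            # reset the membrane potential
    return times

quiet = lif_spike_times(0.8)    # subthreshold drive: no spikes
firing = lif_spike_times(1.5)   # suprathreshold drive: regular spiking
print(f"{len(quiet)} vs {len(firing)} spikes in 0.5 s")
```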

Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (“microscopic” scale), but instead measure mesoscopic effects resulting from the activity of several hundred to several hundred thousand neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50μm to 1mm, containing of the order of one hundred to one hundred thousand neurons belonging to a few different neuronal types. The description of this collective dynamics requires models which are different from individual neuron models. In particular, when the number of neurons is large enough, averaging effects appear, and the collective dynamics is well described by an effective mean-field, summarizing the effect of the interactions of a neuron with the other neurons, and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include a large number of microscopic components (here neurons) and small enough so that the region considered is homogeneous.

Our group is developing mathematical and numerical methods allowing on one hand to derive dynamic mean-field equations from the physiological characteristics of neural structures (neuron types, synapse types, and anatomical connectivity between neuron populations), and on the other to simulate these equations. These methods use tools from advanced probability theory such as the theory of Large Deviations and the study of interacting diffusions. Our investigations have shown that the rigorous dynamic mean-field equations can have a considerably more complex structure than those commonly used in the literature as soon as realistic effects such as synaptic variability are taken into account. Our goal is to relate these theoretical results to experimental measurements, especially in the field of optical imaging. For this we are collaborating with the Institut des Neurosciences de la Timone, Marseille.
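The averaging effect behind mean-field descriptions can be sketched numerically (a toy rate network with invented parameters, not the team's rigorous equations): with couplings of mean J̄/N plus fluctuations of standard deviation σ/√N, the population average of a fully connected network approaches the fixed point of the one-dimensional mean-field equation m = J̄ tanh(m).

```python
import math, random

random.seed(1)
N, Jbar, sigma = 100, 2.0, 0.5
dt, steps = 0.05, 300

# Couplings: mean Jbar/N plus Gaussian fluctuations of std sigma/sqrt(N),
# so averaging effects dominate as N grows.
J = [[Jbar / N + random.gauss(0.0, sigma / math.sqrt(N)) for _ in range(N)]
     for _ in range(N)]

x = [0.5] * N
for _ in range(steps):
    s = [math.tanh(v) for v in x]
    x = [xi + dt * (-xi + sum(Jij * sj for Jij, sj in zip(row, s)))
         for xi, row in zip(x, J)]

m_net = sum(x) / N          # empirical population average of the network

m = 1.0                     # mean-field fixed point of m = Jbar * tanh(m)
for _ in range(200):
    m = Jbar * math.tanh(m)

print(f"network mean {m_net:.3f} vs mean-field prediction {m:.3f}")
```

The residual gap between the two numbers comes precisely from the finite-size and variability effects whose rigorous treatment is the subject of the work described above.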

Neural fields are a phenomenological way of describing the activity of populations of neurons by delay integro-differential equations. This continuous approximation turns out to be very useful to model large brain areas such as those involved in visual perception. The mathematical properties of these equations and their solutions are still imperfectly known, in particular in the presence of delays, multiple time scales, and noise.

Our group is developing mathematical and numerical methods for analysing these equations. These methods are based upon techniques from mathematical functional analysis, bifurcation theory, equivariant bifurcation analysis, delay equations, and stochastic partial differential equations. We have been able to characterize the solutions of these neural field equations and their bifurcations, and to apply and expand the theory to account for such perceptual phenomena as edge, texture, and motion perception. We have also developed a theory of the delayed neural field equations, in particular in the case of constant delays and propagation delays that must be taken into account when attempting to model large cortical areas. This theory is based on center manifold and normal form ideas. We are currently extending the theory to take into account various sources of noise using tools from the theory of stochastic partial differential equations.
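The basic phenomenology of these equations can be conveyed by a minimal discretized Amari-type neural field on a ring (no delays or noise; kernel and parameters invented for illustration): local excitation with broader inhibition turns a localized input into a sharp bump of activity.

```python
import math

n, L = 64, 10.0                 # ring of length L discretized into n points
dx = L / n
xs = [i * dx for i in range(n)]

def ring_dist(a, b):
    d = abs(a - b)
    return min(d, L - d)

def w(d):                       # "Mexican hat": local excitation, broader inhibition
    return 2.0 * math.exp(-d ** 2 / 0.5) - 0.8 * math.exp(-d ** 2 / 2.0)

def f(v):                       # sigmoidal firing-rate function
    return 1.0 / (1.0 + math.exp(-5.0 * (v - 0.5)))

W = [[w(ring_dist(xs[i], xs[j])) * dx for j in range(n)] for i in range(n)]
stim = [2.0 if ring_dist(xi, 5.0) < 0.5 else 0.0 for xi in xs]  # localized input

u = [0.0] * n
dt = 0.05
for _ in range(400):            # u_t = -u + W * f(u) + stim (Euler steps)
    r = [f(ui) for ui in u]
    u = [ui + dt * (-ui + sum(Wij * rj for Wij, rj in zip(row, r)) + Ii)
         for ui, row, Ii in zip(u, W, stim)]

peak = xs[u.index(max(u))]
print(f"activity bump peaks at x = {peak:.2f} (stimulus centred at 5.0)")
```

The delayed and stochastic versions studied by the team replace the instantaneous coupling above with delayed interactions and add noise terms, which is where the hard mathematical questions arise.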

Neuronal activity is manifested by the emission of action potentials (“spikes”) forming spike trains. Such spike trains are usually not exactly reproducible when repeating the same experiment, even with very good control ensuring that the experimental conditions have not changed. Therefore, researchers seek models of spike train statistics, assumed to be characterized by a canonical probability distribution giving the statistics of spatio-temporal spike patterns. A current goal in the experimental analysis of spike trains is to approximate this probability distribution from data. Several approaches exist, based either on (i) generic principles (maximum likelihood, maximum entropy); (ii) phenomenological models (Linear-Nonlinear, Generalized Linear Model, mean-field); or (iii) analytical results on spike train statistics in neural network models.

Our group is working on these three aspects, at both a fundamental and a practical (numerical) level. On one hand, we have published analytical (and rigorous) results on the statistics of spike trains in canonical neural network models (Integrate-and-Fire, conductance-based with chemical and electric synapses). The main result is the characterization of spike train statistics by a Gibbs distribution whose potential can be explicitly computed using some approximations. Note that this result does not require an assumption of stationarity. We have also shown that the distributions considered in cases (i), (ii), and (iii) above are all Gibbs distributions. On the other hand, we are proposing new algorithms for data processing. We have developed C++ software for spike train statistics based on Gibbs distribution analysis, freely available at https://
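The kind of statistical structure these models must capture can be shown with a two-cell toy example (invented rates, not experimental data): a shared drive makes the joint spike-pattern probability deviate from the independent (Bernoulli) prediction, which is exactly the deviation that Gibbs/maximum-entropy descriptions are designed to account for.

```python
import random

random.seed(42)
T = 20000
train = []
for _ in range(T):
    shared = random.random() < 0.3          # common drive in this time bin
    train.append(tuple(1 if (shared or random.random() < 0.1) else 0
                       for _ in range(2)))

p1 = sum(s[0] for s in train) / T           # firing probability of cell 1
p2 = sum(s[1] for s in train) / T           # firing probability of cell 2
p11 = sum(1 for s in train if s == (1, 1)) / T   # joint pattern "11"
print(f"P(1,1) = {p11:.3f} vs independent prediction p1*p2 = {p1 * p2:.3f}")
```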

Neural networks show amazing abilities to evolve and adapt, and to store and process information. These capabilities are mainly conditioned by plasticity mechanisms, and especially synaptic plasticity, inducing a mutual coupling between network structure and neuron dynamics. Synaptic plasticity occurs at many levels of organization and time scales in the nervous system (Bienenstock, Cooper, and Munro, 1982). It is of course involved in memory and learning mechanisms, but it also alters the excitability of brain areas and regulates behavioral states (e.g., the transition between sleep and wakeful activity). Therefore, understanding the effects of synaptic plasticity on neuron dynamics is a crucial challenge.

Our group is developing mathematical and numerical methods to analyse this mutual interaction. On one hand, we have shown that plasticity mechanisms, Hebbian-like or STDP, have strong effects on neuron dynamics, such as a reduction of dynamical complexity, and on spike statistics (convergence to a specific Gibbs distribution via a variational principle), resulting in an adaptation of the network response to learned stimuli. We are also studying the combined effects of synaptic and intrinsic plasticity in collaboration with H. Berry (Inria Beagle) and B. Delord, J. Naudé, ISIR team, Paris. On the other hand, we have pursued a geometric approach in which we show how a Hopfield network, represented by a neural field with modifiable recurrent connections undergoing slow Hebbian learning, can extract the underlying geometry of an input space. We have also pursued an approach based on the ideas developed in the theory of slow-fast systems (in this case a set of neural field equations) in the presence of noise, and applied temporal averaging methods to recurrent networks of noisy neurons undergoing a slow, unsupervised modification of their connectivity matrix called learning.
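A classical textbook example of the structure-dynamics coupling discussed here (Oja's rule, not one of the team's specific plasticity models; parameters invented) is Hebbian growth with implicit normalization: driven by anisotropic random inputs, the weight vector converges to a unit vector aligned with the input's principal (high-variance) axis.

```python
import random

random.seed(7)
w = [0.5, 0.5]
eta = 0.005
for _ in range(30000):
    x = [random.gauss(0.0, 2.0), random.gauss(0.0, 0.5)]   # anisotropic input
    y = w[0] * x[0] + w[1] * x[1]                           # postsynaptic rate
    # Oja's rule: Hebbian term y*x with a decay -y^2*w that normalizes |w|
    w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
print(f"w = ({w[0]:.3f}, {w[1]:.3f}), |w| = {norm:.3f}")
```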

Our group focuses on the visual system to understand how information is encoded and processed resulting in visual percepts. To do so, we propose functional models of the visual system using a variety of mathematical formalisms, depending on the scale at which models are built, such as spiking neural networks or neural fields. So far, our efforts have been focused on the study of retinal processing, edge and texture perception, motion integration at the level of V1 and MT cortical areas.

At the retinal level, we model the circuitry and study the statistics of the spike train output (see, e.g., the ENAS software, https://

From the simplest vision architectures in insects to the extremely complex cortical hierarchy in primates, it is fascinating to observe how biology has found efficient solutions to vision problems. Pioneers in computer vision had the dream of building machines that could match, and perhaps outperform, human vision. This goal has not been reached, at least not on the scale originally planned, but the field of computer vision has met many other challenges from an unexpected variety of applications and has fostered entirely new scientific and technological areas such as computer graphics and medical image analysis. However, modelling and emulating biological vision with computers largely remains an open challenge, while there are still many outstanding issues in computer vision.

Our group is working on neuromorphic vision by proposing bio-inspired methods following our progress in visual neuroscience. Our goal is to bridge the gap between biological and computer vision by applying our visual neuroscience models to challenging problems from computer vision such as optical flow estimation, coding/decoding approaches, or classification.

Virtual Retina is a simulation software package developed by Adrien Wohrer during his PhD that allows large-scale simulations of biologically plausible retinas.

Virtual Retina implements a variety of biological features, such as (i) a spatio-temporal linear filter implementing the basic center/surround organization of retinal filtering; (ii) a non-linear contrast gain control mechanism providing instantaneous adaptation to the local level of contrast; and (iii) spike generation by one or several layers of ganglion cells paving the visual field.
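Stage (i) can be sketched as follows (a toy difference-of-Gaussians filter on a synthetic image, with invented sizes and scales; Virtual Retina's actual filters are spatio-temporal and far more detailed): the center/surround kernel responds strongly to a bright spot and stays near zero on a uniform region.

```python
import math

size = 31
# Synthetic image: a bright spot (radius 2) on a uniform background
img = [[1.0 if (r - 15) ** 2 + (c - 15) ** 2 <= 4 else 0.2
        for c in range(size)] for r in range(size)]

def dog_kernel(sig_c=1.0, sig_s=3.0, half=6):
    """Difference-of-Gaussians: narrow excitatory center, wide inhibitory surround."""
    k = {}
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            d2 = dr * dr + dc * dc
            k[(dr, dc)] = (math.exp(-d2 / (2 * sig_c ** 2)) / (2 * math.pi * sig_c ** 2)
                           - math.exp(-d2 / (2 * sig_s ** 2)) / (2 * math.pi * sig_s ** 2))
    return k

K = dog_kernel()

def respond(r, c):
    """Linear filter output of a cell whose RF is centred at pixel (r, c)."""
    return sum(wgt * img[r + dr][c + dc] for (dr, dc), wgt in K.items())

on_spot = respond(15, 15)
on_flat = respond(7, 7)
print(f"response on the spot: {on_spot:.3f}, on the uniform region: {on_flat:.3f}")
```

Rectifying this output and feeding it through a gain-control stage and a spike generator would correspond, very roughly, to stages (ii) and (iii).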

Virtual Retina is distributed under the Inria CeCILL-C open-source licence, so that anyone can download it, install it, and run it on their own image sequences. Virtual Retina also offers a web service (v 2.1), so that users may test the main software directly on their own data, without any installation. This web service was developed in collaboration with Nicolas Debeissat (engineer, 2002).

We are now interested in the analysis of the collective behavior of ganglion cell responses. To take this collective behavior into account, Virtual Retina needs to be extended, since in its current version ganglion cells are independent. The goal is to produce better retinal models from experimental recordings obtained with our collaborators at the Institut de la Vision (Olivier Marre and Serge Picaud), Evelyne Sernagor (Newcastle University) and Luca Berdondini (IIT) using, e.g., multi-electrode arrays. This will allow us to better understand the correlations between retinal spike trains and to improve the Virtual Retina model in such a way that it can reproduce the retinal response at the population level. Another application is the electrical stimulation of a retina with implanted multi-electrode arrays, in collaboration with the Institut de la Vision and the INT (Frédéric Chavane). Other evolutions of Virtual Retina are also investigated by external partners, such as the role/implementation of starburst amacrine cells involved in direction selectivity (collaboration with Universidad Técnica Federico Santa María, Valparaíso, Chile, and Centro de Neurociencia de Valparaíso).

IDDN number: IDDN.FR.001.210034.000.S.P.2007.000.31235

Version: v 2.2.2 (September 2011)

Link: http://

With the advent of new Multi-Electrode Array (MEA) techniques, the simultaneous recording of the activity of groups of neurons (up to several hundred) over a dense configuration supplies a critical database for unravelling the role of specific neural assemblies. Thus, the analysis of spike trains obtained from in vivo or in vitro experimental data requires suitable statistical models. The EnaS software offers new computational methods taking into account time constraints in neural networks (such as memory effects). It also offers several choices of statistical models, some already used in this community and some developed by us, and allows a quantitative comparison between these models. It also offers a control of the finite-size sampling effects inherent to empirical statistics.

Compared to existing software (Pandora; Sigtool; Spyke Viewer; Orbital Spikes), EnaS offers new computational methods taking into account time constraints in neural networks (such as memory effects), based on theoretical methods rooted in statistical physics and applied mathematics. The algorithms used are based on linear programming, nonlinear parameter estimation, and statistical methods.

EnaS interfaces with existing toolboxes used by this community, such as Matlab.

EnaS is developed jointly by the Neuromathcomp, CORTEX/Mnemosyne, and DREAM Inria teams, under the CeCILL-C licence (APP logiciel EnaS: IDDN.FR.001.190004.000.S.P.2014.000.31235). It can be freely downloaded.

It has benefited from the support of an ADT Inria from 2011 to 2013.

The software is freely downloadable at https://enas.inria.fr/#download.

Website: `https://enas.inria.fr/`

Olivier Faugeras received the Okawa Prize for his pioneering contributions to computer vision and computational neuroscience. The ceremony will be held in Tokyo in March 2015.

Learning and memory formation are associated with the strengthening of the synaptic connections between neurons according to a pattern reflected by the input. According to this theory, a retained memory sequence is associated with a dynamic pattern of the corresponding neural circuit. In this work we consider a class of neural network models, known as Hopfield networks, with a learning rule which consists of transforming an information string into a coupling pattern. Within this class of models we study dynamic patterns, known as robust heteroclinic cycles, and establish a tight connection between their existence and the structure of the coupling.

This work is available as and has been submitted to a Journal.

Inhibition stabilized networks (ISNs) are neural architectures with strong positive feedback among pyramidal neurons balanced by strong negative feedback from inhibitory interneurons, a circuit element found in the hippocampus and the primary visual cortex. In their working regime, ISNs produce damped oscillations in the γ-range in response to inputs to the inhibitory population. In order to understand the properties of interconnected ISNs, we investigated periodic forcing of ISNs. We show that ISNs can be excited over a range of frequencies and derive properties of the resonance peaks. In particular, we studied the phase-locked solutions, the torus solutions and the resonance peaks. More specifically, periodically forced ISNs respond with (possibly multi-stable) phase-locked activity, whereas networks with sustained intrinsic oscillations respond more dynamically to periodic inputs with tori. Hence, the dynamics are surprisingly rich, and phase effects alone do not adequately describe the network response. This strengthens the importance of phase-amplitude coupling, as opposed to phase-phase coupling, in providing multiple frequencies for multiplexing and routing information.
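The basic ISN behaviour described here can be reproduced with a linearized two-population toy model (all weights and time constants invented; this is not the paper's model): deviations (E, I) from an operating point, with recurrent excitation w_ee > 1 that would be unstable on its own but is stabilized by feedback inhibition. A brief pulse to the inhibitory population produces a damped oscillation of the excitatory deviation.

```python
tau = 0.01                                   # 10 ms population time constant
w_ee, w_ei, w_ie, w_ii = 2.0, 2.0, 2.0, 1.5  # E->E unstable alone (w_ee > 1)
dt, steps = 1e-4, 5000
E = I = 0.0                                  # deviations from the operating point
trace = []
for k in range(steps):
    pulse = 5.0 if k < 50 else 0.0           # 5 ms input to the inhibitory population
    dE = (-E + w_ee * E - w_ei * I) / tau
    dI = (-I + w_ie * E - w_ii * I + pulse) / tau
    E += dt * dE
    I += dt * dI
    trace.append(E)

crossings = sum(1 for a, b in zip(trace[:1999], trace[1:2000]) if a * b < 0)
print(f"{crossings} zero crossings of E in the first 0.2 s; "
      f"final |E| = {abs(trace[-1]):.2e}")
```

The repeated zero crossings with decaying amplitude are the damped oscillations; the periodic-forcing study above replaces the single pulse with a sinusoidal drive and analyzes the resulting phase-locked and torus solutions.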

This work has been submitted to a Journal and is available as .

We prove a Large Deviation Principle for a stationary Gaussian process over

This work has been accepted for publication in Entropy .

In this paper we derive an integral (with respect to time) representation of the relative entropy (or Kullback-Leibler Divergence)

This work has been accepted for publication in the Journal Entropy .

We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. Unlike previous works, which made the biologically implausible assumption that the weights were i.i.d. random variables, we assume that they are correlated. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The result is that the image law through the empirical measure satisfies a large deviation principle with a good rate function. We provide an analytical expression of this rate function. This work has appeared in the Comptes Rendus de l'Académie des Sciences, Série I, Mathématique.
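For reference, the standard notion invoked here, stated generically (the specific rate function obtained in the paper is not reproduced):

```latex
\textbf{Definition (LDP with good rate function).} A sequence of probability
measures $(\mu_N)_{N \ge 1}$ on a topological space $\mathcal{X}$ satisfies a
large deviation principle with good rate function
$I \colon \mathcal{X} \to [0, +\infty]$ if the level sets
$\{x : I(x) \le \alpha\}$ are compact and, for every Borel set
$A \subseteq \mathcal{X}$ with interior $A^{\circ}$ and closure $\bar{A}$,
\[
  -\inf_{x \in A^{\circ}} I(x)
  \le \liminf_{N \to \infty} \tfrac{1}{N} \log \mu_N(A)
  \le \limsup_{N \to \infty} \tfrac{1}{N} \log \mu_N(A)
  \le -\inf_{x \in \bar{A}} I(x).
\]
```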

We have continued the development, started in , of the asymptotic description of certain stochastic neural networks. We use the Large Deviation Principle (LDP) and the good rate function

J. Inglis, together with F. Delarue (Univ. Nice – Sophia Antipolis), E. Tanré (Inria Tosca) and S. Rubenthaler (Univ. Nice – Sophia Antipolis), completed their study of the mean-field convergence of a highly discontinuous particle system modeling the behavior of a spiking network of neurons, based on the integrate-and-fire model. Due to the highly singular nature of the system, it was convenient to work with a relatively unknown Skorohod topology. The resulting article has been accepted for publication in *Stochastic Processes and Related Fields*.

In this note, we clarify the well-posedness of the limit equations of the mean-field N-neuron models proposed in and prove the associated propagation of chaos property. We also complete the modeling issue in by discussing the well-posedness of the stochastic differential equations which govern the behaviour of the ion channels and the amount of available neurotransmitters.

This work has been submitted for publication to a Journal and is available as .

We develop the connection between large deviation theory and more applied approaches to stochastic hybrid systems by highlighting a common underlying Hamiltonian structure. A stochastic hybrid system involves the coupling between a piecewise deterministic dynamical system in

We extend the theory of neural fields, which has been developed in a deterministic framework, by considering the influence of spatio-temporal noise. The outstanding problem that we address here is the development of a theory that gives rigorous meaning to stochastic neural field equations, together with conditions ensuring that they are well-posed. Previous investigations in the field of computational and mathematical neuroscience have been numerical for the most part. Such questions have been considered for a long time in the theory of stochastic partial differential equations, where at least two different approaches have been developed, each having its advantages and disadvantages. It turns out that both approaches have also been used in computational and mathematical neuroscience, but with much less emphasis on the underlying theory. We present a review of two existing theories and show how they can be used to put the theory of stochastic neural fields on a rigorous footing. We also provide general conditions on the parameters of the stochastic neural field equations under which we guarantee that these equations are well-posed. In so doing we relate each approach to previous work in computational and mathematical neuroscience. We hope this will provide a reference that will pave the way for future studies (both theoretical and applied) of these equations, where basic questions of existence and uniqueness will no longer be a cause for concern. This work has appeared in the Journal of Mathematical Biology.

This work has been accepted for publication in the SIAM Journal on Mathematical Analysis and is available as .

Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitted parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
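A toy version of the Maximum Entropy side of this mapping (two cells only, synthetic data, invented rates; real fits involve many neurons and temporal constraints): generate correlated binary spikes from a common drive, then fit a pairwise model P(s1, s2) ∝ exp(h1 s1 + h2 s2 + J s1 s2) by matching the empirical first and second moments via gradient ascent on the log-likelihood.

```python
import math, random

random.seed(3)
T = 20000
data = []
for _ in range(T):
    shared = random.random() < 0.2          # common drive creates correlations
    data.append(tuple(1 if (shared or random.random() < 0.15) else 0
                      for _ in range(2)))

m1 = sum(a for a, b in data) / T            # empirical moments to match
m2 = sum(b for a, b in data) / T
m12 = sum(a * b for a, b in data) / T

h1 = h2 = J = 0.0
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
for _ in range(20000):
    ws = [math.exp(h1 * a + h2 * b + J * a * b) for a, b in states]
    Z = sum(ws)                             # exact partition function (4 states)
    e1 = sum(wt * a for wt, (a, b) in zip(ws, states)) / Z
    e2 = sum(wt * b for wt, (a, b) in zip(ws, states)) / Z
    e12 = ws[3] / Z
    h1 += 0.2 * (m1 - e1)                   # gradient of the log-likelihood
    h2 += 0.2 * (m2 - e2)
    J += 0.2 * (J_grad := (m12 - e12)) if False else 0.2 * (m12 - e12)
print(f"fitted h1 = {h1:.2f}, h2 = {h2:.2f}, J = {J:.2f}")
```

The fitted interaction J is strongly positive, reflecting the common drive; the point of the paper above is that such parameters can be related analytically to the mechanistic, neuro-mimetic description rather than left as opaque fit results.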

This work has been published in and presented in , , , .

We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks.

This work has been published in and presented in , , , .

In this work we determine a Large Deviation Principle (LDP) for a model of neurons interacting on a lattice

This work is available and is under review in a Journal.

We consider the problem of describing mathematically the spontaneous activity of V1 by combining several important experimental observations including 1) the organization of the visual cortex into a spatially periodic network of hypercolumns structured around pinwheels, 2) the difference between short-range and long-range intra-cortical connections, the first ones being rather isotropic and producing naturally doubly-periodic patterns by Turing mechanisms, the second one being patchy and 3) the fact that the Turing patterns spontaneously produced by the short-range connections and the network of pinwheels have similar periods. By analyzing the Preferred Orientation (PO) map, we are able to classify all possible singular points of the PO maps (the pinwheels) as having symmetries described by a small subset of the wallpaper groups. We then propose a description of the spontaneous activity of V1 using a classical voltage-based neural field model that features isotropic short-range connectivities modulated by non-isotropic long-range connectivities. A key observation is that, with only short-range connections and because the problem has full translational invariance in this case, a spontaneous doubly-periodic pattern generates a 2-torus in a suitable functional space which persists as a flow-invariant manifold under small perturbations, hence when turning on the long-range connections. Through a complete analysis of the symmetries of the resulting neural field equation and motivated by a numerical investigation of the bifurcations of their solutions, we conclude that the branches of solutions which are stable over an extended set of parameters are those corresponding to patterns with an hexagonal (or nearly hexagonal) symmetry. The question of which patterns persist when turning on the long-range connections is answered by 1) analyzing the remaining symmetries on the perturbed torus and 2) combining this information with the Poincaré-Hopf theorem. 
We have developed a numerical implementation of the theory that has allowed us to produce the patterns of activities predicted by the theory, the planforms. In particular we generalize the contoured and non-contoured planforms predicted by previous authors and predict the existence of mixed contoured/non-contoured planforms. We also found that these planforms are most likely to be time evolving. This work is available as a preprint and has been submitted to a Journal.

How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Several coding strategies have been investigated, out of which two main views have emerged: considering RGCs as independent encoders or as synergistic encoders, i.e., when the concerted spiking of a RGC population carries more information than the sum of the information contained in the spiking of individual RGCs. Although RGCs assumed to be independent encode the main information, there is currently a growing body of evidence that considering RGCs as synergistic encoders provides complementary and more precise information. Based on salamander retina recordings, it has been suggested that a code based on differential spike latencies between RGC pairs could be a powerful mechanism. Here, we have tested this hypothesis in the mammalian retina. We recorded responses to stationary gratings from 469 RGCs in 5 mouse retinas. Interestingly, we did not find any RGC pairs exhibiting clear latency correlations (presumably due to the presence of spontaneous activity), showing that individual RGC pairs do not provide sufficient information in our conditions. However, considering the whole RGC population, we show that the shape of the wave of first spikes (WFS) successfully encodes spatial cues. To quantify its coding capabilities, we performed a discrimination task and showed that the WFS is more robust to spontaneous firing than absolute latencies are. We also investigated the impact of a post-processing neural layer. The recorded spikes were fed into an artificial lateral geniculate nucleus (LGN) layer. We found that the WFS is not only preserved but even refined through the LGN-like layer, while classical independent coding strategies become impaired. These findings suggest that even at the level of the retina, the WFS provides a reliable strategy to encode spatial cues.
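The intuition behind the WFS can be illustrated with a synthetic discrimination task (all latencies and noise levels invented; this is not the recorded data): when every trial suffers an unknown global delay, a decoder matching absolute latencies is ambiguous, whereas a decoder matching the mean-subtracted latency profile, i.e., the shape of the wave of first spikes, remains reliable.

```python
import random

random.seed(11)
# Two stimuli defined by first-spike latency profiles (ms) over 10 cells;
# stimulus B is the reversed shape of A plus a constant 8 ms offset.
profA = [10, 12, 15, 20, 11, 18, 25, 14, 22, 13]
profB = [p + 8 for p in reversed(profA)]

def trial(prof):
    shift = random.uniform(0, 15)                 # unknown trial-to-trial delay
    return [p + shift + random.gauss(0, 1) for p in prof]

def classify(lat, relative):
    def dist(p):
        if relative:                              # mean-subtract: keep the shape only
            ml, mp = sum(lat) / len(lat), sum(p) / len(p)
            return sum(((a - ml) - (b - mp)) ** 2 for a, b in zip(lat, p))
        return sum((a - b) ** 2 for a, b in zip(lat, p))
    return 'A' if dist(profA) < dist(profB) else 'B'

def accuracy(relative, n=400):
    ok = 0
    for _ in range(n):
        label, prof = random.choice([('A', profA), ('B', profB)])
        ok += classify(trial(prof), relative) == label
    return ok / n

acc_shape, acc_abs = accuracy(True), accuracy(False)
print(f"shape decoder {acc_shape:.2f} vs absolute-latency decoder {acc_abs:.2f}")
```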

This work is ongoing and was presented as a poster at CNS 2014. See for more details.

Fixational eye movements are common across vertebrates, yet their functional roles, if any, are debated. To investigate this issue, we exposed the Virtual Retina simulator to natural images, generated realistic drifts and microsaccades, and analyzed the output spike trains of the parvocellular retinal ganglion cells (RGC).
We first computed cross-correlograms between pairs of RGC that are strongly excited by the image corresponding to the mean eye position. Not surprisingly, in the absence of eye movements, that is when analyzing the tonic (sustained) response to a static image, these cross-correlograms are flat. Adding some slow drift (

This work is ongoing and was presented as a poster at CNS 2014.

Recent studies on the visual system reveal that the retina is smarter than scientists believed. One example of low-level processing occurring in the retina is feature extraction, which has become an inspiration for building novel image descriptors for image categorization. However, only a few methods have taken advantage of this idea, such as the FREAK descriptor, which consists of a circular grid of a concentric distribution of overlapping receptive fields (RFs) in which average image intensities are compared pairwise. In this work we extended such a descriptor while adhering much more closely to biological data and models of the retina. Each RF in our model is described by a linear-nonlinear (LN) model taking into account inhibitory surrounds, with parameters based on biological findings. Based on the activity of retinal ganglion cells, we investigated several methods to define a set of descriptors. The performance of each descriptor was tested on computer vision datasets for texture and scene categorization.

This work is ongoing and was presented as a poster at the 1st Workshop of Visual Image Interpretation in Humans and Machine (VIIHM, EPSRC Network for Biological and Computer Vision in the UK).

The spike-triggered average (STA) technique has been widely used to estimate the receptive fields (RF) of sensory neurons. Theoretically, it has been shown that when the neurons are stimulated with a white-noise stimulus the STA is an unbiased estimator of the neuron's RF (up to a multiplicative constant). The error decreases with the number of spikes at a rate proportional to the stimulus variance. Experimentally, for visual neurons, the standard stimuli are checkerboards whose block size is heuristically tuned. This raises difficulties when dealing with large neuron assemblies: when the block size is too small, a neuron's response might be too weak, and when it is too large, one may miss RFs. Updating the stimulus online in the direction of larger stimulus-response correlation or mutual information has been proposed previously. However, these approaches cannot be applied to an ensemble of cells recorded simultaneously, since each neuron would update the stimulus in a different direction. We propose an improved checkerboard stimulus whose blocks are shifted randomly in space at fixed time steps. Theoretically, we show that the STA remains an unbiased estimator of the RF. Additionally, we show two major properties of this new stimulus: (i) for a fixed block size, the RF spatial resolution is improved as a function of the number of possible shifts; (ii) targeting a given RF spatial resolution, our method converges faster than the standard one. Numerically, we performed an exhaustive analysis of the performance of the approach based on simulated spike trains from LNP cascade neurons with varying RF sizes and positions. Results show global improvements in the RF representation even after short stimulation times. This makes the approach a promising solution for improving the RF estimation of large ensembles of neurons.
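The baseline STA estimator that this work improves upon can be sketched in a few lines (a 1-D 8-tap receptive field and a sigmoid LNP cell, all values invented): stimulated with Gaussian white noise, the average of the spike-triggering stimuli recovers the RF up to a multiplicative constant.

```python
import math, random

random.seed(5)
rf = [0.0, 0.2, 0.5, 1.0, 0.5, -0.3, -0.5, -0.2]   # "true" receptive field
T = 30000
sta, nspikes = [0.0] * len(rf), 0
for _ in range(T):
    stim = [random.gauss(0, 1) for _ in rf]          # white-noise frame
    drive = sum(w * s for w, s in zip(rf, stim))     # linear stage
    p_spike = 1.0 / (1.0 + math.exp(-(drive - 1.0))) # static nonlinearity
    if random.random() < p_spike:                    # Bernoulli spiking
        nspikes += 1
        sta = [a + s for a, s in zip(sta, stim)]     # accumulate triggering frames
sta = [a / nspikes for a in sta]

dot = sum(a * b for a, b in zip(sta, rf))
cos = dot / (math.sqrt(sum(a * a for a in sta)) * math.sqrt(sum(b * b for b in rf)))
print(f"{nspikes} spikes, cosine similarity STA vs RF: {cos:.3f}")
```

The shifted-checkerboard stimulus proposed above keeps this estimator unbiased while improving its spatial resolution for a fixed block size.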

This work is ongoing and was submitted to COSYNE 2015.

Motion estimation has been studied extensively in neuroscience over the last two decades. The general consensus that has emerged from studies of primate vision is that it is carried out by a two-stage process involving the cortical areas V1 and MT. Spatio-temporal filters are the leading contenders among models that capture the characteristics exhibited in these areas. Even though the biological vision literature contains many models of optical flow estimation based on spatio-temporal filters, little is known about their performance on modern computer vision datasets such as Middlebury. In this paper, we start from a mostly classical feedforward V1-MT model and introduce an additional decoding step to obtain an optical flow estimate. Two extensions are also discussed, using nonlinear filtering of the MT response for a better handling of discontinuities. One essential contribution of this paper is to show how a neural model can be adapted to deal with real sequences; this is the first time such a neural model has been benchmarked on the modern computer vision dataset Middlebury. Results are promising and suggest several possible improvements.
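As background, the V1 stage of such models is typically a bank of spatio-temporal filters, each tuned to an orientation and a drift speed. The sketch below (hypothetical names and parameters, a drifting-grating Gabor under a Gaussian envelope) illustrates the kind of filter meant; it is not the model of the paper.

```python
import numpy as np

def spatiotemporal_gabor(size, frames, sf, tf, theta, sigma):
    """Spatio-temporal Gabor tuned to speed tf/sf in direction theta.

    size, frames: spatial extent (pixels) and temporal extent (frames)
    sf, tf:       spatial (cycles/pixel) and temporal (cycles/frame) frequencies
    theta:        preferred orientation in radians
    sigma:        width of the spatial Gaussian envelope
    """
    xs = np.arange(size) - size // 2
    X, Y = np.meshgrid(xs, xs)
    Xr = X * np.cos(theta) + Y * np.sin(theta)      # axis along the motion direction
    envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    filt = np.empty((frames, size, size))
    for t in range(frames):
        # grating phase advances with time: the filter is motion-selective
        filt[t] = envelope * np.cos(2 * np.pi * (sf * Xr - tf * t))
    return filt
```

Correlating such a filter with an image sequence gives a direction-selective response: a grating drifting in the preferred direction excites it far more than the same grating drifting the opposite way.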

This work is ongoing and was presented as a poster at the 1st Workshop of Visual Image Interpretation in Humans and Machine (VIIHM, EPSRC Network for Biological and Computer Vision in the UK). See for more details.

The balance of excitatory and inhibitory interactions between neurons is one of the characteristic aspects of neural computation. In both neural network and neural field models, these interactions have been modeled using center-surround connectivity kernels. Depending on the relative strength of excitation and inhibition, these networks have been found to exhibit rich and interesting dynamical behavior. Although many models using center-surround connectivity kernels have been reported in the literature, and many experimental studies have shown evidence for changes in observed behavior from winner-take-all to gain control, a thorough bifurcation analysis of these networks, in terms of the sensitivity of the network to peak strength, the discriminability of the peaks, and the speed of convergence, has not been carried out. In our present work we revisit this question, in order to identify the parameter regimes where this important switch in the network's behavior occurs, and to establish the trade-offs that arise with the choice of a particular connectivity kernel.
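A center-surround kernel of the kind discussed here is commonly a difference of Gaussians, with the relative strength of excitation and inhibition acting as the bifurcation parameter. The following sketch (hypothetical names, a simple rectified-linear rate model) shows such a kernel together with one Euler step of the network dynamics it defines; it is an illustration, not the model analyzed in this work.

```python
import numpy as np

def center_surround_kernel(x, a_e, s_e, a_i, s_i):
    """Difference of Gaussians: narrow excitation minus broader inhibition."""
    return (a_e * np.exp(-x**2 / (2 * s_e**2))
            - a_i * np.exp(-x**2 / (2 * s_i**2)))

def step(u, w, inp, dt=0.1, tau=1.0):
    """One Euler step of the rate dynamics tau du/dt = -u + W f(u) + input."""
    f = np.maximum(u, 0.0)  # rectified-linear firing rate
    return u + (dt / tau) * (-u + w @ f + inp)
```

Tuning the amplitudes a_e and a_i (and the widths s_e < s_i) moves such a network between gain-control-like and winner-take-all regimes, which is the switch studied in the bifurcation analysis.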

This work is ongoing and was presented as a poster at the conference "Nonlinear dynamics and stochastic methods: from neuroscience to other biological applications".

We use the retina of a diurnal rodent (Octodon degus), which has the advantage of presenting a 3:1 proportion of rods to cones, to study the RGC population responses to habitat-based natural stimuli. To this end, we have developed a mobile robot capable of recording movies in the natural habitat of this rodent, simulating both its movements and its eye-to-ground distance, which allows us to stimulate and record an in vitro retina patch on a multi-electrode array (MEA) with a sequence of images taken from the animal's natural habitat. The analysis of the spike statistics has been done using the Enas software to characterize spatio-temporal pairwise correlations with Gibbs distributions, whose potential constitutes a useful tool for comparing pairwise spatio-temporal correlations between different conditions for the same RGC population. We show that correlated spiking patterns represent a major deviation between the White Noise and Natural Movies conditions. We also conclude that the population coding for this monophasic OFF RGC population is mostly based on spatial correlations when stimulated with Natural Movies.
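The pairwise spatio-temporal statistics referred to here are, at their simplest, time-lagged covariances between binary spike trains. The sketch below (hypothetical names) illustrates such a statistic; it is not the Enas implementation, which fits Gibbs distributions over these observables.

```python
import numpy as np

def pairwise_correlation(spikes, i, j, tau):
    """Empirical spatio-temporal pairwise correlation C_ij(tau).

    spikes: (n_neurons, n_bins) binary spike raster
    Correlates neuron i at time t with neuron j at time t + tau.
    """
    xi = spikes[i, : spikes.shape[1] - tau]
    xj = spikes[j, tau:]
    # covariance: joint firing frequency minus the product of the rates
    return np.mean(xi * xj) - np.mean(xi) * np.mean(xj)
```

Comparing these correlations across stimulus conditions (White Noise vs. Natural Movies) for the same population is the kind of analysis the Gibbs-potential framework supports.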

This ongoing work has been presented in .

...

See section “International Initiatives” below.

Title: BrainScaleS: Brain-inspired multiscale computation in neuromorphic hybrid systems

Type: COOPERATION (ICT)

Defi: Brain-inspired multiscale computation in neuromorphic hybrid systems

Instrument: Integrated Project (IP)

Objectif: FET proactive 8: Brain Inspired ICT

Duration: January 2011 - December 2014

Coordinator: Ruprecht-Karls-Universität Heidelberg (Germany)

Other Partners: Nederlandse Akademie van Wetenschappen, Amsterdam; Universitetet For Miljo Og Biovitenskap, Aas; Universitat Pompeu Fabra, Barcelona; University of Cambridge, Cambridge; Debreceni Egyetem, Debrecen; Technische Universität Dresden, Dresden; CNRS-UNIC, Gif-sur-Yvette; CNRS-INCM, Marseille; CNRS-ISM, Marseille; TUG, Graz; Ruprecht-Karls-Universität Heidelberg, Heidelberg; Forschungszentrum Jülich GmbH, Jülich; EPFL LCN, Lausanne; EPFL-BBP, Lausanne; The University Of Manchester, Manchester; KTH, Stockholm; Universität Zürich, Zürich

Inria contact: Olivier Faugeras

Abstract: The BrainScaleS project aims at understanding the function and interaction of multiple spatial and temporal scales in brain information processing. The fundamentally new approach of BrainScaleS lies in the in-vivo biological experimentation and computational analysis. Spatial scales range from individual neurons over larger neuron populations to entire functional brain areas. Temporal scales range from milliseconds, relevant for event-based plasticity mechanisms, to hours or days, relevant for learning and development. In the project, generic theoretical principles will be extracted to enable an artificial synthesis of cortical-like cognitive skills. Both numerical simulations on petaflop supercomputers and a fundamentally different non-von Neumann hardware architecture will be employed for this purpose. Neurobiological data from the early perceptual visual and somatosensory systems will be combined with data from specifically targeted higher cortical areas. Functional databases as well as novel project-specific experimental tools and protocols will be developed and used. New theoretical concepts and methods will be developed for understanding the computational role of the complex multi-scale dynamics of neural systems in-vivo. Innovative in-vivo experiments will be carried out to guide this analytical understanding. Multiscale architectures will be synthesized into a non-von Neumann computing device realised in custom-designed electronic hardware. The proposed Hybrid Multiscale Computing Facility (HMF) combines microscopic neuromorphic physical model circuits with numerically calculated mesoscopic and macroscopic functional units and a virtual environment providing sensory, decision-making and motor interfaces. The project also plans to employ petaflop supercomputing to obtain new insights into the specific properties of the different hardware architectures.
A set of demonstration experiments will link multiscale analysis of biological systems with functionally and architecturally equivalent synthetic systems and offer the possibility for quantitative statements on the validity of theories bridging multiple scales. The demonstration experiments will also explore non-von Neumann computing outside the realm of brain science. BrainScaleS will establish close links with the EU Brain-i-Nets project and the Blue Brain project at the EPFL Lausanne. The consortium consists of a core group of 10 partners with 13 individual groups. Together with other projects and groups, the BrainScaleS consortium plans to make important contributions to the preparation of a future FET flagship project. This project will address the understanding and exploitation of information processing in the human brain as one of the major intellectual challenges of humanity, with vast potential applications.

This project started on January 1st, 2011 and is funded for four years.

Title: Mathematics of Multilevel Anticipatory Complex Systems

Type: Collaborative project (generic) (FP7-ICT)

Defi: develop a mathematical theory of complex multilevel systems and their dynamics.

Instrument: Integrated Project (IP)

Objectif: NC

Duration: October 2012 - September 2015

Coordinator: Fatihcan Atay, Max Planck Institute for Mathematics in the Sciences, Leipzig (Germany)

Other Partners: Max Planck Institute for Mathematics in the Sciences (Leipzig, Germany), Universität Bielefeld (Germany), Chalmers University of Technology (Gothenburg, Sweden), Ca’Foscari University of Venice (Italy), Università Politecnica delle Marche (Ancona, Italy).

See also: http://

Inria contact: Olivier Faugeras

Abstract: The MATHEMACS project aims to develop a mathematical theory of complex multi-level systems and their dynamics. This is done through a general formulation based on the mathematical tools of information and dynamical systems theories. To ensure that the theoretical framework is at the same time practically applicable, three key application areas are represented within the project, namely neurobiology, human communication, and economics. These areas not only provide some of the best-known epitomes of complex multi-level systems, but also constitute a challenging test bed for validating the generality of the theory since they span a vast range of spatial and temporal scales. Furthermore, they have an important common aspect; namely, their complexity and self-organizational character is partly due to the anticipatory and predictive actions of their constituent units. The MATHEMACS project contends that the concepts of anticipation and prediction are particularly relevant for multi-level systems since they often involve different levels. Thus, as a further unique feature, the project includes the mathematical representation and modeling of anticipation in its agenda for understanding complex multi-level systems.

This project started on October 1st, 2012 and is funded for four years.

Type: COOPERATION, FP7 FET (Future and Emerging Technologies) proactive program: Neuro-Bio-Inspired Systems, Call 9, Objective 9.11

Defi: Retina-inspired ENcoding for advanced VISION tasks (RENVISION)

Instrument: Specific Targeted Research Project

Objectif: NC

Duration: March 2013 - February 2016

Coordinator: Vittorio Murino, PAVIS, IIT (Italy)

Partner: PAVIS, IIT (Italy), NBT, IIT (Italy), NAPH, IIT (Italy), The Institute of Neuroscience, Newcastle University (UK), Institute for Adaptive and Neural Computation, The University of Edinburgh (UK), Neuromathcomp project-team, Inria (France)

Inria contact: Pierre Kornprobst

Abstract: The retina is a sophisticated distributed processing unit of the central nervous system encoding visual stimuli in a highly parallel, adaptive and computationally efficient way. Recent studies show that rather than being a simple spatiotemporal filter that encodes visual information, the retina performs sophisticated non-linear computations extracting specific spatio-temporal stimulus features in a highly selective manner (e.g. motion selectivity). Understanding the neurobiological principles beyond retinal functionality is essential to develop successful artificial computer vision architectures.

RENVISION's goal is, therefore, twofold:

To achieve a comprehensive understanding of how the retina encodes visual information through the different cellular layers;

To use such insights to develop a retina-inspired computational approach to high-level computer vision tasks.

To this aim, exploiting the recent advances in high-resolution light microscopy 3D imaging and high-density multielectrode array technologies, RENVISION will be in an unprecedented position to investigate pan-retinal signal processing at high spatio-temporal resolution, integrating these two technologies in a novel experimental setup. This will allow for simultaneous recording from the entire population of ganglion cells and functional imaging of inner retinal layers at near-cellular resolution, combined with 3D structural imaging of the whole inner retina. The combined analysis of these complex datasets will require the development of novel multimodal analysis methods.

Resting on these neuroscientific and computational grounds, RENVISION will generate new knowledge on retinal processing. It will provide advanced pattern recognition and machine learning technologies to ICTs by shedding a new light on how the output of retinal processing (natural or modelled) allows solving complex vision tasks such as automated scene categorization and human action recognition.

Type: COOPERATION, 'FET Flagship' project

Defi: Understanding the brain

Instrument: 'FET Flagship' project

Objectif: NC

Duration: October 2013 - March 2016

Coordinator: EPFL (Switzerland)

Partner: see http://www.humanbrainproject.eu.

Inria contact: Olivier Faugeras

Abstract: The Human Brain Project (HBP) is supported by the European Union as a 'FET Flagship' project and the 86 institutions involved will receive one billion euro in funding over ten years. HBP should lay the technical foundations for a new model of ICT-based brain research, driving integration between data and knowledge from different disciplines, and catalysing a community effort to achieve a new understanding of the brain, new treatments for brain disease and new brain-like computing technologies. http://www.humanbrainproject.eu

Title: Algorithms for modeling the visual system: From natural vision to numerical applications.

principal investigator: Thierry Viéville (Mnemosyne)

International Partner:

Institution: University of Valparaiso (Chile)

Laboratory: Centro Interdiciplinario de Neurociencia de Valparaiso

Researcher: Adrian PALACIOS

International Partner:

Institution: UTFSM Valparaiso (Chile)

Laboratory: Direccion General de Investigacion y Postgrado

Researcher: Maria-Jose ESCOBAR

Duration: 2011 - 2014

See also: http://

KEOpS attempts to study and model the non-standard behavior of retinal sensors (ganglion cells) observed in natural scenarios, and to incorporate the resulting models into real engineering applications as new dynamical early-vision modules. The retina, an accessible part of the brain, is a unique model for studying the neural coding principles at work in natural scenarios. A recent study proposes that some visual functions (e.g. movement, orientation, anticipatory temporal prediction, contrast), long thought to be the exclusive duty of higher brain centers, are actually carried out at the retinal level. The anatomical and physiological segregation of visual scenes into spatial, temporal and chromatic channels begins at the retina through the action of local neural networks. However, how the precise articulation of these neural networks contributes to the local solutions and global percepts needed to solve natural tasks remains, in general, a mystery. KEOpS thus attempts to study the complexity of the behaviors of retinal ganglion cells (the output to the brain) observed in natural scenarios and to apply the results to artificial visual systems. We revisit the retinal neural code sent to the brain and, at the same time, develop new engineering applications inspired by the understanding of such neural encoding mechanisms. We develop an innovative formalism that takes the real (natural) complexity of retinal responses into account, as well as the new dynamical early-vision modules needed to solve visual tasks.

Paul Bressloff, Professor of Mathematics at the University of Utah, won an international chair at Inria (2013-2017).

Michele Migliore, Research Scientist at the Institute of Biophysics, National Research Council, Palermo, Italy. Funded by the "Axe Interdisciplinaire de Recherche de l’Université de Nice – Sophia Antipolis".

Cian O'Donnell, Postdoc at the Computational Neurobiology Laboratory at the Salk Institute, California, from July 9th until July 19th, 2014. Funded by the "Axe Interdisciplinaire de Recherche de l’Université de Nice – Sophia Antipolis".

Cesar Ravello, PhD student with A. Palacios, Centro Interdisciplinario de Neurociencia de Valparaíso, Univ de Valparaíso, Valparaíso, Chile. From May 2014 until June 2014.

Ruben Herzog, Master student with A. Palacios, Centro Interdisciplinario de Neurociencia de Valparaíso, Univ de Valparaíso, Valparaíso, Chile. From November 12th until November 14th, 2014.

Olivier Faugeras is the General Chair of the 1st International Conference on Mathematical Neuroscience, to be held in Antibes-Juan-les-Pins, June 8-10, 2015.

Romain Veltz and James Inglis are members of the organizing committee of the 1st International Conference on Mathematical Neuroscience, to be held in Antibes-Juan-les-Pins, June 8-10, 2015.

Bruno Cessac organized the symposium "From Statistical Physics to Neuronal Networks Dynamics" in Sophia Antipolis, November 6th, 2014.

Pascal Chossat is a member of the program committee of the 1st International Conference on Mathematical Neuroscience, to be held in Antibes-Juan-les-Pins, June 8-10, 2015.

Pierre Kornprobst was a member of the program committee of the 22nd International Conference on Pattern Recognition (ICPR) and of the 8th International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS).

Bruno Cessac was a reviewer for the Netherlands Organisation for Scientific Research (NWO) and for the Journal of Mathematical Biology.

Olivier Faugeras is the co-editor in chief of the open access Journal of Mathematical Neuroscience.

Licence: Pierre Kornprobst, Modélisations mathématiques, 24h, Université Nice Sophia Antipolis, France.

Licence 2: Rodrigo Cofre, Traitement du signal, 50h, L2, Université Nice Sophia Antipolis, France.

Licence 3: Hassan Nasser, Electronique numérique, 36h, L3, Université Nice Sophia Antipolis, France.

Licence 3: Hassan Nasser, Microprocesseurs, 28h, L3, Université Nice Sophia Antipolis, France.

Master 2: Bruno Cessac, *Neuronal dynamics*, 36 hours, Master of Computational Biology and Biomedicine, Université Nice Sophia Antipolis, France.

Master 2: Romain Veltz, *Mathematical Methods for Neuroscience*, 24h, M2, ENS Paris, France.

Summer school: Bruno Cessac, *Neural Networks Dynamics*, 3h, lecture given at the LACONEU School 2014, January 2014, Valparaiso, Chile.

Researchers: Bruno Cessac, *Mean-Field Models in Neuroscience*, 6h, lecture given at the Mathemacs workshop organized at Inria Sophia Antipolis, May 14-16, 2014.

PhD : Hassan Nasser, “Analysis of large scale spiking networks dynamics with spatio-temporal constraints: application to Multi-Electrodes acquisitions in the retina”, Université de Nice, defence 14-03-14, supervised by Bruno Cessac.

PhD : Rodrigo Cofre, “Neuronal Networks, Spike Trains Statistics and Gibbs Distributions. Dynamical Systems”, Université de Nice, defence 05-11-14, supervised by Bruno Cessac.

PhD in progress: Theodora Karvouniari, “Retinal waves in the retina: theory and experiments”, defence planned in October 2017, supervised by Bruno Cessac.

PhD in progress: Kartheek Medathati, “Perception du mouvement et attention: Des neurosciences à la vision artificielle” (Motion perception and attention: from neuroscience to artificial vision), defence planned in 2016, co-supervised by Pierre Kornprobst and Guillaume S. Masson (Institut de Neurosciences de la Timone, UMR 6193, CNRS, Marseille, France).

Bruno Cessac. Reviewer of Jules Lalouette's thesis, "Modélisation des réponses calciques de réseaux d'astrocytes : relations entre topologie et dynamiques" (Modeling the calcium responses of astrocyte networks: relations between topology and dynamics, supervision Hugues Berry). INSA Lyon, 05-12-14.

Pierre Kornprobst. Reviewer of Julian Quiroga Sepulveda's thesis "Scene flow from RGBD Images". Université de Grenoble, France, 07-11-2014.

Pierre Kornprobst. Jury member of Hassan Nasser's Thesis, "Analysis of large scale spiking networks dynamics with spatio-temporal constraints: application to Multi-Electrodes acquisitions in the retina", Université Nice Sophia Antipolis, France, 14-03-2014.