NeuroMathComp is a joint project team between INRIA (Méditerranée and Rocquencourt), École Normale Supérieure de Paris (DI), Université de Nice Sophia-Antipolis (JAD Laboratory) and CNRS (LIENS, UMR 8548, and LJAD, UMR 6621).
NeuroMathComp focuses on the exploration of the brain from the mathematical and computational perspectives.
We want to unveil the principles that govern the functioning of neurons and assemblies thereof and to use our results to bridge the gap between biological and computational vision.
Our work is very mathematical but we make heavy use of computers for numerical experiments and simulations. We have close ties with several top groups in biological neuroscience. We are pursuing the idea that the "unreasonable effectiveness of mathematics" can be brought, as it has been in physics, to bear on neuroscience.
Computational neuroscience attempts to build models of neurons at a variety of levels: microscopic, i.e., the minicolumn containing of the order of one hundred neurons; mesoscopic, i.e., the macrocolumn containing of the order of 10^4-10^5 neurons; and macroscopic, i.e., a cortical area such as the primary visual area V1.
Modeling such assemblies of neurons and simulating their behaviour involves putting together a mixture of the most recent results in neurophysiology with such advanced mathematics as dynamical systems theory, bifurcation theory, probability theory, stochastic calculus, and statistics, as well as the use of simulation tools.
We conduct research in the following three main areas.
Modeling and simulating single neurons.
Modeling and simulating assemblies of neurons.
Visual perception modeling.
A usually invasive technique that allows one to display a visual correlate of the activity of the cortex. One distinguishes between intrinsic and extrinsic optical imaging.
Understanding the principles of information processing in the brain is challenging, theoretically and experimentally. Computational neuroscience attempts to build models of neurons at a variety of levels: microscopic, i.e., the minicolumn containing of the order of one hundred neurons; mesoscopic, i.e., the macrocolumn containing of the order of 10^4-10^5 neurons; and macroscopic, i.e., a cortical area such as the primary visual area V1.
Modeling such assemblies of neurons and simulating their behaviour involves putting together a mixture of the most recent results in neurophysiology with such advanced mathematics as dynamical systems theory, bifurcation theory, probability theory, stochastic calculus, and statistics, as well as the use of simulation tools.
In order to test the validity of the models we rely heavily on experimental data. These data come from single- or multi-electrode recordings and optical imaging, and are provided by our collaborations with neurophysiology laboratories such as the UNIC
http://
The NeuroMathComp team works at the three levels. We have proposed two realistic models of single neurons by making use of physiological data and the theory of dynamical systems and bifurcations. At this level of analysis we have also proposed a variety of theoretical tools from the theory of stochastic calculus and solved the open problem of determining the probability law of the spike intervals for a simple but realistic neuron model, the leaky integrate-and-fire neuron with exponentially decaying synaptic currents. We have also provided a mathematical analysis, through bifurcation theory, of the behaviour of a particular mesoscopic model, the one due to Jansen and Rit. Finally, we have studied in detail several models for the statistics of spike trains.
We have also started some efforts at the macroscopic level, in particular for modeling visual areas, see Section . For this particular level, information about the anatomical connectivity such as the one provided by diffusion imaging techniques is of fundamental importance.
Another scientific focus of the team is the combined study of computer and biological vision. We think that a more detailed knowledge of visual perception in humans and primates can have a potential impact on algorithm design and performance. Thus, we develop so-called bio-inspired approaches to model visual tasks. This work is multidisciplinary: it involves knowledge from neuroscience and physiology, it tries to reproduce what psychophysical experiments reveal, and, as a final goal, we want to compete with recent computer vision approaches (see, e.g., for a presentation of variational approaches in computer vision).
The models that we develop are bio-inspired in several respects, depending on the scale chosen for the modeling.
At the microscopic level, one interesting aspect is the study of the neural code: the nervous system uses spikes as a way to emit and encode information, which is certainly one explanation of the extraordinary performance of the visual system. We therefore need to define a mathematical framework in which to analyze this spiking language and, based on those results, one can imagine computer vision applications where spikes are used to code signals.
At the macroscopic level, we imitate the functional hierarchy of the visual cortex and propose variational frameworks and integro-differential equations as a way to model the activity of cortical layers.
We also develop phenomenological models, in order to reproduce a percept. For example, we are developing bio-inspired models for motion estimation, focusing on the V1, MT, and V2 layers and their interactions, see .
Validation of these models is crucial. Since we claim that our models are bio-inspired, our goal is also to validate them through biology. For example, the spiking retina simulator (Virtual Retina) closely reproduces cell measurements made on cat ganglion cells, for various kinds of experiments. At the perceptual level, our models should also be able to reproduce a percept, which may not be trivial to reproduce with standard computer vision approaches. Computer vision is another way to prove the efficiency of our approaches, and one of our goals is to compare the performance of our approaches with state-of-the-art computer vision approaches. This is currently done, for example, for action recognition, based on classical image databases.
This modeling activity brings new insight and tools for computer vision. But it also raises fundamental issues that will be the focus of future research. Understanding the neural code is certainly the most challenging one. Since we believe that spikes are one possible explanation of the visual system performance, and represent a new paradigm for computer vision, more fundamental work has to be done to understand how to better exploit the richness of this code.
We developed a retina simulation software which transforms a video into spike trains (see also Adrien Wohrer's PhD). This project was developed in the scope of the EC project FACETS. Our goal was twofold: allow large-scale simulations (up to 100,000 neurons) in reasonable processing times, and keep a strong biological plausibility (see for a review), taking into account implementation constraints. The underlying model includes a linear model of filtering in the Outer Plexiform Layer, a well-posed shunting feedback at the level of bipolar cells accounting for rapid contrast gain control, and a spike generation process modeling ganglion cells. We proved the pertinence of our software by reproducing several experimental measurements from single ganglion cells such as cat X and Y cells. This software is an evolving tool for neuroscientists who need realistic large-scale input spike trains for subsequent treatments, and for educational purposes. We also developed a web service, so that one may test the main software directly on one's own data, without any installation. Virtual Retina was distributed in 2007 with an open-source licence (CeCILL C). To the best of our knowledge, this is the only free retina simulator that can reproduce a spiking output from a video stream and that has been confronted to fine biological data.
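As a rough illustration of the processing chain described above (linear OPL filtering, shunting feedback for contrast gain control, spike generation), here is a minimal one-pixel sketch in Python. All parameter values are invented for illustration and are not those of Virtual Retina:

```python
import numpy as np

def retina_sketch(stimulus, dt=1e-3, tau_opl=0.02, g0=5.0, lam=10.0,
                  tau_m=0.01, v_thresh=1.0):
    """Toy one-pixel analogue of the Virtual Retina pipeline:
    linear OPL filter -> shunting bipolar stage -> LIF ganglion cell."""
    n = len(stimulus)
    opl = np.zeros(n)     # linear Outer Plexiform Layer output
    v_bip = np.zeros(n)   # bipolar potential with shunting feedback
    v_m, spike_list = 0.0, []
    for t in range(1, n):
        # first-order low-pass standing in for the OPL spatiotemporal filter
        opl[t] = opl[t - 1] + dt / tau_opl * (stimulus[t] - opl[t - 1])
        # shunting feedback: the conductance grows with recent activity,
        # dividing the gain at high contrast (rapid contrast gain control)
        g = g0 + lam * v_bip[t - 1] ** 2
        v_bip[t] = v_bip[t - 1] + dt * (opl[t] - g * v_bip[t - 1])
        # leaky integrate-and-fire spike generation (ganglion cell)
        v_m += dt / tau_m * (-v_m + 50.0 * max(v_bip[t], 0.0))
        if v_m >= v_thresh:
            spike_list.append(t * dt)
            v_m = 0.0
    return opl, v_bip, spike_list

stim = np.concatenate([np.zeros(200), np.ones(800)])  # a step of light
_, _, spike_times_out = retina_sketch(stim)
```

The real simulator applies spatiotemporal kernels over a full image and a more careful gain-control formulation; the sketch only shows how the three stages chain together.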
Website:
http://
Virtual Retina is under CeCILL C licence:
APP logiciel Virtual Retina: IDDN.FR.001.210034.000.S.P.2007.000.31235
This is a library for simulating and analyzing so-called "event neural assemblies". It is designed mainly as:
an existing-simulator plug-in (e.g. MVASpike or other simulators via the NeuralEnsemble meta-simulation platform),
additional modules for computations with neural unit assemblies on standard platforms (e.g. Python or the Scilab platform).
Achievements include:
spike train statistical analysis via Gibbs distributions, based on the estimation of a polynomial Gibbs potential from spike trains and, subsequently, of the population firing rate, correlations, higher-order statistics, and relative entropy. These tools not only allow us to estimate the spike statistics but also to compare different models, thus answering such questions about the neural code as: are correlations (or time synchrony, or a given set of spike patterns, ...) significant with respect to rate coding?
spiking network programming for the exact restitution of a given event sequence;
algorithmic adjustment of discrete neural field parameters, and time-constrained event-based network simulation reconciling clock-based and event-based simulation methods.
It is neither a new "simulator", nor a new "platform", but a set of new routines available for such existing tools.
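As an illustration of the kind of question the Gibbs-based analysis addresses, the following toy sketch (a synthetic two-neuron raster, not the library's implementation) compares the empirical probability of a joint firing pattern with its prediction under an independent rate-coding model; a large log-ratio flags correlations that rates alone cannot explain, and in the Gibbs framework this ratio plays the role of a pairwise interaction term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raster: two neurons over T time bins; neuron 2 copies neuron 1 with
# probability c, so the pair is correlated beyond what its rates predict.
T, p, c = 20000, 0.1, 0.8
n1 = rng.random(T) < p
n2 = np.where(rng.random(T) < c, n1, rng.random(T) < p)

# Empirical rates and empirical probability of the joint pattern "11"
r1, r2 = n1.mean(), n2.mean()
p11 = np.mean(n1 & n2)

# Under an independent (Bernoulli, rate-coding) model, P(11) = r1 * r2;
# the log-ratio measures the pairwise interaction beyond rate coding.
interaction = np.log(p11 / (r1 * r2))
```

Here the construction guarantees a strong positive interaction; on real data the library estimates a full polynomial Gibbs potential rather than this single pairwise term.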
Website:
http://
Event neural assembly Simulation is under CeCILL C licence
APP logiciel Enas: IDDN.FR.001.360008.000.S.P.2009.000.10600
Spiking neuron models are hybrid dynamical systems combining differential equations and discrete resets, which generate complex dynamics. Several two-dimensional spiking models have been recently introduced, modelling the membrane potential and an additional variable, and where spikes are defined by the divergence to infinity of the membrane potential variable. These simple models reproduce a large number of electrophysiological features displayed by real neurons, such as spike frequency adaptation and bursting. The patterns of spikes, which are the discontinuity points of the hybrid dynamical system, have been mainly studied numerically. Here we show that the spike patterns are related to orbits under a discrete map, the adaptation map, and we study its dynamics and bifurcations. Regular spiking corresponds to fixed points of the adaptation map while bursting corresponds to periodic orbits. We find that the models undergo a transition to chaos via a cascade of period adding bifurcations. Finally, we discuss the physiological relevance of our results with regard to electrophysiological classes.
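The adaptation map can be explored numerically. The sketch below (all parameter values are illustrative, not those of the paper) integrates a quadratic adaptive model between resets and iterates the resulting map on the adaptation variable; convergence of the orbit to a fixed point corresponds to regular spiking, a periodic orbit to bursting:

```python
import numpy as np

# Quadratic adaptive integrate-and-fire model (illustrative parameters):
#   v' = v**2 + I - w,   w' = a*(b*v - w);
# spike when v >= v_cut, then v <- v_r and w <- w + d.
a, b, I, d = 0.1, 0.2, 1.5, 0.1
v_r, v_cut, dt = -0.5, 50.0, 1e-3

def adaptation_map(w0, t_max=100.0):
    """Value of w just after the next spike, given w = w0 at the reset."""
    v, w = v_r, w0
    for _ in range(int(t_max / dt)):
        v += dt * (v * v + I - w)
        w += dt * a * (b * v - w)
        if v >= v_cut:
            return w + d
    return None  # no spike within t_max

# With these parameters the subthreshold flow has no equilibrium, so the
# neuron keeps spiking and the map is defined along the whole orbit.
w, orbit = 0.0, []
for _ in range(30):
    w = adaptation_map(w)
    assert w is not None
    orbit.append(w)
```

Varying the reset and adaptation parameters and watching the orbit of this map is a convenient way to reproduce the period-adding transitions described above.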
This work has appeared in the SIAM Journal on Applied Dynamical Systems
This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.
The quadratic adaptive integrate-and-fire model is recognized as very interesting for its computational efficiency and its ability to reproduce many behaviors observed in cortical neurons. For this reason it is currently widely used, in particular for large-scale simulations of neural networks. This model emulates the dynamics of the membrane potential of a neuron together with an adaptation variable. The subthreshold dynamics is governed by a two-parameter differential equation, and a spike is emitted when the membrane potential variable reaches a given cutoff value. Subsequently the membrane potential is reset, and a fixed value, called the spike-triggered adaptation parameter, is added to the adaptation variable. We show in this note that when the system does not converge to an equilibrium point, both variables of the subthreshold dynamical system blow up in finite time whatever the parameters of the dynamics. The cutoff is therefore essential for the model to be well defined and simulated. The divergence of the adaptation variable makes the system very sensitive to the cutoff: changing this parameter dramatically changes the spike patterns produced. Furthermore, from a computational viewpoint, the fact that the adaptation variable blows up, and the very sharp slope it has when the spike is emitted, imply that the time step of the numerical simulation needs to be very small (or adaptive) in order to catch an accurate value of the adaptation at the time of the spike. This is not the case for the similar quartic and exponential models, whose adaptation variable does not blow up in finite time, and which are therefore very robust to changes in the cutoff value.
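The sensitivity to the cutoff can be demonstrated in a few lines. In this sketch (parameters are again illustrative), the two simulations are identical except for the cutoff value, yet they produce different spike sequences, because the adaptation variable inherits its (diverging) value at the cutoff:

```python
import numpy as np

def spike_times(v_cut, t_end=50.0, dt=1e-3):
    """Quadratic adaptive IF neuron; the whole spike pattern depends on
    v_cut because w also grows while v diverges towards the cutoff."""
    a, b, I, d, v_r = 0.1, 0.2, 1.5, 0.1, -0.5
    v, w, times, t = v_r, 0.0, [], 0.0
    while t < t_end:
        v += dt * (v * v + I - w)
        w += dt * a * (b * v - w)
        if v >= v_cut:
            times.append(round(t, 3))
            v, w = v_r, w + d   # the reset inherits w's value at cutoff
        t += dt
    return times

low, high = spike_times(30.0), spike_times(300.0)
# Changing only the cutoff shifts every subsequent spike time.
```

A higher cutoff delays each spike slightly and loads extra adaptation at each reset, so the discrepancy between the two trains accumulates over time.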
This work has appeared in Neural Computation
This work was partially supported by the EC IP project FP6-015879, FACETS, and the Fondation d'Entreprise EADS.
We consider the deterministic evolution of a time-discretized spiking network of neurons with connection weights having delays, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods allowing one to calculate the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear Programming (LP) problem, thus allowing an efficient resolution. This allows us to "back-engineer" a neural network, i.e., to find out, given a set of initial conditions, which parameters (connection weights in this case) allow the network spike dynamics to be simulated. More precisely, we make explicit the fact that the back-engineering of a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed, with a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm is easily generalizable to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e., find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
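The reduction to linear constraints can be sketched as follows: between two observed resets, the membrane potential of a discrete-time gIF-like unit is affine in the weights, so each time step yields one linear inequality on the weight vector (above threshold at spikes, below it elsewhere), and any feasible point of the resulting LP reproduces the spike train exactly. The single-unit model and all parameters below are simplified stand-ins for the gIF model of the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Discrete-time gIF-like unit (illustrative): V <- g*V + W.Z(t) + i0,
# a spike is emitted and V is reset to 0 when V >= theta.
g, theta, i0, n_pre, T = 0.8, 1.0, 0.1, 5, 200
W_true = rng.uniform(-0.3, 0.5, n_pre)            # hidden weights
Z = (rng.random((T, n_pre)) < 0.3).astype(float)  # observed input spikes

def simulate(W):
    V, spk = 0.0, np.zeros(T, dtype=bool)
    for t in range(T):
        V = g * V + Z[t] @ W + i0
        spk[t] = V >= theta
        if spk[t]:
            V = 0.0
    return spk

spikes = simulate(W_true)   # the observed output spike train

# Between resets V_t = a_run.W + c_run is affine in W, so each step gives
# one inequality: V_t >= theta at spike times, V_t < theta elsewhere.
eps = 1e-6
A_ub, b_ub = [], []
a_run, c_run = np.zeros(n_pre), 0.0
for t in range(T):
    a_run, c_run = g * a_run + Z[t], g * c_run + i0
    if spikes[t]:
        A_ub.append(-a_run.copy()); b_ub.append(c_run - theta - eps)
        a_run, c_run = np.zeros(n_pre), 0.0   # observed reset: restart
    else:
        A_ub.append(a_run.copy());  b_ub.append(theta - c_run - eps)

res = linprog(np.zeros(n_pre), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-1.0, 1.0)] * n_pre)
W_hat = res.x   # any feasible point reproduces the observed train
```

The recovered `W_hat` generally differs from `W_true` (the feasible polytope contains many solutions), yet re-simulating with it yields the same spike train, which is the point of the back-engineering result.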
This work has appeared in BMC Neuroscience
This work was supported by the ARC MACACC.
This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks? and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated to a coding of membrane potential trajectories, and actually constitute a symbolic coding in important explicit examples (the so-called gIF models). On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures to describe the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.
This work has appeared in Journal of Statistical Physics
This work was supported by the ARC MACACC.
In this article, our wish is to demystify some aspects of coding with spike timing, through a simple review of well-understood technical facts regarding spike coding. The goal is to help better understand to what extent computing and modelling with spiking neuron networks can be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic dynamics in this review, and we consider that the dynamics of the network is defined by a non-stochastic mapping. This allows us to stay in a rather simple framework and to propose a review with concrete numerical values, results and formulas on (i) general time constraints, (ii) links between continuous signals and spike trains, and (iii) spiking network parameter adjustments. When implementing spiking neuron networks, for computational or biological simulation purposes, it is important to take into account the indisputable facts reviewed here. This precaution could prevent implementing mechanisms that are meaningless with regard to obvious time constraints, or introducing spikes artificially when continuous calculations would be sufficient and simpler. It is also pointed out that implementing a spiking neuron network is in the end a simple task, unless complex neural codes are considered.
This work has been accepted in Journal of Physiology, Paris (in press)
This work was supported by the ARC MACACC.
We consider neural networks from the point of view of dynamical systems theory. In this spirit we review recent results dealing with the following questions, addressed in the context of specific models: 1. characterizing the collective dynamics; 2. statistical analysis of spike trains; 3. interplay between dynamics and network structure; 4. effects of synaptic plasticity. This work has been accepted in the International Journal of Bifurcation and Chaos (in press)
This work was supported by the ARC MACACC.
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide, by a fixed-point method, a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit: their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.
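The fixed-point method can be illustrated on the simplest stationary case: a Picard iteration on the mean activation of two interacting populations (all numbers are illustrative; the actual construction of the paper iterates on a set of stochastic processes, not on two scalars):

```python
import numpy as np

def S(v, nu_max=1.0, v0=0.0, r=1.0):
    """Sigmoidal voltage-to-activity function."""
    return nu_max / (1.0 + np.exp(-r * (v - v0)))

# Stationary mean-field equation mu = J @ S(mu) + I for two populations
# (e.g. excitatory/inhibitory; connectivity values are illustrative),
# solved by Picard iteration, which converges here because the map is a
# contraction (|J| times the maximal sigmoid slope is < 1).
J = np.array([[1.2, -0.8],
              [0.9, -0.1]])
I = np.array([0.3, 0.1])

mu = np.zeros(2)
for _ in range(200):
    mu_next = J @ S(mu) + I
    if np.max(np.abs(mu_next - mu)) < 1e-12:
        break
    mu = mu_next
# mu now solves the stationary mean-field equation to machine precision
```

The paper's contribution is precisely to show that the analogous iteration is well defined and convergent in the much larger space of stochastic processes, with explicit convergence rates.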
This work has appeared in Frontiers in Neuroscience
This work was partially supported by the EC IP project FP6-015879, FACETS, the ERC advanced grant NerVi number 227747, and the Fondation d'Entreprise EADS.
We introduce and study a mathematical framework for characterizing and simulating networks of noisy integrate-and-fire neurons based on the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network.
We apply this modeling to different linear integrate-and-fire neuron models, with or without noisy synaptic integration, with different types of synapses, possibly with transmission delays and absolute and relative refractory periods, and in this way generalize results previously obtained in certain particular cases. This approach provides a powerful framework to study some properties of the network, and an extremely efficient way to simulate the dynamics of large networks. In particular, it allows a parallel implementation, which we realized on GPUs.
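The spirit of the approach can be sketched as follows: the network state is the vector of next firing times, and at each step the earliest neuron fires and its next interval is redrawn from the neuron's interspike-interval distribution, which makes the sequence of firing times a Markov chain. The inverse-Gaussian ISI (first-passage time of drifted Brownian motion, a classical noisy integrate-and-fire ISI law) and the coupling rule below are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(2)

# Event-driven sketch: state = vector of next firing times; the earliest
# neuron fires, its interval is redrawn (the Markov step), and excitatory
# couplings advance the next firing times of the other neurons.
n, mu_isi, lam = 5, 1.0, 4.0
W = 0.05 * rng.random((n, n))   # small excitatory couplings (toy values)
np.fill_diagonal(W, 0.0)

def isi():
    # inverse-Gaussian ISI: first passage of a drifted Brownian motion
    return rng.wald(mu_isi, lam)

next_t = np.array([isi() for _ in range(n)])
events, t = [], 0.0
for _ in range(100):
    i = int(np.argmin(next_t))       # next neuron to fire
    t = next_t[i]
    events.append((t, i))
    next_t[i] = t + isi()            # redraw its interval: Markov step
    for j in range(n):               # presynaptic spike advances targets
        if j != i:
            next_t[j] = max(t, next_t[j] - W[j, i])
```

Because the simulation jumps from spike to spike instead of stepping a clock, its cost scales with the number of events, which is what makes the method efficient for large networks.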
This work has appeared in Neural Computation and is available on arXiv
This work was partially funded by the ERC advanced grant NerVi number 227747.
Temporal lobe epilepsy is one of the most common chronic neurological disorders, characterized by the occurrence of spontaneous recurrent seizures which can be observed at the level of populations through electroencephalogram (EEG) recordings. The aim of this work is to understand, from a theoretical viewpoint, the occurrence of this type of seizure and the origin of the oscillatory activity in some classical cortical column models. We relate these rhythmic activities to the structure of the set of periodic orbits in the models, and therefore to their bifurcations. We are mainly interested in the Jansen and Rit model, and study the codimension one, two, and three bifurcations of equilibria and cycles of this model. We can therefore understand the effect of the different biological parameters of the system on the appearance of epileptiform activity, and observe the emergence of alpha, delta, and theta sleep waves in certain ranges of parameters.
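For reference, here is a minimal Euler simulation of the Jansen and Rit model with its commonly cited parameter values; the constant input p chosen below is simply one point inside the range where the model oscillates, the transitions being exactly what the bifurcation analysis locates:

```python
import numpy as np

# Jansen-Rit cortical column model, standard parameter set.
A, B, a, b = 3.25, 22.0, 100.0, 50.0
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56
p = 200.0  # constant external input (one illustrative value)

def S(v):
    """Sigmoid converting average membrane potential to firing rate."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

dt, T = 1e-4, 3.0
n = int(T / dt)
y = np.zeros(6)          # (y0, y1, y2) and their derivatives (y3, y4, y5)
trace = np.empty(n)
for k in range(n):
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * S(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * S(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * S(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])
    y = y + dt * dy
    trace[k] = y[1] - y[2]   # the EEG-like output is y1 - y2
```

Sweeping p (and the other parameters) while watching `trace` reproduces the qualitative regimes whose boundaries are given by the codimension one, two, and three bifurcations studied in the paper.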
This work was partially supported by the EC IP project FP6-015879, FACETS, the ERC advanced grant NerVi number 227747, and the Fondation d'Entreprise EADS.
Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage- and activity-based. In both cases our networks contain an arbitrary number, n, of interacting neuron populations. Spatial non-symmetric connectivity functions represent cortico-cortical, local, connections, while external inputs represent non-local connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most of the previous work in this area, we do not assume the nonlinearity to be singular, i.e., represented by the discontinuous Heaviside function. Another important difference with previous work is our relaxing of the assumption that the domain of definition on which we study these networks is infinite, i.e., equal to R or R^2. We explicitly consider the biologically more relevant case of a bounded subset of R^q, q = 1, 2, 3, a better model of a piece of cortex. The time behaviour of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary, i.e., time-independent, solution of these equations in the case of a stationary input. These solutions can be seen as "persistent"; they are also sometimes called "bumps". We show that under very mild assumptions on the connectivity functions, and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is to say, independent of the initial state of the network. We then study the sensitivity of the solution(s) to variations of such parameters as the connectivity functions, the sigmoids, the external inputs, and, last but not least, the shape of the domain of existence of the neural continuum networks. These theoretical results are illustrated and corroborated by a large number of numerical experiments in most of the cases 2 ≤ n ≤ 3, 2 ≤ q ≤ 3.
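A discretized version of the existence argument can be run directly: on a bounded 1-D domain, Picard iteration on V = W ∗ S(V) + I_ext converges to a stationary "bump" when the (smooth, non-Heaviside) sigmoid gain is small enough for the map to be a contraction. All kernels and parameter values below are illustrative:

```python
import numpy as np

# Stationary neural field on a bounded interval, by Picard iteration on
# V = (W * S(V)) + I_ext (the integral is discretized as a matrix product).
L, N = 10.0, 201
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def S(v):
    # smooth sigmoid (not a Heaviside), maximal slope 1/4
    return 1.0 / (1.0 + np.exp(-(v - 0.3)))

# "Mexican hat" connectivity: local excitation, broader inhibition;
# amplitudes kept small enough that the iteration is a contraction.
d2 = (x[:, None] - x[None, :]) ** 2
W = 0.5 * np.exp(-d2) - 0.2 * np.exp(-d2 / 9.0)
I_ext = np.exp(-x ** 2)   # localized stationary input

V = np.zeros(N)
for _ in range(500):
    V_next = (W * dx) @ S(V) + I_ext
    if np.max(np.abs(V_next - V)) < 1e-10:
        break
    V = V_next
# V now approximates a stationary bump peaked under the input
```

With a Heaviside nonlinearity the same iteration can fail to converge, which is one way to see why the smoothness assumption matters in the existence proof.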
This work has appeared in Neural Computation
This work was partially supported by the EC IP project FP6-015879, FACETS, the ERC advanced grant NerVi number 227747, and the Fondation d'Entreprise EADS.
Neural or cortical fields are continuous assemblies of mesoscopic models, also called neural masses, of neural populations that are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when submitted to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential in advancing our understanding of the brain. In this paper we study the dependency of the stationary solutions of the neural field equations with respect to the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviours of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study in great detail two models. The first model is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second model is a general neural field model where the spatial connectivity is described by heterogeneous Gaussian-like functions.
This work has appeared in SIAM Journal on Applied Dynamical Systems, available on arXiv .
This work was partially funded by the ERC advanced grant NerVi number 227747.
In this paper we study neural fields models with delays which define a useful framework for modeling macroscopic parts of the cortex involving several populations of neurons. Nonlinear delayed integro-differential equations describe the spatio-temporal behavior of these fields. Using methods from the theory of delay differential equations, we show the existence and uniqueness of a solution of these equations. A Lyapunov analysis gives us sufficient conditions for the solutions to be asymptotically stable. We also present a fairly detailed study of the numerical computation of these solutions. This is, to our knowledge, the first time that a serious analysis of the problem of the existence and uniqueness of a solution of these equations has been performed. Another original contribution of ours is the definition of a Lyapunov functional and the result of stability it implies. We illustrate our numerical schemes on a variety of examples that are relevant to modeling in neuroscience.
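A minimal numerical scheme of the kind analyzed here stores the field's history and evaluates each pairwise interaction at its own delayed time, with propagation delays d(x, y) = |x − y|/c; the kernel, sigmoid, and all parameter values are illustrative:

```python
import numpy as np

# Euler scheme for a delayed neural field on [0, L]:
#   dV/dt(x, t) = -V(x, t) + sum_y w(x, y) S(V(y, t - d(x, y))) + I_ext(x)
L, N, c, dt, T = 10.0, 101, 5.0, 1e-3, 2.0
x = np.linspace(0, L, N)
dx = x[1] - x[0]
dist = np.abs(x[:, None] - x[None, :])
w = (0.5 * np.exp(-dist**2) - 0.2 * np.exp(-dist**2 / 9.0)) * dx
delay_steps = np.rint(dist / c / dt).astype(int)   # d(x, y) in time steps
max_delay = delay_steps.max()

def S(v):
    return 1.0 / (1.0 + np.exp(-(v - 0.3)))

n = int(T / dt)
hist = np.zeros((max_delay + 1 + n, N))   # V(., t) at every step (history)
I_ext = np.exp(-(x - L / 2) ** 2)
cols = np.arange(N)[None, :]
for k in range(max_delay, max_delay + n):
    # field seen by each x: V(y, t - d(x, y)), read from the history buffer
    V_delayed = hist[k - delay_steps, cols]
    hist[k + 1] = hist[k] + dt * (-hist[k] + (w * S(V_delayed)).sum(axis=1)
                                  + I_ext)
V_final = hist[max_delay + n]
```

The history buffer is exactly what the delay terms require: each pair (x, y) reads the field at its own past time, which is why the initial condition of a delayed equation is a function over an interval rather than a single state.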
This work has appeared in Physica D and was accepted in a special issue on mathematical neuroscience
This work was partially funded by the ERC advanced grant NerVi number 227747.
Our goal is to understand the dynamics of the cortical states of parts of the primate visual cortex when stationary (independent of time) stimuli are presented. A first step to achieve this goal is to understand the stationary cortical states. We do this in the framework of a mesoscopic neural network model also called a neural mass model. These neural mass models have been used to describe the activity of neural populations that are found in the visual cortex of primates. They feature stationary solutions, when submitted to a stationary input. These solutions depend quite sensitively on such parameters of the model as the stiffness of the nonlinearity and the contrast of the external input. We characterize this sensitivity by using degree theory and bifurcation theory in the context of functional, in particular infinite dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviours of the solutions. We apply these results to the study of a neural mass model of a cortical hypercolumn of orientation sensitive neurons called the ring model .
This work has appeared in Neuroimage
This work was partially funded by the ERC advanced grant NerVi number 227747.
We propose to use bifurcation theory and pattern formation as theoretical probes for various hypotheses about the neural organization of the brain. This allows us to make predictions about the kinds of patterns that should be observed in the activity of real brains through, e.g., optical imaging, and opens the door to the design of experiments to test these hypotheses. We study the specific problem of visual edge and texture perception and suggest that these features may be represented at the population level in the visual cortex as a specific second-order tensor, the structure tensor, perhaps within a hypercolumn. We then extend the classical ring model to this case and show that its natural framework is the non-Euclidean hyperbolic geometry. This brings in the beautiful structure of its group of isometries and certain of its subgroups which have a direct interpretation in terms of the organization of the neural populations that are assumed to encode the structure tensor. By studying the bifurcations of the solutions of the structure tensor equations, the analog of the classical Wilson and Cowan equations, under the assumption of invariance with respect to the action of these subgroups, we predict the appearance of characteristic patterns. These patterns can be described by what we call hyperbolic or H-planforms that are reminiscent of Euclidean planar waves and of the planforms that were used in previous work to account for some visual hallucinations. If these patterns could be observed through brain imaging techniques they would reveal the built-in or acquired invariance of the neural organization to the action of the corresponding subgroups.
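For concreteness, the structure tensor of an image I is the smoothed matrix of products of its derivatives, T = G_σ ∗ (∇I ∇Iᵀ). The toy computation below (the standard computer-vision definition, with a box filter standing in for the Gaussian G_σ) shows its characteristic rank-one form at an edge, where one eigenvalue is large and the other vanishes:

```python
import numpy as np

# Structure tensor of a synthetic vertical-edge image.
I = np.zeros((32, 32))
I[:, 16:] = 1.0
Iy, Ix = np.gradient(I)   # axis 0 = rows (y), axis 1 = columns (x)

def box_smooth(a, k=5):
    """Crude local averaging standing in for Gaussian smoothing."""
    kern = np.ones(k) / k
    out = np.copy(a)
    for ax in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kern, mode='same'), ax, out)
    return out

Txx = box_smooth(Ix * Ix)
Txy = box_smooth(Ix * Iy)
Tyy = box_smooth(Iy * Iy)

# At a pixel on the edge the tensor is (nearly) rank one: one eigenvalue
# carries the edge energy, the other is ~0; two large eigenvalues would
# instead signal a corner or texture.
r, c = 16, 16
T = np.array([[Txx[r, c], Txy[r, c]],
              [Txy[r, c], Tyy[r, c]]])
evals = np.linalg.eigvalsh(T)   # ascending order
```

The set of such symmetric positive semidefinite matrices is what carries the hyperbolic geometry exploited in the paper.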
This work has been accepted in PLoS Computational Biology and is available on arXiv (in press)
This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.
Precise, integrated models of retinal processing are relatively rare in the visual neuroscience community. Also, the models which do exist are not widely used as input to higher-level models of visual processing, because potential users are generally not convinced of the need to use such a detailed model. Instead, modelers of thalamo-cortical areas often use over-simplified retina models, because they either ignore subtle features of retinal processing or do not wish to spend time re-implementing an existing retina model. These considerations led us, during Adrien Wohrer's PhD, to implement a retina simulation platform. This research was part of our lab's contribution to the FACETS European research project. The resulting simulator, termed Virtual Retina and distributed as open-source software (see Section ), intends to fulfill the requirements of thalamo-cortical modelers through three particular points: increased biological precision, large-scale simulations, and convenient usage.
The underlying model is mostly state-of-the-art, with formulations adapted to large-scale simulation and nonlinearities which include Y cells, spike generation, and an original model for contrast gain control. We verified its good behavior by reproducing different experimental measurements on real ganglion cells from the literature.
Contrast gain control, more precisely, has been implemented in an original framework (although bearing similarities with pre-existing models). The proposed mechanism is based on a very simple feedback loop, whose general formulation could also be used elsewhere than in the retina. To better assess its behavior, we have carried out a mathematical analysis of this feedback loop. This analysis proved more arduous than might be expected from the simple form of the system, but we established some interesting results, which confirm the gain control properties of our feedback loop.
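A minimal version of such a shunting feedback loop (the equations below are an illustrative simplification, not the exact Virtual Retina mechanism) shows the gain-control effect: the conductance tracks the recent squared activity and divides the gain, so the steady-state gain decreases as the input grows:

```python
import numpy as np

def steady_response(I0, dt=1e-3, t_end=5.0, g0=1.0, lam=8.0, tau=0.1):
    """Shunting feedback loop: dV/dt = I0 - g*V, with the conductance g
    relaxing towards g0 + lam*V^2 (activity-driven divisive feedback)."""
    V, g = 0.0, g0
    for _ in range(int(t_end / dt)):
        V += dt * (I0 - g * V)
        g += dt * (g0 + lam * V * V - g) / tau
    return V

low, high = steady_response(1.0), steady_response(10.0)
gain_low, gain_high = low / 1.0, high / 10.0
# gain_high < gain_low: amplification is reduced at high contrast
```

At equilibrium V satisfies g0·V + λ·V³ = I0, a compressive input-output curve, which is the signature of contrast gain control.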
This work has appeared in Journal of Computational Neuroscience
Website:
http://
This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.
In her PhD, Maria-Jose Escobar started from a classical view of the brain areas V1 and MT and proposed several "implementations" depending on the objectives. The model is a feedforward model restricted to the V1-MT cortical layers, and the cortical cells cover the visual space with a foveated structure. Interestingly, as observed in neurophysiology, our MT cells not only behave like simple velocity detectors, but also respond to several kinds of motion contrasts.
Our major achievement here was to show how such a bio-inspired model could be the heart of an action recognition model of interest to the computer vision community. Two main results were obtained. The first result is that better modeling the functional properties of the visual system can improve the performance of an algorithmic model. In particular, we showed that reproducing some of the richness of the center-surround interactions of MT cells allows recognition rates to be significantly improved. Defining motion maps as our feature vectors, we used a standard classification method on the Weizmann database: we obtained an average recognition rate of 98.9%, which is superior to the recent results by Jhuang et al. . These promising results, published at ECCV 2008 , encourage us to further develop bio-inspired models incorporating other brain mechanisms and cortical layers in order to deal with more complex videos. Note that this model was completely analogue. But if one is able to properly simulate a spiking signal from a visual input, the second result is that the analysis of spike trains may bring a further improvement in performance. Considering a spiking version of the V1-MT model, we showed that the correlation information between neurons tuned to the same orientations also improves motion recognition, in a way complementary to the mean firing rate alone.
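The recognition pipeline, abstracted away from the actual features, can be sketched as follows (Python, on purely synthetic data; the 64-dimensional "motion maps", the two toy classes and the nearest-class-mean classifier are hypothetical stand-ins for the model's MT-based feature vectors and the classifier actually used on the Weizmann database):

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_mean_classify(train_maps, train_labels, test_map):
    """Assign test_map to the class whose mean training motion map
    is closest in Euclidean distance (a deliberately simple classifier)."""
    classes = sorted(set(train_labels))
    means = {c: np.mean([m for m, l in zip(train_maps, train_labels) if l == c],
                        axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(test_map - means[c]))

# Toy data: two "actions" whose motion maps cluster around different means.
walk = [rng.normal(0.0, 0.3, 64) for _ in range(10)]
jump = [rng.normal(1.0, 0.3, 64) for _ in range(10)]
maps, labels = walk + jump, ["walk"] * 10 + ["jump"] * 10

print(nearest_mean_classify(maps, labels, rng.normal(1.0, 0.3, 64)))  # jump
```

The point is only structural: once the bio-inspired front end summarizes a video as a fixed-length motion map, any standard supervised classifier can be applied on top.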
Based on the same motion architecture, we consider now the problem of motion integration for the solution of the aperture problem. We investigate the role of delayed V1 surround suppression, and how the 2D information extracted through this mechanism can be integrated to propose a solution for the aperture problem (see for some preliminary results).
This work was partially supported by the EC IP project FP6-015879 (FACETS), the EC ICT project No. 215866 (SEARISE) and the Région Provence-Alpes-Côte d'Azur.
In his PhD, Emilien Tlapale proposed a model of motion integration in which diffusion is modulated by luminance. This model incorporates feedforward, feedback and inhibitory lateral connections and is inspired by the motion-processing cortical areas. Our main contribution is an anisotropic integration model in which motion diffusion is gated by the luminance distribution in the image. The proposed approach produces results compatible with several psychophysical experiments, concerning not only the resulting global motion percept but also the dynamics of motion integration (see and ). It can also explain several properties of MT neurons regarding the dynamics of selective motion integration, a fundamental property of object motion segmentation.
This work has appeared in Vision Research and is available as an INRIA research report.
Duration: January 2008 to December 2009
This project involves the following partners: the INRIA project teams ODYSSEE, ALCHEMY and CORTEX, the Institut de Neurosciences Cognitives de la Méditerranée (INCM-Dyva), and the Laboratoire de Mathématiques Jean-Alexandre Dieudonné (Nice University). It is jointly funded by an ARC INRIA and the Doeblin Foundation. Neuronal information processing is related to the bio-electrical activity of the brain. Current neuroimaging techniques allow this bio-electrical activity to be measured at different time and space scales, from single neurons to the brain as a whole (e.g. LFP, ECoG, EEG, MEG). But the analysis of data coming from these measurements requires the parallel development of suitable models. These models have to be, on the one hand, close enough to phenomenology, taking into account the various types of bio-electrical activity and the relations between their scales, in order to propose a coherent representation of information processing in the brain (from neurons to neuronal populations, cortical columns, brain areas, etc.). On the other hand, they must be well posed and analytically tractable. This requires a constant interaction between neurobiology, modelling and mathematics. In this spirit, the project aims to tackle the following questions, combining results from neuroscience, dynamical systems theory and statistical physics.
Statistical models of spike trains. The analysis of experimental data on spike trains, in vivo or in vitro, requires suitable statistical models. The models typically used (e.g. Poisson) are ad hoc and may not be adapted to all situations. Our goal is to propose a generic method to construct the probability distribution of spike trains, using an approach combining mathematical modelling and analysis with in vivo experiments and numerical simulations.
Mesoscopic models of cortical columns. Brain imaging techniques such as optical imaging require a model of cortical activity at a space scale of the order of 0.1–1 mm². The goal is, on the theoretical side, to propose a mesoscopic model of the biological signal measured in optical imaging, at the space scale of a cortical column, and to analyse this model using analytical methods and numerical simulations. This model will then be compared to the cortical activity of the visual system (areas V1-V2), measured by optical imaging.
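On the first of these two questions, a small numerical experiment shows why the Poisson assumption is not innocuous (a toy Python sketch, unrelated to the project's actual models): adding a refractory period to a spike generator pushes the Fano factor of the spike counts well below the Poisson value of 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_counts(rate, T, trials):
    """Spike counts of a homogeneous Poisson process over [0, T]."""
    return rng.poisson(rate * T, size=trials)

def refractory_counts(rate, refrac, T, trials):
    """Spike counts when each inter-spike interval is an exponential
    delay plus an absolute refractory period."""
    counts = np.empty(trials, dtype=int)
    for i in range(trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate) + refrac
            if t > T:
                break
            n += 1
        counts[i] = n
    return counts

fano = lambda c: c.var() / c.mean()   # variance-to-mean ratio of counts
print(fano(poisson_counts(20.0, 1.0, 5000)))           # close to 1
print(fano(refractory_counts(20.0, 0.02, 1.0, 5000)))  # clearly below 1
```

A Poisson model fitted to the second train would thus get the count variability wrong, which is one motivation for constructing spike-train distributions from the data rather than assuming them.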
Website:
http://
Duration: 1 December 2006 to 30 November 2009
This project combines different areas of expertise, including mathematics, computer science, computational neuroscience and electrophysiology (in vitro and in vivo), to yield accurate and reliable methods for properly characterizing high-conductance states in neurons. The partners in this project are Odyssée and UNIC (CNRS, Gif-sur-Yvette, France). We plan to address several shortcomings of present recording techniques, namely (1) the impossibility of performing reliable high-resolution dynamic-clamp with sharp electrodes, the intracellular technique most widely used in vivo; (2) the unreliability and low time resolution of single-electrode voltage-clamp recordings in vivo; (3) the impossibility of extracting single-trial conductances from Vm activity in vivo. We propose to address these shortcomings with the following goals:
Obtain high-resolution recordings applicable to any type of electrode (sharp and patch), any type of protocol (current-clamp, voltage-clamp, dynamic-clamp) and different preparations (in vivo, in vitro, dendritic patch recordings).
Obtain methods to reliably extract single-trial conductances from Vm activity, as well as to “probe” the intrinsic conductances in cortical neurons. These methods will be applied to intracellular recordings during visual responses in cat V1 area in vivo.
Obtain methods to extract correlations from Vm activity and apply these methods to intracellular recordings in vivo to measure changes in correlation in afferent activity.
Obtain methods to estimate spike-triggered averages from Vm activity and obtain estimates of the optimal patterns of conductances that trigger spikes in vivo. These results will be integrated into computational models to test mechanisms for selectivity.
In all of these methods, we take advantage of the real-time feedback between a computer and the recorded neuron. This real-time feedback will be used (a) to design a new type of recording paradigm, which we call Active Electrode Compensation (AEC), consisting of a real-time, computer-controlled compensation of the electrode artefacts and biases that currently limit recording precision; (b) to use the AEC method to improve current-clamp, voltage-clamp and dynamic-clamp recordings of cortical neurons; and (c) to use this method as an essential tool for designing methods that estimate conductances and the statistical characteristics of network activity from intracellular recordings.
Thus, we expect this project to provide three main contributions. (1) It will provide technical advances in the precision and resolution of several currently used recording techniques, such as dynamic-clamp and voltage-clamp, which are currently limited. We aim at obtaining high-resolution (≥ 20 kHz), reliable measurement or injection of conductances. This advance should benefit both in vivo and in vitro electrophysiologists. (2) It will enable us to perform high-resolution conductance measurements in high-conductance states in vivo and in vitro and thus to better understand this type of network activity. (3) It will enable us to better understand the spike selectivity of cortical neurons, by directly measuring the single-trial conductances underlying visual responses, as well as the conductance time courses linked to the genesis of spikes. These measurements will be integrated directly into computational models. The mechanisms of spike selectivity in cortical neurons are still a subject of intense debate, and we expect to provide crucial measurements here, which we hope will help us better understand input selectivity in visual cortex.
SEARISE is a three-year project that started in March 2008. It involves the following academic partners: Fraunhofer-Gesellschaft (Germany), the University of Genoa (Italy), Ulm University (Germany) and the University of Bangor (Wales). Two industrial partners are also involved: TrackMen Ltd. and LTU Arena.
The SEARISE project develops a trinocular active cognitive vision system, the Smart-Eyes, for the detection, tracking and categorization of salient events and behaviours. Unlike other approaches to video surveillance, the system will have a human-like capability to learn continuously from the visual input, self-adjust to an ever-changing visual environment, fixate salient events and follow their motion, and categorize salient events depending on the context. Inspired by the human visual system, a cyclopean camera will perform wide-range monitoring of the visual field while active binocular stereo cameras fixate and track salient objects, mimicking a focus of attention that switches between different interesting locations.
The core of this artificial cognitive visual system will be a dynamic hierarchical neural architecture – a computational model of visual processing in the brain. Information processing in Smart-Eyes will be highly efficient due to a multi-scale design: Controlled by the cortically plausible neural model, the active cameras will provide a multi-scale video record of salient events. The processing will self-organize to adapt to scale variations and to assign the majority of computational resources to the informative parts of the scene.
The Smart-Eyes system will be tested in real-life scenarios featuring the activity of people at different scales. In a long-range scenario, the system will monitor the crowd behaviour of sports fans in a football arena. In a short-range scenario, it will monitor the behaviour of small groups of people and single individuals. The system's capability for self-adaptation will be specifically demonstrated and quantified against systems with a 'classical' architecture that are trained once and then used on a set of test scenes.
FACETS is an integrated project within the biologically inspired information systems branch of IST-FET. The FACETS project aims to address, with a concerted action of neuroscientists, computer scientists, engineers and physicists, the unsolved question of how the brain computes. It combines a substantial fraction of the European groups working in the field into a consortium of 13 groups from Austria, France, Germany, Hungary, Sweden, Switzerland and the UK. About 80 scientists will join their efforts over a period of 4 years, starting in September 2005. A project of this dimension has rarely been carried out in the context of brain-science related work in Europe, in particular with such a strong interdisciplinary component.
Olivier Faugeras responded to the 2008 ERC call "IDEAS". His project, NerVi, submitted to the "Mathematics and Interfaces" panel, was accepted and obtained five years of funding for a total amount of 1.7 million euros.
The project is to develop a formal model of information representation and processing in the part of the neocortex that is mostly concerned with visual information. This model will open new horizons in a well-principled way in the fields of artificial and biological vision as well as in computational neuroscience. Specifically, the goal is to develop a universally accepted formal framework for describing complex, distributed and hierarchical processes capable of seamlessly processing a continuous flow of images. This framework notably features computational units operating at several spatiotemporal scales on stochastic data arising from natural images. Mean-field theory and stochastic calculus are used to harness the fundamental stochastic nature of the data, and functional analysis and bifurcation theory to map the complexity of the behaviours of these assemblies of units. In the absence of such foundations, the development of an understanding of visual information processing in man and machines could be greatly hindered. Although the proposal addresses fundamental problems, its goal is to serve as the basis for ground-breaking future computational developments for managing visual data and as a theoretical framework for a scientific understanding of biological vision.
Bruno Cessac was a member of the Conseil National des Universités (CNU), section 29 (elementary constituents), from 2003 to 2007. He is a reviewer for the CONICYT program and for the journals Physica D, Nonlinearity, Chaos, the Journal of Statistical Physics, IEEE Transactions on Neural Networks and the Journal of Mathematical Biology. He was a lecturer at the École de Physique Théorique des Houches in September 2009, on the topic "Dynamical systems and statistical physics for the study of the nervous system".
Pascal Chossat is the director of the Centre International de Rencontres Mathématiques (CIRM) in Marseille, an international conference centre in mathematics jointly operated by the CNRS and the French Mathematical Society. He is deputy scientific director of INSMI, the newly created mathematics institute of the CNRS, in charge of the international relations of the CNRS in this field. He is the coordinator of a geographic ERA-Net (an EC program) named New Indigo for the development of scientific networks between European member states and India.
Olivier Faugeras is a member of the French Academy of Sciences and the French Academy of Technology. He was on the administration boards of the Agence Nationale de la Recherche (ANR) and the Fondation d'Entreprise EADS until October 2008. He is on the editorial board of the International Journal of Computer Vision (IJCV). He is a member of the Institut Thématique Multi-organismes Neurosciences, Sciences cognitives, Neurologie, Psychiatrie. He was on the review committee for the 2009 round of funding applications for Bernstein Centers for Computational Neuroscience by the German Federal Ministry of Education and Research.
Pierre Kornprobst is a member of the comité de suivi doctoral (CSD) in Sophia Antipolis. He organized a workshop on computational vision (together with Pascal Mamassian) at the Neurocomp 2008 conference. He served as a referee for the ICPR 2008 conference.
One PhD thesis was defended at NeuroMathComp in 2009: that of Maria-José Escobar .
Bruno Cessac is a Maître de conférences at Nice University. He teaches thermodynamics (L2 level), quantum physics (L2 level), statistical physics (M1 level), neural network dynamics (M2 level, master of physics and master of computational biology), probability theory (L3 level) and C programming (L3 level) to physics students.
Olivier Faugeras teaches the course "Mathematical methods for neuroscience" at the ENS Paris, in the Master MVA and the ENS Math/Info section (24 h). Grégory Faye, Mathieu Galtier and Geoffroy Hermann lead the exercise sections for this course at the ENS (16 h). He also teaches neural network dynamics in the new Master of Science program in Computational Biology at Nice Sophia Antipolis University (9 h).
Pierre Kornprobst is the coordinator of a new Master of Science program in Computational Biology at Nice Sophia Antipolis University. The scientific goal of this program is to study the human being from different perspectives (understanding and modeling functional aspects, or interpreting biomedical signals from various devices) and at different scales (from molecules to organs and the whole organism).
Sandrine Chemla is a teaching assistant for digital electronics courses (combinational and sequential logic) for first- and second-year students.
Grégory Faye is a teaching assistant for analysis courses (LM110, Functions) for first-year students in the mathematics department of UPMC (Université Pierre et Marie Curie, Paris), and for mathematical neuroscience courses (Mathematical methods for neurosciences) for fifth-year students at the ENS Ulm (Paris).
Emilien Tlapale teaches "Programming in C++" at the EPU (Université de Nice-Sophia Antipolis), in the electronics department.
Olivier Faugeras gave several invited lectures in 2009. In March he gave a lecture entitled "Connecting scales: Mathematics and Neuroscience" in the Ian P. Howard Lectures Series in Vision Science at the Centre for Vision Research at York University (Canada). In May he gave an invited lecture entitled "Mean-field description of populations of neurons" at the Workshop on Deterministic and Stochastic Modeling in Computational Neuroscience and Other Biological Topics in Barcelona (Spain). In September he gave an invited lecture entitled "Revisiting the geometry of low-level vision from the point of view of biological and machine vision" at the Colloquium on Fundamental Advances in Computer Vision, held in Kyoto (Japan) in celebration of Katsu Ikeuchi's 60th birthday. In October he gave an invited lecture entitled "Recent advances in mesoscopic and macroscopic modeling of cortical areas" at the Laboratory for Mathematical Neuroscience of the RIKEN Brain Science Institute in Tokyo (Japan). In December he gave an invited lecture entitled "Mathematics of the visual perception of contours and textures" at the MATHS A VENIR 2009 conference in Paris, and another entitled "Revisiting the geometry of low-level vision from the point of view of biological and machine vision" at the Humanoids09 conference at the Collège de France, Paris.
Together with Quang-Tuan Luong and Steve Maybank, he received at ECCV 2008 the Koenderink Prize for the most influential papers published at ECCV from 1990 to 1998. Their paper had been presented at ECCV 1992.