MathNeuro focuses on the applications of multi-scale dynamics to neuroscience. This involves the modelling and analysis of systems with multiple time and space scales, as well as stochastic effects. We consider single-cell models, microcircuits and large networks. In terms of neuroscience, we are mainly interested in questions related to synaptic plasticity and neuronal excitability, in particular in the context of pathological states such as epileptic seizures and neurodegenerative diseases such as Alzheimer's disease.

Our work is quite mathematical but we make heavy use of computers for numerical experiments and simulations. We have close ties with several top groups in biological neuroscience. We are pursuing the idea that the "unreasonable effectiveness of mathematics" can be brought, as it has been in physics, to bear on neuroscience.

Modeling such assemblies of neurons and simulating their behavior involves combining the most recent results in neurophysiology with advanced mathematical methods such as dynamical systems theory, bifurcation theory, probability theory, stochastic calculus, theoretical physics and statistics, as well as the use of simulation tools.

We conduct research in the following main areas:

The study of neural networks is certainly motivated by the long-term goal of understanding how the brain works. But, beyond the comprehension of the brain, or even of simpler neural systems in less evolved animals, there is also the desire to exhibit general mechanisms or principles at work in the nervous system. One possible strategy is to propose mathematical models of neural activity, at different space and time scales, depending on the type of phenomena under consideration. However, beyond the mere proposal of new models, which can rapidly result in a plethora, there is also a need to understand some fundamental keys ruling the behaviour of neural networks, and, from this, to extract new ideas that can be tested in real experiments. Therefore, a thorough analysis of these models is needed. An efficient approach, developed in our team, consists of analysing neural networks as dynamical systems. This allows us to address several issues. A first, natural issue is to ask about the (generic) dynamics exhibited by the system when control parameters vary. This naturally leads to analysing the bifurcations occurring in the network and determining which phenomenological parameters control these bifurcations. Another issue concerns the interplay between the neuron dynamics and the synaptic network structure.
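
The dynamical-systems viewpoint can be illustrated on a deliberately minimal example. The sketch below (illustrative only, not the team's code; the one-dimensional rate model and all parameter values are chosen purely for exposition) integrates a firing-rate equation and tracks its steady state as a coupling parameter crosses a pitchfork bifurcation at w = 1:

```python
import math

def steady_state(w, r0=0.5, dt=0.1, steps=5000):
    """Integrate dr/dt = -r + tanh(w * r) with forward Euler until it settles."""
    r = r0
    for _ in range(steps):
        r += dt * (-r + math.tanh(w * r))
    return r

# Scan the coupling strength w; a pitchfork bifurcation occurs at w = 1:
# below it only the quiescent state r = 0 is stable, above it a pair of
# nontrivial active states branches off.
branch = {w: steady_state(w) for w in (0.5, 0.9, 1.5, 2.0)}
```

Scanning a control parameter and recording the attractor reached, as done here by brute force, is the simplest numerical counterpart of the bifurcation analysis described above.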

Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (microscopic scale), but instead measure mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50

Our group is developing mathematical and numerical methods allowing on one hand to produce dynamic mean-field equations from the physiological characteristics of neural structure (neuron types, synapse types and anatomical connectivity between neuron populations), and on the other hand to simulate these equations; see Figure . These methods use tools from advanced probability theory such as the theory of Large Deviations and the study of interacting diffusions.

Neural fields are a phenomenological way of describing the activity of populations of neurons by delayed integro-differential equations. This continuous approximation turns out to be very useful to model large brain areas such as those involved in visual perception. The mathematical properties of these equations and their solutions are still imperfectly known, in particular in the presence of delays, different time scales and noise.

Our group is developing mathematical and numerical methods for analysing these equations. These methods are based upon techniques from functional analysis, bifurcation theory, equivariant bifurcation analysis, delay equations, and stochastic partial differential equations. We have been able to characterize the solutions of these neural field equations and their bifurcations, and to apply and expand the theory to account for such perceptual phenomena as edge, texture, and motion perception. We have also developed a theory of the delayed neural field equations, in particular in the case of constant delays and propagation delays that must be taken into account when attempting to model large cortical areas. This theory is based on center manifold and normal form ideas.
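
A minimal, delay-free and noise-free caricature of such an equation is the Amari-type rate model on a ring, sketched below with a one-mode cosine kernel (all values illustrative; delays, noise and realistic kernels, which are central to the work described above, are omitted). The uniform state is unstable and a stationary bump of activity forms:

```python
import math, random

# Discretised Amari-type neural field on a ring: da/dt = -a + W * tanh(a).
# The cosine kernel (local excitation, lateral inhibition) destabilises the
# homogeneous state, and a stationary bump of activity emerges.
N, dt, steps = 48, 0.05, 1500
xs = [2 * math.pi * i / N for i in range(N)]
W = [[3.0 * math.cos(xs[i] - xs[j]) / N for j in range(N)] for i in range(N)]

random.seed(0)
a = [0.01 * (random.random() - 0.5) for _ in range(N)]   # small random initial state

for _ in range(steps):
    inp = [sum(W[i][j] * math.tanh(a[j]) for j in range(N)) for i in range(N)]
    a = [a[i] + dt * (-a[i] + inp[i]) for i in range(N)]

bump_amplitude = max(a) - min(a)   # strictly positive once the bump has formed
```

With this kernel the linearised growth rate of the first Fourier mode is 3/2 - 1 = 1/2 > 0, so the bump is guaranteed to form; its orientation on the ring is selected by the random initial condition.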

Neuronal rhythms typically display many different timescales, so it is important to incorporate this slow-fast aspect in models. We are interested in this modeling paradigm, where slow-fast point models, using Ordinary Differential Equations (ODEs), are investigated in terms of their bifurcation structure and the patterns of oscillatory solutions that they can produce. To gain insight into the dynamics of such systems, we use a mix of theoretical techniques, such as geometric desingularisation and centre manifold reduction, and numerical methods such as pseudo-arclength continuation. We are interested in families of complex oscillations generated by both mathematical and biophysical models of neurons, in particular so-called mixed-mode oscillations (MMOs), which represent an alternation between subthreshold and spiking behaviour, and bursting oscillations, which also correspond to experimentally observed behaviour; see Figure . We are working on extending these results to spatio-temporal neural models.
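
The simplest slow-fast oscillation of the kind alluded to here is the relaxation oscillation of the van der Pol system in Liénard form, a textbook model rather than one of the team's; the sketch below (illustrative parameters) integrates it with forward Euler and counts the alternation of slow drifts and fast jumps:

```python
# Van der Pol oscillator in Lienard form, a textbook slow-fast system:
#   eps * dx/dt = y + x - x**3 / 3   (fast "membrane-like" variable)
#         dy/dt = -x                 (slow "recovery-like" variable)
# For small eps the orbit drifts slowly along the cubic nullcline and jumps
# quickly between its outer branches: a relaxation oscillation.
eps, dt, steps = 0.05, 1e-3, 40000
x, y = 2.0, 0.0
sign_changes = 0
prev = x
for _ in range(steps):
    x += dt / eps * (y + x - x**3 / 3)
    y += dt * (-x)
    if prev * x < 0:          # each fast jump makes x cross zero
        sign_changes += 1
    prev = x
periods = sign_changes / 2    # two zero crossings of x per period
```

MMOs and bursting arise in the same spirit but need at least one more variable; this two-dimensional sketch only shows the basic slow-fast dissection idea.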

Excitability refers to the all-or-none property of neurons: the ability to respond nonlinearly to an input, with a dramatic change of response from “none” (no response except a small perturbation that decays back to equilibrium) to “all” (a large response with the generation of an action potential, or spike, before the neuron returns to equilibrium). The return to equilibrium may also be an oscillatory motion of small amplitude; in this case, one speaks of resonator neurons, as opposed to integrator neurons. The combination of a spike followed by subthreshold oscillations is then often referred to as mixed-mode oscillations (MMOs). Slow-fast ODE models of dimension at least three are well capable of reproducing such complex neural oscillations. Part of our research expertise is to analyse the possible transitions between different complex oscillatory patterns of this sort upon input change; in mathematical terms, this corresponds to understanding the bifurcation structure of the model. Furthermore, the shape of time series with a given oscillatory pattern can be analysed within the mathematical framework of dynamic bifurcations; see the section on slow-fast dynamics in neuronal models. The main example of abnormal neuronal excitability is hyperexcitability, and it is important to understand the biological factors leading to such excess of excitability and to identify (both in detailed biophysical models and reduced phenomenological ones) the mathematical structures underlying these anomalies. Hyperexcitability is one important trigger for pathological brain states related to various diseases such as chronic migraine, epilepsy or even Alzheimer's Disease. A central axis of research within our group is to revisit models of such pathological scenarios, using a combination of advanced mathematical tools and in partnership with biological labs.
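
The all-or-none response can be demonstrated on the classical FitzHugh-Nagumo model (a standard textbook system; the pulse amplitudes and durations below are illustrative): a small current pulse produces only a small decaying perturbation, while a slightly larger one elicits a full spike.

```python
def response(pulse):
    """Peak membrane excursion of a FitzHugh-Nagumo cell after a brief current pulse."""
    v, w, dt = -1.1994, -0.6243, 0.01   # start at the resting equilibrium
    peak = v
    for step in range(8000):
        I = pulse if step < 100 else 0.0      # square pulse of one time unit
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * 0.08 * (v + 0.7 - 0.8 * w)
        peak = max(peak, v)
    return peak

sub = response(0.1)    # subthreshold: small response, decays back to rest
supra = response(1.0)  # suprathreshold: full action potential (peak near v = 2)
```

The sharp separation between the two peak values, despite a modest change in input, is precisely the nonlinear threshold behaviour described above; in slow-fast terms the threshold is organised by canard-type structures.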

Neural networks show amazing abilities to evolve and adapt, and to store and process information. These capabilities are mainly conditioned by plasticity mechanisms, especially synaptic plasticity, which induces a mutual coupling between network structure and neuron dynamics. Synaptic plasticity occurs at many levels of organization and time scales in the nervous system. It is of course involved in memory and learning mechanisms, but it also alters the excitability of brain areas and regulates behavioral states (e.g., the transition between sleep and wakeful activity). Therefore, understanding the effects of synaptic plasticity on neuron dynamics is a crucial challenge.

Our group is developing mathematical and numerical methods to analyse this mutual interaction. In particular, we have shown that plasticity mechanisms, whether Hebbian-like or of STDP type, have strong effects on neuron dynamics, including the role of synaptic and propagation delays, the reduction of dynamical complexity, and changes in spike statistics.

The processes by which memories are formed and stored in the brain are multiple and not yet fully understood. The working hypothesis is that memory formation is related to the activation of certain groups of neurons in the brain. One important mechanism to store various memories is then to associate certain groups of memory items with one another, which corresponds to the joint activation of certain neurons within different subgroups of a given population. In this framework, plasticity is key to encoding the storage of chains of memory items. Yet, there is no general mathematical framework to model the mechanism(s) behind these associative memory processes. We aim to develop such a framework using our expertise in multi-scale modelling, by combining the concepts of heteroclinic dynamics, slow-fast dynamics and stochastic dynamics.

The general objective that we wish to pursue in this project is to investigate non-equilibrium phenomena pertinent to the storage and retrieval of sequences of learned items. In previous work by team members, it was shown that, with a suitable formulation, heteroclinic dynamics combined with slow-fast analysis in neural field systems can play an organizing role in such processes, making the model accessible to a thorough mathematical analysis. Multiple choices in cognitive processes require a certain flexibility in the neural network, which we have recently investigated in a submitted paper.
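
The role of heteroclinic dynamics in sequence retrieval can be caricatured by the classical May-Leonard system (a standard model of winnerless competition, not the team's neural field formulation; all parameters illustrative), in which three "memory populations" dominate one after the other along an attracting heteroclinic cycle:

```python
# May-Leonard competition between three "memory populations": with cyclic,
# asymmetric inhibition (0 < alpha < 1 < beta, alpha + beta > 2) the system
# has an attracting heteroclinic cycle, so the populations take the lead one
# after the other -- a toy picture of sequential activation of memory items.
alpha, beta = 0.5, 2.0
x = [0.6, 0.2, 0.1]
dt = 0.01
leaders = []                       # order in which the populations dominate
for _ in range(150000):
    g = [
        x[0] * (1 - x[0] - alpha * x[1] - beta * x[2]),
        x[1] * (1 - beta * x[0] - x[1] - alpha * x[2]),
        x[2] * (1 - alpha * x[0] - beta * x[1] - x[2]),
    ]
    x = [max(xi + dt * gi, 1e-12) for xi, gi in zip(x, g)]  # tiny floor, akin to noise
    lead = max(range(3), key=lambda i: x[i])
    if not leaders or leaders[-1] != lead:
        leaders.append(lead)
```

Near the heteroclinic cycle the dwell times near each saddle lengthen; small noise (mimicked here by the floor) regularises them, which is one reason stochastic dynamics enters the framework described above.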

Our goal is to contribute to identifying general processes by which cognitive functions can be organized in the brain.

The project underlying MathNeuro revolves around pillars of neuronal behaviour (excitability, plasticity, memory) in connection with the initiation and propagation of pathological brain states such as cortical spreading depression (linked to certain forms of migraine with aura), epileptic seizures and Alzheimer's Disease. Our work on memory processes can also potentially be applied to the study of mental disorders such as schizophrenia or obsessive-compulsive disorder.

The first three PhD students of the EPI MathNeuro, Louisiane Lemaire, Yuri Rodrigues and Halgurd Taher, all successfully defended their PhD theses in 2021. They have all found postdoctoral positions, at Humboldt University (Berlin, Germany), the University of Sussex (Brighton, UK) and Charité Medical University (Berlin, Germany), respectively.

Simona Olmi, who held a Starting Research Position (SRP) in MathNeuro from 2018 to 2021, has obtained a permanent position as researcher in Computational Neuroscience at the National Research Council (CNR) in Florence, Italy.

Dynamics underlying epileptic seizures span multiple scales in space and time; understanding seizure mechanisms therefore requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. To this end, mathematical models have been developed, ranging from single neurons to neural populations. In this study, we consider a neural mass model able to exactly reproduce the dynamics of heterogeneous spiking neural networks. We combine mathematical modeling with structural information from non-invasive brain imaging, thus building large-scale brain network models to explore emergent dynamics and test clinical hypotheses. We provide a comprehensive study of the effect of external drives on neuronal networks exhibiting multistability, in order to investigate the role played by the neuroanatomical connectivity matrices in shaping the emergent dynamics. In particular, we systematically investigate the conditions under which the network displays a transition from a low activity regime to a high activity state, which we identify with a seizure-like event. This approach allows us to study the biophysical parameters and variables leading to multiple recruitment events at the network level. We further exploit topological network measures in order to explain the differences and the analogies among the subjects and their brain regions in showing recruitment events at different parameter values. We demonstrate, using diffusion-weighted magnetic resonance imaging (dMRI) connectomes of 20 healthy subjects and 15 epileptic patients, that individual variations in structural connectivity, when linked with mathematical dynamic models, have the capacity to explain changes in the spatiotemporal organization of brain dynamics, as observed in network-based brain disorders.
In particular, for epileptic patients, by integrating the clinical hypotheses on the epileptogenic zone (EZ), i.e., the local network where highly synchronous seizures originate, we have identified the sequence of recruitment events and discussed their links with the topological properties of the specific connectomes. The predictions made on the basis of the implemented set of exact mean-field equations turn out to be in line with the clinical pre-surgical evaluation of recruited secondary networks.
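
The core transition studied here, from a low- to a high-activity state under external drive, can be caricatured by a single bistable rate unit (a drastic simplification of the neural mass network above; the sigmoid model and all parameter values are illustrative): a transient drive switches the unit to a persistent high-activity, seizure-like state.

```python
import math

def S(x):                        # sigmoidal population firing-rate function
    return 1.0 / (1.0 + math.exp(-x))

# Bistable neural-mass-like unit: r' = -r + S(w*r + I). A transient external
# drive recruits it from the low-activity branch to a persistent
# high-activity ("seizure-like") state that outlasts the drive.
w, I, dt = 8.0, -4.0, 0.01
r = 0.02                         # start on the low-activity branch
trace = []
for step in range(30000):
    drive = 3.0 if 5000 <= step < 7000 else 0.0
    r += dt * (-r + S(w * r + I + drive))
    trace.append(r)

before, after = trace[4999], trace[-1]   # low before the pulse, high after it
```

In the actual study, each brain region is such a multistable node and the drive comes from other regions through the subject-specific connectome, which is what makes the recruitment sequence patient-dependent.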

This work has been published in and is available as .

From the action potentials of neurons and cardiac cells to the amplification of calcium signals in oocytes, excitability is a hallmark of many biological signalling processes. In recent years, excitability in single cells has been related to multiple-timescale dynamics through canards, special solutions which determine the effective thresholds of the all-or-none responses. However, the emergence of excitability in large populations remains an open problem. Here, we show that the mechanisms of excitability in an infinite heterogeneous population of coupled quadratic integrate-and-fire (QIF) cells maintain echoes of the mechanism for the individual components. We exploit the Ott-Antonsen ansatz to derive low-dimensional dynamics for the coupled network and use it to describe the structure of canards under slow periodic forcing. We demonstrate that the thresholds for onset and offset of population firing can be found in the same way as those of the single cell. We combine theoretical and numerical analysis to develop a novel and comprehensive framework for excitability in large populations.
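
The low-dimensional dynamics obtained via the Ott-Antonsen ansatz for QIF networks takes the well-known two-variable firing-rate form of Montbrió, Pazó and Roxin; the sketch below integrates these equations in a monostable regime (the parameter values are illustrative and the forcing studied in the paper is omitted), showing the population rate settling to its stationary value:

```python
import math

# Exact mean-field ("next-generation neural mass") equations for a network of
# quadratic integrate-and-fire neurons with Lorentzian-distributed excitability
# (Montbrio-Pazo-Roxin form):
#   dr/dt = Delta / pi + 2 * r * v
#   dv/dt = v**2 + eta + J * r - (pi * r)**2
# where r is the population firing rate and v the mean membrane potential.
Delta, eta, J = 1.0, 1.0, 2.0     # illustrative, monostable regime
dt, steps = 1e-3, 30000
r, v = 0.3, -0.5
min_r = r
for _ in range(steps):
    dr = Delta / math.pi + 2 * r * v
    dv = v**2 + eta + J * r - (math.pi * r) ** 2
    r, v = r + dt * dr, v + dt * dv
    min_r = min(min_r, r)          # r stays positive: dr/dt = Delta/pi > 0 at r = 0
```

Adding the slow periodic forcing of the paper to the `v` equation would turn this planar system into the slowly forced setting in which the population-level canards are analysed.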

This work has been submitted for publication and is available as .

The counter-intuitive phenomenon of coherence resonance describes a non-monotonic behavior of the regularity of noise-induced oscillations in the excitable regime, leading to an optimal response in terms of regularity of the excited oscillations for an intermediate noise intensity. We study this phenomenon in populations of FitzHugh-Nagumo (FHN) neurons with different coupling architectures. For networks of FHN systems in the excitable regime, coherence resonance has previously been analyzed numerically. Here we focus on an analytical approach, studying the mean-field limits of the locally and globally coupled populations. The mean-field limit refers to the averaged behavior of a complex network as the number of elements goes to infinity. We derive a mean-field limit approximating the locally coupled FHN network with low noise intensities. Further, we apply the mean-field approach to the globally coupled FHN network. We compare the results of the mean-field and network frameworks for coherence resonance and find good agreement in the globally coupled case, where the correspondence between the two approaches is sufficiently good to capture the emergence of anticoherence resonance. Finally, we study the effects of the coupling strength and noise intensity on coherence resonance for both the network and the mean-field model.
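
The raw ingredient of coherence resonance, noise-induced firing of an excitable unit, is easy to reproduce for a single FHN cell (a network-free caricature of the setting above; noise intensity, model parameters and the Euler-Maruyama step are illustrative):

```python
import math, random

def spike_count(sigma, T=500.0, dt=0.01, seed=1):
    """Count noise-induced spikes of an excitable FitzHugh-Nagumo unit
    integrated with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    v, w = -1.1994, -0.6243          # resting equilibrium of the noiseless system
    spikes, armed = 0, True
    for _ in range(int(T / dt)):
        v += dt * (v - v**3 / 3 - w) + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        w += dt * 0.08 * (v + 0.7 - 0.8 * w)
        if armed and v > 1.0:        # upward threshold crossing = one spike
            spikes += 1
            armed = False
        elif v < -0.5:               # re-arm once back near rest
            armed = True
    return spikes

quiet, driven = spike_count(0.0), spike_count(0.5)   # no noise vs. moderate noise
```

Coherence resonance itself would be quantified by the non-monotonic dependence of the inter-spike-interval coefficient of variation on `sigma`; the sketch only exhibits the noise-induced oscillations underlying it.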

This work has been published in and is available as .

Sequential Monte Carlo methods have been a major breakthrough in the field of numerical signal processing for stochastic dynamical state-space systems with partial and noisy observations. However, these methods still present certain weaknesses. One of the most fundamental is the degeneracy of the filter due to the impoverishment of the particles: the prediction step allows the particles to explore the state space, and can lead to impoverishment if this exploration is poorly conducted or if it conflicts with the following observation, which is used in the evaluation of the likelihood of each particle. In this article, in order to improve this step within the framework of the classic bootstrap particle filter, we propose a simple approximation of the one-step fixed-lag smoother: at each time iteration, we perform additional simulations during the prediction step in order to improve the likelihood of the selected particles.
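
For reference, the classic bootstrap particle filter that this work improves upon can be sketched in a few lines on a linear-Gaussian toy model (the model, the parameter values and the multinomial resampling choice below are illustrative; the proposed fixed-lag smoothing correction is not implemented here):

```python
import math, random

rng = random.Random(0)

# Linear-Gaussian toy state-space model (easy to sanity-check):
#   x_t = 0.9 * x_{t-1} + N(0, 0.5^2)   (hidden state)
#   y_t = x_t + N(0, 1)                 (noisy observation)
T, N = 300, 500
xs, ys = [], []
x = 0.0
for _ in range(T):
    x = 0.9 * x + rng.gauss(0, 0.5)
    xs.append(x)
    ys.append(x + rng.gauss(0, 1.0))

# Bootstrap particle filter: predict with the model, weight by the
# observation likelihood, then resample (multinomial).
particles = [rng.gauss(0, 1) for _ in range(N)]
estimates = []
for y in ys:
    particles = [0.9 * p + rng.gauss(0, 0.5) for p in particles]   # predict
    weights = [math.exp(-0.5 * (y - p) ** 2) for p in particles]   # likelihood
    total = sum(weights)
    estimates.append(sum(wt * p for wt, p in zip(weights, particles)) / total)
    particles = rng.choices(particles, weights=weights, k=N)       # resample

rmse_filter = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, xs)) / T)
rmse_obs = math.sqrt(sum((y - t) ** 2 for y, t in zip(ys, xs)) / T)
```

The resampling step is exactly where impoverishment arises: when the predicted cloud conflicts with the observation, nearly all weight concentrates on a few particles, which motivates the extra prediction-step simulations proposed in the article.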

This work has been submitted for publication and is available as .

Discovering the rules of synaptic plasticity is an important step for understanding brain learning. Existing plasticity models are either 1) top-down and interpretable, but not flexible enough to account for experimental data, or 2) bottom-up and biologically realistic, but too intricate to interpret and hard to fit to data. We fill the gap between these approaches by uncovering a new plasticity rule based on a geometrical readout mechanism that flexibly maps synaptic enzyme dynamics to plasticity outcomes. We apply this readout to a multi-timescale model of hippocampal synaptic plasticity induction that includes electrical dynamics, calcium, CaMKII and calcineurin, and accurate representation of intrinsic noise sources. Using a single set of model parameters, we demonstrate the robustness of this plasticity rule by reproducing nine published ex vivo experiments covering various spike-timing and frequency-dependent plasticity induction protocols, animal ages, and experimental conditions. The model also predicts that in vivo-like spike timing irregularity strongly shapes plasticity outcome. This geometrical readout modelling approach can be readily applied to other excitatory or inhibitory synapses to discover their synaptic plasticity rules.

This work has been submitted for publication and is available as .

In the context of the Human Brain Project (HBP, see Section 5.1.1.1 below), we recruited Emre Baspinar in December 2018 for a two-year postdoc, after which he stayed for a few more months on an engineering contract in order to complete some papers. Within MathNeuro, Emre has worked on analysing slow-fast dynamical behaviours in the mean-field limit of neural networks.

In a first project, he has analysed the slow-fast structure in the mean-field limit of a network of FitzHugh-Nagumo neuron models; the mean-field limit was previously established in but its slow-fast aspect had not been analysed. In particular, he has proved a persistence result of Fenichel type for slow manifolds in this mean-field limit, thus extending previous work by Berglund et al. A manuscript is in preparation.

In a second project, he has studied a network of Wilson-Cowan systems whose mean-field limit is an ODE, analysing elliptic bursting dynamics in both the network and the limit: its slow-fast dissection, its singular limits and the role of canards. In passing, he has obtained a new characterisation of elliptic bursting via the construction of periodic limit sets using both the slow and the fast singular limits, and unravelled a new singular-limit scenario giving rise to elliptic bursting via a new type of torus canard orbits. A manuscript has been published in and is available as (see below).

We investigate large networks of Hopfield neurons under various assumptions about the underlying network graphs (completely connected, sparsely connected, small world), the synaptic coefficients (Gaussian or non-Gaussian distributed, independent, correlated), and the neuronal populations present in the network (excitatory, inhibitory, balanced). These assumptions generate a large variety of different behaviours that we are analysing mathematically and numerically. The mathematics includes the description of the thermodynamic (mean-field) limits of these networks, when they exist, the type of solutions of the limit equations, their bifurcations with respect to changes of the network parameters, and the fluctuations of the solutions of the network equations around the thermodynamic limit, in order to understand finite-size effects. Alongside these theoretical efforts, we are developing simulation tools in the Julia language, with an eye on parallel implementations on GPUs, to develop an intuition for the behaviours of these networks and guide the mathematical analysis.
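
The Julia tools themselves are not reproduced here; as a language-agnostic sketch of the basic object under study, the snippet below (illustrative sizes and a single stored pattern) implements classical Hebbian storage and retrieval in a small Hopfield network:

```python
import random

# Tiny Hopfield network: store one pattern with the Hebb rule, then
# recover it from a corrupted cue by asynchronous threshold updates.
random.seed(3)
N = 60
pattern = [random.choice((-1, 1)) for _ in range(N)]
# Hebbian couplings J_ij = x_i * x_j / N, zero diagonal
J = [[(pattern[i] * pattern[j] / N if i != j else 0.0) for j in range(N)]
     for i in range(N)]

state = pattern[:]                 # corrupt 20% of the entries
for i in random.sample(range(N), 12):
    state[i] = -state[i]
corrupted_overlap = sum(s * p for s, p in zip(state, pattern)) / N

for _ in range(5):                 # a few asynchronous sweeps
    for i in range(N):
        h = sum(J[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1

overlap = sum(s * p for s, p in zip(state, pattern)) / N   # 1.0 on perfect recall
```

The questions listed above (random or correlated couplings, sparse graphs, excitatory/inhibitory populations, thermodynamic limits) all start from variations of this elementary dynamics at large N.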

We consider Wilson-Cowan-type models for the mathematical description of orientation-dependent Poggendorff-like illusions. Our modelling improves two previously proposed cortical-inspired approaches, embedding the sub-Riemannian heat kernel into the neuronal interaction term, in agreement with the intrinsically anisotropic functional architecture of V1 based on both local and lateral connections. For the numerical realisation of both models, we consider standard gradient descent algorithms combined with Fourier-based approaches for the efficient computation of the sub-Laplacian evolution. Our numerical results show that the use of the sub-Riemannian kernel allows us to reproduce numerically visual misperceptions and inpainting-type biases in a stronger way in comparison with the previous approaches.

This work has been published in and is available as .

We revisit elliptic bursting dynamics from the viewpoint of torus canard solutions. We show that at the transition to and from elliptic burstings, classical or mixed-type torus canards can appear, the difference between the two being the fast-subsystem bifurcation that they approach: saddle-node of cycles for the former and subcritical Hopf for the latter. We first showcase such dynamics in a Wilson-Cowan-type elliptic bursting model, then we consider minimal models for elliptic bursters in view of finding transitions to and from bursting solutions via both kinds of torus canards. We first consider the canonical model proposed by Izhikevich (ref. [22] in the manuscript) and adapted to elliptic bursting by Ju, Neiman and Shilnikov (ref. [24] in the manuscript), and we show that it does not produce mixed-type torus canards due to a nongeneric transition at one end of the bursting regime. We therefore introduce a perturbative term in the slow equation, which extends this canonical form to a new one that we call Leidenator and which supports the right transitions to and from elliptic bursting via classical and mixed-type torus canards, respectively. Throughout the study, we use singular flows (ε = 0) to predict the full system's dynamics (ε > 0 small enough). We consider three singular flows: slow, fast and average slow, so as to appropriately construct singular orbits corresponding to all relevant dynamics pertaining to elliptic bursting and torus canards.

This work has been published in and is available as .

We study a class of planar integrate-and-fire models called adaptive integrate-and-fire (AIF) models, which possess an adaptation variable on top of the membrane potential, and whose subthreshold dynamics is piecewise linear. These AIF models therefore have two reset conditions, which enable bursting dynamics to emerge for suitable parameter values. Such models can be thought of as hybrid dynamical systems. We consider a particular slow dynamics within AIF models and prove the existence of bursting cycles with N resets, for any integer N. Furthermore, we study the transition between N- and (N + 1)-reset cycles upon vanishingly small parameter variations and prove (for N = 2) that such transitions are organised by canard cycles. Finally, using numerical continuation we compute branches of bursting cycles, including canard-explosive branches, in these AIF models, by suitably recasting the periodic problem as a two-point boundary-value problem.

This work has been published in and is available as .

We report a detailed analysis of the emergence of bursting in a recently developed neural mass model that takes short-term synaptic plasticity into account. The model used here is particularly important, as it represents an exact mean-field limit of synaptically coupled quadratic integrate-and-fire neurons, a canonical model for type I excitability. In the absence of synaptic dynamics, a periodic external current with a slow frequency can lead to burst-like dynamics. The firing patterns can be understood using techniques of singular perturbation theory, specifically slow-fast dissection. In the model with synaptic dynamics, the separation of timescales leads to a variety of slow-fast phenomena and their role for bursting is rendered inordinately more intricate. Canards are one of the main slow-fast elements on the route to bursting. They describe trajectories evolving near otherwise repelling locally invariant sets of the system and are found in the transition region from subthreshold dynamics to bursting. For values of the timescale separation close to the singular limit, we report peculiar jump-on canards, which block a continuous transition to bursting. In the biologically more plausible regime of larger timescale separation, this transition becomes continuous and bursts emerge via consecutive spike-adding transitions. The onset of bursting is of a complex nature and involves mixed-type-like torus canards, which form the very first spikes of the burst and revolve near fast-subsystem repelling limit cycles. We provide numerical evidence for the same mechanisms being responsible for the emergence of bursting in the quadratic integrate-and-fire network with plastic synapses. The main conclusions apply to the network, owing to the exactness of the mean-field limit.

This work has been submitted for publication and is available as .

Spreading depolarizations (SDs) are involved in migraine, epilepsy, stroke, traumatic brain injury, and subarachnoid hemorrhage. However, the cellular origin and specific differential mechanisms are not clear. Increased glutamatergic activity is thought to be the key factor for generating cortical spreading depression (CSD), a pathological mechanism of migraine. Here, we show that acute pharmacological activation of NaV1.1 (the main Na+ channel of interneurons) or optogenetically induced hyperactivity of GABAergic interneurons is sufficient to ignite CSD in the neocortex by spiking-generated extracellular K+ build-up. Neither GABAergic nor glutamatergic synaptic transmission was required for CSD initiation. CSD was not generated in other brain areas, suggesting that this is a neocortex-specific mechanism of CSD initiation. Gain-of-function mutations of NaV1.1 (SCN1A) cause familial hemiplegic migraine type-3 (FHM3), a subtype of migraine with aura, of which CSD is the neurophysiological correlate. Our results provide the mechanism linking NaV1.1 gain of function to CSD generation in FHM3. Thus, we reveal the key role of hyperactivity of GABAergic interneurons in a mechanism of CSD initiation, which is relevant as a pathological mechanism of NaV1.1 FHM3 mutations, and possibly also for other types of migraine and diseases in which SDs are involved.

This work has been published in and is available as .

The extension of this work is the topic of the PhD of Louisiane Lemaire, who started in October 2018 and successfully defended her thesis in December 2021. A first part of Louisiane's PhD was to improve and extend the model published in in a number of ways: replace the GABAergic neuron model used there, namely the Wang-Buzsáki model, by a more recent fast-spiking cortical interneuron model due to Golomb and collaborators; implement the effect of the Hm1a toxin used by M. Mantegazza to mimic the genetic mutation of sodium channels responsible for the hyperactivity of the GABAergic neurons; and take into account ionic concentration dynamics (relaxing the hypothesis of constant reversal potentials) for the GABAergic neuron as well, whereas previously this was done only for the pyramidal neuron. Furthermore, another mutation of this sodium channel leads to hyperactivity of the pyramidal neurons in a way that is akin to epileptiform activity. The model by Louisiane Lemaire has been extended in order to account for this pathological scenario as well. This required a great deal of modelling and calibration, and the simulation results are closer to the actual experiments by Mantegazza than in our previous study. An article has been published (see below).

Loss-of-function mutations of SCN1A, the gene coding for the voltage-gated sodium channel NaV1.1, cause different types of epilepsy, whereas gain-of-function mutations cause sporadic and familial hemiplegic migraine type 3 (FHM-3). However, it is not yet clear how these opposite effects can induce the paroxysmal pathological activities involving neuronal network hyperexcitability that are specific to epilepsy (seizures) or migraine (cortical spreading depolarization, CSD). To better understand the differential mechanisms leading to the initiation of these pathological activities, we used a two-neuron conductance-based model of interconnected GABAergic and pyramidal glutamatergic neurons, in which we incorporated ionic concentration dynamics in both neurons. We modeled FHM-3 mutations by increasing the persistent sodium current in the interneuron and epileptogenic mutations by decreasing the sodium conductance in the interneuron. We could therefore study both FHM-3 and epileptogenic mutations within the same framework, modifying only two parameters. In our model, the key effect of gain-of-function FHM-3 mutations is the modification of ion fluxes at each action potential (in particular the larger activation of voltage-gated potassium channels induced by the NaV1.1 gain of function), and the resulting CSD-triggering extracellular potassium accumulation, which is not caused only by modifications of the firing frequency. Loss-of-function epileptogenic mutations, on the other hand, increase the GABAergic neuron's susceptibility to depolarization block, without major modifications of the firing frequency before it. Our modeling results connect qualitatively to experimental data: potassium accumulation in the case of FHM-3 mutations and facilitated depolarization block of the GABAergic neuron in the case of epileptogenic mutations. Both these effects can lead to pyramidal neuron hyperexcitability, inducing in the migraine condition depolarization block of both the GABAergic and the pyramidal neuron.
Overall, our findings suggest different mechanisms of network hyperexcitability for migraine and epileptogenic NaV1.1 mutations, implying that modifications of the firing frequency may not be the only relevant pathological mechanism.

This work has been published in and is available as .

The project is part of a long-standing collaboration between Mathieu Desroches (MathNeuro Project-Team, Inria) and Serafim Rodrigues (MCEN research group, Basque Center for Applied Mathematics, Bilbao, Spain) on neurotransmission and its possible disruptions. Mattia Sensi was recruited as a postdoc in MathNeuro at the end of 2021 to work on this project. The work will be organised around a modeling project on multi-timescale aspects of synaptic transmission, which can be synchronous, asynchronous or spontaneous. Building on our already-existing modeling platform, we will focus on two main extensions: the endocytotic part of the neurotransmitter cycle, and the integration of glial activity into the model. This project is also related to the NeuroTransSF associated team between MathNeuro and MCEN, and the extended model will be used (within a time horizon that goes beyond the postdoc of Mattia Sensi) in the context of experiments performed by Afia Ali on early effects of Alzheimer's Disease.

An important function of the brain is to adapt behavior by selecting between different predictions of sequences of stimuli likely to occur in the environment. The present research studied the branching behavior of a computational network model of populations of excitatory and inhibitory neurons, both analytically and through simulations. Results show how synaptic efficacy, retroactive inhibition and short-term synaptic depression determine the dynamics of choices between different predictions of sequences having different probabilities. Further results show that changes in the probability of the different predictions depend on variations of neuronal gain. Such variations allow the network to optimize the probability of its predictions to changing probabilities of the sequences without changing synaptic efficacy.

This work has been submitted for publication and is available as .

We study a simplified model of the representation of colors in the primate primary cortical visual area V1. The model is described by an initial value problem related to a Hammerstein equation. The solutions to this problem represent the variation of the activity of populations of neurons in V1 as a function of space and color. The two space variables describe the spatial extent of the cortex, while the two color variables describe the hue and the saturation represented at every location in the cortex. We prove the well-posedness of the initial value problem. We focus on its stationary (i.e., time-independent) solutions that are periodic in space. We show that the model equation is equivariant with respect to the direct product G of the group of Euclidean transformations of the planar lattice determined by the spatial periodicity and the group of color transformations, and we study the equivariant bifurcations of its stationary solutions when some parameters in the model vary. Their variations may be caused by the consumption of drugs, and the bifurcated solutions may represent visual hallucinations in space and color. Some of the bifurcated solutions can be determined by applying the Equivariant Branching Lemma (EBL), by determining the axial subgroups of G; these define bifurcated solutions which are invariant under the action of the corresponding axial subgroup. We compute these solutions analytically and illustrate them as color images. Using advanced methods of numerical bifurcation analysis, we then explore the persistence and stability of these solutions when varying some parameters in the model. We conjecture that we can rely on the EBL to predict the existence of patterns that survive in large parameter domains, but not to predict their stability. Along the way, we discover the existence of spatially localized stable patterns through the phenomenon of "snaking".

This work has been accepted for publication in the "Comptes Rendus Mathématique" and is available as .