Section: Software
Positioning
Our previous work in the domain of well-defined distributed asynchronous adaptive computations [32], [29], [34] has already led us to define a library (DANA [3]), closely related to both artificial neural networks and cellular automata. From a conceptual point of view, the computational paradigm supporting the library is grounded in the notion of a unit, which is essentially a (vector of) potential that varies over time under the influence of other units and of learning. These units can be organized into layers, maps and networks.
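For illustration only (this is not the DANA API), the following Python sketch captures that paradigm under simplifying assumptions: a 2-D map of scalar potentials relaxing under lateral and external influences, with a toy Hebbian-like learning rule. The class Map, its step method and the parameters tau and lr are our own illustrative choices.

import numpy as np

class Map:
    """A 2-D sheet of units; each unit holds a scalar potential."""
    def __init__(self, shape, seed=0):
        self.rng = np.random.default_rng(seed)
        self.V = np.zeros(shape)                        # unit potentials
        n = self.V.size
        self.W = self.rng.normal(0.0, 0.1, (n, n))      # lateral weights

    def step(self, external, dt=0.1, tau=1.0, lr=1e-4):
        # relax each potential toward the lateral plus external influence
        lateral = (self.W @ self.V.ravel()).reshape(self.V.shape)
        self.V += dt * (-self.V + lateral + external) / tau
        # toy learning rule: strengthen weights between co-active units
        v = self.V.ravel()
        self.W += lr * np.outer(v, v)
        return self.V

rng = np.random.default_rng(1)
m = Map((10, 10))
for _ in range(50):
    m.step(external=rng.uniform(0.0, 1.0, (10, 10)))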
More generally, we gather in the EnaS middleware (Event Neural Assembly Simulation; cf. http://gforge.inria.fr/projects/enas ) our numerical and theoretical developments, which allow us to simulate and analyze so-called "event neural assemblies". EnaS has been designed as a plug-in for our own simulators (e.g. DANA or MVASpike) as well as for other existing simulators (via the NeuralEnsemble meta-simulation platform), and as a set of additional modules for computations with neural unit assemblies on standard platforms (e.g. Python or Scilab).
We will also have to interact with the High Performance Computing (HPC) community, since running large-scale simulations at this mesoscopic level is an important challenge for our systemic view of computational neuroscience. Our approach implies emulating the dynamics of thousands, or even millions, of integrated computational units, each of them playing the role of a whole elementary neural circuit (e.g. the cortical microcolumn). Mesoscopic models are considered in such an integrative approach in order to exhibit global dynamical effects that would hardly be reachable by compartment models involving membrane equations, or even by spiking neuron networks.
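As a sketch of what such a mesoscopic, population-level description amounts to (the rate equation, coupling term and parameters below are illustrative assumptions, not a model we commit to), one can integrate a simple firing-rate equation for a large number of units at once, each unit standing for a whole microcolumn rather than an individual neuron.

import numpy as np

def simulate_rates(n_units=100_000, steps=200, dt=0.5, tau=10.0, seed=0):
    """Integrate a rate equation dr/dt = (-r + S(x)) / tau for n_units
    mesoscopic units at once; each rate stands for the mean activity of
    one elementary circuit (e.g. a cortical microcolumn)."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n_units)
    drive = rng.normal(0.0, 1.0, n_units)      # fixed external drive per unit
    for _ in range(steps):
        x = drive + 0.1 * r.mean()             # crude mean-field coupling
        r += dt * (-r + 1.0 / (1.0 + np.exp(-x))) / tau
    return r

rates = simulate_rates()
print(rates.mean(), rates.std())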
The vast majority of high-performance computing software for computational neuroscience addresses sub-neural or neural models [19], but coarser-grained population models also call for large-scale simulations with fully distributed computations, without global memory or time reference, as specified in § 3.2.
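To make the "fully distributed, without global memory or time reference" requirement concrete, here is a minimal asynchronous-update sketch (our own toy example, not EnaS or DANA code): units are evaluated in random order, each update reads only values available locally, and each unit keeps its own evaluation count rather than relying on a shared clock.

import numpy as np

def async_relax(n=64, sweeps=200, seed=0):
    """Evaluate units asynchronously: each update uses only the chosen
    unit's local state and its neighbours' current values; there is no
    global clock, only a per-unit count of its own evaluations."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))   # random couplings
    V = rng.uniform(-1.0, 1.0, n)                   # local potentials
    ticks = np.zeros(n, dtype=int)                  # local "clocks"
    for _ in range(sweeps * n):
        i = rng.integers(n)                         # pick one unit at random
        V[i] = np.tanh(W[i] @ V)                    # purely local update
        ticks[i] += 1
    return V, ticks

V, ticks = async_relax()
print("updates per unit: min", ticks.min(), "max", ticks.max())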