The expanded name of the Beagle research group is “Artificial Evolution and Computational Biology”. Our aim is to position our research at the interface between biology and computer science and to contribute new results in biology by modeling biological systems. In other words, we make artifacts – from the Latin artis factum (an entity made by human art rather than by Nature) – and we explore them in order to understand Nature.
The team has been an INRIA Project-Team since January 2014. It gathers researchers from INRIA and INSA who are members of three different labs, the LIRIS 1, the LBBE 2, and CARMEN 3. It is led by Prof. Guillaume Beslon (INSA-Lyon, LIRIS, Computer Science Dept.).
Our research program requires the team members to have skills in computer science but also in life sciences: they must have or develop a strong knowledge of biosciences to interact efficiently with biologists or, ideally, to directly interpret the results given by the models they develop. A direct consequence of this claim is that it is mandatory to restrict our domain of expertise in life sciences. This is why we focus on a specific scale, central in biology: the cellular scale. Indeed, we restrict our investigations to the cell, viewed as a dynamical system made of molecular elements. This specific scale is rich in open questions that deserve modeling and simulation approaches. We also focus on two different kinds of constraints that structure the cellular level: biophysical constraints and historical constraints. The cell is a system composed of molecules that physically interact, and the spatio-temporal nature of these interactions is likely to strongly influence its dynamics. But the cell is also the result of an evolutionary process that imposes its own limits on what can evolve (or is most likely to evolve) and what cannot (or is less likely to evolve). A better understanding of what kind of systems evolution is most likely to lead to in a given context could give us important clues for the analysis of extant biological systems.
To study these two kinds of constraints we mainly rely on two specific tools: computational cellular biochemistry and evolution models. We use these tools to develop our “artifacts” and we compare their output with real data, either direct measurements collected by experimentalists or ancestral properties computationally inferred from their extant descendants. The team's research is currently organized in four main research axes. The first two are methodologically oriented: we develop general formalisms and tools for computational cellular biochemistry (research axis 1) and families of models to study the evolutionary process (research axis 2). The third, “NeuroCell”, axis (research axis 3) is the one in which biochemical models are specifically applied to brain cells (neurons and glia). Finally, the last axis aims at integrating the two tools, computational biochemistry and evolution, in what we call "Evolutionary Systems Biology" (research axis 4). The next four sections describe these four axes in more detail. The biological questions described are not the sole topics tackled by the team; they are the ones that mobilize a substantial fraction of the researchers in the long run. Many other questions are tackled by individual researchers or even small groups. In the following, these are briefly described in their methodological context, i.e. in the two sections devoted to research axes 1 and 2.
The scientific objective of the Beagle team is to develop a consistent set of concepts and tools – mainly based on computational science – to ultimately contribute to knowledge discovery in systems biology. Our strategy is to develop strong interactions with life science researchers to become active partners of the biological discovery process. Thus, our aim as a team is not to be a computer science team interacting with biologists, nor to be a team of biologists using computer science tools, but rather to stay in the middle and to become a trading zone 46 between biology and computer science. Our scientific identity is thus deliberately fuzzy, blending components from both sciences. Indeed, one of the central claims of the team is that interdisciplinarity involves permanent exchanges between the disciplines. Such exchanges can hardly be maintained between distant teams. That is why the Beagle team tries to develop collaborations with local scientists. That is also why Beagle tries to organize itself as an intrinsically interdisciplinary group, gathering different sensitivities between biology and computer science inside the group. Our ultimate objective is to develop interdisciplinarity at the individual level, all members of the team being able to interact efficiently with specialists from both fields.
As stated above, the research topics of the Beagle team are centered on the modeling and simulation of cellular processes. More specifically, we focus on two specific processes that govern cell dynamics and behavior: biophysics and evolution. We are strongly engaged in the integration of these levels of biological understanding.
Biochemical kinetics developed as an extension of chemical kinetics in the early 20th century and inherited the main hypotheses underlying Van’t Hoff’s law of mass action: a perfectly stirred homogeneous medium with deterministic kinetics. This classical view is however challenged by recent experimental results regarding both the movement and the metabolic fate of biomolecules. First, it is now known that the diffusive motion of many proteins in cellular media exhibits deviations from the ideal case of Brownian motion, in the form of position-dependent diffusion or anomalous diffusion, a hallmark of poorly mixing media. Second, several lines of evidence indicate that the metabolic fate of molecules in the organism depends not only on their chemical nature, but also on their spatial organisation – for example, the fate of dietary lipids depends on whether they are organized into many small or a few large droplets (see e.g. 47). In this modern-day framework, cellular media appear as heterogeneous collections of contiguous spatial domains with different characteristics, thus providing spatial organization of the reactants. Moreover, the number of reactants involved is often small enough that fluctuations cannot be ignored. To improve our understanding of intracellular biochemistry, we study spatiotemporal biochemical kinetics using computer simulations (particle-based, spatially explicit stochastic simulations) and mathematical models (age-structured PDEs).
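To illustrate why stochasticity matters at low copy numbers, here is a minimal Gillespie-style exact stochastic simulation of a birth-death process in Python. This is a textbook toy, not the team's particle-based spatial simulator, and the rate constants are arbitrary illustration values (it assumes a strictly positive production rate):

```python
import random

def gillespie_birth_death(k_prod, k_deg, n0, t_max, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death
    process: production at constant rate k_prod, degradation at rate
    k_deg * n, where n is the current copy number."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while t < t_max:
        a_prod = k_prod          # propensity of the production reaction
        a_deg = k_deg * n        # propensity of the degradation reaction
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)        # waiting time to next event
        if rng.random() * a_total < a_prod:  # pick which reaction fired
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory
```

With `k_prod=5.0` and `k_deg=0.1`, the copy number fluctuates around the deterministic steady state `k_prod / k_deg = 50`, with relative fluctuations of order `1/sqrt(50)` that a deterministic mass-action model would miss entirely.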
We study the processes of genome evolution, with a focus on large-scale genomic events (rearrangements, duplications, transfers). We are interested in deciphering general laws which explain the organization of the genomes we observe today, as well as using the knowledge of these processes to reconstruct some aspects of the history of life.
To do so, we construct mathematical models and apply them either in a “forward” way, i.e. observing the course of evolution from known ancestors and parameters, by simulation (in silico experimental evolution) or mathematical analysis (theoretical biology), or in a “backward” way, i.e. reconstructing ancestral states and parameters from known extant states (phylogeny, comparative genomics).
Moreover, we often mix the two approaches, either by validating backward reconstruction methods on forward simulations, or by using the forward method to test evolutionary hypotheses on biological data.
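The forward/backward combination can be sketched with a deliberately simple toy (not the team's actual pipelines): forward-simulate substitutions under the classical Jukes-Cantor model from a known ancestor, then re-estimate the divergence "backward" from the observed mismatches using the standard distance correction. Sequence length, rate and branch length below are arbitrary:

```python
import math
import random

def jc_forward_backward(seq_len, t, mu, seed=0):
    """Forward-simulate Jukes-Cantor substitutions along two branches of
    length t from a random ancestor, then backward re-estimate the total
    divergence from the observed fraction of mismatching sites."""
    rng = random.Random(seed)
    bases = "ACGT"
    ancestor = [rng.choice(bases) for _ in range(seq_len)]

    def evolve_branch(seq):
        out = []
        for b in seq:
            # substitution events along the branch: Poisson process, rate mu
            elapsed = rng.expovariate(mu)
            while elapsed <= t:
                b = rng.choice([x for x in bases if x != b])
                elapsed += rng.expovariate(mu)
            out.append(b)
        return out

    s1, s2 = evolve_branch(ancestor), evolve_branch(ancestor)
    p = sum(a != b for a, b in zip(s1, s2)) / seq_len
    # Jukes-Cantor correction: substitutions per site along the whole path
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)
```

Since the true path length is `2 * mu * t`, comparing it with the returned estimate is exactly the kind of validation of a backward method on a forward simulation described above.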
Brain cells are rarely considered by computational systems biologists, though they are especially well suited for the field: their major signaling pathways are well characterized, the cellular properties they support are well identified (e.g. synaptic plasticity) and eventually give rise to well-known functions at the organ scale (learning, memory). Moreover, electrophysiology measurements provide experimental monitoring of signaling at the single-cell level (sometimes at the sub-cellular scale) with unrivaled temporal resolution (milliseconds) over durations of up to an hour. In this research axis, we develop modeling approaches for systems biology of both neuronal cells and glial cells, in particular astrocytes. We are mostly interested in understanding how the pathways implicated in signaling within and between neurons and astrocytes implement and regulate synaptic plasticity.
This axis, which consists in integrating the two main biological levels we study, is a long-standing and long-term objective in the team. We have started to see significant advances in this direction, mainly due to the evolution of the team's staff and projects. These novel developments allow us to give this axis back its central place. We have several short- and middle-term projects that integrate biochemical data and evolution. First results were reported in 2019 with respect to an evolutionary perspective on chromatin-associated proteins. Other, ongoing projects include reverse engineering the regulatory networks of 'old' and 'young' brain regions (i.e. neuro-evo-devo) and finding new therapeutic targets for lung tumours that evolve treatment resistance.
We do not usually distinguish between our research and its application domains. Our shared view is that research is oriented by a scientific question, which in the case of the Beagle team is a multidisciplinary one, most often of a biological nature. We do not develop methodologies independently from this question and then look for applications. Instead, we collectively work with other disciplines to solve a question, using our competencies.
In consequence, the application domains are already listed in the description of our projects and goals. They concern functional and evolutionary biology, related to critical social questions such as human or global health.
We still advocate for the "application domains" section of the activity report to be called "implication domains" to broaden its scope. Implication contains applications, but not conversely.
This could allow us and others to report, for example, on research programs whose orientation is guided by a social demand rather than by an intrinsic dynamic of scientific evolution, a simple claim of “progress”, or a social demand coming only from industry.
This could also allow a better awareness of social and environmental issues, and their integration into this section.
The website we constructed two years ago, ferme.yeswiki.net/Empreinte, can still be used for simple carbon footprint calculations for a team, but it is or will be supplanted by future internal tools from Inria or by those released by Labo1p5.
We organized several "Sciences-Environnements-Sociétés" workshops in collaboration with Sophie Quinton from Inria Grenoble. A dozen one-day workshops were organised in 2022. The series started in Lyon and Grenoble in 2021, and has now been deployed in Rennes, Paris, Marseille, Sophia and Nancy. We have requests to organize it in Montpellier, Avignon and Orleans in 2023.
Besides this, Eric Tannier regularly teaches research ethics at Inria and at the University of Lyon (Lyon 1), with a significant environmental focus.
We also lead an "action exploratoire" related to environmental issues, on the development of agro-ecology, as recommended by the IPCC (GIEC) on climate change and the IPBES on biodiversity.
This year we highlight several publications in rather good journals, which reflect achievements in several long-lasting team projects:
Besides this, the team comes to an end this year, and a significant amount of work has been dedicated to the construction of future teams.
Aevol is a digital genetics model: populations of digital organisms are subjected to a process of selection and variation, which creates a Darwinian dynamics. By modifying the characteristics of selection (e.g. population size, type of environment, environmental variations) or variation (e.g. mutation rates, chromosomal rearrangement rates, types of rearrangements, horizontal transfer), one can study experimentally the impact of these parameters on the structure of the evolved organisms. In particular, since Aevol integrates a precise and realistic model of the genome, it allows for the study of structural variations of the genome (e.g. number of genes, synteny, proportion of coding sequences).
The simulation platform comes with a set of tools for analysing phylogenies and measuring many characteristics of the organisms and populations over the course of evolution.
An extension of the model (R-Aevol) integrates an explicit model of the regulation of gene expression, thus allowing for the study of the evolution of gene regulation networks.
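The selection/variation loop at the heart of such digital genetics models can be sketched in a few lines. The sketch below is a deliberately minimal toy (bit-string genomes, an arbitrary fitness target, invented mutation rates), not Aevol itself, whose genome model is far more detailed:

```python
import random

def evolve(pop_size=64, target_ones=20, generations=200, mu=0.02, seed=0):
    """Minimal digital-genetics loop: variable-length bit-string genomes,
    fitness-proportionate selection, and variation combining point
    mutations with small segmental duplications and deletions."""
    rng = random.Random(seed)

    def fitness(genome):
        # arbitrary target: reward genomes whose number of 1-bits is near it
        return 1.0 / (1.0 + abs(sum(genome) - target_ones))

    pop = [[rng.randint(0, 1) for _ in range(50)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = rng.choices(pop, weights=[fitness(g) for g in pop],
                              k=pop_size)
        pop = []
        for parent in parents:
            child = [b ^ (rng.random() < mu) for b in parent]  # point mutations
            if rng.random() < 0.1 and len(child) > 10:   # segmental deletion
                i = rng.randrange(len(child))
                del child[i:i + rng.randint(1, 5)]
            if rng.random() < 0.1 and child:             # segmental duplication
                i = rng.randrange(len(child))
                child[i:i] = child[i:i + rng.randint(1, 5)]
            pop.append(child)
    return pop, fitness
```

Because the duplication and deletion operators change genome length, even this toy exhibits the kind of structural variation (genome size, coding fraction) that Aevol is designed to study rigorously.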
Outside Inria, Bioindication is used for a participative science project in the Lyon metropolis, and to survey biodiversity in a small town of the Monts du Lyonnais.
Bioindication has also been forked into an educational version. We designed a sequence of computer science labs in which this educational version of Bioindication plays an important role. The aim of these labs is to analyze interactions in an ecosystem using graph data structures. The total lab sequence is 6 hours long. By the end of 2022 we had made a preliminary run of the sequence with 800 students at INSA Lyon (involving about 15 teachers). Initial feedback is positive, and we plan to strengthen the approach and develop it further next year.
Molecular evolution is often conceptualised as adaptive walks on rugged fitness landscapes, driven by mutations and constrained by incremental fitness selection. It is well known that epistasis shapes the ruggedness of the landscape's surface, outlining its topography (with high-fitness peaks separated by valleys of lower-fitness genotypes). However, within the strong-selection weak-mutation (SSWM) limit, once an adaptive walk reaches a local peak, natural selection restricts passage through downstream paths and hampers any possibility of reaching higher fitness values. In addition to the widely used point mutations, we introduced a minimal model of sequence inversions to simulate adaptive walks. We used the well-known NK model to instantiate rugged landscapes and showed that adaptive walks can reach higher fitness values through inversion mutations, which, compared to point mutations, allow the evolutionary process to escape local fitness peaks. To elucidate the effects of this chromosomal rearrangement, we used a graph-theoretical representation of accessible mutants and showed how new evolutionary paths are uncovered.
This result suggests a simple mechanistic rationale to analyse escapes from local fitness peaks in molecular evolution driven by (intragenic) structural inversions and reveals some consequences of the limits of point mutations for simulations of molecular evolution. It has been published in the international journal PLoS Computational Biology 25.
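A toy version of this setup can be written in a few lines. This sketch uses a greedy SSWM-style walk and invented parameters rather than the exact published protocol, but it captures the core contrast between point-mutation and inversion neighbourhoods on an NK landscape:

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Random NK landscape: locus i's fitness component depends on itself
    and its k right neighbours (circular genome); components are drawn
    lazily and cached so every genotype gets a fixed fitness."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            key = tuple(genome[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n

    return fitness

def neighbours(genome, with_inversions=False):
    """Single point-mutation neighbours, optionally plus all segment inversions."""
    n = len(genome)
    for i in range(n):
        yield genome[:i] + (1 - genome[i],) + genome[i + 1:]
    if with_inversions:
        for i in range(n):
            for j in range(i + 2, n + 1):
                yield genome[:i] + genome[i:j][::-1] + genome[j:]

def adaptive_walk(fitness, genome, with_inversions=False):
    """Greedy SSWM-style walk: jump to the fittest neighbour until no
    neighbour is strictly fitter (a local peak for that move set)."""
    while True:
        best = max(neighbours(genome, with_inversions), key=fitness)
        if fitness(best) <= fitness(genome):
            return genome
        genome = best
```

Because the inversion neighbourhood strictly contains the point-mutation neighbourhood, a genotype that is a local peak for point mutations may still have fitter inversion neighbours, which is precisely the escape mechanism discussed above.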
This other result on chromosome inversions, published in History and Philosophy of the Life Sciences 21, is the story, told in the light of a new analysis of historical data, of a mathematical biology problem that was explored in the 1930s in Thomas Morgan's laboratory at the California Institute of Technology. It is one of the early developments of evolutionary genetics and quantitative phylogeny, and deals with the identification and counting of chromosomal inversions in Drosophila species from comparisons of genetic maps. A re-analysis of the data produced in the 1930s using current mathematics and computational technologies reveals how a team of biologists, with the help of a renowned mathematician and against their first intuition, came to an erroneous conclusion regarding the presence of phylogenetic signals in gene arrangements. This example illustrates two different aspects of the same story: (1) the appearance of a mathematical problem in biology, solved by the development of a combinatorial algorithm, which was unusual at the time, and (2) the role of errors in scientific activity. Also underlying is the possible influence of computational complexity on the directions of research in biology.
Up-Down synchronization in neuronal networks refers to spontaneous switches between periods of high collective firing activity (Up state) and periods of silence (Down state). Recent experimental reports have shown that astrocytes can control the emergence of such Up-Down regimes in neural networks, although the molecular or cellular mechanisms that are involved are still uncertain. In 10, we proposed neural network models made of three populations of cells (excitatory neurons, inhibitory neurons and astrocytes), interconnected by synaptic and gliotransmission events, to explore how astrocytes can control this phenomenon. The presence of astrocytes in the models is indeed observed to promote the emergence of Up-Down regimes with realistic characteristics. Our models show that the difference of signalling timescales between astrocytes and neurons (seconds versus milliseconds) can induce a regime where the frequency of gliotransmission events released by the astrocytes does not synchronize with the Up and Down phases of the neurons, but remains essentially stable. However, these gliotransmission events are found to change the localization of the bifurcations in the parameter space, so that with the addition of astrocytes the network enters a bistability region of the dynamics that corresponds to Up-Down synchronization. Taken together, our work provides a theoretical framework to test scenarios and hypotheses on the modulation of Up-Down dynamics by gliotransmission from astrocytes.
Much of the Ca2+ activity in astrocytes is spatially restricted to microdomains and occurs in fine processes that form a complex anatomical meshwork, the so-called spongiform domain. A growing body of literature indicates that those astrocytic Ca2+ signals can influence the activity of neuronal synapses and thus tune the flow of information through neuronal circuits. Because of technical difficulties in accessing the small spatial scale involved, the role of astrocyte morphology on Ca2+ microdomain activity remains poorly understood. In [denizot:hal-03582629], we used computational tools and idealized 3D geometries of fine processes based on recent super-resolution microscopy data to investigate the mechanistic link between astrocytic nanoscale morphology and local Ca2+ activity. Simulations demonstrate that the nano-morphology of astrocytic processes powerfully shapes the spatio-temporal properties of Ca2+ signals and promotes local Ca2+ activity. The model predicts that this effect is attenuated upon astrocytic swelling, a hallmark of brain diseases, which we confirm experimentally in hypo-osmotic conditions. Upon repeated neurotransmitter release events, the model predicts that swelling hinders astrocytic signal propagation. Overall, this study highlights the influence of the complex morphology of astrocytes at the nanoscale, and of its remodeling in pathological conditions, on neuron-astrocyte communication at so-called tripartite synapses, where astrocytic processes come into close contact with pre- and postsynaptic structures.
Neural computational power is determined by neuroenergetics, but how and which energy substrates are allocated to various forms of memory engram is unclear. To address this question, we asked whether neuronal fueling by glucose or lactate scales differently upon increasing neural computation and cognitive loads. Using electrophysiology, two-photon imaging, cognitive tasks, and mathematical modeling, we show in 11 that both glucose and lactate are involved in engram formation, with lactate supporting long-term synaptic plasticity evoked by high-stimulation load activity patterns and high attentional load in cognitive tasks, and glucose being sufficient for less demanding neural computation and learning tasks. Indeed, we show that lactate is mandatory for demanding neural computation, such as theta-burst stimulation, while glucose is sufficient for lighter forms of activity-dependent long-term potentiation (LTP), such as spike timing–dependent plasticity (STDP). We find that subtle variations of spike number or frequency in STDP are sufficient to shift the on-demand fueling from glucose to lactate. Finally, we demonstrate that lactate is necessary for a cognitive task requiring high attentional load, such as the object-in-place task, and for the corresponding in vivo hippocampal LTP expression, but is not needed for a less demanding task, such as simple novel object recognition. Overall, these results demonstrate that glucose and lactate metabolism are differentially engaged in neuronal fueling depending on the complexity of the activity-dependent plasticity and behavior.
DNA supercoiling (SC), the level of under- or overwinding of the DNA polymer around itself, is widely recognized as an ancestral regulation mechanism of gene expression in bacteria. Higher negative SC levels facilitate the opening of the DNA double helix at gene promoters, and increase the associated expression levels. Different levels of SC have been measured in bacteria exposed to different environments, leading to the hypothesis that SC variation can be an environmental response. Moreover, DNA transcription has been shown to generate local variations in the SC level, and therefore to impact the transcription of neighboring genes.
We studied the coupled dynamics of DNA supercoiling and transcription at the genome scale by implementing a genome-wide model of gene expression based on the transcription-supercoiling coupling (TSC). We show that, in this model, a simple change in global DNA SC is sufficient to trigger differentiated responses in gene expression levels via the TSC. Then, studying our model in the light of evolution, we demonstrate that this SC-mediated non-linear response to environmental change can serve as the basis for the evolution of specialized phenotypes. These results have been published in the Artificial Life journal 15. Furthermore, a variant of the model has been used to study the impact of the TSC on genome structure. We showed that regulation of gene activity through the TSC leads to a specific genomic organization at all levels (gene pairs, motifs and whole genome). A preprint has been posted on bioRxiv 40 and will soon be submitted to an international journal.
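A heavily simplified sketch of such a coupling can make the mechanism concrete. All numbers below are our own illustrative choices, not the published model: each gene's expression is a sigmoid of its local supercoiling level, each transcribing gene adds positive supercoils downstream and negative supercoils upstream of itself (the twin-domain picture), and the system is relaxed to a fixed point by iteration:

```python
import math

def tsc_expression(positions, orientations, sigma_env, coupling=0.05,
                   basal_sigma=-0.06, n_iter=200):
    """Toy transcription-supercoiling coupling (TSC): each gene's
    expression is a sigmoid of its local supercoiling level sigma, and
    each transcribing gene adds positive supercoils downstream and
    negative supercoils upstream of itself (twin-domain picture)."""
    n = len(positions)
    expr = [0.5] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            sigma = basal_sigma + sigma_env
            for j in range(n):
                if j == i:
                    continue
                # is gene i downstream of gene j's transcription direction?
                downstream = (positions[i] > positions[j]) == (orientations[j] > 0)
                sigma += coupling * expr[j] * (1 if downstream else -1)
            # more negative supercoiling opens the promoter more easily
            new.append(1.0 / (1.0 + math.exp(50.0 * (sigma - basal_sigma))))
        expr = new
    return expr
```

Even this toy reproduces two qualitative behaviours discussed above: a divergent gene pair mutually activates (each gene leaves negative supercoils on its neighbour) while a convergent pair mutually represses, and a global shift of `sigma_env` changes all expression levels, i.e. the supercoiling level acts as an environmental input.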
Using the Aevol simulator, we experimentally studied the dynamics of genome size in prokaryote-like organisms. To this aim, we evolved five “Wild-Type” organisms with the simulator until the size of their genomes stabilized (which occurred after 10 million generations). We then propagated 50 clones of each wild-type for 2 million generations and monitored the dispersal of their genome size and, more specifically, of the size of the non-coding compartment of their genome. Given that the non-coding compartment is not submitted to selection, its size should follow a random dispersal with a lower bound at zero. However, our experiments revealed that its dispersal is limited by two boundaries: a lower boundary that is much larger than zero, and an upper boundary. To understand the origin of these boundaries, we developed a new analysis tool called “Neutral Mutation Accumulation”, which revealed that the non-coding compartment size is driven by two forces. (i) A neutral force due to a fixation bias between duplications and deletions: neutral duplications appear to be more numerous (and longer) than neutral deletions. This neutral force creates a permanent flux of genomic material from the coding to the non-coding compartment, hence explaining why the non-coding compartment never reaches the zero bound. (ii) A selective force due to robustness constraints (the longer the genome, the less robust it is). This selective force limits the expansion of the genome, hence explaining its upper boundary. Both forces explain the observed dynamics of the genome in Aevol. Moreover, since only one of them is selective, we conjectured that the balance between these two forces is driven by the intensity of selection, hence by the population size. Indeed, by changing the population size in our simulations, we observed that larger population sizes lead to shorter genomes and that, conversely, smaller population sizes lead to larger genomes.
This recovers an empirical law that is well known in microbiology. A publication is in preparation.
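The two-force picture can be caricatured by a one-dimensional random walk. This is a toy with invented rates, not the Aevol experiments or the Neutral Mutation Accumulation analysis: duplications are attempted slightly more often than deletions (the neutral fixation bias pushing size up), while a robustness penalty growing with size makes duplications less likely to fix in long genomes (the selective cap):

```python
import random

def noncoding_size_walk(steps, dup_rate=0.012, del_rate=0.01,
                        robustness_cost=2e-4, start=500, seed=0):
    """Toy random walk for the size of the non-coding compartment:
    duplications are attempted slightly more often than deletions
    (a fixation bias pushing size up), while a robustness penalty that
    grows with genome size makes duplications less likely to fix
    (capping the expansion). All rates are invented for illustration."""
    rng = random.Random(seed)
    size = start
    for _ in range(steps):
        if rng.random() < dup_rate:
            # selective brake: longer genomes fix fewer duplications
            if rng.random() < 1.0 / (1.0 + robustness_cost * size):
                size += rng.randint(1, 30)
        if rng.random() < del_rate and size > 0:
            size -= min(size, rng.randint(1, 30))
    return size
```

With these numbers the walk equilibrates where the effective duplication and deletion fluxes balance (around size 1000), i.e. well away from zero and bounded above, mirroring the two boundaries observed in the experiments; increasing `robustness_cost` plays the role of stronger selection and pushes the equilibrium down.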
X-Aevol is a response to the need for more computational power. It was designed to leverage the massive parallelization capabilities of GPUs. As Aevol exposes an irregular and dynamic computational pattern, adapting it for massively parallel architectures was not a straightforward process. We presented in [GECCO 2021] how we adapted the underlying Aevol algorithms to GPU architectures. We implemented our new algorithms in the CUDA programming language and tested them on a representative benchmark of Aevol workloads. We showed that, by using the power of a GPU, we managed to massively accelerate the evaluation process of Aevol. We performed performance evaluations on NVIDIA Tesla V100 and A100 cards, reaching a speed-up of 1,000 over a sequential execution on a CPU, with a further gain of up to 50% from the newer Ampere micro-architecture compared with the Volta one. However, we also showed that this is not an easy task and that algorithms have to be redesigned to match this massive parallelism. Our work is thus a successful GPU port of a program conveying irregular structures of data with variable size, thanks to different parallel algorithms and their implementation using advanced hardware operations. Our experimental setup relies on populations built to control the heterogeneity of the genomes; its main interest is to make it possible to generate worst- and best-case scenarios to measure the performance of X-Aevol.
Future work includes running real simulations of the full evolution of an artificial organism. Another point of interest is the ability to execute our GPU port on GPUs from vendors other than NVIDIA. As CUDA is a proprietary parallel computing platform, it cannot be used for AMD's or Intel's GPUs. Frameworks and languages have emerged recently to unify the development of parallel computing, targeting different kinds of accelerators with the same code base while maintaining high performance portability. Last but not least, studies show the impact of population size on genome size and structure. Accordingly, Aevol may be required to simulate very large populations exceeding a million individuals. To do so, the computing power of a single GPU will not be enough: we will have to work on a multi-GPU implementation, using partitioning algorithms that take into account the micro-architectural properties of GPUs and our inner knowledge of the biological model of Aevol to split the overall population into smaller ones assigned to different GPUs.
Introgression, endosymbiosis, and gene transfer, i.e., horizontal gene flow (HGF), are primordial sources of innovation in all domains of life. Our knowledge of HGF relies on detection methods that exploit some of the signatures it leaves on extant genomes. One of them is the effect of HGF on the branch lengths of constructed phylogenies. This signature has been formalized in statistical tests for HGF detection and used, for example, to detect massive adaptive gene flows in malaria vectors or to order evolutionary events involved in eukaryogenesis. However, these studies rely on the assumption that ghost lineages (all unsampled extant and extinct taxa) have little influence. We demonstrate here with simulations and data reanalysis that, when considering the more realistic condition that unsampled taxa are legion compared to sampled ones, the conclusions of these studies become unfounded or even reversed. This illustrates the necessity of recognizing the existence of ghosts in evolutionary studies. This result has been the subject of two published articles 23, 24, with coverage in the general press (see highlights).
The absorption of dietary triglycerides has recently been revealed as a key step in cardio-metabolic health, but the underlying molecular mechanisms in the enterocyte remain incompletely understood and are still debated. While many studies focused primarily on the roles of membrane proteins, others have suggested that a critical force governing fatty acid uptake could be the intracellular metabolic demand for fatty acids, which would drive entry by passive diffusion. In 2021, we had tested the compatibility of these hypotheses with experimental uptake data by expressing each of them in a quantitative mathematical model and by fitting it to seven experimental datasets. This had led us to conclude that intracellular metabolism, more than active transport, was a major force driving fatty acid uptake. However, in 2022, we detected a calibration error in one of the parameters, the membrane permeability. It was a composite parameter, capturing several physico-chemical steps: desorption from the micelles, diffusion through the aqueous medium outside the cell, adsorption into the cell membrane, flip-flop from one membrane leaflet to the other, and desorption from the membrane into the cytoplasm. However, the numerical value we had used was only based on the flip-flop step, and was therefore off by several orders of magnitude. This had important consequences on the model kinetics (stiffness) and on the conclusions. Moreover, a more detailed version of the model, explicitly modelling each physico-chemical step, also revealed that the impact of pH differed depending on the level of detail of the model. For these two reasons, we developed the more detailed version of the model, with a modular design allowing for simulated gene knock-outs. With this new version of the model, we observed that the active transport mechanism appeared important to correctly fit short-term uptake data, on the time scale of a few seconds.
However, on the time scale of several hours, the most crucial mechanism remains intracellular metabolism, which is required to ensure total absorption of the dietary content. These updated results were about to be submitted for publication.
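The idea of decomposing the composite permeability into explicit physico-chemical steps can be illustrated with a schematic linear compartment chain. The rate constants and step names below are hypothetical placeholders, not the calibrated model, and the integrator is a plain explicit Euler scheme:

```python
def simulate_uptake(k, y0, t_max, dt=1e-3):
    """Schematic linear chain for fatty-acid transit (hypothetical rates):
    micelle -> aqueous -> outer leaflet -> inner leaflet -> cytoplasm
    -> metabolised, integrated with fixed-step explicit Euler."""
    names = ["micelle", "aqueous", "outer_leaflet", "inner_leaflet",
             "cytoplasm", "metabolised"]
    y = dict(zip(names, y0))
    t = 0.0
    while t < t_max:
        # first-order flux out of each of the five upstream compartments
        flux = [k[i] * y[names[i]] for i in range(5)]
        for i in range(5):
            y[names[i]] -= dt * flux[i]
            y[names[i + 1]] += dt * flux[i]
        t += dt
    return y
```

The point of such a decomposition is exactly the calibration issue described above: the effective permeability of the lumped model is a combination of all five rate constants, so calibrating it from a single step (e.g. flip-flop alone) can be off by orders of magnitude, while the chain form also makes each step individually knockable in simulation.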
We are a partner in a project led by the company Greenshield that has been awarded 2 million euros of funding by BPI France following a PIA4 call on agro-ecology for the environmental transition.
Action Exploratoire ExODE: In biology, the vast majority of systems can be modeled as ordinary differential equations (ODEs). Modeling biological objects more finely leads to an increase in the number of equations, and so does simulating ever larger systems. Therefore, we observe a large increase in the size of the ODE systems to be solved. A major lock is the limitation of ODE numerical resolution software (ODE solvers) to a few thousand equations due to prohibitive calculation times. The AEx ExODE tackles this lock via 1) the introduction of new numerical methods taking advantage of mixed precision, which combines several floating-point precisions within numerical methods, and 2) the adaptation of these new methods to next-generation highly hierarchical and heterogeneous computers composed of a large number of CPUs and GPUs. Over the past year, a new approach to Deep Learning has been proposed that replaces Recurrent Neural Networks (RNNs) with ODE systems. The numerical and parallel methods of ExODE will be evaluated and adapted in this framework in order to improve the performance and accuracy of these new approaches.
These results are also verified by various numerical tests, which show that the error is compensated as the system size increases. The method is characterized by its simplicity, its efficiency and, above all, its broad field of application, especially in biology with large and complicated systems. In this work, the study of precision only considered the rounding error, which is not the only error involved in optimizing accuracy.
This encourages us to also address approximation errors, in order to obtain a solver and a numerical scheme compatible with our mixed-precision method and able to offer optimal precision for large-scale systems in future work. To do so, we will use existing tools (PROMISE [16] and VerifTracer [6]) to evaluate the numerical quality of our code and quantify the magnitude of floating-point errors. Another of our goals is to improve the performance (execution time) of the ODE solver; we will therefore carry out a thorough performance evaluation of our method on the different proposed biological systems. Finally, we will assess how our method can benefit from next-generation computing platforms, in particular by porting it to silicon-based mixed-precision implementations tailored for AI/ML.
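The interplay between rounding error and discretization error can be illustrated with a toy experiment (not the AEx's actual solver): run the same fixed-step integrator twice, once with the state rounded to binary32 after every step and once kept in binary64, and compare both to the exact solution:

```python
import math
import struct

def to_f32(x):
    """Round a binary64 Python float to binary32 precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def euler(f, y0, t0, t1, steps, precision=lambda x: x):
    """Fixed-step explicit Euler; 'precision' rounds the state after each
    update, mimicking a lower-precision accumulator."""
    h = (t1 - t0) / steps
    y, t = precision(y0), t0
    for _ in range(steps):
        y = precision(y + h * f(t, y))
        t += h
    return y
```

For dy/dt = -y on [0, 1] with 1000 steps, the discretization error of Euler (about 2e-4 here) dominates the extra binary32 rounding error, which is the kind of regime mixed-precision methods exploit: parts of the computation can be done in lower precision without degrading the overall accuracy.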
We organized a half-day symposium on biological sequence simulations at the Jobim conference in Rennes in July 2022.
We organized a half-day workshop "Parlement des êtres vivants pour une recherche en Transition" at the "école de l'anthropocène" in January 2022.
Eric Tannier was interviewed for "Mediacité" and for "L'âge de faire" about the "Sciences Environnements Sociétés" workshops.