Morpheme is a joint project between INRIA, CNRS and the University of Côte d'Azur (UCA); Signals and Systems Laboratory (I3S) (UMR 6070); the Institute for Biology of Valrose (iBV) (CNRS/INSERM).

The scientific objectives of Morpheme are to characterize and model the development and the morphological properties of biological structures from the cell to the supra-cellular scale. Being at the interface between computational science and biology, we plan to understand the morphological changes that occur during development by combining in vivo imaging, image processing and computational modeling.

The morphology and topology of mesoscopic structures indeed have a key influence on the functional behavior of organs. Our goal is to characterize different populations or development conditions based on the shape of cellular and supra-cellular structures, including micro-vascular networks and dendrite/axon networks. Using microscopy or tomography images, we plan to extract quantitative parameters to characterize morphometry over time and in different samples. We will then statistically analyze shapes and complex structures to identify relevant markers and define classification tools. Finally, we will propose models explaining the temporal evolution of the observed samples. With this, we hope to better understand the development of normal tissues, but also to characterize, at the supra-cellular level, different pathologies such as Fragile X Syndrome, Alzheimer's disease or diabetes.

The recent advent of an increasing number of new microscopy techniques giving access to high-throughput screenings and micro- or nano-metric resolutions provides a means for quantitative imaging of biological structures and phenomena. Conducting quantitative biological studies based on these new data requires the development of specific, non-standard tools, and hence a multi-disciplinary approach. We need biologists to define experimental protocols and interpret the results, but also physicists to model the sensors, computer scientists to develop algorithms and mathematicians to model the resulting information. These different areas of expertise are combined within the Morpheme team, generating a fertile environment for exchanging knowledge and an optimal framework for the different tasks (imaging, image analysis, classification, modeling). We thus aim at providing the adapted and robust tools required to describe, explain and model the fundamental phenomena underlying the morphogenesis of cellular and supra-cellular biological structures. Combining experimental manipulations, in vivo imaging, image processing and computational modeling, we plan to provide methods for the quantitative analysis of the morphological changes that occur during development. This is of key importance, as the morphology and topology of mesoscopic structures govern organ and cell function. Alterations in the genetic programs underlying cellular morphogenesis have been linked to a range of pathologies.

Biological questions we will focus on include:

Our goal is to characterize different populations or development conditions based on the shape of cellular and supra-cellular structures, e.g. micro-vascular networks, dendrite/axon networks, and tissues, from 2D, 2D+t, 3D or 3D+t images (obtained with confocal microscopy, video-microscopy, photon-microscopy or micro-tomography). We plan to extract shapes or quantitative parameters to characterize the morphometric properties of different samples. On the one hand, we will propose numerical and biological models explaining the temporal evolution of the sample; on the other hand, we will statistically analyze shapes and complex structures to identify relevant markers for classification purposes. This should contribute to a better understanding of the development of normal tissues, but also to a characterization at the supra-cellular scale of different pathologies such as Alzheimer's disease, cancer, diabetes, or Fragile X Syndrome.
In this multidisciplinary context, several challenges have to be faced. The expertise of biologists concerning sample generation, as well as the optimization of experimental protocols and imaging conditions, is of course crucial. However, imaging protocols optimized for qualitative analysis may be sub-optimal for quantitative biology. Second, sample imaging is only a first step, as we need to extract quantitative information. Achieving quantitative imaging remains an open issue in biology and requires close interactions between biologists, computer scientists and applied mathematicians. On the one hand, experimental and imaging protocols should integrate constraints from the downstream computer-assisted analysis, yielding a trade-off between qualitatively and quantitatively optimized protocols. On the other hand, computer analysis should integrate constraints specific to the biological problem, from acquisition to quantitative information extraction. There is therefore a need for specificity, embedding precise biological information for a given task. At the same time, a level of generality is desirable for addressing data from different teams acquired with different protocols and/or sensors.
The mathematical modeling of the physics of the acquisition system will yield reconstruction/restoration algorithms with higher accuracy. Therefore, physicists and computer scientists have to work together. Quantitative information extraction also has to deal with the complexity of the structures of interest (e.g., very dense networks, small structure detection in a volume, multiscale behavior,

Among the applications addressed by the Morpheme team we can cite:

Let us describe new/updated software.

This work has been carried out in collaboration with S. Rebegoldi (University of Florence).

In , we consider a variable metric and inexact version of the FISTA-type algorithm studied, e.g., by for the minimization of the sum of two (possibly strongly) convex functions. The proposed algorithm is combined with an adaptive (non-monotone) backtracking strategy, which allows the algorithmic step-size to be adjusted along the iterations in order to improve convergence speed. We prove a linear convergence result for the function values, which depends on both the strong convexity parameters of the two functions and the upper and lower bounds on the spectrum of the variable metric operators. We validate the proposed algorithm, named Scaled Adaptive GEneralized FISTA (SAGE-FISTA), on exemplar image denoising and deblurring problems where edge-preserving Total Variation (TV) regularization is combined with Kullback-Leibler-type fidelity terms, as is common in applications where signal-dependent Poisson noise is assumed in the data. We report the results obtained in this example in Figure , where convergence rates, computational times and Lipschitz constant estimates are compared for both monotone and non-monotone backtracking.
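As an illustration of the backtracking mechanism, the following sketch applies FISTA with a monotone backtracking line search to a generic l1-regularised least-squares problem. This is a simplified, fixed-metric variant (SAGE-FISTA additionally uses variable metrics and allows non-monotone step adjustments), and all names and parameter values are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_backtracking(A, b, lam, n_iter=200, L0=1.0, eta=2.0):
    """FISTA for min_x 0.5 ||Ax - b||^2 + lam ||x||_1 with a monotone
    backtracking search on the step-size (illustrative sketch only)."""
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0; L = L0
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    grad = lambda z: A.T @ (A @ z - b)
    for _ in range(n_iter):
        g = grad(y)
        while True:  # increase L until sufficient decrease holds
            x_new = soft_threshold(y - g / L, lam / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # inertial step
        x, t = x_new, t_new
    return x
```

The inner loop raises the local Lipschitz estimate L until a sufficient-decrease condition holds, which is the kind of along-the-iterations step-size adjustment referred to in the text.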

The manuscript is currently at the second stage of review for the SIAM Journal on Optimization.

This work has been carried out in collaboration with C. Estatico (DIMA, University of Genova) and S. Rebegoldi (University of Florence).

In , we proposed a continuous non-convex variational model for the analysis of Single Molecule Localisation Microscopy (SMLM) data in the presence of Poisson noise. Inspired by previous work , we consider, in particular, a variation of the Continuous Exact

This work has been carried out in collaboration with C. Estatico (DIMA, University of Genova).

In this project, we have worked on the development of a forward-backward algorithm in variable exponent Lebesgue spaces.

The outcome of this project is the manuscript , which has recently been submitted for publication to the SIAM Journal on Scientific Computing.

In collaboration with A. Lanza, M. Pragliola, F. Sgallari (University of Bologna).

In , we propose an automatic parameter selection strategy for the problem of image super-resolution for images corrupted by blur and additive white Gaussian noise with unknown standard deviation. The proposed approach exploits the structure of both the down-sampling and the blur operators in the frequency domain and computes the optimal regularisation parameter as the one optimising a suitable residual whiteness measure. Computationally, the proposed strategy relies on the fast solution of generalised Tikhonov
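The residual whiteness principle can be sketched as follows: for each candidate regularisation parameter, one computes the restoration residual and measures how close it is to white noise via its sample autocorrelation, keeping the parameter that minimises this measure. The normalisation below is a plausible simplification, not necessarily the exact definition used in the paper:

```python
import numpy as np

def whiteness_measure(r):
    """Squared l2-norm of the normalised circular sample autocorrelation
    of a residual image r (zero lag excluded). Small values indicate a
    residual close to white noise; illustrative definition only."""
    r = r - r.mean()
    F = np.fft.fft2(r)
    acorr = np.real(np.fft.ifft2(F * np.conj(F)))  # circular autocorrelation
    acorr /= acorr.flat[0]                          # normalise by zero lag
    acorr.flat[0] = 0.0                             # exclude zero lag
    return np.sum(acorr ** 2)
```

In a parameter-selection loop, one would evaluate this measure on the residual obtained for each candidate regularisation parameter and retain the minimiser.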

In ,
the approach is applied to

In collaboration with A. Lanza, M. Pragliola, F. Sgallari (University of Bologna).

Over the last decades a plethora of variational regularisation models for image reconstruction has been proposed and thoroughly inspected by the applied mathematics community. Among them, the pioneering prototype often taught and learned in basic courses in mathematical image processing is the celebrated Rudin-Osher-Fatemi (ROF) model which relies on the minimisation of the edge-preserving Total Variation (TV) semi-norm as regularisation term. Despite its (often limiting) simplicity, this model is still very much employed in many applications and used as a benchmark for assessing the performance of modern learning-based image reconstruction approaches, thanks to its thorough analytical and numerical understanding. Among the many extensions to TV proposed over the years, a large class is based on the concept of space variance. Space-variant models can indeed overcome the intrinsic inability of TV to describe local features (strength, sharpness, directionality) by means of an adaptive mathematical modelling which accommodates local regularisation weighting, variable smoothness and anisotropy, see Figure for an illustration of the many different Weighted (WTV), Weighted-

The outcome of this work is currently under the second stage of review for SIAM Review.

In collaboration with P. Cascarano, E. L. Piccolomini, D. Mylonopoulos (University of Bologna).

In , we consider a variational model for single-image super-resolution based on the assumption that the gradient of the target image is sparse. We enforce this assumption by considering both an isotropic and an anisotropic

This work has been published in the journal IEEE Transactions on Computational Imaging . A MATLAB software package with a well-documented guide has been sent for publication to Image Processing On Line (IPOL) and is currently at the second stage of review.

In collaboration with S. Morigi (University of Bologna) and S. Parisotto (Fitzwilliam Museum & University of Cambridge).

We studied the image osmosis model, a linear drift-diffusion PDE particularly useful in image processing, with applications such as shadow removal, compact data representation and cloning. It is known that in shadow removal applications the osmosis model causes diffusion and is unable to preserve the underlying features in the pixels at the border of the shadow. Hence, in order to improve the results, we started developing a non-linear isotropic variation of the original osmosis model. We investigated the theoretical properties and the possible discretizations of the model, showing significant improvements with respect to the standard linear one. In Figure , a comparison between the linear and non-linear models is shown.

A journal paper on the outcomes of this project is being written and will be soon submitted for publication to a journal of mathematical imaging.

The estimation of regularization parameters in the context of inverse problems is crucial for many different applications. More or less sophisticated, and most often iterative, approaches make this estimation possible by encoding some (generally unknown) information on the data, such as the noise distribution and/or the desired image to compute. For most of these approaches, the estimation is performed globally and, as such, does not take into account the different amounts of regularization needed in different image regions. Deep learning approaches have proven to be very effective for many data and image analysis problems and have often achieved unprecedented results in digital image processing. In this project, a pixel-wise map of parameters is estimated by means of a convolutional network by considering the different patches centered at each pixel of the given noisy and blurry image.

In this project we have first constructed an image database by corrupting initial images with several types and levels of noise. We then tested several convolutional neural network architectures with different convolutional layers, max pooling and flattening layers, and for different choices of batch sizes, learning rates and epochs, and selected the one providing the least RMSE. The resulting CNN has a very simple architecture: a single convolutional layer followed by a max pooling and flattening layer and completed by a dense layer. By means of the proposed architecture, we locally estimate the hyperparameter associated with a variational denoising model, the learning set being constructed by maximizing the SNR of the denoised result. The first results give a proof of concept of the deep learning approach for locally estimating parameters (see figure ).
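The simple architecture described above can be sketched, shapes only, with a plain NumPy forward pass. The weights are random and the patch and kernel sizes are assumptions; actual training would of course use a deep learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """'Valid' 2D cross-correlation of a single-channel image with kernel k."""
    H, W = x.shape; kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (trailing rows/cols dropped)."""
    H, W = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:H, :W].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def forward(patch, kernel, w, b):
    """conv -> ReLU -> max pool -> flatten -> dense:
    one scalar hyperparameter estimate per patch."""
    h = np.maximum(conv2d(patch, kernel), 0.0)
    h = max_pool(h).ravel()
    return float(h @ w + b)

patch = rng.random((17, 17))            # patch centered at one pixel (assumed size)
kernel = rng.standard_normal((3, 3))
h_dim = ((17 - 3 + 1) // 2) ** 2        # flattened feature size after pooling
lam_hat = forward(patch, kernel, rng.standard_normal(h_dim), 0.0)
```

Applying `forward` to the patch centered at every pixel would yield the pixel-wise parameter map mentioned in the text.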

Next steps will consist of comparing the obtained results with statistical approaches and of addressing the case of deconvolution.

This work has been carried out in collaboration with J. H de Goulart (INP Toulouse/IRIT).

Super-resolution light microscopy overcomes the physical barriers due to light diffraction. However, state-of-the-art approaches require specific and often demanding acquisition conditions to achieve adequate levels of both spatial and temporal resolution. Analyzing the stochastic fluctuations of the fluorescent molecules provides a solution to the aforementioned limitations. Based on this idea, we proposed COL0RME , , a method for COvariance-based

As future research directions, we are interested in investigating learning approaches such as, for instance, unrolled deep networks, e.g. the Learned SPARCOM algorithm proposed in . Algorithm unrolling methods connect the iterative algorithms widely used in image processing with deep neural networks

This appeared in the proceedings of the IEEE ISBI 2021 conference . A longer version of this work has been sent for publication to the Biological Imaging journal .

Vasiliki Stergiopoulou's work is supported by the French Government through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.

We proposed a 3D super-resolution approach to improve spatial resolution in Total Internal Reflectance Fluorescence (TIRF) imaging applications. Our approach, called 3D-COL0RME (3D - Covariance-based

This work has been sent for publication to the proceedings of the ISBI 2022 conference .

Vasiliki Stergiopoulou's work is supported by the French Government through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.

Fluorescence microscopy is a well-known method to observe specific proteins by marking them with fluorescent molecules. However, the resolution of this method is limited by the diffraction of light in the optical system. In practice it is not possible to observe details below 200 nm in the lateral plane.
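The 200 nm figure follows from the Abbe diffraction limit d = λ/(2 NA). A quick check for a typical green-emitting fluorophore observed through a high-NA oil-immersion objective (the values below are illustrative, not from the project):

```python
wavelength = 520e-9   # emission wavelength in metres (typical green fluorophore)
NA = 1.3              # numerical aperture of an oil-immersion objective

d = wavelength / (2 * NA)   # Abbe lateral resolution limit
print(f"{d * 1e9:.0f} nm")  # prints "200 nm"
```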

The goal of super-resolution is to overcome this limitation. Many methods for super-resolution exist, but most of them rely on specific setups or molecules. The idea was to propose a numerical method, both data-driven and model-based, inspired by CryoGAN and MSR-GAN but adapted to the super-resolution context. The first step was to model the direct problem of microscope acquisition and implement a reliable simulator. This component generates, from a discrete distribution of sources, a range of corresponding coarse images we could get from the real microscope. Since the emission and absorption of photons are random processes, especially when there are only a few photons, this simulator cannot be deterministic. These simulated images should then be compared with real images from the microscope.
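A minimal sketch of such a stochastic forward model, combining a Gaussian PSF blur, coarse pixel binning and Poisson photon noise. All names and parameter values are assumptions for illustration, not the project's actual simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(size=9, sigma=1.5):
    """Normalised isotropic Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def simulate(sources, psf, factor=4, gain=50.0):
    """Stochastic forward model: blur a fine source grid with the PSF,
    integrate over coarse pixels, then draw Poisson photon counts."""
    blurred = np.real(np.fft.ifft2(
        np.fft.fft2(sources) * np.fft.fft2(psf, s=sources.shape)))
    n = (sources.shape[0] // factor) * factor
    coarse = blurred[:n, :n].reshape(n // factor, factor,
                                     n // factor, factor).sum(axis=(1, 3))
    return rng.poisson(gain * np.maximum(coarse, 0.0))

fine = np.zeros((64, 64))
fine[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0  # 20 point sources
img = simulate(fine, gaussian_psf())   # one noisy coarse acquisition
```

Repeated calls with the same source grid produce different noisy acquisitions, which is the non-deterministic behaviour the text requires of the simulator.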

Computing a super-resolved image means computing a fine estimate of the distribution of sources. Thus we update the input of the simulator until the output is distributed like the real images from the microscope. This optimization process aims to minimize the distance between the real images and the simulated images as a function of the input of the simulator. It can be considered as an adversarial learning problem, similar to the training of a specific, well-chosen generative adversarial network (GAN). The idea is to train a neural network called the discriminator to evaluate the simulator output. The discriminator is trained to distinguish the real images from the simulated ones, while the simulator should generate images closer to the real images at each step. These two components thus have conflicting goals, and this kind of double optimization is said to be adversarial.

The main difficulty is to set the hyperparameters of the algorithm. It could be interesting to understand more precisely their respective influence on the results. Despite this limitation, this new method for super-resolution yields promising results on both synthetic data and Ostreopsis experimental images, see Figure . It seems to be robust enough to generalize to other data. In the general context of inverse problems, it provides a new validation of the CryoGAN and MSR-GAN approach and suggests that this approach could easily be adapted to other inverse problems.

Off-the-grid variational methods recover Radon measures, elements of the Banach

An ongoing project consists in investigating a new approach to extend the off-the-grid theory (currently limited to points) to the reconstruction of more exotic structures such as curves, from both theoretical and numerical standpoints.

We have published a review paper in the Journal of Imaging and a preprint accepted at IEEE ICASSP 2022, 'Off-the-grid covariance-based super-resolution fluctuation microscopy'. See

This work was carried out in collaboration with Jean-Olivier Irisson (LOV, Villefranche-sur-mer, France).

Accurate plankton biomass estimations are essential to study marine ecological processes and biogeochemical cycles. This is particularly true for copepods, which dominate mesozooplankton. Such estimations can be efficiently computed from organism volumes estimated from images. However, imaging devices only provide 2D projections of 3D objects. The classical procedures to retrieve volumes, based on the Equivalent Spherical Diameter (ESD) or the best-fitting ellipse, are biased. We developed a method to correct these biases. First, a new method estimates the organism body area through ellipse fitting (see Figure , left image). Then, the body of copepods is modeled as an ellipsoid whose 2D silhouette is mathematically derived. Samples of copepod bodies are simulated with realistic shapes/sizes and random orientations. Their total volume is estimated from their silhouettes using the two classical methods, and a correction factor is computed relative to the known total volume. On real data, individual orientations and volumes are unknown, but the correction factor still holds for the total volume of a large number of organisms. The correction is around -20% for the ESD method and +10% for the ellipse method. When applied to a database of
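The sign of the two biases can be reproduced with a small Monte-Carlo sketch using a prolate spheroid with an exaggerated aspect ratio and uniformly random viewing directions. This is illustrative only, so the magnitudes differ from the -20%/+10% corrections reported above:

```python
import numpy as np

rng = np.random.default_rng(0)

def volume_bias(a=3.0, b=1.0, n=50_000):
    """Monte-Carlo estimate of the relative bias of two classical volume
    estimators for a prolate spheroid (semi-axes a, b, b) observed from
    random directions. Simplified sketch of the procedure in the text."""
    true_v = 4 / 3 * np.pi * a * b ** 2
    cos_psi = rng.random(n)  # cosine of angle to viewing axis, uniform on sphere
    # silhouette of a spheroid: semi-axes b and sqrt(a^2 sin^2 + b^2 cos^2)
    major = np.sqrt(a ** 2 * (1 - cos_psi ** 2) + b ** 2 * cos_psi ** 2)
    area = np.pi * major * b
    v_esd = 4 / 3 * np.pi * (area / np.pi) ** 1.5   # ESD method (sphere of same area)
    v_ell = 4 / 3 * np.pi * major * b ** 2          # ellipse method (ellipsoid A, B, B)
    return v_esd.mean() / true_v - 1, v_ell.mean() / true_v - 1

bias_esd, bias_ell = volume_bias()
```

With this convention, `bias_esd` comes out positive (the ESD method overestimates, hence a negative correction) and `bias_ell` negative (the ellipse method underestimates, hence a positive correction), consistent with the signs reported in the text.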

This work was carried out in collaboration with Clara Sanchez and Carole Rovère, IPMC, Sophia Antipolis, France.

Overweight and obesity are major public health issues, affecting respectively 39% and 13% of the world population (World Health Organization, 2016). They constitute prominent risk factors for numerous chronic diseases, including diabetes, cardiovascular diseases, and cancer. Studies in animal models and humans reveal that excess-fat diets promote both a peripheral chronic inflammation and a hypothalamic neuroinflammation, which possibly leads to feeding behavior deregulation. Ascertaining whether the inhibition of early activation of two major brain cell types involved in feeding behavior (glial cells, more specifically astrocytes and microglia) in the hypothalamus could prevent obesity would offer new prospects for therapeutic treatments. To understand the mechanisms pertaining to the obesity-related neuroinflammatory response, we developed a fully automated algorithm, NutriMorph. Although some algorithms were developed in the past decade to detect and segment neural cells, they are highly specific, not fully automatic, and do not provide the desired morphological analysis. Our algorithm copes with these issues and performs the analysis of cell images (here, microglia of the hypothalamic arcuate nucleus) and the morphological clustering of these cells through statistical analysis and machine learning. However, graphical user interfaces at key steps of the pipeline are helpful to tune the parameters of the algorithm on a couple of images before running the analyses in batch mode on a whole dataset. Therefore, we developed such an interface for the soma detection task, allowing the user to check and select the cells for further analysis (see Figure ).

This work was carried out in collaboration with Jérémie Roux, Ircan, Nice, France.

The detection of cell division and death events is essential for understanding tumor cell responses to cancer therapeutics and for obtaining robust metrics of pharmacodynamics. Knowing precisely when these events occur in a live-cell experiment makes it possible to study the relative contributions of different drug effects, such as cytotoxic or cytostatic effects, on a cell population. Yet classical methods require event-specific dyes (or markers) and measure cell viability as an end-point assay with whole-population counts, where proliferation rates can only be estimated when both viable and dead cells are labeled simultaneously. As an alternative, we are developing a dynamic analysis framework for detecting single-cell events in marker-free time-lapse microscopy experiments of drug pharmacological profiling. Cell tracking is performed using a basic approach which has been adapted to account for cell divisions. Cell divisions are detected topologically on the tree-shaped cell trajectories produced by the tracking step. A pattern matching procedure is then applied to the cell intensity entropy to filter out spurious divisions arising from debris tracking after cell death. Cell deaths are detected topologically as trajectories ending before the last time-lapse frame, or using pattern matching on the cell intensity entropy (see Figure ).

This work was carried out in collaboration with Aurelia Vernay (Bayer), as part of a contract with Bayer.

We have developed an alternative classification approach based on optimal transport, whose strength lies in its ability to take into account the geometry of the sample distribution. We proposed to transform the data so that they follow a simple model (in practice a Gaussian model), the complexity of the data then being "hidden" in the transport transformation. This approach also offers an original way to estimate the probability density function (PDF) underlying a population sample. For that, we used optimal transport (OT) to transform one distribution into another at the lowest cost. In our case, we transform the starting data into data that follow a normal distribution. Thus, for each class, we generate a Gaussian with the same mean and variance as the starting distribution. We learn the transport between source and target samples. Then we transport the samples via the learned transport, and we calculate source sample PDF values using the positions of the transported samples and the parameters (mean and variance) of the class distribution. An extrapolation function is then learned based on the class samples and the recovered PDFs. Finally, we obtain, for each class, a model that gives the PDF of any sample (see figure ).
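A 1D toy version of the idea: transport the samples onto a standard Gaussian via the (monotone) quantile map, then recover the source PDF through the change-of-variables formula p(x) = phi(T(x)) T'(x). The actual project works in higher dimension with a learned transport, so this is only an analogue:

```python
import numpy as np
from scipy.stats import norm

def ot_pdf_1d(samples, x_query):
    """Estimate the PDF of 1D samples by transporting them onto a
    standard Gaussian (empirical quantile map T) and applying the
    change of variables p(x) = phi(T(x)) * T'(x). Toy sketch only."""
    xs = np.sort(np.asarray(samples, float))
    n = len(xs)
    q = (np.arange(n) + 0.5) / n          # empirical quantile levels
    targets = norm.ppf(q)                 # Gaussian images T(x_i)
    T = np.interp(x_query, xs, targets)   # monotone transport map
    dT = np.gradient(T, x_query)          # numerical derivative T'
    return norm.pdf(T) * np.maximum(dT, 0.0)
```

In 1D the optimal transport map between two distributions is exactly this monotone quantile map, which is why the sketch is a faithful, if low-dimensional, analogue of the approach.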

This work was carried out in collaboration with Aurelia Vernay (Bayer), as part of a contract with Bayer.

To model the growth of the Botrytis cinerea fungus, we consider a fungus as a tree structure whose growth is an iterative process for which some events must be defined and the probability of their occurrence computed. We first acquired several microscopy time-lapses of growing fungi. The first step is to segment the fungi; we used a Canny-based procedure. After segmentation, we corrected the masks according to an inclusion hypothesis (fungi do not move and can only grow). Then, the fungi are tracked over time using the classical overlapping hypothesis. Finally, we extracted the fungi skeletons, converted them to graphs, and computed three parameters per fungus: the total length of the skeleton and the numbers of primary and terminal branches. From these data, we studied the evolution of the length parameter over time and chose an exponential law as the length evolution model. The law coefficients were estimated independently for each fungus using the least squares method, and their respective distributions were computed. This makes it possible to generate realistic random length evolutions. We also studied the branch creation events over time and concluded that these event occurrences do not exhibit any particular trend. The proposed fungus growth simulator represents a fungus by a tree graph whose first node corresponds to the initial spore. A branch is composed of one or more edges. The fungus growth is generated by successive creations of edges, without any notion of thickness. At each time step (or iteration), the graph branches lengthen by an additional edge. Additionally, a new branch can be created, either from a terminal node (split at the ending point of a branch) or from the initial (spore) node, by adding a new edge according to a Bernoulli law. Each of these edge additions relies on two parameters: a length and an angle.
The length depends on coefficients drawn according to the distributions previously estimated in the exponential law context, and on the number of edges to be created. The angle is drawn randomly within fixed intervals (see Figure ).
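The iterative growth process can be sketched as follows. Edge lengths and the branching probability are fixed here for brevity, whereas the project draws them from the estimated distributions, and geometry (angles) is omitted; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_fungus(n_steps=30, p_branch=0.15, edge_len=1.0):
    """Toy tree-growth simulator in the spirit of the text: at each
    step every branch gains one edge, and with Bernoulli probability
    p_branch a new branch starts from a terminal or spore node."""
    branch_lengths = [0.0]      # one initial branch rooted at the spore
    total_per_step = []
    for _ in range(n_steps):
        # every existing branch lengthens by one edge
        branch_lengths = [L + edge_len for L in branch_lengths]
        # branch creation event drawn from a Bernoulli law
        if rng.random() < p_branch:
            branch_lengths.append(0.0)
        total_per_step.append(sum(branch_lengths))
    return branch_lengths, total_per_step

branches, totals = grow_fungus()
```

The total skeleton length grows at every step (each branch adds one edge), while the branch count increases only at the random Bernoulli events, mirroring the two event types described above.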

In this project, we aim at identifying parametric objects in an image using a deep learning approach. The goal is to detect and segment a collection of objects. In a first step, we simulated a database of noisy images containing disks or ellipses. We first considered a circle detection CNN architecture on images containing one disk. We then applied a transfer learning scheme, where the convolutional layers were frozen, to generalize the network to the ellipse case. The next step is to address a collection of objects. We considered the Faster R-CNN network to draw a bounding box around each object. We can see that the Faster R-CNN is able to disambiguate clusters of objects by drawing a bounding box around each individual object, avoiding the need for a non-overlapping prior (see figure ). We then extract the image patch corresponding to each bounding box and compare two strategies: segmenting the ellipse using the ellipse detection CNN, and applying an ellipse fitting algorithm. This two-step framework extracts clustered objects modeled by ellipses from images. The next step will consist of a transfer learning stage to address real data.

This work has been carried out in collaboration with Prof. Alin Achim from the University of Bristol.

Uveitis is an inflammatory disease that affects the eye and is considered a leading cause of vision loss in young and old people around the world. Experts confirm that early and accurate detection of vitreous inflammation can potentially reduce the rate of blindness. In this work, we have defined a CNN-based approach that allows the detection of the disease even in early stages, i.e. retinas at day 2 of the development of the disease (see figure ). Furthermore, we applied an explainable AI method (Grad-CAM) to help clinicians know where the disease is located and why the algorithm gives a prediction. The performance obtained is of the order of 80% in distinguishing whether a 2D image corresponds to an inflamed retina, across the different phases of the disease.

Often in clinical routine, specialists need statistical tools that can quantify the distribution of particles in diseased eyes. Since the particles are small and annotating them with bounding boxes or masks is costly in terms of time and money, we present an automatic method to segment small particles based on weak annotations (points). It is based on the FCN8 architecture with a loss function containing four terms.

These terms include an image term which tends to place objects on bright areas, a consistency term between the annotations and the detected points, a consistency term between the watershed segmentations obtained from the markers given respectively by the annotations and by the detections, and a last term preventing false detections.

After having detected the particles, we consider their localization with respect to the retina and analyze their distribution using the Ripley K function. We thus show a clustering effect that may reflect a local inflammation of the retina near the optic nerve (see figure ). Further work will consist of grading the pathology.
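For reference, a naive estimator of Ripley's K function (no edge correction; illustrative only). Under complete spatial randomness K(r) is close to pi r^2, and values above that indicate clustering, which is the effect reported above:

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive estimator of Ripley's K function for a 2D point pattern.
    No edge correction is applied, so this is a sketch rather than the
    estimator used in the study."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    lam = n / area                           # intensity (points per unit area)
    k = []
    for r in radii:
        count = ((d > 0) & (d <= r)).sum()   # ordered pairs closer than r
        k.append(count / (n * lam))
    return np.array(k)
```

Comparing the estimated K(r) against the pi r^2 baseline at several radii is the standard way to read off clustering versus regularity.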

This work was carried out in collaboration with Damien Ambrosetti, Nice CHU.

Vascular network analysis is crucial to define the tumoral architecture and thus diagnose the Renal Cell Carcinoma (RCC) subtypes. However, automatic vascular network segmentation from histopathological images is still a challenge due to the background complexity. We propose a method that reduces reliance on labeled data through semi-supervised learning (SSL). Additionally, considering the correlation between tumor classification and vascular segmentation, we propose a multi-task learning (MTL) model, shown in Fig. , which can simultaneously segment the vascular network using SSL and predict the tumor class in a supervised context. This multi-task learning procedure offers an end-to-end machine learning solution to jointly perform vascular network segmentation and tumor classification. We computed the ratio of missed vessels (MV), the ratio of falsely detected vessels (FV) and a global performance index (IV) to evaluate our segmentation results. Fig. shows the segmentation results of the proposed MTL and Tab. gives a quantitative evaluation.

This work was carried out in collaboration with Francesco Ponzio from the Politecnico di Torino (Italy) and Damien Ambrosetti from Nice CHU.

Renal cell carcinoma (RCC), which typically emerges from the renal tubules, is currently categorized into five main histological sub-types: clear cell, papillary, chromophobe, collecting duct, and unclassified RCC. Among them, the three most common types are clear cell, papillary, and chromophobe, accounting respectively for 70% to 80%, 14% to 17%, and 4% to 8% of all RCCs. Collecting duct carcinoma is the most uncommon class of RCC (1%), and the unclassified category gathers those types which do not fit, morphologically or cytogenetically, into any of the other four sub-types.
CNNs (Convolutional Neural Networks) appear to be a successful approach for image classification purposes. However, they are data-hungry supervised algorithms, requiring tens of thousands of annotated images to be proficiently trained. This aspect is still an open issue that limits CNN usability in the everyday clinical setting. Transfer learning (TL), which leverages knowledge from related domains (called source domains) to facilitate learning in a new domain (called target domain), is known to be a good solution to this problem. Originally defined as a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem, TL was initially considered capable of shifting and reusing knowledge between two similar/related domains of interest, for instance using feature embeddings extracted from a nudity detection model in a new facial recognition task. Nonetheless, debate continues about the practicability of TL-based embeddings in contexts distant from the original training domain, or in the presence of a limited number of annotated samples. In this regard, the term negative transfer (NT) specifically refers to situations where the source knowledge has a negative impact on the target learner, causing reduced learning performance in the target domain.

In this study, we applied several TL-based and fully-trained state-of-the-art CNNs to the RCC sub-typing task. We show how both learning paradigms failed and, specifically, how TL led to NT, producing unsatisfactory performance in the target domain.
Indeed, the RCC sub-typing task combines conditions known to be problematic even for TL-based solutions: a complex classification problem, arduous even for skilled pathologists, together with a limited number of annotated samples. Our results showed that canonical TL from general-purpose images, and even from other histological source domains, fails and induces NT. We therefore proposed a hybrid approach combining TL with expert knowledge, named DeepRCCTree, which outperforms several state-of-the-art CNNs in sub-typing RCC (see figure ). The different binary classification tasks included in this decision tree are directly inspired by histopathological practice.
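The tree-of-binary-classifiers idea behind DeepRCCTree can be illustrated schematically. The ordering of the decisions and the toy classifiers below are purely hypothetical; in the actual work each node is a CNN-based binary classifier and the cascade follows histopathological practice:

```python
# Minimal sketch of a diagnostic decision tree in the spirit of DeepRCCTree:
# each internal node is a binary classifier (here a hypothetical callable
# returning True/False), and a sample is routed down the cascade.

def classify_rcc(sample, is_clear_cell, is_papillary, is_chromophobe):
    """Route a sample through a cascade of binary decisions."""
    if is_clear_cell(sample):
        return "clear cell"
    if is_papillary(sample):
        return "papillary"
    if is_chromophobe(sample):
        return "chromophobe"
    return "collecting duct / unclassified"

# Toy classifiers keyed on a dummy feature, for illustration only.
label = classify_rcc(
    {"feature": 0.9},
    is_clear_cell=lambda s: s["feature"] > 0.8,
    is_papillary=lambda s: s["feature"] > 0.5,
    is_chromophobe=lambda s: s["feature"] > 0.2,
)
```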

This work was carried out in collaboration with Georgios Stamatas (Johnson & Johnson Consumer Health R&D) as part of a collaboration contract with Johnson & Johnson.

We propose an automated approach to identify cells in a Reflectance Confocal Microscopy (RCM) image of skin. The study of RCM images provides information on the topological and geometrical properties of the epidermis. These properties change in each layer of the epidermis, with age, and with certain skin conditions. RCM can also reveal dynamic changes in the epidermis structure in response to stimuli, as it enables repeated sampling of the skin without damaging the tissue. Studying RCM images requires manual identification of each cell to derive geometrical and topological information, which is time-consuming and subject to human error. More insights could be derived from these data were it not for the tediousness of their manual segmentation, highlighting the need for an automated cell identification method. Our automated method follows three general steps: (1) identifying the region of interest on the RCM image by removing any areas not containing cells, (2) identifying cells within the region of interest using different filters, and (3) rebuilding the identified cells. Pre-processing is required before each of these steps. Accuracy assessment is complicated on real RCM images since, due to inter-expert variability, no absolute ground truth exists. To assess our results more reliably and to guide parametrization, we created synthetic images similar to RCM images in their elements and noise. The obtained segmentation results show that identification of cells on RCM images is possible (see Figure ).
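The three steps can be sketched as follows; a 3x3 mean filter and local-maximum detection stand in, as simplified assumed placeholders, for the actual filters of the method:

```python
import numpy as np

def identify_cells(img, roi_thresh=0.1, peak_thresh=0.5):
    # (1) Region of interest: keep only areas bright enough to contain cells.
    roi = img > roi_thresh
    # (2) Filtering: 3x3 mean smoothing followed by local-maximum detection.
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    centers = []
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            patch = smooth[x - 1:x + 2, y - 1:y + 2]
            if roi[x, y] and smooth[x, y] >= patch.max() and smooth[x, y] > peak_thresh:
                centers.append((x, y))
    # (3) Rebuilding: the detected maxima serve as cell seeds; the full
    # method then grows cell regions around them.
    return centers

# Synthetic test image with two bright square "cells".
demo = np.zeros((20, 20))
demo[4:7, 4:7] = 1.0
demo[13:16, 13:16] = 1.0
centers = identify_cells(demo)
```

Parametrization on synthetic images of this kind (with controlled elements and noise) is what guides the thresholds in the real pipeline.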

A patent has been submitted for this work.

This project has been conducted in collaboration with M. Fürthauer (iBV) and supported by an UCA Academy 2 project.

In this project we study the laterality of organs in the zebrafish embryo and, more particularly, the breaking of the left-right symmetry. This phenomenon occurs in a cavity called the Kupffer vesicle (KV), which contains particles moving in a flow. In mutant embryos (Myo1D gene mutation), the flow in the cavity is disturbed, which generates a defect in organ laterality.

This year we first created velocity maps to validate the orientation of the flow in the cavity (circular, anti-clockwise). We recovered trajectory positions obtained through the semi-automatic tracking of exogenous particles (fluorescent beads) developed last year (see the Morpheme 2020 activity report). We extrapolated the trajectories using a sliding-window principle to obtain velocities at all points of the KV. We then considered a diffusion model built on these velocity maps and estimated the associated parameters. This model was then used to simulate trajectories inside the cavity. These simulations lead to a reliable estimate of the density of particles in the different angular sectors of the KV and of the distribution of hitting points on the cavity membrane.
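A minimal sketch of such a simulation, assuming a synthetic anti-clockwise velocity map and an Euler-Maruyama discretisation of a diffusion model dX = v(X) dt + sigma dW (the grid, step sizes and noise level below are illustrative, not the estimated ones):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretised velocity map on a grid covering the cavity:
# a synthetic anti-clockwise rotational flow around the centre.
n = 32
yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
vx, vy = -0.05 * yy, 0.05 * xx

def simulate(x0, y0, sigma=0.2, dt=1.0, steps=200):
    """Euler-Maruyama simulation of dX = v(X) dt + sigma dW."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        # Look up the drift in the velocity map at the current position.
        i = int(np.clip(round(y + (n - 1) / 2.0), 0, n - 1))
        j = int(np.clip(round(x + (n - 1) / 2.0), 0, n - 1))
        x += vx[i, j] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y += vy[i, j] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        traj.append((x, y))
    return np.array(traj)

traj = simulate(8.0, 0.0)
```

Repeating such simulations many times yields empirical particle densities per angular sector and hitting-point distributions on the boundary.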

We then developed an automatic pipeline to accurately detect and track endogenous particles, to avoid the injection of fluorescent beads, which is invasive and can disturb the cavity flow (see Fig. ). We first removed the heterogeneous non-stationary background from the images by subtracting a local time mean. Then, we projected the image sequences into a

Endogenous particle trajectories are sparse, motivating the use of additional simulated trajectories to compute the particle density statistics.
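The background-subtraction step mentioned above (removal of a local time mean) can be sketched as follows; the window size and toy data are illustrative:

```python
import numpy as np

def remove_background(frames, window=5):
    """Subtract a sliding-window temporal mean from each frame to
    suppress a heterogeneous, slowly varying background."""
    T = len(frames)
    out = np.empty_like(frames, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        out[t] = frames[t] - frames[lo:hi].mean(axis=0)
    return out

# Toy sequence: a static background plus one moving bright particle.
frames = np.ones((10, 8, 8)) * 5.0
for t in range(10):
    frames[t, t % 8, 0] += 3.0
clean = remove_background(frames)
```

The static background cancels exactly, while the fast-moving particle survives the subtraction, which is what makes the subsequent detection step tractable.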

This work was carried out in collaboration with Manuel Petit, Guillaume Cerutti and Christophe Godin from Mosaic Inria team, within the IPL Naviscope (section ).

Automatic tracking of cell deformation during development using time-lapse confocal microscopy is a challenging task. In plant cell tissues, large deformations and several division cycles can occur between two consecutive time-points, making the image registration and tracking procedure particularly difficult. Here, we propose an iterative approach in which an initial registration transformation and a cell-to-cell mapping are alternately refined using high-confidence associations selected on the basis of a geometric context preservation score. The method, evaluated on a long time-lapse series of a floral meristem, clearly demonstrates its superiority over a non-iterative approach. In addition, we show that the geometric context preservation score can be used to define a lineage quality assessment metric, which makes it possible for an expert to provide local nudges and finalize the lineage detection, if necessary, in a semi-automatic way.
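The alternation between transform estimation and cell-to-cell mapping can be caricatured on point clouds. This toy version uses plain nearest-neighbour matching and an affine least-squares refit, and omits the geometric-context-preservation scoring that the actual method relies on to select high-confidence pairs:

```python
import numpy as np

def iterative_mapping(P, Q, iters=5):
    """Alternately (a) re-map cells of P to cells of Q by nearest
    neighbour under the current transform, and (b) refit an affine
    transform from the matched pairs by least squares."""
    T, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # (a) cell-to-cell mapping of the transformed P onto Q.
        PT = P @ T.T + t
        match = np.argmin(((PT[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        # (b) affine refit (homogeneous least squares) from the pairs.
        A = np.hstack([P, np.ones((len(P), 1))])
        sol, *_ = np.linalg.lstsq(A, Q[match], rcond=None)
        T, t = sol[:2].T, sol[2]
    return match

# Toy example: Q is P translated; the mapping recovers the identity pairing.
P = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
Q = P + np.array([0.1, -0.05])
m = iterative_mapping(P, Q)
```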

This work has been carried out in collaboration with Killian Biasuz, Benjamin Gallean and Patrick Lemaire from CRBM, Montpellier, within the ANR Cell Whisper (section ).

Since the pioneering work of Conklin , it has been accepted that ascidians exhibit a stereotyped development in the first stages of their embryogenesis. As a consequence, embryo cells can be unambiguously and canonically named after their fate: this provides a common reference space into which populations of individuals can be projected and compared (see Fig. ). Moreover, this stereotypy also provides a means to name a new sequence from already named ones.

We hypothesized that the developmental stereotypy induces cell neighborhood invariance, so that the cell neighborhood can be viewed as a cell fingerprint. The definition of cell-to-cell as well as division-to-division distances then makes it possible to name a newly acquired embryo from one or several atlases. As a side result, it also provides a means to investigate developmental variability at the cellular level.
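As an illustration of the neighbourhood-as-fingerprint idea, even a simple Jaccard distance between named neighbourhoods allows naming by nearest atlas match. The cell names and the distance below are illustrative simplifications; the actual cell-to-cell and division-to-division distances are more elaborate:

```python
def neighborhood_distance(neigh_a, neigh_b):
    """Jaccard-type distance between the named neighbourhoods of two cells."""
    a, b = set(neigh_a), set(neigh_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def name_cell(query_neigh, atlas):
    """Name a new cell after the atlas cell with the closest neighbourhood."""
    return min(atlas, key=lambda name: neighborhood_distance(query_neigh, atlas[name]))

# Toy atlas mapping canonical cell names to their named neighbourhoods.
atlas = {"A7.1": {"A7.2", "A7.5", "B7.3"},
         "A7.2": {"A7.1", "A7.6", "B7.4"}}
guess = name_cell({"A7.2", "A7.5", "B7.3"}, atlas)
```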

This work is made in collaboration with Barthélémy Delorme and Matteo Rauzi (iBV, Nice).

The project aims at understanding the mechanisms driving the larval gut formation in the embryo of the sea urchin Paracentrotus lividus .

We used the Astec pipeline (see section and ) to process several sets of data and performed targeted quantifications to improve our understanding of archenteron formation.

The sea urchin embryo is composed of a simple epithelium – a layer of mechanically coupled cells – forming a hollow, bell-shaped organism. At a given point of the embryo, known as the vegetal pole, the epithelium folds and elongates concomitantly to form the rudiment of a tube, the archenteron (Figure ).

One mechanism commonly found in various model systems to drive tissue folding is apical constriction: a group of cells reduces its surface on one side and expands it on the opposite side . In the sea urchin embryo, a ring of apically constricted cells is indeed present during archenteron invagination (blue cluster in Figure A). Nevertheless, we also imaged and analyzed a mutant that does not display archenteron invagination: the ring-shaped cluster of apically constricted cells is present there too (cf Figure B), with similar characteristics (number of cells and level of constriction) (cf Figure C&D). We thereby demonstrate for the first time that apical constriction is not sufficient to drive archenteron folding in the sea urchin embryo.

Various mechanisms may drive tissue elongation; nevertheless, neighbor exchange via cell intercalation is often found to be a key factor of extension in morphogenetic processes. The 3D segmentations allowed us to systematically track cell intercalation (cf Figure A-D). We could thus demonstrate that, from the onset of archenteron invagination, the tissue elongates via cell intercalations mainly located in the region surrounding the vegetal pole (cf Figure E).

This project is in collaboration with Simone Rebegoldi (University of Florence).

The purpose of this project is to design novel sparse optimisation approaches for inverse problems in image microscopy. To do so, the following challenges will be addressed:

The use of inexact and variable-metric approaches for imaging inverse problems has gained a lot of attention internationally in recent years, but it has not been sufficiently explored by the French imaging community. Along with his collaborator S. Bonettini (UniMoRe, Italy), secondary partner of the project, S. Rebegoldi is an expert on the topic; his expertise could boost French research activity in this area with new ideas and establish long-lasting collaborations. The project will last until October 2022.

Luca Calatroni is local PI and WP coordinator of the H2020 RISE project on Nonlocal methods for arbitrary data sources.

The Morpheme team belongs to the Labex (Laboratory of Excellence) Signalife. This Labex brings together four biology institutes in Nice and Sophia Antipolis and two Inria teams.

This project is in collaboration with Dario Prandi (L2S) [PI], Valentina Franceschi (UniPD), Laurent Perrinet (INT) and Giuseppina Turco (LLF, Paris).

In this project we will design novel variational models defined in suitable functional spaces to translate the neurobiological principle of sensory efficient representation into mathematical terms. This will be done by addressing the following three different problems:

Our goal is to formulate new effective reference models adapted to describe both visual and auditory perceptual phenomena, overcoming the limitations associated with WC equations and L+NL models.

This project will be carried out by coupling the development of rigorous analytical theories with their numerical and experimental validation. It features a focused team of young researchers with combined expertise in variational methods, signal processing, optimisation, control theory, sparse coding, computational neuroscience, and speech analysis. The project will last until October 2023.

In this project we propose to use cutting-edge organoid technology to test the toxicity of endocrine disruptors (EDCs) on human organs. The aim is to develop computational tools and models that allow the use of organoid technology for EDC toxicity testing. The project is thus divided into two main objectives: to build up and analyze a phenotypic landscape of EDC effects on organoids, and to develop explicative or predictive models for their growth. The first goal is to define and construct a phenotypic map of organoids, modelled as graphs (the nodes representing the cells, the edges adjacency between them), for classifying EDC families. The second is to classify organoid growth trajectories on this map. We will consider two organoid models, gastruloids and prostate organoids. To derive the phenotypic map we will combine a graph representation and a deep learning approach. The deep learning approach will be considered for its discriminating properties, whereas a correspondence between the bottleneck layer of the chosen neural network and the stratified graph space will bring some explainability to the derived classification.
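The graph representation of an organoid can be sketched as follows; a plain distance threshold on cell centres is an assumed simplification of true membrane adjacency:

```python
import numpy as np

def organoid_graph(centers, radius=1.5):
    """Build the adjacency graph of an organoid: nodes are cells (given
    by their centre coordinates), edges link pairs of cells whose centres
    are closer than `radius`."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < radius:
                edges.add((i, j))
    return {"nodes": list(range(n)), "edges": edges}

# Four cells in a row with spacing 1.0: only consecutive cells are adjacent.
g = organoid_graph([[0, 0], [1, 0], [2, 0], [3, 0]])
```

Such graphs, computed per time-point, are the objects that would populate the stratified graph space on which growth trajectories are classified.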

This 4-year project started in November 2021 and is led by X. Descombes. It involves three groups: C3M (S. Clavel, Nice), Metatox, Inserm (X. Coumoul, Paris) and Morpheme.

Successful embryogenesis requires the differentiation of the correct cell types, in defined numbers and in appropriate positions. In most cases, decisions taken by individual cells are instructed by signals emitted by their neighbours. A surprisingly small set of signalling pathways is used for this purpose. The FGF/Ras/ERK pathway is one of these, and mutations in some of its individual components cause a class of human developmental syndromes, the RASopathies. Our current knowledge of this pathway is, however, mostly static. We lack an integrated understanding of its spatio-temporal dynamics and can only imperfectly explain its highly non-linear response to a graded increase in input stimulus.

This systems biology project combines advanced quantitative live imaging, pharmacological/optogenetics perturbations and computational modelling to address 3 major unanswered questions, each corresponding to a specific aim:

Through this approach, in a simplified model system, we hope to gain an integrated view of the spatio-temporal dynamics of this pathway and of its robustness to parameter variations. Participants are CRBM (Montpellier), LIRMM (Montpellier), MOSAIC (INRIA Lyon) and Morpheme.

This action gathers the expertise of seven Inria research teams (Aviz, Beagle, Hybrid, Morpheme, Parietal, Serpico and Mosaic) and other groups (MaIAGE, INRAE, Jouy-en-Josas and UMR 144, Institut Curie Paris). It aims at developing original and cutting-edge visualization and navigation methods to assist scientists, enabling semi-automatic analysis, manipulation and investigation of temporal series of multi-valued volumetric images, with a strong focus on live cell imaging and microscopy application domains. More precisely, the following three challenges will be addressed:

The project involves a large network of research scientists and is led by I. Théry-Parisot (CEPAM).

It focuses on the analysis of high-resolution imaging data (2D, 3D, tomography) for the characterisation of objects at different scales, such as frescoes, manuscripts, biological archives (wood, bones) and geological findings (pottery). It focuses in particular on exploiting the potential of applied mathematics and artificial intelligence in archaeology and art history. The use of tools from artificial intelligence and, in particular, of advanced imaging techniques is expected to improve the interpretation of historical and archaeological data. One specific aspect is also to make the large databases available to the AI communities.

The work package Imag'In on imaging within this project (led by Luca Calatroni and Rosa Maria Dessí, CEPAM) received the PRIME label from the CNRS (see ).

In this project we study the left-right organizer (LRO) of the zebrafish, which is involved in organ laterality. The zebrafish LRO is a vesicle in which a flow is produced by cilia beats. To study this flow we consider trajectories of endogenous and/or exogenous particles . In a previous work we constructed a pipeline to detect the vesicles. During this new project, the goal was to develop a statistical framework to characterize the flow, first by computing mean and variance velocity maps in the vesicle. We then considered a diffusion process whose parameters were estimated from these maps. This model allows simulating new trajectories, from which statistical features, such as the particle density or the distribution of hitting events on the vesicle membrane, are computed.

The project is a collaboration with M. Fürthauer from iBV (Nice).