The MIMESIS team develops numerical tools for advanced simulations in the context of surgical training, planning and per-operative guidance (see Fig. 1).
The underlying objectives include patient-specific biophysical and electrophysiological modeling, novel numerical techniques for real-time computation, data assimilation using Bayesian methods and more generally data-driven simulation.
This last topic is a transverse research theme which raises several open problems, related to the field of machine learning.
To pursue these directions we have assembled a team with a multidisciplinary background, and have established close collaborations with academic and clinical partners, in particular the IHU institute in Strasbourg.
We also continue the development of the SOFA framework through the creation of a consortium, to better support the increasingly large community of users.
Image- and signal-guided therapy has revolutionized medicine through its ability to provide care that is both efficient and effective. However, images and multivariate signals acquired during an intervention are often incomplete, under-exploited, or can even induce adverse outcomes. This can be due, for instance, to the limited dimensionality of X-ray images and the associated radiation exposure for the patient, or to the spatial sub-sampling of multivariate signals caused by too few measurement electrodes. We believe that by combining our expertise in real-time numerical simulation (of soft tissues, flexible medical devices, and complex interactions) with data extracted from intra-operative images and experimental multi-electrode signals, we can provide efficient per-operative guidance. To reach these objectives we need to solve challenges that lie at the intersection of several scientific domains. They include the development of novel numerical strategies (to enable real-time computation even as the complexity of future models increases) and data-driven simulation (to link simulation with real-world data).
The principal objective of this challenge is to improve, at the numerical level, the efficiency, robustness, and quality of the simulations (see Fig. 2). An important part of our research is dedicated to the development of computational models that remain compatible with real-time computation, i.e., which allow immediate visual or haptic feedback. This typically requires computation times below 50 ms, and in some cases around 1 ms. Such advanced models can not only increase the realism of future training systems, but also act as a bridge toward the development of patient-specific solutions for computer-aided interventions. Additionally, such simulations should run on (high-end) consumer-level computers (i.e. with a single multi-core CPU and a dedicated GPU).
To reach these goals, in a first topic we are investigating novel finite element techniques able to cope with complex, potentially ill-defined input data. After developing Smoothed FEM for real-time simulations, we are now developing meshless techniques and immersed boundary methods. The former are well suited to the topological changes which we sometimes need to account for in our simulations. The latter are expected to lead to more stable and numerically efficient formulations of the finite element method. We are also developing numerical techniques to compute the complex interactions that can take place between anatomical structures or between medical devices and organs. Boundary conditions are also known to play an important role in the solution of such problems. We are therefore investigating solutions to both identify and model the interactions that take place between the structure of interest and its anatomical environment.
In a different research topic, the team develops neuronal network models that describe the interactions between brain areas under neural stimulation. Each neuron exhibits stochastic spiking activity at the millisecond scale, and each brain area comprises thousands of neurons. To achieve real-time numerical integration of the dynamical network models, we aim to reduce their dimensionality and, to this end, analytically derive dimension-reduced mean-field models. These models capture the major dynamics of the network models and typically take the form of stochastic delayed differential equation systems. For evaluation, the network models are simulated on multi-core CPU computers.
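As an illustration of the numerical integration involved, the following sketch applies the Euler-Maruyama scheme to a toy stochastic delayed differential equation. The equation, its parameters and the `simulate_delayed_sde` helper are illustrative assumptions, not the team's actual mean-field model.

```python
import numpy as np

def simulate_delayed_sde(g=0.5, tau=0.01, sigma=0.05, dt=1e-4, t_end=0.1, seed=0):
    """Euler-Maruyama integration of a toy delayed stochastic rate model:
        dx = (-x(t) + g * tanh(x(t - tau))) dt + sigma dW
    (an illustrative form; the actual mean-field equations differ)."""
    rng = np.random.default_rng(seed)
    n_delay = int(round(tau / dt))        # delay expressed in time steps
    n_steps = int(round(t_end / dt))
    x = np.zeros(n_delay + n_steps + 1)   # history buffer: x = 0 for t <= 0
    for i in range(n_delay, n_delay + n_steps):
        drift = -x[i] + g * np.tanh(x[i - n_delay])
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x[n_delay:]                    # trajectory for t in [0, t_end]
```

The delay is handled by keeping the trajectory's own history in the integration buffer, which is the standard trick for delayed systems of this kind.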
Data-driven simulation has been a recent area of research in our team (see Fig. 3). We have demonstrated that it has the potential to bridge the gap between medical imaging and clinical routine by adapting pre-operative data to the time of the procedure. In the areas of non-rigid registration and augmented reality during surgery, we have demonstrated the benefit of our physics-based approaches with several key publications in major conferences (MICCAI, CVPR, IPCAI, ISMAR).
We have continued this work with an emphasis on robustness to uncertainty and outliers in the information extracted in real-time from image data, as well as real-time parameter estimation. This is currently done by combining Bayesian methods with advanced physics-based methods to handle uncertainties in image-driven simulations.
Finally, Bayesian and similar methods require a large number of simulations to sample the parameter space, even when using efficient methods such as Reduced-Order Unscented Kalman Filters. For this reason, we are investigating the use of neural networks to perform predictions instead of running full numerical simulations. A recent paper [31] shows that it is possible to train a neural network on numerical simulations and predict, with good accuracy, the deformation of an organ.
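The idea of replacing repeated simulations with a learned predictor can be sketched as follows. The linear least-squares surrogate and the synthetic compliance-matrix "simulator" below are stand-ins for the neural network and FEM solver of the cited work; all names are illustrative.

```python
import numpy as np

def make_dataset(n_samples=200, n_nodes=10, seed=0):
    """Synthetic stand-in for offline FEM runs: a fixed compliance matrix
    maps an applied nodal force vector to nodal displacements."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_nodes, n_nodes))
    compliance = A @ A.T / n_nodes + np.eye(n_nodes)  # SPD, plays the role of K^-1
    forces = rng.standard_normal((n_samples, n_nodes))
    displacements = forces @ compliance.T
    return forces, displacements, compliance

def fit_surrogate(forces, displacements):
    """Least-squares surrogate trained on simulated force/displacement pairs;
    it stands in here for the neural network of the cited work."""
    W, *_ = np.linalg.lstsq(forces, displacements, rcond=None)
    return W

forces, displacements, compliance = make_dataset()
W = fit_surrogate(forces, displacements)
unseen_force = np.ones(10)
predicted_disp = unseen_force @ W           # fast surrogate prediction
reference_disp = compliance @ unseen_force  # what the "simulation" would give
```

Once trained, the surrogate answers each query with a single matrix product, which is what makes it attractive for sampling-heavy Bayesian methods.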
Virtual training helps medical students become familiar with surgical procedures before operating on real patients. The development of simulations used for medical training usually requires substantial computational power, since realistic behavior is key to delivering a high-fidelity experience to the trainee. Furthermore, the quality of interaction with the simulator (usually via visual and haptic rendering) is of paramount importance. All these constraints make the development of training systems time-consuming, thus limiting the deployment of virtual simulators in the standard medical curriculum.
Beyond training, clinicians ask for innovative tools that can assist them in the pre-operative planning of an intervention. Using patient information acquired before the operation, physics-based simulations make it possible to simulate the effect of a therapy with no risk to the patient. Clinicians can thus virtually assess different strategies and select the optimal procedure. Compared to a training simulation, a planning system requires high accuracy to ensure reliability. Constrained by the time elapsed between the pre-operative acquisition and the intervention, the computation must also be efficient.
Besides surgical training and planning, another major need of clinicians is surgical guidance. While the clinician is performing the operation, a guidance system provides enriched visual feedback. This is especially useful with the emergence of minimally invasive surgery (MIS), where the visual information is often strongly limited. It can be used, for example, to avoid critical areas such as vessels or to highlight the position of a tumor during its resection. In the MIS technique, the clinician does not interact with organs directly as in open surgery, but manipulates instruments inserted through trocars placed in small incisions in the wall of the abdominal cavity. The surgeon observes these instruments on a display showing a video stream captured by an endoscopic camera inserted through the navel. The main advantage of the method resides in reduced pain and recovery time, in addition to reduced bleeding and risk of infection. However, from a surgical standpoint, the procedure is quite complex since the field of view is considerably reduced and direct manipulation of the organs is not possible. For an example, see [18].
MaskDecath: diving masks converted into respirators
Patients who require ventilatory support are usually connected to ventilation systems (Continuous Positive Airway Pressure, CPAP). Faced with the shortage of respiratory assistance equipment and the large number of patients affected by COVID-19, the MaskDecath project aimed to 3D-print an adapter (and, if necessary, a valve) to transform a Decathlon diving mask into a low-cost CPAP device that could be distributed quickly. To make these adapters, we took inspiration from a similar Italian initiative, and then optimized their design through numerous tests to make them more functional. Within a few days, many parts were printed with the means available to our team and a partner team at Twente University.
To make sure that the masks were suitable for use on patients, and in particular that with our modifications the contaminated air exhaled by the patient was properly filtered before being discharged, we worked with Dr. Silvana Perretta from the civil hospital in Strasbourg. Once the tests were passed, around 100 adapters were printed: 30 of them were used at the hospital in Strasbourg, and the remaining modified masks were sent to Colmar and Mulhouse. More information on this project can be found under https://
Neurostimulation stabilizes spiking neural networks by disrupting seizure-like oscillatory transitions
An improved understanding of the mechanisms by which neuromodulatory approaches mitigate seizure onset is needed to identify clinical targets for the treatment of epilepsy. We examined the role played by intrinsic and extrinsic stimuli in a network's predisposition to sudden transitions into oscillatory dynamics, similar to the transition to the seizure state [24]. Our analyses revealed that such stimuli, be they noisy or periodic in nature, exert a stabilizing influence on network responses, disrupting the development of such oscillations. Our research shows that this stabilization of neural activity occurs through a selective recruitment of inhibitory cells, providing a theoretical underpinning for the known key role these cells play in both the healthy and the diseased brain.
Alternating quarantine for sustainable epidemic mitigation
Absent pharmaceutical interventions, social distancing, lock-downs and mobility restrictions remain our prime response in the face of epidemic outbreaks. To ease their potentially devastating socioeconomic consequences, we proposed [22] an alternating quarantine strategy: at every instance, half of the population remains under lockdown while the other half continues to be active, maintaining a routine of weekly succession between activity and quarantine. This strategy provides a dramatic reduction in transmission, comparable to that achieved by a population-wide lockdown, while sustaining socioeconomic continuity.
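A minimal discrete-time SIR sketch of the weekly alternation, with illustrative parameters unrelated to the cited study, shows the qualitative effect of a reduced final epidemic size:

```python
import numpy as np

def sir_final_size(beta=0.3, gamma=0.1, days=300, quarantine_factor=0.1,
                   alternate=True):
    """Toy discrete-time SIR model of alternating quarantine: the population
    is split into two halves that swap weekly between an active state and a
    quarantined state with reduced transmission. All parameters are
    illustrative, not those of the cited study."""
    S = np.array([0.499, 0.499])
    I = np.array([0.001, 0.001])
    R = np.array([0.0, 0.0])
    for day in range(days):
        active = (day // 7) % 2 if alternate else -1  # -1: everyone active
        for g in range(2):
            factor = 1.0 if (active == -1 or g == active) else quarantine_factor
            new_inf = factor * beta * S[g] * I.sum()  # infections in group g
            new_rec = gamma * I[g]                    # recoveries in group g
            S[g] -= new_inf
            I[g] += new_inf - new_rec
            R[g] += new_rec
    return R.sum()                                    # final epidemic size

size_baseline = sir_final_size(alternate=False)
size_alternating = sir_final_size(alternate=True)
```

Comparing the two runs, the alternating schedule yields a smaller final epidemic size than the unrestricted baseline, which is the qualitative point of the strategy.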
The Caribou project is aimed at multiphysics computation. This plugin complements the FEM-based models existing in SOFA with a more modular approach. For continuum solid mechanics, the mechanical law, the type of finite element and the quadrature method can be selected separately. This modularity eases the reading and understanding of the code and of the associated theory. The Caribou project also provides generic C++ utilities and SOFA components such as solvers.
The project is composed of two modules: (i) the Caribou library brings multiple geometric, linear-analysis and topological tools designed to be as independent as possible from external projects; (ii) the SofaCaribou library is built on top of the Caribou library and brings new components to the SOFA project as a plugin.
The finite element method (FEM) is among the most commonly used numerical methods for solving engineering problems. Due to its computational cost, various ideas have been introduced to reduce computation times, such as domain decomposition, parallel computing, adaptive meshing, and model order reduction. In an article [31] and in a thesis [33], we have demonstrated how the use of machine learning algorithms in combination with conventional numerical methods can improve computer-assisted interventions. In particular, we have presented the U-Mesh method, based on deep neural networks, which models the biomechanical behavior of an organ while respecting the specificities of each patient and satisfying the real-time constraint inherent in image-guided surgery. In the pre-operative phase, we build a finite element model of the organ with average parameters, in order to generate a training dataset for the network. Then, during surgery, the mechanical properties of the organ are identified with Bayesian inference and new patient-specific data can be generated. The weights of the previously trained network can then be updated using the new simulated data.
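The intra-operative identification step can be sketched with a one-degree-of-freedom stand-in: a grid-based Bayesian posterior over a scalar stiffness, with a toy spring model u = f / k playing the role of the patient-specific FEM model. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def posterior_over_stiffness(observed_disp, k_grid, force=1.0, noise_std=0.05):
    """Grid-based Bayesian inference of a scalar stiffness k from a noisy
    displacement observation, using a one-dof spring u = f / k as a
    stand-in forward model for the organ's finite element model."""
    predicted = force / k_grid                 # forward model per candidate k
    log_lik = -0.5 * ((observed_disp - predicted) / noise_std) ** 2
    post = np.exp(log_lik - log_lik.max())     # flat prior over the grid
    return post / post.sum()

k_grid = np.linspace(0.5, 5.0, 200)
k_true = 2.0
observed = 1.0 / k_true                        # noise-free observation of u
posterior = posterior_over_stiffness(observed, k_grid)
k_map = k_grid[np.argmax(posterior)]           # maximum a posteriori estimate
```

In the actual pipeline the forward model is a full FEM simulation, which is exactly why cheap surrogates and pre-trained networks are attractive for this step.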
The aim of this work is to provide better knowledge and understanding of pathologies such as prolapse or abnormal tissue mobility. 2D dynamic MRI sequences are commonly used in clinical routine to evaluate the dynamics of organs, but due to the limited field of view, subjectivity related to human perception cannot be avoided in the diagnosis. To this end, a novel method for 2D/3D registration is proposed [16], combining 3D finite element models with a priori knowledge of boundary conditions, in order to provide a 3D extrapolation of the dynamics of the organs observed in a single 2D MRI slice. The method is applied to the four main structures of the female pelvic floor (bladder, vagina, uterus and rectum), providing a full 3D visualisation of the organs' displacements. The methodology is evaluated with two patient-specific data sets of volunteers presenting no pelvic pathology, and a sensitivity study is performed using synthetic data. The resulting simulations provide an estimation of the dynamic 3D shape of the organs, which facilitates diagnosis compared with 2D sequences.
Observations may improve theoretical models, and several techniques are known to merge both in an optimal way. In this context, we have been working on recurrence analysis [14], a data analysis technique applied to extract an underlying dynamical model. Moreover, we have also been working on mathematical models [19, 24, 40] that aim to reproduce experimental data. Data assimilation combines both observations and models in one technique. We have employed the Localized Ensemble Transform Kalman Filter (LETKF) to estimate short-term predictions in the presence of nonlocal observations [21, 25]. Future studies on the LETKF aim to better understand how nonlocal observations affect predictions and how they should be assimilated to achieve optimal predictions. In this context, Axel Hutt has co-edited the book “Synergetics” [30], published in the Springer series “Encyclopedia of Complexity and Systems”, together with Hermann Haken (emeritus at the University of Stuttgart).
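For intuition, the following sketch implements a basic stochastic ensemble Kalman filter analysis step. The LETKF used in the cited work additionally localizes the update and applies a deterministic ensemble transform, so this is only an illustrative simplification with assumed dimensions and parameters.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_std, rng):
    """One stochastic ensemble Kalman filter analysis step (illustrative;
    the LETKF adds localization and a deterministic transform).
    ensemble: (n_members, n_state), obs: (n_obs,), H: (n_obs, n_state)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    Y = X @ H.T                                    # anomalies in obs space
    R = obs_std ** 2 * np.eye(len(obs))            # obs error covariance
    P_yy = Y.T @ Y / (n_members - 1) + R           # innovation covariance
    P_xy = X.T @ Y / (n_members - 1)               # state-obs covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    perturbed_obs = obs + obs_std * rng.standard_normal((n_members, len(obs)))
    innovations = perturbed_obs - ensemble @ H.T
    return ensemble + innovations @ K.T

rng = np.random.default_rng(0)
prior = rng.standard_normal((500, 2))              # 500 members, 2-dim state
H = np.array([[1.0, 0.0]])                         # only the first variable observed
posterior = enkf_update(prior, np.array([5.0]), H, obs_std=0.1, rng=rng)
```

After the update, the ensemble mean of the observed variable moves close to the observation and its spread collapses toward the observation error, while the unobserved variable is only adjusted through sampled correlations.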
The liver is the largest solid organ in the human body and plays a fundamental role in various vital functions. Liver diseases can therefore cause severe problems that lead to a variety of abnormalities and shorter life expectancy. Together with liver transplantation, the typical treatment of liver cancer is liver resection. A possible solution to support such surgery is to use a model that simulates the behavior of the real organ, cf. Fig. 4. Matched to the visible part, it predicts the positions of invisible parts and shows the locations of initially registered tumors. While researchers have focused on patient-specific liver properties, very few have addressed the question of boundary conditions. Resulting mainly from ligaments attached to the liver, these are not visible in pre-operative images, yet play a key role in the computation of the deformation. We have proposed [23] to estimate both the location and stiffness of the ligaments using a combination of a statistical atlas, numerical simulation, and Bayesian inference. Our approach is evaluated using synthetic and phantom data. Results show that our estimation of the boundary conditions improves the accuracy of the simulation by 75% when compared to typical methods involving Dirichlet boundary conditions.
When it comes to treating mental disorders, the emergence of resistance to medication is a major problem. Replacing chemical medicine with digital medicine (neural stimulation) could be one way of getting around the problem (https://
Intra-operative brain shift is a well-known phenomenon describing the non-rigid deformation of brain tissues due to, among other factors, gravity and loss of cerebrospinal fluid. It negatively affects surgical outcome, which often relies on pre-operative planning in which the brain shift is not considered. We present a novel brain-shift-aware Augmented Reality method to align pre-operative 3D data onto the deformed brain surface viewed through a surgical microscope [27, 28]. We formulate our non-rigid registration as a Shape-from-Template problem. A pre-operative 3D wire-like deformable model is registered onto a single 2D image of the cortical vessels, which is automatically segmented. This 3D/2D registration drives the underlying brain structures, such as tumors, and compensates for the brain shift in sub-cortical regions. We evaluated our approach on simulated and real data from 6 patients. It achieved good quantitative and qualitative results, making it suitable for neurosurgical guidance.
In our work, we address the major challenges of taking into account the complexity of the geometries on which finite element simulations are carried out. In the context of fictitious domains [39], we have developed a numerical scheme that takes the boundary data into account using a level-set function, an object that can be obtained during image segmentation. Concerning studies related to control [35, 38, 36, 37, 15], the qualitative analysis of optimal strategies allows a better understanding of the biological and physical phenomena in question and the development of faster numerical approximation techniques. We have demonstrated the feasibility of such controls and studied the impact of temporal and spatial influence. In some cases we have even given an explicit characterization of the optimal strategies. In most of these studies, we numerically illustrate the validity of the theoretical results obtained.
During a craniotomy, the skull is opened to allow surgeons to access the brain and perform the procedure. The position and size of this opening are chosen so as to avoid critical structures, such as vessels, and to facilitate access to tumours. The operation is planned based on pre-operative images and does not account for intra-operative surgical events. We present a novel image-guided neurosurgical system [20] to optimise the craniotomy opening. Using physics-based modelling, we define a cortical deformation map that estimates the displacement field at candidate craniotomy locations. This deformation map is coupled with an image analogy algorithm that produces realistic synthetic images, which can be used to predict both the geometry and the appearance of the brain surface before opening the skull. These images account for cortical vessel deformations that may occur after opening the skull and are rendered in a way that improves the surgeon's understanding and assimilation. Our method was tested retrospectively on patient data, showing good results and demonstrating the feasibility of practical use of our system.
Abdominal organs undergo large deformations due to intra-abdominal pressure (pneumoperitoneum) during laparoscopic surgery, especially large organs such as the liver. These deformations cause large inaccuracies when using surgical navigation systems. Fortunately, intra-operative CT/MRI imaging can be acquired in modern hybrid ORs, as can laparoscopic ultrasound, and both can be used to provide updated organ models. However, these imaging modalities are expensive and may extend the surgical workflow; biomechanical models could therefore serve as a solution for intra-operative registration, also accounting for organ deformations due to surgical manipulation. In our study [32], we propose a solution to compensate for pneumoperitoneum, which could greatly increase the accuracy of liver surgical navigation systems.
MIMESIS received support for the development of the project LOSAR: Liver Open Surgery with Augmented Reality, which was finalized in 2020. This project was carried out in collaboration with Paul Brousse Hospital in Paris. The goal of LOSAR was to develop the first augmented reality software usable in the operating room to assist surgeons during open liver surgery. To this end, we developed a stand-alone application to process and automate the generation of preoperative data, and improved the ergonomics of the current software to better meet clinical constraints. The ADT enabled the development of methods for RGBD cameras which facilitate the use of the method in the OR (no markers) and reduce interference with the workspace of the medical staff. Finally, we worked on solutions to improve the stability of the system and make the registration algorithms based on real-time biomechanical models more robust. Several experiments were carried out to validate the system in vivo and ex vivo.
MIMESIS has received support for the development of the project DeepPhysX: Data-driven simulation. This project aims to develop new tools for real-time navigation and registration in image-guided surgery, integrating learning methods with numerical simulations in order to obtain robust predictions adapted to the patient.
The project first builds on existing software, e.g. PyTorch, SOFA and its Python3 interface, and the Caribou plugin of SOFA. The development of the Kromagon plugin of SOFA, dedicated to deep learning (Python and C++ development), will then continue. In particular, a new Newton algorithm will be developed that combines AI-based prediction and numerical computation. Performance and accuracy evaluation, documentation and release of the plugin (binary or open source) will close the project. This work will be performed in close collaboration with the "surgical guidance" project (Hôpital Paul Brousse and/or IHU Strasbourg).
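One plausible reading of such a combination, sketched here under our own assumptions (the toy residual and the "predictor" initial guess are illustrative, not the planned algorithm), is to use an AI prediction as the initial guess of a Newton solver so that fewer corrective iterations are needed:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-8, max_iter=50):
    """Plain Newton iteration on a residual f; returns the solution and
    the number of iterations used."""
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x, it
        x = x - np.linalg.solve(jac(x), r)  # Newton correction step
    return x, max_iter

# Toy nonlinear system standing in for a FEM residual: f(x) = x + 0.1 x^3 - b.
b = np.array([0.5, -0.3])
f = lambda x: x + 0.1 * x ** 3 - b
jac = lambda x: np.diag(1.0 + 0.3 * x ** 2)

# A learned predictor would supply an initial guess close to the solution;
# here the linearized solve x ~ b plays that role (hypothetical predictor).
x_cold, it_cold = newton(f, jac, np.zeros(2))
x_warm, it_warm = newton(f, jac, b)
```

Because Newton's method converges quadratically near the solution, a good learned initial guess cuts the number of expensive corrective solves while keeping the accuracy guarantees of the numerical method.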
MIMESIS coordinates the ANR project entitled SPERRY: SuPervisEd Robotic suRgerY - application to needle insertion.
Percutaneous medical procedures (using surgical needles) are among the least invasive approaches for accessing deep internal structures of organs without damaging surrounding tissues. Today, many surgical procedures rely on needles, allowing for complex interventions such as curie-therapy (brachytherapy) or thermo-ablation of tumors (cryoablation, radiofrequency ablation). Unlike traditional open surgery, these approaches only affect a localized area around the needle, reducing trauma and the risk of complications. These treatments also offer new solutions for tumors or metastases for which traditional methods may be contraindicated due to the age of the patient or the extent or location of the disease.
In this project, we want to develop new solutions for the control of medical robots interacting with soft tissues. This work is motivated by recent advances in the field of medical simulation, which has reached a level of realism sufficient to help surgeons during an operation. The maturity of these techniques now suggests the possibility of using a simulation intra-operatively to control the motion of a robotic needle-insertion system. This is a real challenge because, in general, little information can be extracted in real time from images during an intervention. We believe that even minimal knowledge of the mechanical behavior of the structures, combined with image information, can allow a robot to reach a target identified during a planning stage, without human intervention.
In addition, we participate in the ANR project VATSOP: Images and models for computer guidance during Video-Assisted Thoracic Surgery (VATS). In France, lung cancer is the most common cause of death by cancer, ahead of breast and colorectal cancers. Video-Assisted Thoracic Surgery (VATS) is a minimally invasive trans-thoracic procedure that has become prominent for the treatment of lung cancer. However, localizing the nodules during VATS is extremely challenging due to the technical difficulties inherent in endoscopic surgery and to the collapse of the lung during the procedure. The main objective of our project is to develop and integrate, in an operating room, an image-based guidance solution which would: 1) estimate the position of the patient's nodule after pneumothorax from CBCT images; 2) propose augmented-reality-based guidance to this location on the endoscopic view; 3) allow this location to be followed in real time on the endoscopic view during manipulation of the lung by the surgeon. The partners of the project are TIMC Grenoble and LTSI Rennes.
MIMESIS is closely connected to the SOFA Consortium, created by Inria in November 2015 with the objective of supporting the SOFA community and encouraging contributions from new SOFA users. The consortium is also a way to better address the needs of academic and industrial partners. MIMESIS actively participates in the development of SOFA and contributes to the evolution of the framework. Moreover, MIMESIS also takes part in an initiative aimed at the verification and validation of SOFA's code and algorithms.
Stephane Cotin is part of the program committee of IPCAI 2021 (as Area Chair)
Stephane Cotin is co-Program chair for MICCAI 2021
Axel Hutt is a member of the PC of MIMESIP 2021.
Axel Hutt has reviewed manuscripts for Frontiers in Human Neuroscience, Frontiers in Applied Mathematics and Statistics, PLoS Computational Biology and Scientific Reports.
Stéphane Cotin has been elected as a member of the National Academy of Surgery (in November 2020).
The team’s research has been mentioned in “Sciences et Avenir” (No.887, January 2021).