

Section: Application Domains

Image-guided intervention

Image-guided neurosurgical procedures rely on complex preoperative planning and on a complex intraoperative environment. The planning involves various multimodal examinations (anatomical, vascular and functional explorations for brain surgery), while an increasing number of computer-assisted systems take their place in the Operating Room (OR). To this end, an image-guided surgery system first determines a rigid fusion between the patient's head and the preoperative data. With an optical tracking system and Light Emitting Diodes (LEDs), it is possible to track the patient's head, the microscope and the surgical instruments in real time. The preoperative data can then be merged with the surgical field of view displayed in the microscope. This fusion is called “augmented reality” or “augmented virtuality”.
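As an illustration, the rigid fusion step amounts to a point-based least-squares fit between fiducials localized in the preoperative images and the same fiducials digitized on the patient with a tracked pointer. The following minimal sketch (Python with NumPy; the function name and the fiducial arrays are hypothetical, not the implementation of any commercial neuronavigation system) uses the classical SVD-based solution:

    import numpy as np

    def rigid_registration(fids_image, fids_patient):
        """Least-squares rigid (rotation + translation) fit between paired
        fiducial points, via the SVD-based Kabsch/Horn method.

        fids_image   : (N, 3) fiducial coordinates in preoperative image space
        fids_patient : (N, 3) the same fiducials digitized in tracker space
        Returns R (3x3) and t (3,) such that  p_patient ~ R @ p_image + t.
        """
        ci = fids_image.mean(axis=0)                     # centroids
        cp = fids_patient.mean(axis=0)
        H = (fids_image - ci).T @ (fids_patient - cp)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = cp - R @ ci
        return R, t

    # Hypothetical usage: map a planned target from image space into the tracked
    # patient space, where it can be overlaid in the microscope view.
    # R, t = rigid_registration(fids_image, fids_patient)
    # target_patient = R @ target_image + t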

Unfortunately, it is now widely acknowledged that this first generation of systems still has many limitations, which explain why their added value in the surgeon's decision-making process remains limited. One of the most well-known limitations concerns soft-tissue surgery: the assumption of a rigid registration between the patient's head and the preoperative images only holds at the beginning of the procedure, because soft tissues deform during the intervention. This problem is common to many image-guided interventions, and the particular case of neurosurgical procedures can be considered representative. Brain shift is one manifestation of this problem, but other tissue deformations can occur and must be taken into account for more realistic predictive work.

Other important limitations are related to the interactions between the systems and the surgeon. The information displayed in the operative field of view is not perfectly understood by the surgeon. Display modes have to be developed for better interpretation of the data, and only relevant information should be displayed, only when required. The study of information requirements in image-guided surgery is a new and crucial topic for a better use of images during surgery. Additionally, image-guided surgery systems should be adapted to the specificities of the surgical procedure: they have to be patient-specific, surgical-procedure-specific and surgeon-specific.

Minimally invasive therapies in neurosurgery, such as Deep Brain Stimulation and Transcranial Magnetic Stimulation, have emerged over the last decade. Similar issues exist for these new therapies: images of the patient and surgical knowledge must help the surgeon during planning and performance, soft tissue has to be taken into account, and solutions have to be specific.

Finally, it is crucial to develop and apply strong and rigorous methodologies for validating and evaluating methods and systems in this domain. At its beginning, Computer Assisted Surgery suffered from poor validation and evaluation, and accuracy figures were often badly computed. For instance, the Fiducial Registration Error (FRE) was used in commercial systems for quantifying accuracy, whereas it is now firmly established that FRE is a poor indicator of the error at the surgical target.

Within this application domain, we aim at developing methods and systems that overcome these issues for safer surgery. Intra-operative soft-tissue deformations will be compensated using surgical guidance tools and real-time imagery in the interventional theatre. This imagery can come from video (using augmented reality procedures), echography or even interventional MRI, and in the future from biological images or thermal imagery. For optimizing the surgical process and the interactions between the user and the CAS systems, we aim at studying surgical expertise and the decision-making process involving procedural and conceptual knowledge. These approaches will help develop methods for better planning and performance of minimally invasive therapies for neurosurgery, such as Transcranial Magnetic Stimulation (TMS) and Deep Brain Stimulation (DBS). Throughout this research, frameworks will be developed and applied for the validation and evaluation of the developed methods and systems.
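The FRE issue can be made concrete with a toy Monte-Carlo simulation under assumed values (four scalp fiducials, one deep target, 1 mm fiducial localization noise). This is only a sketch, not the validation methodology advocated here, but it shows how little FRE says about the Target Registration Error (TRE) at the surgical target:

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_rigid(a, b):
        """SVD-based least-squares rigid fit b ~ R @ a + t (as in the sketch above)."""
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cb - R @ ca

    # Hypothetical geometry (mm): four scalp fiducials and one deep surgical target.
    fiducials = np.array([[80, 0, 0], [-80, 0, 0], [0, 90, 20], [0, -90, 20]], float)
    target = np.array([0.0, 10.0, 70.0])

    fre, tre = [], []
    for _ in range(2000):
        noisy = fiducials + rng.normal(scale=1.0, size=fiducials.shape)  # 1 mm noise
        R, t = fit_rigid(fiducials, noisy)
        mapped = fiducials @ R.T + t
        fre.append(np.sqrt(np.mean(np.sum((mapped - noisy) ** 2, axis=1))))
        tre.append(np.linalg.norm((R @ target + t) - target))

    fre, tre = np.array(fre), np.array(tre)
    print(f"mean FRE = {fre.mean():.2f} mm, mean TRE = {tre.mean():.2f} mm")
    print(f"correlation(FRE, TRE) = {np.corrcoef(fre, tre)[0, 1]:.2f}")

In this toy setting the correlation between FRE and TRE is close to zero: a small FRE on a given case does not guarantee a small error at the target, which is precisely why FRE should not be reported as a measure of accuracy at the surgical target.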

Intra-operative imaging in neurosurgery: Our major objective within this application domain is to correct for brain deformations that occur during surgery. Neuronavigation systems now make it possible to superimpose preoperative images onto the surgical field under the assumption of a rigid transformation. Nevertheless, non-rigid brain deformations, as well as brain resection, drastically limit the efficiency of such systems. The major objective here is to study and estimate brain deformations using 3D ultrasound and video information.
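One generic way to estimate such deformations is an intensity-based non-rigid registration between the preoperative MR volume and an intra-operative 3D ultrasound volume. The sketch below uses SimpleITK with a B-spline transform and mutual information; it is a standard recipe given as an assumption, not the method developed by the team, and the file names and grid size are placeholders:

    import SimpleITK as sitk

    # Placeholder inputs: a preoperative MR volume and an intra-operative 3D
    # ultrasound volume, assumed already rigidly aligned in the same frame.
    fixed = sitk.ReadImage("intraop_us_3d.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("preop_mr.nii.gz", sitk.sitkFloat32)

    # Free-form deformation modelled by a coarse B-spline grid over the US volume.
    tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(tx, True)

    deformation = reg.Execute(fixed, moving)

    # Warp the preoperative MR so that it follows the intra-operative anatomy.
    warped = sitk.Resample(moving, fixed, deformation, sitk.sitkLinear, 0.0,
                           moving.GetPixelID())
    sitk.WriteImage(warped, "preop_mr_warped_to_us.nii.gz")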

Modeling of surgical expertise: Research on modeling surgical expertise is divided into two aspects: 1) understanding and modeling the surgical process, defined as the list of surgical steps planned or performed by the surgeon; 2) understanding and modeling the surgeon's information requirements via a cognitive analysis of the decision-making and problem-solving processes. For the first aspect, the main long-term objective consists in defining a global methodology for surgical process modeling, including the description of patient-specific surgical process models (SPMs) and the computation of generic SPMs from patient-specific SPMs; a simple encoding of such models is sketched below. The complexity of this project requires international collaborative work involving different surgical disciplines. This conceptual approach has to be used in a clinical context for identifying added value and for publications. The resulting applications may impact surgical planning and performance as well as surgical education. For the second aspect, we study the cognitive processes followed by surgeons during decision and action. In surgical expertise, dexterity is not the only skill involved. With the GRESICO laboratory of the University of Bretagne Sud, we will adapt models from cognitive engineering to study differences in cognitive behaviour between neurosurgeons with different expertise levels, as well as information requirements during decision making or problem solving.
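The sketch below (Python) shows one possible encoding of a patient-specific SPM as an ordered list of steps; the step names, fields and the naive "most frequent sequence" generalization are purely illustrative assumptions, since real generic SPM computation requires proper sequence alignment across cases:

    from dataclasses import dataclass, field
    from typing import List
    from collections import Counter

    @dataclass
    class SurgicalStep:
        """One step (activity) of a surgical procedure."""
        name: str                  # e.g. "dura opening"
        planned: bool = True       # planned preoperatively vs. performed intraoperatively
        duration_min: float = 0.0  # observed duration, if recorded

    @dataclass
    class SurgicalProcessModel:
        """Patient-specific SPM: the ordered list of steps for one intervention."""
        patient_id: str
        steps: List[SurgicalStep] = field(default_factory=list)

    def most_frequent_sequence(spms):
        """Naive 'generic' SPM: the step sequence observed most often across cases."""
        counts = Counter(tuple(s.name for s in spm.steps) for spm in spms)
        return list(counts.most_common(1)[0][0])

    # Hypothetical usage with two abridged cases:
    spm1 = SurgicalProcessModel("case-01", [SurgicalStep("craniotomy"),
                                            SurgicalStep("dura opening"),
                                            SurgicalStep("tumour resection")])
    spm2 = SurgicalProcessModel("case-02", [SurgicalStep("craniotomy"),
                                            SurgicalStep("dura opening"),
                                            SurgicalStep("tumour resection")])
    print(most_frequent_sequence([spm1, spm2]))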

Robotics for 3D echography: This project is conducted jointly with the Lagadic project-team. The goal is to use active vision concepts in order to control the trajectory of a robot based on the content of echographic images and video frames taken in the acquisition theatre. Possible applications are the acquisition of echographic data between two remote sites (when the patient is away from the referring clinician) or the monitoring of interventional procedures such as biopsies or selective catheterizations.
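The underlying scheme is the classical image-based visual servoing law, in which the probe velocity is computed from the error between current and desired image features. The sketch below (Python with NumPy) assumes a generic interaction matrix and hypothetical feature vectors; deriving the interaction matrix for ultrasound features is the actual research question and is not shown here:

    import numpy as np

    def ibvs_velocity(s, s_star, L, lam=0.5):
        """Classical image-based visual servoing law:  v = -lambda * pinv(L) @ (s - s*).

        s      : current feature vector extracted from the echographic image
        s_star : desired feature vector (e.g. features of the target cross-section)
        L      : interaction matrix relating feature motion to probe velocity
        Returns the 6-DOF velocity screw (vx, vy, vz, wx, wy, wz) to send to the
        robot holding the ultrasound probe.
        """
        return -lam * np.linalg.pinv(L) @ (s - s_star)

    # Hypothetical example with 6 image features and a full-rank interaction matrix:
    rng = np.random.default_rng(1)
    L = rng.normal(size=(6, 6))
    v = ibvs_velocity(rng.normal(size=6), np.zeros(6), L)
    print(v)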

3D free-hand ultrasound: Our major objective within this application domain is to develop efficient and automatic procedures that allow the clinician to use conventional echography to acquire 3D ultrasound, and to propose calibrated quantification tools for quantitative analysis and fusion procedures. This will be used to extend the field of view of an examination.
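Reconstruction from tracked free-hand B-scans boils down to mapping every pixel of every frame into world coordinates through the probe calibration and the tracker pose. A minimal sketch, with hypothetical matrix names and assuming homogeneous 4x4 transforms:

    import numpy as np

    def pixel_to_world(u, v, scale_xy, T_image_to_probe, T_probe_to_world):
        """Map a pixel (u, v) of one tracked B-scan into 3D world coordinates.

        scale_xy         : (sx, sy) pixel size in mm, from calibration
        T_image_to_probe : 4x4 probe calibration matrix (image plane -> probe markers)
        T_probe_to_world : 4x4 pose of the probe markers given by the tracker
        """
        p_image = np.array([u * scale_xy[0], v * scale_xy[1], 0.0, 1.0])
        return (T_probe_to_world @ T_image_to_probe @ p_image)[:3]

    # Sweeping the probe and applying this mapping to every pixel of every frame
    # yields an irregular cloud of 3D samples that can then be compounded into a
    # regular voxel volume for quantitative analysis or fusion with other modalities.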