Learning, Modelling and Data Science
The Learning, Modelling, and Data Science team brings together researchers working on artificial intelligence (AI), simulation, and computer vision methods.
Computer vision, object recognition and scene understanding
Adlane Habed, Jean-Paul Mazellier, Nicolas Padoy, Benoit Rosa, Vinkle Srivastav
This axis concerns the development of computer vision methods for object recognition, 3D scene understanding and assessment, endoscopic image segmentation, and surgical phase estimation. Optimization methods have been developed for applications such as camera autocalibration and visual odometry. We are interested in building original, robust optimization algorithms that exploit rich multi-modal information such as semantic maps. In the medical context, a driving theme is the development of a surgical control tower monitoring events in the operating room. Major results have been obtained using modern deep learning techniques, for instance on the 3D pose estimation of operators in the room, surgical phase estimation, and the segmentation or pose estimation of surgical instruments. A limitation of such methods is their need for large, high-quality datasets. We are therefore increasingly interested in weakly- and self-supervised approaches, which exploit available data sources or a specific structure of the information in order to limit the amount of labeled data required.
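As a hedged illustration of the self-supervised idea mentioned above — learning from unlabeled images by treating two augmentations of the same image as a positive pair — a minimal contrastive (InfoNCE-style) loss can be sketched in NumPy. All names and parameters here are illustrative assumptions, not the team's actual code:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss between two augmented views of the same images.

    z1, z2: (N, D) embeddings; matching rows are positive pairs,
    all other rows in the batch act as negatives.
    """
    # L2-normalise so the dot product is the cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                     # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # positives on diagonal
```

Training an encoder to minimise such a loss pulls the embeddings of two views of the same image together without any manual labels, which is the property exploited to reduce annotation requirements.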
Numerical simulation methods for surgical applications
Simon Chatelin, Hadrien Courtecuisse, Jean-Philippe Dillenseger
The second major problem we tackle in this theme is the development of numerical simulation methods for surgical applications. A first application of such models is to support the design and modeling of robots. An interesting approach, developed in partnership with researchers from the MLMS team, is to include real-time finite element simulations within the control loop of a robotic system in order to anticipate environment deformations and interactions. We are also interested in developing patient-specific biomechanical models, in particular of soft tissues. These developments are accompanied by methods for acquiring multi-scale, patient-specific in vivo physical parameters via biomedical imaging, with a specific focus on elastography using preclinical and clinical MRI and ultrasound methods. Finally, simulations can also drive the training of surgical staff, as shown with the X-aware prototype, in which physical radiation models and AI-based 3D pose estimation make clinicians aware of their full-body exposure to X-rays during interventional radiology procedures.
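To make the simulation-in-the-control-loop idea concrete, here is a deliberately simplified 1-D sketch, not the team's actual finite element code: a chain of linear springs stands in for the real-time FE solve, and the controller queries it at each iteration to anticipate the tissue deflection before issuing a command. All names and parameters are illustrative assumptions:

```python
import numpy as np

def relax_chain(base, load, k=50.0, n=5, iters=2000):
    """Toy stand-in for a real-time FE solve: static equilibrium of a
    chain of n springs (stiffness k each) with a point load at the tip."""
    x = np.zeros(n + 1)
    x[0] = base                                  # node held by the robot
    for _ in range(iters):
        for i in range(1, n):
            x[i] = 0.5 * (x[i - 1] + x[i + 1])   # interior force balance
        x[n] = x[n - 1] + load / k               # tip: spring balances load
    return x[n]                                  # deformed tip position

def servo_to(target, load, gain=0.8, steps=20):
    """Control loop with the deformation model inside it: before each
    command, the simulation predicts where the tip will settle, and the
    controller compensates for the anticipated deflection."""
    cmd = target
    for _ in range(steps):
        predicted_tip = relax_chain(cmd, load)
        cmd += gain * (target - predicted_tip)
    return cmd
```

The design point is that the commanded position differs from the target precisely by the deformation the model predicts, so the tip reaches the target in the deformed, not the rest, configuration.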
Data science methods and clinical translation
Alexandros Karargyris, Jean-Paul Mazellier, Nicolas Padoy, Vinkle Srivastav
The last axis within this theme concerns transverse methodological research problems in AI, applicable to both computer vision and simulation methods. We aim to tackle key problems on the path to clinical application of the methods developed in our work. A first key problem is generalizability, i.e. demonstrating that our methods are robust across diverse conditions. To this end, we develop data augmentation techniques with original synthetic data generation methods. We have recently started working on coupling simulation with AI or computer vision methods in a synergistic fashion. Simulated models can, for instance, be a source of information for augmenting available data with semi-synthetic inputs, in order to train more realistic AI models without increasing data gathering and annotation costs. We are also increasingly interested in federated learning approaches, which allow training models on sensitive multi-centric data without centralizing it.
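As a hedged illustration of the federated learning direction — training a shared model while each centre's data stays local — a minimal FedAvg-style loop can be sketched as follows. The linear model and all names here are simplifying assumptions for exposition, not the methods used in practice:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One centre's local training: linear least squares via gradient descent.
    Only the updated weights leave the centre; X and y never do."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=2):
    """FedAvg: broadcast the global model, train locally at each centre,
    then aggregate with a dataset-size-weighted average of the weights."""
    global_w = np.zeros(dim)
    sizes = [len(y) for _, y in clients]
    for _ in range(rounds):
        local = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.average(local, axis=0, weights=sizes)
    return global_w
```

The aggregation server only ever sees model weights, which is what makes the scheme attractive for multi-centric clinical data subject to privacy constraints.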