
A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery / Padovan, Erica; Marullo, Giorgia; Tanzi, Leonardo; Piazzolla, Pietro; Moos, Sandro; Porpiglia, Francesco; Vezzetti, Enrico. - In: THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY. - ISSN 1478-5951. - Electronic. - (2022). [10.1002/rcs.2387]

A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery

Padovan, Erica; Marullo, Giorgia; Tanzi, Leonardo; Piazzolla, Pietro; Moos, Sandro; Porpiglia, Francesco; Vezzetti, Enrico
2022

Abstract

Introduction: The present study proposes a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ onto its real counterpart. The resulting augmented video stream is sent back to the surgeon as support during laparoscopic robot-assisted procedures. Methods: The framework first performs semantic segmentation; two techniques, based on Convolutional Neural Networks and on motion analysis, are then used to infer the rotation. Results: The segmentation achieves high accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. Discussion: Although the presented methodology reaches different degrees of precision depending on the testing scenario, this work is a first step towards the adoption of deep learning and augmented reality to generalise the automatic registration process.
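As an illustration only, the sketch below shows how a per-frame binary segmentation mask of the target organ could drive a simple 2D overlay of a pre-rendered model view, in the spirit of the pipeline summarised in the abstract. It is a minimal sketch under stated assumptions, not the authors' implementation: the use of OpenCV/NumPy, the function names, and the restriction to 2D position plus in-plane rotation are all assumptions for illustration; the paper's CNN-based and motion-analysis rotation inference is not reproduced here.

# Hypothetical sketch (not the authors' code): estimate the organ's 2D position
# and in-plane rotation from a segmentation mask, then alpha-blend a
# pre-rendered RGBA view of the patient's 3D model onto the video frame.
# Depth, out-of-plane rotation and real-time constraints are ignored.
import cv2
import numpy as np

def estimate_pose_2d(mask):
    """Return ((cx, cy), angle_deg) of the largest segmented region, or None."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(largest)  # oriented bounding box
    return (int(cx), int(cy)), angle

def overlay_model(frame, model_rgba, centre, angle, alpha=0.5):
    """Rotate the rendered model view and blend it onto `frame` at `centre`.

    Assumes the rotated render fits entirely inside the frame.
    """
    h, w = model_rgba.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(model_rgba, rot, (w, h))
    x0, y0 = centre[0] - w // 2, centre[1] - h // 2
    roi = frame[y0:y0 + h, x0:x0 + w]
    weight = (rotated[..., 3:] / 255.0) * alpha    # alpha channel as blend weight
    roi[:] = ((1 - weight) * roi + weight * rotated[..., :3]).astype(np.uint8)
    return frame

# Example usage (frame, mask and model_render_rgba loaded elsewhere):
#   pose = estimate_pose_2d(mask)
#   if pose is not None:
#       frame = overlay_model(frame, model_render_rgba, *pose)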
Files in this item:
File: Robotics Computer Surgery - 2022 - Padovan - A deep learning framework for real‐time 3D model registration in.pdf
Access: open access
Type: 2a Post-print editorial version / Version of Record
Licence: Creative Commons
Size: 984.7 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2958396