Softmax-Driven Active Shape Model for Segmenting Crowded Objects in Digital Pathology Images / Salvi, Massimo; Meiburger, Kristen M.; Molinari, Filippo. - In: IEEE ACCESS. - ISSN 2169-3536. - ELETTRONICO. - 12:(2024), pp. 30824-30838. [10.1109/access.2024.3369916]

Softmax-Driven Active Shape Model for Segmenting Crowded Objects in Digital Pathology Images

Salvi, Massimo; Meiburger, Kristen M.; Molinari, Filippo
2024

Abstract

Automated segmentation of histological structures in microscopy images is a crucial step in computer-aided diagnosis frameworks. However, this task remains challenging due to issues like overlapping and touching objects, shape variation, and background complexity. In this work, we present a novel and effective approach to instance segmentation through the synergistic combination of two deep learning networks (detection and segmentation models) with active shape models. Our method, called softmax-driven active shape model (SD-ASM), uses information from deep neural networks to initialize and evolve a dynamic deformable model. The detection module enables each object to be treated separately, while the segmentation map precisely outlines its boundaries. We conducted extensive tests using various state-of-the-art architectures on two standard datasets for segmenting crowded objects such as cell nuclei: MoNuSeg and CoNIC. The proposed SD-ASM consistently outperformed reference methods, achieving up to an 8.93% higher Aggregated Jaccard Index (AJI) and a 9.84% increase in Panoptic Quality (PQ) compared to segmentation networks alone. To demonstrate versatility, we also applied the SD-ASM to segment hepatic steatosis and renal tubules, where identifying individual structures is critical. Once again, integrating the SD-ASM with deep models improved segmentation accuracy over prior works, by up to 6.2% in AJI and with up to a 38% decrease in Hausdorff Distance. The proposed approach demonstrates effectiveness in accurately segmenting touching objects across multiple clinical scenarios.
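
The abstract describes the pipeline only at a high level. As a rough illustration of the core idea (evolving one deformable model per detected object, driven by the softmax probability map of a segmentation network), the sketch below substitutes a morphological Chan-Vese active contour from scikit-image for the paper's active shape model; the names `prob_map`, `boxes`, and `sd_asm_sketch` are hypothetical and not taken from the paper.

```python
# Minimal sketch of the SD-ASM idea from the abstract, NOT the authors' code.
# Assumptions: `prob_map` is the softmax foreground probability (H x W, in
# [0, 1]) from a segmentation network, and `boxes` are per-object detections
# (x0, y0, x1, y1) from a detection network. A morphological Chan-Vese
# active contour stands in for the paper's active shape model.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def sd_asm_sketch(prob_map, boxes, n_iter=100):
    """Evolve one deformable model per detected object on the softmax map."""
    instance_mask = np.zeros(prob_map.shape, dtype=np.int32)
    for label, (x0, y0, x1, y1) in enumerate(boxes, start=1):
        crop = prob_map[y0:y1, x0:x1]  # treat each detected object separately
        h, w = crop.shape
        # Initialize the level set as a disk centred in the detection box.
        yy, xx = np.mgrid[:h, :w]
        init = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) < (0.25 * min(h, w)) ** 2
        # Evolve the contour: the softmax probabilities drive the region term,
        # pulling the boundary toward high-probability foreground pixels.
        ls = morphological_chan_vese(crop, n_iter, init_level_set=init,
                                     smoothing=2)
        region = instance_mask[y0:y1, x0:x1]      # view into the full mask
        region[(ls > 0) & (region == 0)] = label  # keep instances disjoint
    return instance_mask
```

A true active shape model would additionally constrain the contour with a learned statistical shape prior, which this sketch omits.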
Files in this record:

Softmax-Driven_Active_Shape_Model_for_Segmenting_Crowded_Objects_in_Digital_Pathology_Images.pdf
Access: open access
Type: 2a Post-print publisher version / Version of Record
License: Creative Commons
Size: 3.26 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2986513