Adversarial learning for beamforming domain transfer in ultrasound medical imaging / Seoni, Silvia; Salvi, Massimo; Matrone, Giulia; Lapia, Francesco; Busso, Chiara; Minetto, Marco A.; Meiburger, Kristen M. - In: ULTRASONICS. - ISSN 0041-624X. - 156:(2025). [10.1016/j.ultras.2025.107749]
Adversarial learning for beamforming domain transfer in ultrasound medical imaging
Seoni, Silvia; Salvi, Massimo; Meiburger, Kristen M.
2025
Abstract
Beamforming, the process of reconstructing B-mode images from raw radiofrequency (RF) data, significantly influences ultrasound image quality. While advanced beamforming methods aim to enhance the traditional Delay and Sum (DAS) technique, they require access to raw RF data, which is often unavailable to researchers when using clinical ultrasound scanners. Given that Filtered Delay Multiply and Sum (F-DMAS) is known to provide superior image quality compared to conventional DAS, this study introduces the idea of employing generative adversarial networks (GANs) that transform plane wave (PW) DAS images into ones resembling those produced by F-DMAS. We validated the adversarial approach, employing three different architectures (traditional Pix2Pix, Pyramidal Pix2Pix, and CycleGAN), using full-reference metrics: Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR). We further propose employing a texture analysis to validate consistency between the generated images and target images, using 27 first-order and second-order parameters; contrast enhancement was evaluated using the Contrast Improvement Index (CII), and clinical relevance was determined through expert qualitative evaluation. The adversarial methods were also compared with traditional image enhancement methods, such as contrast limited adaptive histogram equalization (CLAHE) and histogram matching. The image similarity metrics of all methods were comparable, with the Pyramidal Pix2Pix GAN showing the best values among both the traditional techniques and the other generative models (PSNR = 18.0 ± 0.6 dB, RMSE = 0.126 ± 0.008). The texture features proved to be a clear discriminant between traditional methods and generative models, with the generative models yielding values much closer to those of the target F-DMAS images. All employed methods showed improved contrast over the original PW DAS images. A clinical evaluation was then performed to assess the contribution of the generated images compared to the original ones and to determine which generative model provided the best qualitative images. The proposed generative adversarial approach proves to be a viable option for enhancing B-mode ultrasound images when there is no access to raw RF data, and demonstrates how texture features can be employed to validate deep learning generative models.
File: (2025) paper - GAN beamforming.pdf
Access: open access
Type: 2a Post-print versione editoriale / Version of Record
License: Creative Commons
Size: 3.78 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3002332