Deep Learning in Photoacoustic Imaging

The last few years have seen significant growth in the application of deep learning (DL) to image classification and processing. Deep learning methods have also been used for reconstruction and enhancement in photoacoustic imaging. Here we discuss a few of our recent works on deep learning in photoacoustic imaging.


Ultrasound-guided needle tracking with deep learning trained on photoacoustic ground truth:

Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often struggles to visualize the needle consistently and precisely, necessitating reliable methods for needle tracking. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can introduce bias or make it difficult to generalize to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for training a deep learning network without expert annotation. Our network (UIU-Net) has shown remarkable precision in localizing needles within US images. More details can be found in Hui et al., Photoacoustics 34 (2023). Codes and data can be downloaded from the Software Download page.
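
For illustration, below is a minimal PyTorch sketch of the training idea: a segmentation network is trained on US frames using binary needle masks derived from co-registered PA images as ground truth, so no manual annotation is needed. The random placeholder data and the small encoder-decoder standing in for UIU-Net are assumptions made for the sketch, not the implementation from the paper.

# Minimal sketch: train a segmentation network on US frames, using needle masks
# derived from co-registered PA images as ground truth (no manual annotation).
# The placeholder data and the simple encoder-decoder below are illustrative
# stand-ins, not the UIU-Net implementation from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 64 paired samples of 1x128x128 US frames and binary PA masks.
us_frames = torch.rand(64, 1, 128, 128)
pa_masks = (torch.rand(64, 1, 128, 128) > 0.95).float()
loader = DataLoader(TensorDataset(us_frames, pa_masks), batch_size=8, shuffle=True)

# Small encoder-decoder standing in for UIU-Net.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss encourages overlap with the thin needle region.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

for epoch in range(5):
    for us, mask in loader:
        logits = model(us)
        loss = bce(logits, mask) + dice_loss(logits, mask)  # PA-derived mask is the ground truth
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()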

Deep Learning-aided high frame rate (~3 Hz) PAT with a single-element ultrasound transducer:

In circular scanning photoacoustic tomography (PAT), it takes several minutes to generate an image of acceptable quality, especially with a single-element ultrasound transducer (UST). The imaging speed can be increased by scanning faster (with high-repetition-rate light sources) and by using multiple USTs. However, artifacts arising from sparse signal acquisition and the low signal-to-noise ratio at higher scanning speeds limit the achievable imaging speed. Thus, there is a need to improve the imaging speed of PAT systems without compromising image quality. To improve the frame rate (imaging speed) of the PAT system, we propose a novel U-Net-based deep learning framework to reconstruct PAT images from fast-scanning data. The efficiency of the network was evaluated on both single- and multiple-UST-based PAT systems. Both phantom and in vivo imaging demonstrate that the network can improve the imaging frame rate by ~6-fold in single-UST-based PAT systems and by ~2-fold in multi-UST-based PAT systems. With this method, we achieved a frame rate of ~3 Hz without compromising the quality of the reconstructed image. More details can be found in Rajendran et al., Journal of Biomedical Optics 27 (2022). Codes and data can be downloaded from the Software Download page.
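
The sketch below illustrates the general idea, assuming the network takes an artifact-laden image reconstructed from fast, sparse scanning and predicts an artifact-reduced image. The compact U-Net, its layer widths, and the residual output are illustrative assumptions for this sketch, not the exact architecture reported in the paper.

# Minimal sketch of a U-Net-style image-to-image network that maps an
# artifact-laden PAT image (reconstructed from fast, sparse scanning) to an
# artifact-reduced image. Depth and channel counts are illustrative only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        # Encoder with skip connections, bottleneck, then decoder.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1) + x  # residual output: the network learns the correction

# Example: map a sparse-data reconstruction to an artifact-reduced image.
sparse_recon = torch.rand(1, 1, 128, 128)
enhanced = SmallUNet()(sparse_recon)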

Deep Learning approach for multi-transducer PAT without radius calibration: 

Pulsed laser diodes (PLDs) are used as excitation sources in photoacoustic tomography (PAT) because of their low cost, compact size, and high pulse repetition rate. In combination with multiple single-element ultrasound transducers (SUTs), the imaging speed improves further. However, accurate PAT image reconstruction requires the exact scanning radius of each SUT. Here, we developed a novel deep learning approach to alleviate the need for radius calibration. We used a convolutional neural network (fully dense U-Net) aided by a convolutional long short-term memory (LSTM) block to reconstruct the PAT images. Our analysis on the test set demonstrates that the proposed network eliminates the need for radius calibration and improves the peak signal-to-noise ratio by ~73% without compromising image quality. In vivo imaging was used to verify the performance of the network. More details can be found in Rajendran et al., Optics Letters 46 (2021). Codes and data can be downloaded from the Software Download page.
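
As a rough illustration of the recurrent component, the following is a generic convolutional LSTM cell of the kind that can aggregate a sequence of feature maps before decoding. The cell design, channel counts, and the way it would attach to the fully dense U-Net are assumptions made for this sketch, not the exact block or wiring used in the paper.

# Minimal sketch of a convolutional LSTM cell used to aggregate a sequence of
# feature maps (e.g., from the fully dense U-Net encoder) into one hidden state.
# This is a generic ConvLSTM cell for illustration only.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        # Single convolution producing the input, forget, output, and cell gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g          # update the cell memory
        h = o * torch.tanh(c)      # emit the new hidden feature map
        return h, c

# Example: fold a short sequence of feature maps into a single hidden state
# that a decoder could consume.
cell = ConvLSTMCell(in_ch=16, hid_ch=16)
h = torch.zeros(1, 16, 64, 64)
c = torch.zeros(1, 16, 64, 64)
sequence = [torch.rand(1, 16, 64, 64) for _ in range(4)]
for feat in sequence:
    h, c = cell(feat, (h, c))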

Deep Learning approach to improve tangential resolution in PAT: 

In circular scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. The tangential resolution, however, is spatially variant and depends on the aperture size of the detector. In particular, the tangential resolution improves as the aperture size decreases, but a smaller-aperture detector is less sensitive. Thus, large-aperture detectors are widely preferred in circular scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Here, we developed a novel deep learning architecture to counter the spatially variant tangential resolution in circular scanning PAT imaging systems. We used a fully dense U-Net-based convolutional neural network architecture along with nine residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution eightfold without compromising the structural similarity or quality of the image. More details can be found in Rajendran et al., Biomedical Optics Express 11 (2020). Codes and data can be downloaded from the Software Download page.
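
Below is a minimal sketch of the kind of residual block that can be stacked nine times as a refinement stage. The channel count and the way the blocks attach to the fully dense U-Net backbone are illustrative assumptions, not the configuration from the paper.

# Minimal sketch of a residual block; nine such blocks can be stacked to refine
# features produced by a U-Net backbone. Channel counts are illustrative only.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip keeps gradients well behaved

# Nine residual blocks refining a feature map from the backbone.
refiner = nn.Sequential(*[ResidualBlock(32) for _ in range(9)])
features = torch.rand(1, 32, 128, 128)
refined = refiner(features)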

Convolutional neural network for resolution improvement in AR-PAM:

In acoustic resolution photoacoustic microscopy (AR-PAM), a high numerical aperture focused ultrasound transducer (UST) is used for deep-tissue, high-resolution photoacoustic imaging. However, the lateral resolution degrades significantly in the out-of-focus region, and improving the out-of-focus resolution without degrading image quality remains a challenge. Here, we used a deep learning-based method to improve the resolution of AR-PAM images, especially in the out-of-focus region. A modified fully dense U-Net-based architecture was trained on simulated AR-PAM images. Applying the trained model to experimental images showed that the resolution varies by only ~10% across the entire imaging depth (~4 mm) with the deep learning-based method, compared to ~180% variation in the original AR-PAM images. The performance of the trained network on in vivo rat vasculature imaging further validated that noise-free, high-resolution images can be obtained with this method. More details can be found in Sharma et al., Biomedical Optics Express 11 (2020). Codes and data can be downloaded from the Software Download page.
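
For reference, here is a minimal sketch of a "fully dense" block, the key ingredient of a fully dense U-Net: every convolution in the block receives the concatenation of all earlier feature maps. The growth rate and number of layers are illustrative assumptions, not the values used in the paper.

# Minimal sketch of a fully dense block: each layer sees the concatenation of
# the block input and all previous layer outputs. Parameters are illustrative.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth  # each new layer receives all previous feature maps
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# Example: apply a dense block to a blurred out-of-focus AR-PAM image patch.
block = DenseBlock(in_ch=1)
out = block(torch.rand(1, 1, 128, 128))  # shape: (1, block.out_ch, 128, 128)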