Funding: This work was supported by the following sources: the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2020R1A6A1A03047902); an NRF grant funded by the Ministry of Science and ICT (MSIT) (2023R1A2C3004880, 2021M3C1C3097624); a Korea Medical Device Development Fund grant funded by the Korea government (MSIT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; and the Ministry of Food and Drug Safety) (Project Numbers: 1711195277, RS-2020-KD000008, 1711196475, RS-2023-00243633); an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II191906, Artificial Intelligence Graduate School Program (POSTECH)); and the BK21 FOUR program.
Abstract: In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E ones. In segmentation, various features (e.g., the cell area, the number of cells, and the distance between cell nuclei) are successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF achieves a 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
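The stepwise feature fusion idea described above can be sketched in a few lines: deep feature vectors are extracted from each of the three image types (PAH, virtual H&E, and segmentation) and concatenated into a single descriptor for classification. This is a minimal illustration only; the abstract does not specify the extractor or fusion details, so `extract_features` here is a hypothetical stand-in (a fixed random projection) for the paper's CNN feature extractor.

```python
import numpy as np

def extract_features(image, dim=128):
    """Hypothetical stand-in for a deep feature extractor: a fixed
    random projection of the flattened image to a `dim`-vector."""
    proj = np.random.default_rng(image.size).standard_normal((dim, image.size))
    return proj @ image.ravel() / np.sqrt(image.size)

def stepwise_fusion(pah_img, vhe_img, seg_img):
    """Concatenate per-modality feature vectors from the PAH, virtual-H&E,
    and segmentation images into one fused descriptor, which a downstream
    classifier would consume."""
    feats = [extract_features(img) for img in (pah_img, vhe_img, seg_img)]
    return np.concatenate(feats)  # shape: (3 * 128,)

# Toy 32x32 "images" standing in for the three modalities.
rng = np.random.default_rng(0)
pah, vhe, seg = (rng.standard_normal((32, 32)) for _ in range(3))
fused = stepwise_fusion(pah, vhe, seg)
print(fused.shape)  # (384,)
```

In the paper's framework the fused vector would feed a trained classifier; here the point is only that fusing three modality-specific descriptors yields a richer representation than any single one.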
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2020R1A6A1A03047902); the National R&D Program through the NRF, funded by the Ministry of Science and ICT (MSIT) (2020M3H2A1078045); NRF grants funded by the Korea government (MSIT) (No. NRF-2019R1A2C2006269 and No. 2020R1C1C1013549); an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)); a Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry and Energy (MOTIE); the Korea Medical Device Development Fund grant funded by the MOTIE (9991007019, KMDF_PR_20200901_0008); and the BK21 FOUR project.
Abstract: A superresolution imaging approach that localizes very small targets, such as red blood cells or droplets of injected photoacoustic dye, has significantly improved spatial resolution in various biological and medical imaging modalities. However, this superior spatial resolution is achieved by sacrificing temporal resolution, because many raw image frames, each containing the localization target, must be superimposed to form a sufficiently sampled high-density superresolution image. Here, we demonstrate a computational strategy based on deep neural networks (DNNs) to reconstruct high-density superresolution images from far fewer raw image frames. The localization strategy can be applied to both 3D label-free localization optical-resolution photoacoustic microscopy (OR-PAM) and 2D labeled localization photoacoustic computed tomography (PACT). For the former, the required number of raw volumetric frames is reduced from tens to fewer than ten. For the latter, the required number of raw 2D frames is reduced 12-fold. Therefore, our proposed method simultaneously improves temporal (via the DNN) and spatial (via the localization method) resolution in both label-free microscopy and labeled tomography. Deep-learning-powered localization PA imaging can potentially provide a practical tool in preclinical and clinical studies requiring fast temporal and fine spatial resolution.
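The temporal-resolution bottleneck described above comes from the conventional localization pipeline: each raw frame contributes only a handful of localized targets, so many frames must be superimposed on a finer grid before the image is densely sampled. The sketch below illustrates that baseline pipeline (which the DNN in this work accelerates by needing fewer frames); `localize_frame` is a hypothetical simplification that thresholds pixels rather than fitting each target's point-spread function.

```python
import numpy as np

def localize_frame(frame, threshold=0.5):
    """Return (y, x) coordinates of targets in one raw frame.
    A simple threshold stands in for sub-pixel PSF fitting."""
    ys, xs = np.nonzero(frame > threshold)
    return np.stack([ys, xs], axis=1)

def accumulate(frames, upsample=4, shape=(64, 64)):
    """Superimpose localizations from many sparse frames onto a grid
    `upsample` times finer than the raw frames: the step that trades
    temporal resolution for spatial resolution."""
    hi = np.zeros((shape[0] * upsample, shape[1] * upsample))
    for frame in frames:
        for y, x in localize_frame(frame):
            hi[y * upsample, x * upsample] += 1.0
    return hi

# 50 synthetic 64x64 frames, each with a few sparse bright targets.
rng = np.random.default_rng(1)
frames = [(rng.random((64, 64)) > 0.999).astype(float) for _ in range(50)]
img = accumulate(frames)
print(img.shape)  # (256, 256)
```

Because localization density grows only linearly with the frame count, halving the frames halves the sampling density; the DNN approach in the abstract instead learns to reconstruct a dense image from the sparse accumulation of far fewer frames.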