Funding: Supported by the National Natural Science Foundation of China under Grant 62371409 and the Fujian Provincial Natural Science Foundation of China under Grant 2023J01005.
Abstract: In pathological examinations, tissue must first be stained to meet specific diagnostic requirements, a meticulous process demanding significant time and expertise from specialists. With advancements in deep learning, this staining process can now be achieved through computational methods known as virtual staining. This technique replicates the visual effects of traditional histological staining in pathological imaging, enhancing efficiency and reducing costs. Extensive research in virtual staining for pathology has already demonstrated its effectiveness in generating clinically relevant stained images across a variety of diagnostic scenarios. Unlike previous reviews that broadly cover the clinical applications of virtual staining, this paper focuses on the technical methodologies, encompassing current models, datasets, and evaluation methods. It highlights the unique challenges of virtual staining compared to traditional image translation, discusses limitations in existing work, and explores future perspectives. Adopting a macro perspective, we avoid overly intricate technical details to make the content accessible to clinical experts. Additionally, we provide a brief introduction to the purpose of virtual staining from a medical standpoint, which may inspire algorithm-focused researchers. This paper aims to promote deeper interdisciplinary understanding between algorithm developers and clinicians, fostering the integration of technical solutions and medical expertise in the development of virtual staining models. This collaboration seeks to create more efficient, generalized, and versatile virtual staining models for a wide range of clinical applications.
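To make the underlying formulation concrete for readers less familiar with the algorithmic side, the sketch below casts virtual staining as supervised image-to-image translation in PyTorch: a network is trained to map an unstained input patch to a co-registered stained target. The tiny model, random tensors, and single optimization step are hypothetical placeholders for illustration only and do not reproduce any specific model discussed in the review.

```python
# A minimal, hypothetical sketch of virtual staining cast as supervised
# image-to-image translation: a small network is trained to map an unstained
# (e.g., autofluorescence or bright-field) patch to a co-registered stained
# target. The tiny model, random tensors, and single optimization step are
# placeholders for illustration only, not any specific published model.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a U-Net/GAN generator
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

unstained = torch.rand(4, 3, 256, 256)       # input patches (placeholder data)
stained = torch.rand(4, 3, 256, 256)         # co-registered stained targets (placeholder)

optimizer.zero_grad()
pred = model(unstained)                      # predicted virtually stained patches
loss = nn.functional.l1_loss(pred, stained)  # pixel-wise fidelity term
loss.backward()
optimizer.step()
```

Published systems typically replace the stand-in network with a U-Net or GAN generator and combine this pixel-wise term with adversarial or perceptual losses, which is where the models, datasets, and evaluation methods surveyed in the paper differ.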
Funding: Shenzhen Key Fundamental Research Project (No. JCYJ20210324120012035).
Abstract: Recently, Mueller matrix (MM) polarimetric imaging-assisted pathology detection methods have shown great potential in clinical diagnosis. However, because the human eye cannot observe polarized light directly, interpreting the measurement results poses a notable challenge for pathologists with limited familiarity with polarization images. One feasible approach is to combine MM polarimetric imaging with virtual staining techniques to generate standardized stained images, inheriting the advantages of information-rich MM polarimetric imaging. In this study, we develop a model that uses unpaired MM polarimetric images and bright-field images to generate standard hematoxylin and eosin (H&E) stained tissue images. Compared with existing polarization virtual staining techniques, which primarily rely on model training with paired images, the proposed Cycle-Consistent Generative Adversarial Network (CycleGAN)-based model greatly simplifies data acquisition and preprocessing. The outcomes demonstrate the feasibility of training a CycleGAN with unpaired polarization and bright-field images, providing pathologists with an intuitive route toward future polarization-assisted digital pathology.
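As a rough illustration of the unpaired training principle this study relies on, the following PyTorch sketch shows the cycle-consistency term at the heart of CycleGAN, with one generator mapping polarization-domain images to the H&E domain and a second mapping back. The toy generators, tensor shapes, and omission of loss weights are assumptions made for brevity and do not reproduce the authors' implementation.

```python
# A rough PyTorch sketch of the cycle-consistency idea behind CycleGAN-style
# unpaired virtual staining (polarization -> H&E). The toy generators, tensor
# shapes, and omission of loss weights are assumptions made for brevity and do
# not reproduce the authors' implementation.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a full ResNet/U-Net generator."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

G_pol2he = TinyGenerator()   # polarization-derived image -> H&E-like image
G_he2pol = TinyGenerator()   # H&E-like image -> polarization-derived image
l1 = nn.L1Loss()

x_pol = torch.rand(1, 3, 256, 256)   # unpaired polarization-derived patch (placeholder)
y_he = torch.rand(1, 3, 256, 256)    # unpaired real H&E patch (placeholder)

# Cycle consistency: x -> G_pol2he(x) -> G_he2pol(...) should recover x, and vice versa.
cycle_loss = l1(G_he2pol(G_pol2he(x_pol)), x_pol) + l1(G_pol2he(G_he2pol(y_he)), y_he)
print(float(cycle_loss))
```

In a complete CycleGAN this term is weighted and combined with adversarial losses from two domain discriminators, which is what removes the need for pixel-aligned image pairs during training.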
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 20720230037), the National Natural Science Foundation of China (No. 52273305), the Natural Science Foundation of Fujian Province of China (No. 2023J05012), the State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory (Nos. 2023XAKJ0103071 and 2023XAKJ0102061), and the Natural Science Foundation of Xiamen, China (No. 3502Z20227010).
Abstract: Histopathological analysis of chronic wounds is crucial for clinicians to accurately assess wound healing progress and detect potential malignancy. However, traditional pathological tissue sections require specific staining procedures involving carcinogenic chemicals. This study proposes an interdisciplinary approach merging materials science, medicine, and artificial intelligence (AI) to develop a deep learning-based virtual staining technique and intelligent evaluation model for chronic wound tissue pathology. This innovation aims to enhance clinical diagnosis and treatment by offering personalized, AI-driven therapeutic strategies. After establishing a mouse model of chronic wounds treated with a series of hydrogel wound dressings, we periodically collected tissue pathology sections for manual staining and healing assessment. We build on the pix2pix image translation framework, using CNN models implemented in Python with PyTorch for feature learning and extraction and for region segmentation of pathological slides. Comparative analysis of virtual and manual staining results, together with the corresponding healing diagnoses, is used to optimize the AI models. Ultimately, this approach integrates image recognition, quantitative analysis, and digital diagnostics into an intelligent wound assessment model, facilitating smart monitoring and personalized treatment of wounds. In a blind evaluation by pathologists, minimal disparities were found between virtually and conventionally stained histological images of murine wound tissue. Using the pathologists' average scores on real stained images as a benchmark, virtually stained images scored 71.1% for cellular features, 75.4% for tissue structures, and 77.8% for overall assessment. Metrics such as PSNR (20.265) and SSIM (0.634) demonstrated our algorithm's superior performance over existing networks. Eight pathological features, including the epidermis, hair follicles, and granulation tissue, could be accurately identified, and the images were more faithful to the actual tissue feature distribution than manually annotated data.
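For readers who wish to reproduce the kind of image-quality figures reported above (PSNR 20.265, SSIM 0.634), the snippet below shows one common way to compute PSNR and SSIM for a virtually stained patch against its chemically stained reference using scikit-image; the random arrays stand in for registered image pairs and are purely illustrative.

```python
# One common way to compute the reported image-quality metrics (PSNR, SSIM)
# for a virtually stained patch against its chemically stained reference,
# using scikit-image. The random arrays stand in for registered image pairs
# and are purely illustrative.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_patch(virtual: np.ndarray, reference: np.ndarray):
    """Both inputs: H x W x 3 uint8 RGB patches of the same field of view."""
    psnr = peak_signal_noise_ratio(reference, virtual, data_range=255)
    ssim = structural_similarity(reference, virtual, channel_axis=-1, data_range=255)
    return psnr, ssim

reference = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
virtual = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(evaluate_patch(virtual, reference))
```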
Funding: This project has received funding from the European Union's Horizon 2022 Marie Skłodowska-Curie Action (grant agreement 101103200, 'MICS' to L.K.). K.C.Z. was supported in part by Schmidt Science Fellows, in partnership with the Rhodes Trust. K.C.L. was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number HI21C0977060102002), and by the Commercialization Promotion Agency for R&D Outcomes (COMPA), funded by the Ministry of Science and ICT (MSIT) (1711198540). This material is based upon work supported in part by the Air Force Office of Scientific Research under award number FA9550-21-1-0401, the National Science Foundation under Grant 2238845, and a Hartwell Foundation Individual Biomedical Researcher Award.
Abstract: Until recently, conventional biochemical staining held undisputed status as the well-established benchmark for most biomedical problems related to clinical diagnostics, fundamental research, and biotechnology. Despite this role as gold standard, staining protocols face several challenges, such as the need for extensive manual processing of samples, substantial time delays, altered tissue homeostasis, a limited choice of contrast agents, and 2D imaging instead of 3D tomography, among others. Label-free optical technologies, on the other hand, do not rely on exogenous, artificial markers; instead, they exploit intrinsic optical contrast mechanisms whose specificity is typically less obvious to the human observer. Over the past few years, digital staining has emerged as a promising concept that uses modern deep learning to translate optical contrast into the established biochemical contrast of actual stains. In this review article, we provide an in-depth analysis of the current state of the art in this field, suggest methods of good practice, identify pitfalls and challenges, and postulate promising advances towards potential future implementations and applications.
Funding: Support from the MSCA-ITN-ETN project ActiveMatter sponsored by the European Commission (Horizon 2020, Project No. 812780), support from the ERC-CoG project MAPEI sponsored by the European Commission (Horizon 2020, Project No. 101001267), and support from the Knut and Alice Wallenberg Foundation (Grant No. 2019.0079). Caroline Beck Adiels and Giovanni Volpe acknowledge the Swedish Foundation for Strategic Research (Grant No. ITM17-0384).
Abstract: Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analyses that were hitherto impossible. Despite these strides, the integration of such algorithms into scientists' daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancement. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate imaging with properties that would typically require costly or complex physical modifications, such as superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
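As a hedged illustration of the second application named above (resolution enhancement), the toy PyTorch example below trains a small pixel-shuffle network to regress a high-resolution patch from a synthetically downsampled input. The architecture, degradation model, and single training step are assumptions for demonstration only and do not correspond to any particular method covered in the review.

```python
# A toy PyTorch illustration of the "resolution enhancement" application:
# a small pixel-shuffle network is trained to regress a high-resolution patch
# from a synthetically downsampled input. The architecture, degradation model,
# and single training step are assumptions for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # rearranges channels into a 2x larger image

    def forward(self, x):
        return self.shuffle(self.body(x))

model = ToySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

high_res = torch.rand(8, 1, 128, 128)       # "ground-truth" high-resolution patches (placeholder)
low_res = F.avg_pool2d(high_res, 2)         # simulated low-resolution inputs

optimizer.zero_grad()
loss = F.l1_loss(model(low_res), high_res)  # pixel-wise reconstruction objective
loss.backward()
optimizer.step()
```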