Funding: Supported by the Taishan Scholar Project (No. ts20190991, tsqn202211378), the Key R&D Project of Shandong Province (No. 2022CXPT023), and the General Program of the National Natural Science Foundation of China (No. 82371933).
Abstract: Objective: This study aims to develop a deep multiscale image learning system (DMILS) to differentiate malignant from benign thyroid follicular neoplasms on multiscale whole-slide images (WSIs) of intraoperative frozen pathological images. Methods: A total of 1,213 patients from three centers were divided into a training and validation set, an internal test set, a pooled external test set, and a pooled prospective test set. DMILS was constructed using a deep learning-based weakly supervised method on multiscale WSIs at 10×, 20×, and 40× magnifications. The performance of DMILS was compared with that of each single magnification and validated on two pathologist-unidentified subsets. Results: DMILS yielded good performance, with areas under the receiver operating characteristic curve (AUCs) of 0.848, 0.857, 0.810, and 0.787 in the training and validation set, internal test set, pooled external test set, and pooled prospective test set, respectively. The AUC of DMILS was higher than that of any single magnification in the internal test set (0.788 at 10×, 0.824 at 20×, and 0.775 at 40×). Moreover, DMILS yielded satisfactory performance on the two pathologist-unidentified subsets. Furthermore, the most indicative region predicted by DMILS was the follicular epithelium. Conclusions: DMILS performs well in differentiating thyroid follicular neoplasms on multiscale WSIs of intraoperative frozen pathological images.
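To make the weakly supervised, multiscale design concrete, the sketch below shows a minimal attention-based multiple-instance learning (MIL) aggregator over patch features pooled from the 10×, 20×, and 40× bags; the class name, feature dimension, and pooling scheme are illustrative assumptions, not the authors' DMILS implementation.

```python
# Minimal attention-MIL sketch over multiscale patch features (assumed design, not DMILS itself).
import torch
import torch.nn as nn

class MultiScaleAttentionMIL(nn.Module):
    """Aggregates patch features from several magnifications into one slide-level prediction."""
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        # Attention scorer: one scalar weight per patch.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bags):
        # bags: list of tensors, one per magnification, each of shape (n_patches_i, feat_dim)
        feats = torch.cat(bags, dim=0)                        # pool patches across scales
        weights = torch.softmax(self.attention(feats), dim=0) # per-patch attention weights
        slide_feat = (weights * feats).sum(dim=0, keepdim=True)
        return self.classifier(slide_feat), weights           # slide logits, patch weights

# Toy usage: random features stand in for patch embeddings extracted at 10x/20x/40x.
model = MultiScaleAttentionMIL()
bags = [torch.randn(32, 512), torch.randn(64, 512), torch.randn(128, 512)]
logits, attn = model(bags)
print(logits.shape, attn.shape)  # torch.Size([1, 2]) torch.Size([224, 1])
```

In such a design, the learned patch weights also indicate which regions drive the slide-level prediction, which is in the spirit of the follicular-epithelium finding reported above.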
Funding: Supported by the National Natural Science Foundation of China (NSFC) (No. 61772358), the National Key R&D Program Funded Project (No. 2021YFE0105500), and the Jiangsu University 'Blue Project'.
Abstract: Breast cancer has become a major threat to women's health. To exploit the representational capabilities of deep models more comprehensively, we propose a multi-model fusion strategy. Specifically, we combine two differently structured deep learning models, ResNet101 and Swin Transformer (SwinT), and add the Convolutional Block Attention Module (CBAM) attention mechanism, making full use of SwinT's global context modeling ability and ResNet101's local feature extraction ability; in addition, the cross-entropy loss function is replaced by the focal loss function to address the class imbalance of the breast cancer dataset. The multi-class recognition accuracies of the proposed fusion model on the 40×, 100×, 200×, and 400× magnifications of the BreakHis dataset are 97.50%, 96.60%, 96.30%, and 96.10%, respectively. Compared with a single SwinT model or ResNet101 model, the fusion model has higher accuracy and better generalization ability, providing a more effective method for the screening, diagnosis, and pathological classification of female breast cancer.
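As an aside on the loss substitution mentioned above, a minimal multi-class focal loss can be written as below; the γ value and optional class weighting are illustrative choices, not the paper's exact settings.

```python
# Minimal multi-class focal loss sketch (illustrative hyperparameters).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, class_weights=None):
    """Focal loss: down-weights easy examples via (1 - p_t)^gamma on top of cross entropy."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, weight=class_weights, reduction="none")
    p_t = torch.exp(-ce)                       # model probability of the true class (unweighted case)
    return ((1.0 - p_t) ** gamma * ce).mean()

# Toy usage: 4 samples over 8 BreakHis subtypes.
logits = torch.randn(4, 8)
targets = torch.tensor([0, 3, 5, 7])
print(focal_loss(logits, targets).item())
```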
Funding: Supported by the National Science Foundation of China (No. 30370403).
Abstract: A homological multi-information image fusion method was introduced for recognition of gastric tumor pathological tissue images. The aim is to provide more information with fewer processing steps and to produce result images that are easier to interpret than those of other methods. First, multi-scale wavelet transform was used to extract edge features, and then watershed morphology was used to form multi-threshold grayscale contours. The research emphasized homological tissue image fusion based on an extended Bayesian algorithm, and the fused images produced by a linear weighted algorithm were compared with those produced by the extended Bayesian algorithm. The final fused images are shown in Fig. 5. The final images were evaluated by information entropy, information correlation, and statistical methods. The results indicate that this method has advantages for clinical application.
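For reference, the linear weighted fusion baseline that the extended Bayesian result is compared against, together with the entropy metric used for evaluation, can be sketched as follows; the extended Bayesian fusion itself is not reproduced, and the inputs here are synthetic.

```python
# Linear-weighted fusion baseline and histogram-entropy evaluation (synthetic inputs).
import numpy as np

def linear_weighted_fusion(img_a, img_b, w=0.5):
    """Pixel-wise weighted average of two registered grayscale images in [0, 255]."""
    return np.clip(w * img_a + (1.0 - w) * img_b, 0, 255).astype(np.uint8)

def image_entropy(img, bins=256):
    """Shannon entropy of the grayscale histogram, a common fusion-quality metric."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy usage: random arrays stand in for the wavelet edge map and watershed contour map.
edge_map = np.random.randint(0, 256, (128, 128)).astype(float)
contour_map = np.random.randint(0, 256, (128, 128)).astype(float)
fused = linear_weighted_fusion(edge_map, contour_map, w=0.6)
print(image_entropy(fused))
```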
Funding: Supported by the Scientific Research and Innovation Team of Hebei University (IT2023B07), the Natural Science Foundation of Hebei Province (F2023201069), and the Postgraduate's Innovation Fund Project of Hebei University (HBU2024BS021).
Abstract: Clear cell renal cell carcinoma (ccRCC) is the most frequent form of renal cell carcinoma (RCC), and accurate International Society of Urological Pathology (ISUP) grading is crucial for prognosis and treatment selection. This study presents a new deep network called the Multi-scale Fusion Network (MsfNet), which aims to enable automatic ISUP grading of ccRCC from digital histopathology images. MsfNet overcomes the limitations of the traditional ResNet50 through multi-scale information fusion and dynamic allocation of channel quantity. The model was trained and tested using 90 hematoxylin and eosin (H&E) stained whole-slide images (WSIs), all cropped into 320×320-pixel patches at 40× magnification. MsfNet achieved a micro-averaged area under the curve (AUC) of 0.9807 and a macro-averaged AUC of 0.9778 on the test dataset. Gradient-weighted Class Activation Mapping (Grad-CAM) visually demonstrated MsfNet's ability to distinguish and highlight abnormal areas more effectively than ResNet50. The t-Distributed Stochastic Neighbor Embedding (t-SNE) plot indicates that our model can efficiently extract critical features from images, reducing the impact of noise and redundant information. The results suggest that MsfNet offers accurate ISUP grading of ccRCC in digital images, emphasizing the potential of AI-assisted histopathological systems in clinical practice.
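The micro- and macro-averaged AUCs reported above can be computed as sketched below with scikit-learn; the labels and scores are synthetic stand-ins for MsfNet outputs.

```python
# Micro- and macro-averaged AUC computation sketch (synthetic labels/scores).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
classes = [0, 1, 2, 3]                                   # e.g., ISUP grades 1-4
y_true = rng.integers(0, 4, size=200)
y_score = rng.dirichlet(np.ones(4), size=200)            # stand-in for softmax outputs

y_bin = label_binarize(y_true, classes=classes)          # one-hot ground truth
micro_auc = roc_auc_score(y_bin, y_score, average="micro")
macro_auc = roc_auc_score(y_bin, y_score, average="macro")
print(f"micro AUC: {micro_auc:.4f}, macro AUC: {macro_auc:.4f}")
```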
Abstract: Objective: To develop a deep learning algorithm for the pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods: We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. The deep learning algorithm, based on the DeepLab v3 (ResNet-50) architecture, was trained and validated using 1,008 WSIs and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results: Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in a WSI, a more transparent and interpretable diagnosis can be generated. Conclusion: The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the work efficiency of pathologists.
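The per-class sensitivity, specificity, and accuracy figures above follow directly from a binary confusion matrix for each gastritis subtype, as in the following sketch with synthetic labels.

```python
# Sensitivity/specificity/accuracy from a confusion matrix for one subtype (synthetic labels).
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=142)   # 1 = subtype present, 0 = absent
y_pred = rng.integers(0, 2, size=142)   # stand-in for the algorithm's decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, accuracy={accuracy:.3f}")
```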
Funding: Supported by the National Major Science and Technology Projects (Grant No. 2018AAA0100201) and the National Natural Science Foundation of China (Grant No. 61906127).
Abstract: Objective: Manually recognizing lesion tissue in pathological images is a key, laborious, and subjective step in tumor diagnosis. An automatic segmentation method is proposed to segment lesion tissue in pathological images. Methods: We present a region-of-interest (ROI) method that generates a new pre-training dataset for learning initial DNN weights, alleviating the overfitting problem. To further improve segmentation performance, a multiscale and multi-resolution ensemble strategy is proposed. Our methods are validated on a public segmentation dataset of colonoscopy images. Results: With the ROI pre-training method, the Dice score of DeepLabV3 increases from 0.607 to 0.739 and that of ResUNet from 0.572 to 0.741. When the ensemble method is applied in the testing phase, the Dice scores of DeepLabV3 and ResUNet increase further to 0.760 and 0.786, respectively. Conclusion: The ROI pre-training method and ensemble strategy can be applied to DeepLabV3 and ResUNet to improve segmentation performance on colonoscopy images.
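The Dice score used to report these gains is a simple overlap measure; a minimal version for binary masks is sketched below with synthetic arrays.

```python
# Minimal Dice score for binary segmentation masks (synthetic masks).
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2 * |pred & target| / (|pred| + |target|); eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(2)
pred_mask = rng.integers(0, 2, (256, 256))
gt_mask = rng.integers(0, 2, (256, 256))
print(dice_score(pred_mask, gt_mask))
```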
Funding: This work was partially supported by a research project grant under RUSA 2.0, Component 8, Govt. of India, New Delhi.
Abstract: Partitional clustering techniques such as K-Means (KM), Fuzzy C-Means (FCM), and Rough K-Means (RKM) are simple and effective techniques for image segmentation. However, because their initial cluster centers are randomly determined, certain clusters often converge to local optima. In addition, pathology image segmentation is problematic due to uneven lighting, staining, and camera settings during microscopic image capture. Therefore, this study proposes an Improved Slime Mould Algorithm (ISMA), based on opposition-based learning and differential evolution's mutation strategy, to perform illumination-free white blood cell (WBC) segmentation. ISMA helps the partitional clustering techniques overcome the local-optima trapping problem to some extent. This paper also performs an in-depth analysis of clustering on only the color components of many well-known color spaces to find the effect of illumination on color pathology image clustering. Numerical and visual results encourage the use of illumination-free or color-component-based clustering approaches for image segmentation. ISMA-KM with the "ab" color channels of the CIELab color space provides the best results, with above 99% accuracy, for nucleus-only segmentation, whereas for whole-WBC segmentation, ISMA-KM with the "CbCr" components of the YCbCr color space provides the best results, with an accuracy above 99%. Furthermore, ISMA-KM and ISMA-RKM have the lowest and highest execution times, respectively. ISMA also provides competitive outcomes on the CEC2019 benchmark test functions compared with recent well-established and efficient nature-inspired optimization algorithms (NIOAs).
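The illumination-free clustering idea, dropping the lightness channel and clustering only the "ab" chroma components of CIELab, can be sketched as follows; plain k-means stands in here for the ISMA-optimized variant, and the input image is synthetic.

```python
# Chroma-only ("ab") clustering sketch; plain k-means stands in for ISMA-KM (synthetic image).
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
rgb = rng.random((64, 64, 3))                  # stand-in for a blood-smear image in [0, 1]
lab = rgb2lab(rgb)
ab = lab[:, :, 1:].reshape(-1, 2)              # keep a*, b*; drop L* to reduce illumination effects

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ab)
segmentation = labels.reshape(64, 64)          # e.g., background / cytoplasm / nucleus clusters
print(np.bincount(labels))
```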
Abstract: BACKGROUND: Digital pathology image (DPI) analysis has been advanced by machine learning (ML) techniques. However, little attention has been paid to the reproducibility of ML-based histological classification across heterochronously obtained DPIs of the same hematoxylin and eosin (HE) slide. AIM: To elucidate the frequency and preventable causes of discordant classification results in ML-based DPI analysis of heterochronously obtained DPIs. METHODS: We created paired DPIs by scanning 298 HE-stained slides containing 584 tissues twice with a virtual slide scanner. The paired DPIs were analyzed by our ML-aided classification model. We defined non-flipped and flipped groups as the paired DPIs with concordant and discordant classification results, respectively. We compared differences in color and blur between the non-flipped and flipped groups using the L1-norm and a blur index, respectively. RESULTS: We observed discordant classification results in 23.1% of the paired DPIs obtained by two independent scans of the same microscope slide. We detected no significant difference in the L1-norm of any color channel between the two groups; however, the flipped group showed a significantly higher blur index than the non-flipped group. CONCLUSION: Our results suggest that differences in the blur, not the color, of the paired DPIs may cause discordant classification results. An ML-aided classification model for DPIs should be tested for this potential cause of reduced reproducibility. In future work, a slide scanner and/or a preprocessing method that minimizes DPI blur should be developed.
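The two image comparisons described above, a per-channel L1-norm for color and a blur index, can be sketched as below; the exact blur index used in the study is not specified, so a variance-of-Laplacian measure is assumed here, and the images are synthetic.

```python
# Per-channel L1 color difference and a variance-of-Laplacian blur index (assumed metric).
import numpy as np
from scipy.ndimage import laplace

def channel_l1(img_a, img_b):
    """Mean absolute per-channel difference between two aligned RGB images."""
    return np.abs(img_a.astype(float) - img_b.astype(float)).mean(axis=(0, 1))

def blur_index(gray):
    """Variance of the Laplacian; lower values indicate a blurrier image."""
    return float(laplace(gray.astype(float)).var())

rng = np.random.default_rng(4)
scan_1 = rng.integers(0, 256, (128, 128, 3))   # stand-in for the first scan of a slide
scan_2 = rng.integers(0, 256, (128, 128, 3))   # stand-in for the second scan of the same slide
print(channel_l1(scan_1, scan_2), blur_index(scan_1.mean(axis=2)))
```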
Abstract: Deep learning methods automatically extract high-level features from large amounts of data, avoiding cumbersome manual feature screening. Combined with digital pathology and artificial intelligence technology, they can be used to build computer-aided diagnosis systems that help pathologists quickly make objective and reliable diagnoses and improve work efficiency. Because pathological images are limited by factors such as sample size, the expertise required for manual labeling, and their complexity, artificial intelligence algorithms have not been extensively and deeply researched on pathological images of lung cancer metastasis. Therefore, this paper proposes a lung cancer metastasis segmentation method based on pathological images to further improve computer-aided diagnosis of lung cancer.
Abstract: Objective: Breast cancer is the most frequently diagnosed cancer in women. Accurate evaluation of the size and extent of the tumor is crucial in selecting a suitable surgical method for patients with breast cancer; both overestimation and underestimation have important adverse effects on patient care. This study aimed to evaluate the accuracy of breast magnetic resonance imaging (MRI) and ultrasound (US) examination for measuring the size and extent of early-stage breast neoplasms. Methods: The longest diameter of breast tumors in patients with T1–2N0–1M0 invasive breast cancer preparing for breast-conserving surgery (BCS) was measured preoperatively using both MRI and US, and their accuracy was compared with that of postoperative pathologic examination. If the diameter difference was within 2 mm, the measurement was considered consistent with pathologic examination. Results: A total of 36 patients were imaged using both MRI and US. The mean longest diameter of the tumors on MRI, US, and postoperative pathologic examination was 20.86 mm ± 4.09 mm (range: 11–27 mm), 16.14 mm ± 4.91 mm (range: 6–26 mm), and 18.36 mm ± 3.88 mm (range: 9–24 mm), respectively. US underestimated tumor size compared with pathologic examination (t = 3.49, P < 0.01), while MRI overestimated it (t = -6.35, P < 0.01). The linear correlation coefficients between the imaging measurements and pathologic tumor size were r = 0.826 (P < 0.01) for MRI and r = 0.645 (P < 0.01) for US. The rates of consistency of MRI and US with pathologic examination were 88.89% and 80.65%, respectively, with no statistically significant difference between them (χ² = 0.80, P > 0.05). Conclusion: MRI and US are both effective methods for assessing the size of breast tumors, and both maintain good consistency with pathologic examination. MRI correlates better with pathology. However, the risk of inaccurate size estimation should be kept in mind.
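The statistical comparisons reported here (paired t-tests, Pearson correlation, and a chi-square test of the consistency rates) can be reproduced in outline with scipy as below; the numbers are synthetic illustrations, not the study's data.

```python
# Paired t-test, Pearson correlation, and chi-square test sketch (synthetic numbers).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pathology = rng.normal(18.4, 3.9, 36)                 # stand-in pathologic sizes (mm)
mri = pathology + rng.normal(2.5, 2.0, 36)            # MRI tends to overestimate
us = pathology + rng.normal(-2.2, 2.5, 36)            # US tends to underestimate

t_mri, p_mri = stats.ttest_rel(pathology, mri)        # paired t-test, MRI vs pathology
r_mri, pr_mri = stats.pearsonr(mri, pathology)        # linear correlation with pathology

# Chi-square test on illustrative consistency counts (within 2 mm vs not).
table = np.array([[32, 4],                            # MRI: consistent / not
                  [25, 6]])                           # US:  consistent / not
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(t_mri, r_mri, chi2, p_chi)
```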
Funding: Supported by the National Natural Science Foundation of China under Grant 62371409 and the Fujian Provincial Natural Science Foundation of China under Grant 2023J01005.
Abstract: In pathological examinations, tissue must first be stained to meet specific diagnostic requirements, a meticulous process demanding significant time and expertise from specialists. With advancements in deep learning, this staining process can now be achieved through computational methods known as virtual staining. This technique replicates the visual effects of traditional histological staining in pathological imaging, enhancing efficiency and reducing costs. Extensive research on virtual staining for pathology has already demonstrated its effectiveness in generating clinically relevant stained images across a variety of diagnostic scenarios. Unlike previous reviews that broadly cover the clinical applications of virtual staining, this paper focuses on the technical methodologies, encompassing current models, datasets, and evaluation methods. It highlights the unique challenges of virtual staining compared with traditional image translation, discusses limitations in existing work, and explores future perspectives. Adopting a macro perspective, we avoid overly intricate technical details to make the content accessible to clinical experts. Additionally, we provide a brief introduction to the purpose of virtual staining from a medical standpoint, which may inspire algorithm-focused researchers. This paper aims to promote a deeper understanding of interdisciplinary knowledge between algorithm developers and clinicians, fostering the integration of technical solutions and medical expertise in the development of virtual staining models. This collaboration seeks to create more efficient, generalized, and versatile virtual staining models for a wide range of clinical applications.
Funding: Supported in part by the Shenzhen Natural Science Fund (Stable Support Plan Program 20220810144949003), the Key Technology Development Program of Shenzhen (JSGG20210713091811036), the Key-Area Research and Development Program of Guangdong Province (2021B0101420005), the Shenzhen Key Laboratory Foundation (ZDSYS20200811143757022), and the Guangdong Provincial Key Laboratory of Mathematical and Neural Dynamical Systems (2024B1212010004).
Abstract: Computational pathology, a field at the intersection of computer science and pathology, leverages digital technology to enhance diagnostic accuracy and efficiency. With the digitization of pathology and the development of artificial intelligence, computational pathology has made significant strides in the automatic analysis of pathology images, including pathological structure segmentation, tumor classification, and prognosis analysis. Driven by large-scale datasets and advanced methods, computational pathology is moving toward building foundation models to support more general applications. Generative methods provide a new perspective on addressing challenges in computational pathology. However, challenges in data security and in model reliability, reproducibility, and clinical application remain. This review outlines the evolution of computational pathology from pathology slide digitization to pathology image analysis, consolidates the development of foundation and generative models in computational pathology, and discusses the key challenges that persist. Finally, we introduce some rising techniques for precision pathology.