Automated prostate cancer detection in magnetic resonance imaging (MRI) scans is of significant importance for cancer patient management. Most existing computer-aided diagnosis systems adopt segmentation methods, while object detection approaches have recently shown promising results. The authors have (1) carefully compared the performance of well-developed segmentation and object detection methods in localising Prostate Imaging Reporting and Data System (PI-RADS)-labelled prostate lesions on MRI scans; (2) proposed an additional customised set of lesion-level localisation sensitivity and precision metrics; and (3) proposed efficient ways to ensemble the segmentation and object detection methods for improved performance. The ground-truth (GT)-perspective lesion-level sensitivity and prediction-perspective lesion-level precision are reported, quantifying the ratios of true-positive voxels detected by the algorithms over the number of voxels in the GT-labelled regions and the predicted regions, respectively. The two networks are trained independently on data from 549 clinical patients with PI-RADS v2 scores as GT labels, and tested on 161 internal and 100 external MRI scans. At the lesion level, nnDetection outperforms nnUNet for detecting both PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions in the majority of cases. For example, at an average of 3 false-positive predictions per patient, nnDetection achieves a greater Intersection-over-Union (IoU)-based sensitivity than nnUNet for detecting PI-RADS ≥ 3 lesions: 80.78% ± 1.50% versus 60.40% ± 1.64% (p < 0.01). At the voxel level, nnUNet is in general superior or comparable to nnDetection. The proposed ensemble methods achieve improved or comparable lesion-level accuracy in all tested clinical scenarios. For example, at 3 false positives, the lesion-wise ensemble method achieves 82.24% ± 1.43% sensitivity versus 80.78% ± 1.50% (nnDetection) and 60.40% ± 1.64% (nnUNet) for detecting PI-RADS ≥ 3 lesions. Consistent conclusions are also drawn from the results on the external data set.
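The GT-perspective, IoU-based lesion-level sensitivity described above can be sketched as follows. This is a minimal illustration assuming binary lesion masks that have already been separated into individual components; the IoU threshold of 0.1 is a hypothetical choice for the example, not the authors' exact matching rule.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-Union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def lesion_sensitivity(gt_lesions, pred_lesions, iou_thresh=0.1):
    """GT-perspective lesion-level sensitivity: the fraction of
    ground-truth lesions matched by at least one predicted region
    whose IoU with that lesion reaches `iou_thresh`."""
    if not gt_lesions:
        return 0.0
    hits = sum(
        any(iou(g, p) >= iou_thresh for p in pred_lesions)
        for g in gt_lesions
    )
    return hits / len(gt_lesions)

# Toy 1-D "volumes" standing in for 3-D lesion masks:
gt = [np.array([1, 1, 1, 0, 0], bool), np.array([0, 0, 0, 1, 1], bool)]
pred = [np.array([1, 1, 0, 0, 0], bool)]  # overlaps only the first lesion
print(lesion_sensitivity(gt, pred))  # → 0.5
```

Sweeping the detector's confidence threshold and recording this sensitivity against the average number of unmatched predictions per patient yields the false-positives-per-patient operating points (e.g. 3 FP/patient) quoted above.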
Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks such as skin cancer, colorectal cancer, brain tumour, cardiac disease, and breast cancer (BrC) detection, among others. The manual diagnosis of medical issues always requires an expert and is also expensive. Therefore, developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence; it is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low. Hence, early detection is essential and can raise the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimisation. The significant steps of the presented framework include (i) hybrid contrast enhancement of the acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model that is modified according to the selected dataset classes, (iv) deep transfer learning-based model training for feature extraction, (v) fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization-controlled Newton-Raphson algorithm, with the selected features finally classified using 10 machine learning classifiers. The experiments of the proposed framework have been carried out on the challenging, publicly available CBIS-DDSM dataset, obtaining a best accuracy of 94.5% along with improved computation time. The comparison shows that the presented method surpasses current state-of-the-art approaches.
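The shape of the fusion-then-selection stage (steps v and vi) can be sketched as below. This is a simplified stand-in, not the paper's method: serial concatenation replaces the corrected function-controlled CCA fusion, and a naive variance-ranked top-k pick replaces the Satin Bowerbird Optimization-controlled Newton-Raphson selector; the matrix sizes are made up for illustration.

```python
import numpy as np

def fuse_and_select(feats_a, feats_b, k):
    """Serially fuse two per-sample deep-feature matrices
    (n_samples x dims), then keep the k features with the highest
    variance across samples. Illustrative placeholder for the
    paper's CCA fusion and optimisation-driven selection."""
    fused = np.concatenate([feats_a, feats_b], axis=1)
    order = np.argsort(fused.var(axis=0))[::-1]  # high variance first
    return fused[:, order[:k]]

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 4))   # e.g. features from the modified ResNet-101
b = rng.normal(size=(8, 3))   # e.g. features from a second deep branch
selected = fuse_and_select(a, b, k=5)
print(selected.shape)  # → (8, 5)
```

The reduced feature matrix would then be passed to the downstream machine learning classifiers in place of the full fused representation, which is what makes the selection step matter for both accuracy and computation time.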
Funding: National Natural Science Foundation of China, Grant/Award Number: 62303275; International Alliance for Cancer Early Detection, Grant/Award Numbers: C28070/A30912, C73666/A31378; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Grant/Award Number: 203145Z/16/Z.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. Also supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); and BBSRC, UK (RM32G0178B8).