Abstract: Skin cancer is the most prevalent cancer globally, primarily due to extensive exposure to ultraviolet (UV) radiation. Early identification of skin cancer improves the likelihood of effective treatment, as delays may allow severe tumor progression. This study proposes a novel hybrid deep learning approach to the complex problem of skin cancer diagnosis, with an architecture that integrates a Vision Transformer, a bespoke convolutional neural network (CNN), and an Xception module. The approach was evaluated on two benchmark datasets, HAM10000 and Skin Cancer ISIC. On HAM10000, the model achieves a precision of 95.46%, an accuracy of 96.74%, a recall of 96.27%, a specificity of 96.00%, and an F1-score of 95.86%. On the Skin Cancer ISIC dataset, it obtains an accuracy of 93.19%, a precision of 93.25%, a recall of 92.80%, a specificity of 92.89%, and an F1-score of 93.19%. The findings demonstrate that the proposed model is robust and reliable for skin lesion classification. In addition, Explainable AI techniques such as Grad-CAM visualizations highlight the lesion regions that most influence the model's decisions.
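The abstract's details of the hybrid architecture are not given here, but the Grad-CAM technique it cites for explainability is standard: gradients of the class score with respect to the last convolutional feature maps are global-average-pooled into channel weights, and the weighted sum of the feature maps (after ReLU) is the heatmap. A minimal NumPy sketch of that weighting step, assuming the activations and gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from precomputed tensors.

    feature_maps: (K, H, W) activations of the last conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # alpha_k: global-average-pool each gradient map over its spatial dims
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted combination of feature maps, then ReLU to keep
    # only regions with a positive influence on the class score
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # normalize for visualization as an overlay
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the lesion photograph; high values mark the regions that most influenced the prediction.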
Abstract: With the rapid development of deep learning, image classification research in computer vision is no longer limited to recognizing object categories; finer-grained distinctions beyond conventional classification are increasingly required. Building on an analysis of existing fine-grained image classification algorithms and models, this paper proposes a deep learning network for fine-grained image classification that combines the Xception model with WSDAN (weakly supervised data augmentation network). The method uses the Xception network as the backbone and feature-extraction network, applies an improved WSDAN model for data augmentation, and feeds the augmented images back into the network as input to strengthen its generalization ability. Experiments on commonly used fine-grained image datasets and the NABirds dataset yield classification accuracies of 89.28%, 91.18%, 94.47%, 93.04%, and 88.4%, respectively. The results show that, compared with the WSDAN (PyTorch) model and several other mainstream fine-grained classification algorithms, the proposed method achieves better classification results.
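The WSDAN augmentation the abstract describes feeds images back into the network after attention-guided transformations; one of its core operations is attention cropping, where a bounding box is taken around the high-attention region and the crop is used as a new training input. A simplified NumPy sketch of that cropping step, assuming a single attention map has already been produced and normalized (the real WSDAN also performs attention dropping and bilinear upsampling of the crop):

```python
import numpy as np

def attention_crop(image, attention, threshold=0.5):
    """WSDAN-style attention cropping (simplified sketch).

    image:     (H, W, C) input image
    attention: (H, W) attention map, same spatial size as the image
    threshold: fraction of the attention maximum used to binarize the map
    Returns the tight crop around pixels whose attention exceeds the threshold.
    """
    # binarize the attention map relative to its own peak value
    mask = attention >= threshold * attention.max()
    ys, xs = np.where(mask)
    # tight bounding box around the attended region
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]
```

In the full pipeline the crop would be resized back to the network's input resolution and passed through the backbone again as an augmented sample.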