Journal Literature: 244 articles found
1. Research Progress of Convolutional Neural Networks and Vision Transformer in Glioma
Authors: 杨浩辉, 徐涛, 王伟, 安良良, 敖用芳, 朱家宝. 《磁共振成像》 (PKU Core), 2026, No. 1, pp. 168-174 (7 pages)
Glioma poses major challenges to conventional diagnosis and treatment because of its pronounced heterogeneity, strong invasiveness, and poor prognosis. Deep learning offers a new path toward precise diagnosis and treatment, with the convolutional neural network (CNN) and the Vision Transformer (ViT) as the core tools. CNNs, through hierarchical convolution, have a natural advantage in extracting local features such as tumor margins and texture detail, while ViTs, built on self-attention, excel at modeling global context such as cross-regional tumor heterogeneity and multimodal associations. Fusion strategies that integrate fine-grained local features with global contextual information show clear advantages on clinical problems such as blurred glioma boundaries and heterogeneous cross-modal data. This review surveys progress of both architectures in key clinical tasks, including glioma detection and segmentation, pathological grading, molecular subtyping, and prognosis assessment, and describes their principles, standalone applications, and fusion strategies. It also discusses open challenges, such as heavy dependence on annotated data and limited model interpretability, and outlines future directions, including lightweight architectures, self-supervised learning, and multi-omics fusion, aiming to provide a systematic reference for intelligent glioma diagnosis.
Keywords: glioma; deep learning; convolutional neural network; Vision Transformer; magnetic resonance imaging
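The local-versus-global contrast this review draws can be made concrete. Below is a hedged NumPy sketch (illustrative, not from any cited paper) of single-head self-attention over patch tokens: every output token is a weighted mix of all patches, which is the global-context behavior attributed to ViT, in contrast to a convolution's local receptive field.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention: each patch token attends to all others."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (n, n): global mixing
    return weights @ v, weights

rng = np.random.default_rng(0)
n, d = 16, 8                      # 16 patch tokens, embedding dim 8
tokens = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, w = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)                        # (16, 8)
print(np.allclose(w.sum(axis=1), 1.0))  # each attention row is a distribution
```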
2. A Fetal Cranial Ultrasound Standard-Plane Recognition Method Based on Conditional Generative Adversarial Networks and Vision Transformer
Authors: 李惠莲, 林艺榕, 刘中华, 柳培忠. 《临床超声医学杂志》, 2026, No. 2, pp. 164-169 (6 pages)
Fetal cranial ultrasound is a crucial part of routine prenatal screening, and accurate recognition of standard planes is important for assessing fetal brain development. However, variable image quality and the complexity of plane acquisition make accurate recognition challenging. This paper proposes a recognition method for fetal cranial ultrasound standard planes based on a conditional generative adversarial network (CGAN) and the Vision Transformer. The CGAN augments the original data by generating additional standard- and non-standard-plane images, addressing data scarcity, while a YOLOv9 model automatically crops the skull region in each ultrasound image, removing irrelevant content so the model focuses on the key region. In the Vision Transformer classifier, all input images are normalized and resized, and augmentations such as random horizontal or vertical flipping, contrast adjustment, center cropping, and saturation adjustment are applied. Results show that, compared with the current best-performing CSwin Transformer method, the proposed approach performs strongly on fetal cranial standard-plane recognition, reaching a precision of 92.5%, recall of 92.3%, F1-score of 92.4%, and accuracy of 93.3%. The method offers a clear advantage in recognition accuracy and provides effective technical support for clinical ultrasound examination.
Keywords: conditional generative adversarial network; Vision Transformer; cranial ultrasound; fetus; standard-plane recognition
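The augmentations this abstract lists are standard image transforms. A minimal NumPy sketch follows; the function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def random_flip(img, rng):
    """Randomly flip horizontally and/or vertically."""
    if rng.random() < 0.5:
        img = img[:, ::-1]        # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]        # vertical flip
    return img

def adjust_contrast(img, factor):
    """Scale pixel deviations from the mean, clipped to [0, 1]."""
    return np.clip(img.mean() + factor * (img - img.mean()), 0.0, 1.0)

def center_crop(img, size):
    """Keep the central size x size window of the image."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

rng = np.random.default_rng(42)
img = rng.random((224, 224))
aug = center_crop(adjust_contrast(random_flip(img, rng), 1.2), 196)
print(aug.shape)  # (196, 196)
```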
3. Application of a Vision Transformer Optimized with Attention Mechanisms to Caterpillar-Fungus Grade Recognition
Author: 刘惠文. 《消费电子》, 2026, No. 4, pp. 248-250 (3 pages)
In the digital era, deep learning has driven innovation in image recognition, yet grading of caterpillar fungus still relies mainly on manual experience, which is inefficient and subjective. This paper applies a Vision Transformer (ViT) model to grade-recognition of caterpillar-fungus images. It first presents the theoretical basis of visual perception, attention mechanisms, and hierarchical grading, and adjusts and optimizes the ViT from the perspectives of both the attention mechanism and the model structure. On this basis, 5-fold cross-validation is run with the PyTorch framework on a dataset of 5,000 images. Experiments show the model reaches 95.2% precision, 94.5% recall, and an F1-score of 94.8%, providing technical support for the intelligent development of the caterpillar-fungus industry.
Keywords: Vision Transformer; caterpillar-fungus grade recognition; image classification; deep learning; computer vision
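The 5-fold cross-validation described above (5,000 images, 5 folds) can be sketched minimally in NumPy, assuming a simple shuffled partition; the model-training step is only indicated by a comment.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split shuffled sample indices into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

folds = kfold_indices(5000, 5)
for i, val in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train a fresh model on `train` and evaluate on `val` here
    print(i, len(train), len(val))  # each round: 4000 train / 1000 validation
```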
4. A Siamese Multi-Level Vision Transformer Method for Change Detection in High-Resolution Remote-Sensing Imagery
Author: 黄英杰. 《测绘与空间地理信息》, 2026, No. 2, pp. 123-126, 130 (5 pages)
Existing remote-sensing change-detection models capture features incompletely and under-use deep and shallow features, limiting segmentation accuracy. To address this, a change-detection model combining a Vision Transformer with a Siamese architecture is proposed. On the encoder side, a Siamese multi-level Vision Transformer performs spatial feature extraction and global context modeling, while Haar-wavelet downsampling layers compress feature-map size with reduced loss of detail. During decoding, a full-scale feature-connection mechanism makes full use of deep and shallow features from different sources. Experiments show the proposed model outperforms current mainstream models in segmentation accuracy and accurately captures the boundaries and details of changed targets.
Keywords: remote-sensing change detection; Siamese architecture; Vision Transformer; Haar wavelet downsampling; full-scale feature connection
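Haar-wavelet downsampling, the detail-preserving compression used in the encoder above, can be sketched as one level of the standard 2D Haar transform (a generic formulation; the paper's exact layer is not shown here). The orthonormal scaling preserves energy, so spatial size halves without discarding information.

```python
import numpy as np

def haar_downsample(x):
    """One level of the 2D Haar transform: returns LL, LH, HL, HH bands,
    each at half resolution."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency average
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_downsample(x)
print(ll.shape)  # (2, 2)
# the orthonormal Haar transform preserves energy: no detail is lost
print(np.isclose((x ** 2).sum(),
                 sum((band ** 2).sum() for band in (ll, lh, hl, hh))))  # True
```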
5. Rice-Leaf Disease Image Recognition Based on an Improved Vision Transformer (cited: 1)
Authors: 朱周华, 周怡纳, 侯智杰, 田成源. 《电子测量技术》 (PKU Core), 2025, No. 10, pp. 153-160 (8 pages)
Intelligent recognition of rice-leaf diseases is of great importance in modern agriculture. Because the conventional Vision Transformer lacks inductive bias and struggles to capture local image detail effectively, an improved Vision Transformer model is proposed. By introducing intrinsic inductive bias, the model strengthens its ability to capture multi-scale context and both local and global dependencies, while reducing its need for large-scale datasets. In addition, the multilayer perceptron module of the Vision Transformer is replaced with a Kolmogorov-Arnold Network structure, improving the extraction of complex features and interpretability. Experiments show the proposed model performs excellently on rice-leaf disease recognition, reaching 98.62% accuracy, 6.2% higher than the original ViT model, a marked improvement in recognition performance.
Keywords: rice-leaf disease; image recognition; Vision Transformer network; inductive bias; local features
6. Gear Remaining-Useful-Life Prediction Based on a Residual-Attention TCN and Vision Transformer
Authors: 胡爱军, 李晨阳, 邢磊, 周卓浩, 向玲. 《航空动力学报》 (PKU Core), 2025, No. 12, pp. 14-24 (11 pages)
The running condition of a gear system is affected by many factors, which carry long-range temporal dependencies and differ between local and global features. To capture these temporal dependencies and adaptively reweight features, a temporal convolutional network with a residual convolutional block attention mechanism (RCMTCN) is proposed. Adding residual connections to the convolutional block attention module lets the model attend to both the raw input and the attention-weighted information, improving its perception of local information. On this basis, a Vision Transformer (ViT) model, which effectively captures global information in the data, is combined with the RCMTCN to predict gear remaining useful life (RUL). The fusion exploits the complementary strengths of local feature extraction and global attention for time-series data, improving perception of multi-dimensional features. The model was validated on gear performance-degradation datasets under two operating conditions, training on pitting-fault data and testing on both pitting and tooth-breakage faults. Compared with other methods, the proposed approach extracts key feature information more fully, with scoring-function scores of 0.8898 on the pitting fault and 0.8587 on the tooth-breakage fault, showing good adaptability across operating conditions and fault types.
Keywords: gear; remaining useful life; temporal convolutional network; attention mechanism; Vision Transformer model
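Temporal convolutional networks like the RCMTCN above are built from causal dilated convolutions. A minimal NumPy sketch (generic TCN building block, not the paper's RCMTCN) shows the defining property: the output at time t never uses future samples.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """Causal dilated 1D convolution: output at time t uses only x[t],
    x[t - dilation], x[t - 2*dilation], ... (no future samples)."""
    k = len(kernel)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so output is causal
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8.0)
y = causal_dilated_conv(x, kernel=np.array([0.5, 0.5]), dilation=2)
print(y)  # average of x[t] and x[t-2]: [0. 0.5 1. 2. 3. 4. 5. 6.]
```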
7. Application of the Vision Transformer Model to Image Classification in Traditional Chinese Medicine Tongue Diagnosis
Authors: 周坚和, 王彩雄, 李炜, 周晓玲, 张丹璇, 吴玉峰. 《广西科技大学学报》, 2025, No. 5, pp. 89-98 (10 pages)
Tongue diagnosis is an important and routine examination in traditional Chinese medicine (TCM) inspection and plays an indispensable role in clinical TCM diagnosis. To move beyond the subjectivity of traditional tongue diagnosis and the limited classification performance of convolutional neural network (CNN) models, this paper builds on a high-quality tongue-image classification dataset and proposes a Vision Transformer (ViT) deep learning model, optimizing feature extraction through a pretrain-then-finetune strategy and using data augmentation to handle class imbalance. Experiments on six key tongue-feature classification tasks show that on five of them the model's accuracy (coating color 85.6%, ecchymosis 98.0%, texture 99.6%, tongue color 96.6%, fissures 87.8%) clearly exceeds existing CNN methods (e.g., ResNet50: 78.0%, 91.0%, 92.0%, 68.0%, and 80.1%, respectively), confirming the model's effectiveness and application potential in breaking past performance bottlenecks and improving the reliability of intelligent clinical TCM diagnosis.
Keywords: tongue diagnosis; Vision Transformer (ViT); deep learning; medical image classification
8. A Hybrid Approach for Pavement Crack Detection Using Mask R-CNN and Vision Transformer Model (cited: 2)
Authors: Shorouq Alshawabkeh, Li Wu, Daojun Dong, Yao Cheng, Liping Li. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1, pp. 561-577 (17 pages)
Detecting pavement cracks is critical for road safety and infrastructure management. Traditional methods, relying on manual inspection and basic image processing, are time-consuming and prone to errors. Recent deep-learning (DL) methods automate crack detection, but many still struggle with variable crack patterns and environmental conditions. This study aims to address these limitations by introducing the Masker Transformer, a novel hybrid deep learning model that integrates the precise localization capabilities of Mask Region-based Convolutional Neural Network (Mask R-CNN) with the global contextual awareness of Vision Transformer (ViT). The research focuses on leveraging the strengths of both architectures to enhance segmentation accuracy and adaptability across different pavement conditions. We evaluated the performance of the Masker Transformer against other state-of-the-art models such as U-Net, Transformer U-Net (TransUNet), U-Net Transformer (UNETr), Swin U-Net Transformer (Swin-UNETr), You Only Look Once version 8 (YOLOv8), and Mask R-CNN using two benchmark datasets: Crack500 and DeepCrack. The findings reveal that the Masker Transformer significantly outperforms the existing models, achieving the highest Dice Similarity Coefficient (DSC), precision, recall, and F1-Score across both datasets. Specifically, the model attained a DSC of 80.04% on Crack500 and 91.37% on DeepCrack, demonstrating superior segmentation accuracy and reliability. The high precision and recall rates further substantiate its effectiveness in real-world applications, suggesting that the Masker Transformer can serve as a robust tool for automated pavement crack detection, potentially replacing more traditional methods.
Keywords: pavement crack segmentation; transportation; deep learning; Vision Transformer; Mask R-CNN; image segmentation
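The Dice Similarity Coefficient reported above is a standard overlap metric for segmentation masks. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 4))  # 2*2 / (3+3) = 0.6667
```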
9. The Value of a Vision Transformer Deep Learning Model in Prostate Cancer Identification
Authors: 李梦娟, 金龙, 尹胜男, 计一丁, 丁宁. 《中国医学计算机成像杂志》 (PKU Core), 2025, No. 3, pp. 396-401 (6 pages)
Objective: To explore the value of a Vision Transformer (ViT) deep learning model in identifying prostate cancer (PCa). Methods: Imaging data from 480 patients who underwent magnetic resonance imaging (MRI) were retrospectively analyzed. The TotalSegmentator model segmented the prostate region automatically, and three ViT models were built: one based on T2-weighted images (T2WI), one on apparent diffusion coefficient (ADC) maps, and one combining both. Results: For PCa identification, the combined model reached areas under the receiver operating characteristic (ROC) curve (AUC) of 0.961 in the training set and 0.980 in the test set, outperforming the ViT models built on a single imaging sequence. Among the single-sequence models, the ADC-based model outperformed the T2WI-based one. Decision-curve analysis also showed greater clinical benefit for the combined model. Conclusion: The ViT deep learning model offers high diagnostic accuracy and potential value in prostate cancer identification.
Keywords: Vision Transformer; deep learning; prostate cancer; automatic segmentation; magnetic resonance imaging
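The AUC values reported above can be computed directly from rank statistics. A hedged NumPy sketch of the standard Mann-Whitney formulation (a generic metric implementation, not the study's code):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U statistic), with ties counted as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
print(roc_auc(labels, scores))  # 8 of 9 positive/negative pairs ranked correctly
```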
10. Remote-Sensing Image Classification Based on an Improved Vision Transformer (cited: 1)
Authors: 李宗轩, 冷欣, 章磊, 陈佳凯. 《林业机械与木工设备》, 2025, No. 6, pp. 31-35 (5 pages)
Remote-sensing image classification can quickly and effectively map the distribution of forested areas, supporting forestry resource management and monitoring. The Vision Transformer (ViT) is widely used in remote-sensing classification thanks to its strong global-information capture. However, during shallow feature extraction the Vision Transformer redundantly captures other local features and fails to capture key features effectively, and splitting images into patches can lose edge and other detail, hurting classification accuracy. To address these problems, an improved Vision Transformer is proposed: a Super Token Attention (STA) mechanism strengthens the extraction of key feature information and reduces computational redundancy, and Haar wavelet downsampling reduces detail loss while improving the capture of local and global information across image scales. Experiments reach an overall accuracy of 92.98% on the AID dataset, demonstrating the effectiveness of the proposed method.
Keywords: remote-sensing image classification; Vision Transformer; Haar wavelet downsampling; STA attention mechanism
11. Mixed-Type Wafer-Map Defect Pattern Recognition Based on Vision Transformer
Authors: 李攀, 娄莉. 《现代信息科技》, 2025, No. 19, pp. 26-30 (5 pages)
Wafer testing is an important step in chip production, and recognizing and classifying wafer-map defect patterns plays a key role in improving front-end manufacturing processes. In actual production, several defect types can occur simultaneously, forming mixed defect patterns, which traditional deep learning methods recognize poorly. This paper therefore proposes a Vision Transformer-based defect-recognition method that encodes the global features of wafer maps with multi-head self-attention, enabling efficient recognition of mixed-type wafer defect maps. Experiments on a mixed-type defect dataset show the method outperforms existing deep learning models, with an average accuracy of 96.2%.
Keywords: computer vision; wafer map; defect recognition; Vision Transformer
12. Steel Surface Defect Detection Using Learnable Memory Vision Transformer
Authors: Syed Tasnimul Karim Ayon, Farhan Md. Siraj, Jia Uddin. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1, pp. 499-520 (22 pages)
This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token to Token ViT, ViT without memory, and Parallel ViT. Leveraging a widely-used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and understanding. These techniques mitigated overfitting, stabilized training, and improved generalization capabilities. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as the Token to Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token to Token ViT, ViT without memory, and Parallel ViT, respectively. The findings highlight the LMViT's ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also demonstrate improved performance in capturing complex features over CNNs, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial applications. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often misclassify. This study not only demonstrates LMViT's potential for real-world defect detection but also underscores the promise of other transformer-based architectures like Token to Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are key. Future research may focus on enhancing LMViT's computational efficiency for deployment in real-time quality control systems.
Keywords: Learnable Memory Vision Transformer (LMViT); convolutional neural networks (CNN); metal surface defect detection; deep learning; computer vision; image classification; learnable memory; gradient clipping; label smoothing; t-SNE visualization
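Label smoothing, listed among the keywords above, is a standard regularizer against over-confident predictions. A minimal NumPy sketch of the usual formulation (not necessarily the exact variant used in the paper):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: move eps of the probability mass from the true class
    to a uniform distribution over all classes, discouraging over-confident
    logits and reducing overfitting."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes

y = np.array([[0.0, 0.0, 1.0, 0.0]])   # hard one-hot target, 4 classes
print(smooth_labels(y))                 # [[0.025 0.025 0.925 0.025]]
print(smooth_labels(y).sum())           # still a valid distribution: 1.0
```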
13. ViT-Count: A Vision Transformer Tree Counting and Localization Method for Canopy Occlusion
Authors: 张乔一, 张瑞, 霍光煜. 《北京林业大学学报》 (PKU Core), 2025, No. 10, pp. 128-138 (11 pages)
[Objective] To address the challenges of tree detection in complex scenes, such as occlusion, background clutter, and dense distributions, this study proposes a Vision Transformer (ViT)-based tree detection method, ViT-Count, to improve detection accuracy and robustness in complex scenes. [Methods] ViT serves as the base model, since it has a natural advantage in capturing global context in images, which suits complex environments with variable tree morphology. A visual prompt tuning (VPT) mechanism tailored to trees injects learnable prompts into the features, optimizing feature extraction under dense canopies, changing illumination, and different species structures, and improving adaptability to different stand types. An attention module for the convolutional blocks adds long-range dependency modeling on top of local perception, strengthening discrimination of occluded, overlapping, and morphologically similar targets and improving overall detection robustness and accuracy. A tree-detection decoder restores spatial resolution step by step through multi-layer convolution, normalization, GELU activation, and upsampling, producing target density maps for tree counting and localization. [Results] The method improves tree-detection robustness in forest and urban scenes and generalizes better to multi-scale tree targets. In experiments on the Larch Casebearer and Urban Tree datasets, it reduced MAE and RMSE by up to 2.53 and 3.99, respectively, compared with other mainstream models, showing stronger generalization and the best tree-detection performance. Visualization results show high detection accuracy in both dense forests and complex urban scenes, and ablation experiments confirm the effectiveness of the main modules. [Conclusion] The proposed method fully exploits ViT's global modeling ability and the task adaptability of visual prompt tuning and, combined with convolutional attention, effectively improves the accuracy and robustness of tree counting and localization in complex scenes.
Keywords: object recognition; tree counting; tree localization; complex scenes; Vision Transformer (ViT); visual prompt tuning (VPT); attention mechanism
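The MAE and RMSE figures above are standard counting-error metrics. A minimal NumPy sketch with illustrative (made-up) count values:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between predicted and ground-truth tree counts."""
    return np.abs(y_true - y_pred).mean()

def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large count errors more than MAE."""
    return np.sqrt(((y_true - y_pred) ** 2).mean())

true_counts = np.array([12.0, 30.0, 7.0, 55.0])
pred_counts = np.array([10.0, 33.0, 7.0, 50.0])
print(mae(true_counts, pred_counts))   # (2 + 3 + 0 + 5) / 4 = 2.5
print(rmse(true_counts, pred_counts))  # sqrt((4 + 9 + 0 + 25) / 4) ≈ 3.082
```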
14. Forest Fire Video Recognition Based on an Improved Vision Transformer
Authors: 张敏, 辛颖, 黄天棋. 《南京林业大学学报(自然科学版)》 (PKU Core), 2025, No. 4, pp. 186-194 (9 pages)
[Objective] Existing forest-fire image-recognition algorithms suffer from limited efficiency and poor use of temporal features. This study builds a video-based forest-fire recognition model to improve the timeliness and accuracy of fire monitoring. [Methods] A C3D-ViT algorithm is proposed that fuses a three-dimensional convolutional neural network (3DCNN) with a Vision Transformer (ViT). The 3DCNN extracts spatiotemporal features from video sequences to build spatiotemporal feature vectors; the self-attention of the ViT encoder fuses local and global features; and an MLP Head layer outputs the classification result. Ablation experiments verify the effectiveness of C3D-ViT, which is compared with the base 3DCNN and ViT models as well as deep learning models such as ResNet50, LSTM, and YOLOv5. [Results] C3D-ViT reaches 96.10% accuracy on a self-built forest-fire dataset, a clear advantage over ResNet50 (89.07%), LSTM (93.26%), and YOLOv5 (91.46%). The improvement is effective, with accuracy exceeding 3DCNN (93.91%) and ViT (90.43%). The model keeps high average confidence in complex scenes with occlusion, long distances, and thin smoke, meeting real-time monitoring needs. [Conclusion] By jointly modeling spatiotemporal features, C3D-ViT markedly improves the robustness and timeliness of forest-fire recognition, providing reliable technical support for forest fire-prevention systems.
Keywords: forest fire; deep learning; object detection; three-dimensional convolutional neural network; Vision Transformer
15. Deepfake Video Tampering Detection Fusing Vision Transformer and 3D CNN
Authors: 孙立信, 吴永飞, 李心宇, 任杰煌, 刘西林. 《计算机应用与软件》 (PKU Core), 2025, No. 11, pp. 121-127 (7 pages)
Deepfake technology makes it easy to tamper with face videos, causing great harm to society. Existing tampering-detection methods focus mainly on spatial-feature changes in local face regions between video frames; they neglect the temporal features of continuous global regions and cannot detect subtle spatial-feature changes within frames. To address this, a video tampering-detection method fusing Vision Transformer and 3D CNN, ViT-3DCNN, is proposed. Without cropping faces, it directly learns continuous temporal features across frames and the spatial features of each frame. Experiments show that, without relying on face cropping, the ViT-3DCNN model reaches classification accuracies of 93.3% on the DFDC dataset and 90.65% on Celeb-DF, fully confirming its clear advantages in detection accuracy and generalization over existing detection methods.
Keywords: fake-video tampering detection; spatiotemporal features; Vision Transformer; 3D convolution
16. xCViT: Improved Vision Transformer Network with Fusion of CNN and Xception for Skin Disease Recognition with Explainable AI
Authors: Armughan Ali, Hooria Shahbaz, Robertas Damaševicius. 《Computers, Materials & Continua》, 2025, No. 4, pp. 1367-1398 (32 pages)
Skin cancer is the most prevalent cancer globally, primarily due to extensive exposure to Ultraviolet (UV) radiation. Early identification of skin cancer enhances the likelihood of effective treatment, as delays may lead to severe tumor advancement. This study proposes a novel hybrid deep learning strategy to address the complex issue of skin cancer diagnosis, with an architecture that integrates a Vision Transformer, a bespoke convolutional neural network (CNN), and an Xception module. They were evaluated using two benchmark datasets, HAM10000 and Skin Cancer ISIC. On HAM10000, the model achieves a precision of 95.46%, an accuracy of 96.74%, a recall of 96.27%, a specificity of 96.00%, and an F1-Score of 95.86%. On the Skin Cancer ISIC dataset, it obtains an accuracy of 93.19%, a precision of 93.25%, a recall of 92.80%, a specificity of 92.89%, and an F1-Score of 93.19%. The findings demonstrate that the proposed model is robust and trustworthy for the classification of skin lesions. In addition, Explainable AI techniques, such as Grad-CAM visualizations, help highlight the lesion areas most influential on the model's decisions.
Keywords: skin lesions; Vision Transformer; CNN; Xception; deep learning; network fusion; explainable AI; Grad-CAM; skin cancer detection
17. Enhanced Plant Species Identification through Metadata Fusion and Vision Transformer Integration
Authors: Hassan Javed, Labiba Gillani Fahad, Syed Fahad Tahir, Mehdi Hassan, Hani Alquhayz. 《Computers, Materials & Continua》, 2025, No. 11, pp. 3981-3996 (16 pages)
Accurate plant species classification is essential for many applications, such as biodiversity conservation, ecological research, and sustainable agricultural practices. Traditional morphological classification methods are inherently slow, labour-intensive, and prone to inaccuracies, especially when distinguishing between species exhibiting visual similarities or high intra-species variability. To address these limitations and to overcome the constraints of image-only approaches, we introduce a novel Artificial Intelligence-driven framework. This approach integrates robust Vision Transformer (ViT) models for advanced visual analysis with a multi-modal data fusion strategy, incorporating contextual metadata such as precise environmental conditions, geographic location, and phenological traits. This combination of visual and ecological cues significantly enhances classification accuracy and robustness, proving especially vital in complex, heterogeneous real-world environments. The proposed model achieves a test accuracy of 97.27% and a Mean Reciprocal Rank (MRR) of 0.9842, demonstrating strong generalization capabilities. Furthermore, efficient utilization of high-performance GPU resources (RTX 3090, 18 GB memory) ensures scalable processing of high-dimensional data. Comparative analysis consistently confirms that our metadata fusion approach substantially improves classification performance, particularly for morphologically similar species, and through principled self-supervised and transfer learning from ImageNet, the model adapts efficiently to new species, ensuring enhanced generalization. This comprehensive approach holds profound practical implications for precise conservation initiatives, rigorous ecological monitoring, and advanced agricultural management.
Keywords: Vision Transformers (ViTs); transformers; machine learning; deep learning; plant species classification; multi-organ
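The Mean Reciprocal Rank (MRR) reported above averages the reciprocal rank of the first correct species in each ranked prediction list. A minimal sketch with illustrative species names (not from the paper's data):

```python
import numpy as np

def mean_reciprocal_rank(ranked_lists, true_labels):
    """MRR: mean of 1/rank of the first correct label in each prediction
    list (ranks are 1-based; contributes 0 if the label never appears)."""
    rr = []
    for preds, truth in zip(ranked_lists, true_labels):
        rank = next((i + 1 for i, p in enumerate(preds) if p == truth), None)
        rr.append(1.0 / rank if rank else 0.0)
    return float(np.mean(rr))

# three queries; the true species appears at ranks 1, 2, and 1
preds = [["oak", "elm", "ash"], ["elm", "pine", "oak"], ["fir", "oak"]]
truth = ["oak", "pine", "fir"]
print(mean_reciprocal_rank(preds, truth))  # (1 + 1/2 + 1) / 3 ≈ 0.8333
```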
18. Multi-Stage Vision Transformer and Knowledge Graph Fusion for Enhanced Plant Disease Classification
Authors: Wafaa H. Alwan, Sabah M. Alturfi. 《Computer Systems Science & Engineering》, 2025, No. 1, pp. 419-434 (16 pages)
Plant diseases pose a significant challenge to global agricultural productivity, necessitating efficient and precise diagnostic systems for early intervention and mitigation. In this study, we propose a novel hybrid framework that integrates EfficientNet-B8, Vision Transformer (ViT), and Knowledge Graph Fusion (KGF) to enhance plant disease classification across 38 distinct disease categories. The proposed framework leverages deep learning and semantic enrichment to improve classification accuracy and interpretability. EfficientNet-B8, a convolutional neural network (CNN) with optimized depth and width scaling, captures fine-grained spatial details in high-resolution plant images, aiding in the detection of subtle disease symptoms. In parallel, ViT, a transformer-based architecture, effectively models long-range dependencies and global structural patterns within the images, ensuring robust disease pattern recognition. Furthermore, KGF incorporates domain-specific metadata, such as crop type, environmental conditions, and disease relationships, to provide contextual intelligence and improve classification accuracy. The proposed model was rigorously evaluated on a large-scale dataset containing diverse plant disease images, achieving outstanding performance with a 99.7% training accuracy and 99.3% testing accuracy. The precision and F1-score were consistently high across all disease classes, demonstrating the framework's ability to minimize false positives and false negatives. Compared to conventional deep learning approaches, this hybrid method offers a more comprehensive and interpretable solution by integrating self-attention mechanisms and domain knowledge. Beyond its superior classification performance, this model opens avenues for optimizing metadata dependency and reducing computational complexity, making it more feasible for real-world deployment in resource-constrained agricultural settings. The proposed framework represents an advancement in precision agriculture, providing scalable, intelligent disease diagnosis that enhances crop protection and food security.
Keywords: plant disease classification; EfficientNet-B8; Vision Transformer; knowledge graph fusion; precision agriculture; deep learning; contextual metadata
19. Multi-Scale Vision Transformer with Dynamic Multi-Loss Function for Medical Image Retrieval and Classification
Authors: Omar Alqahtani, Mohamed Ghouse, Asfia Sabahath, Omer Bin Hussain, Arshiya Begum. 《Computers, Materials & Continua》, 2025, No. 5, pp. 2221-2244 (24 pages)
This paper introduces a novel method for medical image retrieval and classification by integrating a multi-scale encoding mechanism with Vision Transformer (ViT) architectures and a dynamic multi-loss function. The multi-scale encoding significantly enhances the model's ability to capture both fine-grained and global features, while the dynamic loss function adapts during training to optimize classification accuracy and retrieval performance. Our approach was evaluated on the ISIC-2018 and ChestX-ray14 datasets, yielding notable improvements. Specifically, on the ISIC-2018 dataset, our method achieves an F1-Score improvement of +4.84% compared to the standard ViT, with a precision increase of +5.46% for melanoma (MEL). On the ChestX-ray14 dataset, the method delivers an F1-Score improvement of 5.3% over the conventional ViT, with precision gains of +5.0% for pneumonia (PNEU) and +5.4% for fibrosis (FIB). Experimental results demonstrate that our approach outperforms traditional CNN-based models and existing ViT variants, particularly in retrieving relevant medical cases and enhancing diagnostic accuracy. These findings highlight the potential of the proposed method for large-scale medical image analysis, offering improved tools for clinical decision-making through superior classification and case comparison.
Keywords: medical image retrieval; Vision Transformer; multi-scale encoding; multi-loss function; ISIC-2018; ChestX-ray14
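The idea of a dynamic multi-loss function — reweighting several objectives as training progresses — can be sketched abstractly. The linear schedule below is purely an illustrative assumption; the paper's actual weighting scheme is not specified in this abstract.

```python
def combined_loss(cls_loss, ret_loss, epoch, total_epochs):
    """Hypothetical dynamic weighting between a classification loss and a
    retrieval loss. The linear shift toward the retrieval objective is an
    illustrative assumption, not the paper's actual formulation."""
    w = epoch / total_epochs            # grows from 0 toward 1 over training
    return (1.0 - w) * cls_loss + w * ret_loss

# early training emphasizes classification, late training retrieval
print(combined_loss(2.0, 1.0, epoch=0, total_epochs=10))  # 2.0
print(combined_loss(2.0, 1.0, epoch=9, total_epochs=10))  # 0.1*2 + 0.9*1 = 1.1
```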
20. Mango Disease Detection Using Fused Vision Transformer with ConvNeXt Architecture
Authors: Faten S. Alamri, Tariq Sadad, Ahmed S. Almasoud, Raja Atif Aurangzeb, Amjad Khan. 《Computers, Materials & Continua》, 2025, No. 4, pp. 1023-1039 (17 pages)
Mango farming significantly contributes to the economy, particularly in developing countries. However, mango trees are susceptible to various diseases caused by fungi, viruses, and bacteria, and diagnosing these diseases at an early stage is crucial to prevent their spread, which can lead to substantial losses. The development of deep learning models for detecting crop diseases is an active area of research in smart agriculture. This study focuses on mango plant diseases and employs the ConvNeXt and Vision Transformer (ViT) architectures. Two datasets were used. The first, MangoLeafBD, contains data for mango leaf diseases such as anthracnose, bacterial canker, gall midge, and powdery mildew. The second, SenMangoFruitDDS, includes data for mango fruit diseases such as Alternaria, Anthracnose, Black Mould Rot, Healthy, and Stem and Rot. Both datasets were obtained from publicly available sources. The proposed model achieved an accuracy of 99.87% on the MangoLeafBD dataset and 98.40% on the MangoFruitDDS dataset. The results demonstrate that ConvNeXt and ViT models can effectively diagnose mango diseases, enabling farmers to identify these conditions more efficiently. The system contributes to increased mango production and minimizes economic losses by reducing the time and effort needed for manual diagnostics. Additionally, the proposed system is integrated into a mobile application that utilizes the model as a backend to detect mango diseases instantly.
Keywords: ConvNeXt model; fusion; mango disease; smart agriculture; Vision Transformer