Journal Articles
246 articles found
1. Advancing Breast Cancer Molecular Subtyping: A Comparative Study of Convolutional Neural Networks and Vision Transformers on Mammograms
Authors: Chee Chin Lim, Hui Wen Tiu, Qi Wei Oung, Chiew Chea Lau, Xiao Jian Tan. Computers, Materials & Continua, 2026, Issue 3, pp. 1287-1308.
Molecular subtyping of breast cancer is critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via the immunohistochemistry (IHC) test is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, including two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, particularly due to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.
Keywords: artificial intelligence; breast cancer classification; convolutional neural network; deep learning; hyperparameter tuning; mammography; medical imaging; molecular subtypes; vision transformer
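The layer-freezing strategy mentioned in the abstract above (update only the later layers of a pretrained backbone during fine-tuning) can be sketched without any framework. The layer names and parameter counts below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a layer-freezing schedule for fine-tuning a pretrained
# backbone. Layer names and sizes are illustrative, not taken from the paper.

def freeze_backbone(layers, n_trainable_tail):
    """Mark all but the last n_trainable_tail layers as frozen."""
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= len(layers) - n_trainable_tail
    return layers

def count_trainable_params(layers):
    return sum(l["params"] for l in layers if l["trainable"])

# A toy ViT-style model: patch embedding, three transformer blocks, and a
# classification head sized for four molecular subtypes (768 * 4 + 4 = 3076).
model = [
    {"name": "patch_embed", "params": 590_592,   "trainable": True},
    {"name": "block_1",     "params": 7_087_872, "trainable": True},
    {"name": "block_2",     "params": 7_087_872, "trainable": True},
    {"name": "block_3",     "params": 7_087_872, "trainable": True},
    {"name": "head",        "params": 3_076,     "trainable": True},
]

freeze_backbone(model, n_trainable_tail=2)  # tune last block + head only
print(count_trainable_params(model))        # prints 7090948
```

In a real framework the same idea is one flag per parameter group (e.g. disabling gradients for the frozen layers); only the trainable tail is passed to the optimizer.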
2. Unsupervised Time-Series Signal Analysis with Autoencoders and Vision Transformers: A Review of Architectures and Applications
Authors: Hossein Ahmadi, Sajjad Emdadi Mahdimahalleh, Arman Farahat, Banafsheh Saffari. Journal of Intelligent Learning Systems and Applications, 2025, Issue 2, pp. 77-111.
The rapid growth of unlabeled time-series data in domains such as wireless communications, radar, biomedical engineering, and the Internet of Things (IoT) has driven advancements in unsupervised learning. This review synthesizes recent progress in applying autoencoders and vision transformers for unsupervised signal analysis, focusing on their architectures, applications, and emerging trends. We explore how these models enable feature extraction, anomaly detection, and classification across diverse signal types, including electrocardiograms, radar waveforms, and IoT sensor data. The review highlights the strengths of hybrid architectures and self-supervised learning, while identifying challenges in interpretability, scalability, and domain generalization. By bridging methodological innovations and practical applications, this work offers a roadmap for developing robust, adaptive models for signal intelligence.
Keywords: unsupervised learning; autoencoders; vision transformers; time-series analysis; signal processing; representation learning; anomaly detection; wireless signals; biomedical signals; radar; IoT
3. Research Progress of Convolutional Neural Networks and Vision Transformers in Glioma
Authors: 杨浩辉, 徐涛, 王伟, 安良良, 敖用芳, 朱家宝. 《磁共振成像》 (PKU Core), 2026, Issue 1, pp. 168-174.
Gliomas pose major challenges to conventional diagnosis and treatment owing to their marked heterogeneity, strong invasiveness, and poor prognosis. Deep learning offers a new route to precision diagnosis and treatment, with the convolutional neural network (CNN) and the Vision Transformer (ViT) as its core tools. CNNs have a natural advantage in extracting local features (e.g., tumor margins and texture details) through hierarchical convolution, whereas ViTs excel at global context modeling (e.g., cross-regional tumor heterogeneity and multimodal associations) via self-attention. Fusion strategies that integrate fine-grained local features with global contextual information show clear advantages on clinical problems such as blurred glioma boundaries and heterogeneous cross-modal data. This review surveys progress on key clinical tasks, including glioma detection and segmentation, pathological grading, molecular subtyping, and prognosis assessment, covering the underlying principles, standalone applications, and fusion strategies. It also discusses open challenges, such as heavy reliance on annotated data and limited model interpretability, and outlines future directions, including lightweight architectures, self-supervised learning, and multi-omics fusion, to provide a systematic reference for intelligent glioma diagnosis.
Keywords: glioma; deep learning; convolutional neural network; Vision Transformer; magnetic resonance imaging
4. A Standard-Plane Recognition Method for Fetal Cranial Ultrasound Based on Conditional Generative Adversarial Networks and Vision Transformer
Authors: 李惠莲, 林艺榕, 刘中华, 柳培忠. 《临床超声医学杂志》, 2026, Issue 2, pp. 164-169.
Fetal cranial ultrasound is a crucial part of routine prenatal screening, and accurate recognition of standard planes is essential for assessing fetal brain development. However, variation in ultrasound image quality and the complexity of plane acquisition make accurate recognition challenging. This paper proposes a standard-plane recognition method for fetal cranial ultrasound based on a conditional generative adversarial network (CGAN) and a Vision Transformer. The CGAN augments the original data by generating additional standard-plane and non-standard-plane images to address data scarcity, while a YOLOv9 model automatically crops the skull region in the ultrasound images, removing irrelevant information so that the model focuses on the key region. In the classification stage, the Vision Transformer normalizes and resizes all input images, with data augmentation such as random horizontal or vertical flipping, contrast adjustment, center cropping, and saturation adjustment. Compared with the current best-performing CSwin Transformer method, the proposed approach performs well on the fetal cranial ultrasound standard-plane recognition task, achieving precision, recall, F1-score, and accuracy of 92.5%, 92.3%, 92.4%, and 93.3%, respectively. The method offers a clear advantage in recognition accuracy and provides effective technical support for clinical ultrasound examination.
Keywords: conditional generative adversarial network; Vision Transformer; cranial ultrasound; fetus; standard-plane recognition
5. Vision Transformers for Estimating Irradiance Using Data-Scarce Sky Images
Authors: David Hamlyn, Sunny Chaudhary, Tasmiat Rahman. Energy and AI, 2025, Issue 3, pp. 619-630.
Accurate estimation of diffuse horizontal irradiance (DHI) is critical for optimising photovoltaic system performance and energy forecasting, yet remains challenging in regions lacking comprehensive ground-based instrumentation. Recent advancements using Vision Transformers (ViTs) trained on extensive sky image datasets have shown promise in replacing costly irradiance measurement equipment, but the scarcity of long-term, high-quality sky imagery significantly restricts practical implementation. Addressing this critical gap, this study proposes a novel dual-framework approach designed for data-scarce scenarios. First, calculated atmospheric parameters, including extraterrestrial irradiance and cyclic time encodings, are integrated to represent sky conditions without utilising any instrumentation. Next, a sequential pipeline initially predicts synthetic global horizontal irradiance (GHI) and uses it as a feature to refine DHI estimation. Finally, a dual-parallel architecture simultaneously processes raw and overlay-enhanced fisheye sky images. Overlays are generated through unsupervised, physics-informed cloud segmentation to highlight dynamic sky features. Empirical validation is performed using data from the Chilbolton Observatory, chosen for its temperate climate and frequent cloud variability. To simulate data-scarce conditions, models are trained on a single month (e.g., January) and evaluated across a temporally disjoint, full-year test set. Under this setup, the sequential and dual-parallel frameworks achieve RMSE values within 2-3 W/m² and 1-6 W/m², respectively, of a state-of-the-art ViT trained on the complete dataset. By combining physics-informed modelling with unsupervised segmentation, the proposed method provides a scalable and cost-effective solution for DHI estimation, advancing solar resource assessment in data-constrained environments.
Keywords: computer vision; machine learning; solar irradiance; sky imaging; vision transformer (ViT)
6. An Intelligent Vision Transformer-Based Monitoring Model for Blast Furnace Tuyeres and Its Application
Authors: 王浩男, 韩明博, 但家云, 李强. 《钢铁研究学报》 (PKU Core), 2026, Issue 1, pp. 25-37.
The peephole of a blast furnace tuyere allows real-time observation of key smelting-state information, such as combustion characteristics in the raceway and the state of pulverized coal injection, from which important parameters such as gas flow distribution and hearth activity can be judged. To address the subjectivity and time lag of tuyere monitoring, this work builds an intelligent tuyere monitoring model, TI-ViT, from unstructured big data of tuyere images and the Vision Transformer architecture. First, the collected tuyere images are preprocessed, and a dataset of typical furnace conditions is formed through feature analysis and labeling. The TI-ViT tuyere image recognition model is then built on the Vision Transformer architecture. Finally, the model is evaluated, with emphasis on how model depth affects accuracy, parameter count, training time, and runtime, and it is compared with conventional convolutional neural network models. The TI-ViT model reaches 97.7% accuracy, a 9.1% improvement over CNN-based models, with an inference time of only 15.75 ms per image. The "Smart Eye" system developed from this model was deployed on site, where its recognition accuracy reached 95.2%, showing that the system achieves real-time monitoring, recognition, and early warning for blast furnace tuyeres, helps reduce the cost of monitoring and diagnosing abnormal tuyere states for steel enterprises, and points to a new direction for intelligent blast furnace ironmaking.
Keywords: blast furnace tuyere; computer vision; Vision Transformer; image recognition; blast furnace ironmaking
7. Gait-ViT: A Cross-View Gait Recognition Method Based on Vision Transformer
Authors: 沈澍, 王森, 黄苏岩, 张秉睿. 《小型微型计算机系统》 (PKU Core), 2026, Issue 3, pp. 646-652.
As a remote biometric technology, gait recognition has broad application prospects in medical rehabilitation, criminal investigation, and public security. With the rapid development of deep learning, gait recognition methods have gradually shifted from traditional convolutional neural networks (CNNs) to the more advanced Transformer architecture. Although CNNs perform well on image-processing tasks, their ability to attend to key image regions is limited, whereas attention mechanisms can learn more discriminative features by focusing on local image regions. This paper therefore proposes Gait-ViT, a Vision Transformer model with an attention mechanism for gait recognition. The method first divides the gait silhouette into small patches and converts them into a patch sequence; position and class embeddings then encode the positional information of the sequence; finally, the vector sequence is fed to the Vision Transformer for prediction. Gait-ViT achieves recognition accuracies of 98.1% and 91.2% on the public CASIA-B and OU-MVLP gait datasets, respectively, validating the effectiveness of the proposed model.
Keywords: gait recognition; Vision Transformer; convolutional neural network; feature extraction
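The patch-sequence construction described in the Gait-ViT abstract above (split the silhouette into patches, prepend a class token, add position embeddings) follows the standard ViT recipe. A minimal NumPy sketch, with all sizes chosen for illustration rather than taken from the paper:

```python
import numpy as np

def to_patch_sequence(silhouette, patch=16, dim=64, rng=np.random.default_rng(0)):
    """Split an HxW silhouette into patch tokens, prepend a class token,
    and add position embeddings (randomly initialized here; learned in a
    real model)."""
    h, w = silhouette.shape
    assert h % patch == 0 and w % patch == 0
    # Partition into non-overlapping patches and flatten each one.
    patches = (silhouette.reshape(h // patch, patch, w // patch, patch)
                         .transpose(0, 2, 1, 3)
                         .reshape(-1, patch * patch))
    proj = rng.standard_normal((patch * patch, dim))  # linear patch projection
    tokens = patches @ proj                           # (num_patches, dim)
    cls = rng.standard_normal((1, dim))               # class token
    seq = np.concatenate([cls, tokens], axis=0)
    pos = rng.standard_normal(seq.shape)              # position embeddings
    return seq + pos

seq = to_patch_sequence(np.zeros((64, 64)))
print(seq.shape)  # (17, 64): 1 class token + 16 patches of a 64x64 silhouette
```

The resulting sequence is what the transformer encoder consumes; the class token's final state is typically used for identity prediction.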
8. A Rolling Bearing Fault Diagnosis Method Based on an Efficient Diagnosis Vision Transformer (EDViT) Network
Authors: 罗志勇, 李明周, 董鑫. 《重庆邮电大学学报(自然科学版)》 (PKU Core), 2026, Issue 1, pp. 146-155.
To address incomplete feature extraction and low diagnostic efficiency in rolling bearing fault diagnosis, an Efficient Diagnosis Vision Transformer (EDViT) network is proposed. A kurtosis-based weighted fusion strategy merges information from multiple sensors; the short-time Fourier transform converts the fused signal into time-frequency images; EDViT's dual-attention convolution module and dual-branch patch vision transformer module are then applied in turn to extract local and global features, and a classifier performs fault classification. Experiments were conducted on the Case Western Reserve University bearing dataset. The results show that the EDViT model offers excellent feature extraction, fast convergence, and high diagnostic accuracy, and comparisons with other methods demonstrate strong generalization and robustness.
Keywords: Efficient Diagnosis Vision Transformer (EDViT) network; rolling bearing; fault diagnosis
9. A Hybrid Vision Transformer with Attention Architecture for Efficient Lung Cancer Diagnosis
Authors: Abdu Salam, Fahd M. Aldosari, Donia Y. Badawood, Farhan Amin, Isabel de la Torre, Gerardo Mendez Mezquita, Henry Fabian Gongora. Computers, Materials & Continua, 2026, Issue 4, pp. 1129-1147.
Lung cancer remains a major global health challenge, with early diagnosis crucial for improved patient survival. Traditional diagnostic techniques, including manual histopathology and radiological assessments, are prone to errors and variability. Deep learning methods, particularly Vision Transformers (ViT), have shown promise for improving diagnostic accuracy by effectively extracting global features. However, ViT-based approaches face challenges related to computational complexity and limited generalizability. This research proposes the DualSet ViT-PSO-SVM framework, integrating a ViT with dual attention mechanisms, Particle Swarm Optimization (PSO), and Support Vector Machines (SVM), aiming for efficient and robust lung cancer classification across multiple medical image datasets. The study utilized three publicly available datasets, LIDC-IDRI, LUNA16, and TCIA, encompassing computed tomography (CT) scans and histopathological images. Data preprocessing included normalization, augmentation, and segmentation. Dual attention mechanisms enhanced the ViT's feature extraction capabilities, PSO optimized feature selection, and SVM performed classification. Model performance was evaluated on individual and combined datasets, benchmarked against CNN-based and standard ViT approaches. The DualSet ViT-PSO-SVM significantly outperformed existing methods, achieving superior accuracy rates of 97.85% (LIDC-IDRI), 98.32% (LUNA16), and 96.75% (TCIA). Cross-dataset evaluations demonstrated strong generalization capabilities and stability across similar imaging modalities. The proposed framework effectively bridges advanced deep learning techniques with clinical applicability, offering a robust diagnostic tool for lung cancer detection, reducing complexity, and improving diagnostic reliability and interpretability.
Keywords: deep learning; artificial intelligence; healthcare; medical imaging; vision transformer
10. Application of an Attention-Optimized Vision Transformer to Cordyceps Grade Recognition
Authors: 刘惠文. 《消费电子》, 2026, Issue 4, pp. 248-250.
In the digital era, deep learning has driven innovation in image recognition, yet cordyceps grade recognition still relies mainly on manual experience, which is inefficient and subjective. This paper applies a Vision Transformer (ViT) model to grade recognition of cordyceps images. It first reviews the theoretical basis of visual perception, attention mechanisms, and grade classification, and adjusts and optimizes the ViT from the perspectives of the attention mechanism and model structure. Five-fold cross-validation is then performed on a dataset of 5,000 images using the PyTorch framework. Experimental results show the model achieves 95.2% precision, 94.5% recall, and an F1-score of 94.8%, providing technical support for the intelligent development of the cordyceps industry.
Keywords: Vision Transformer; cordyceps grade recognition; image classification; deep learning; computer vision
11. A Change Detection Method for High-Resolution Remote Sensing Images Using a Siamese Multi-Level Vision Transformer
Authors: 黄英杰. 《测绘与空间地理信息》, 2026, Issue 2, pp. 123-126, 130.
Existing remote sensing change detection models capture features incompletely and make insufficient use of deep and shallow features, limiting segmentation accuracy. This paper proposes a change detection model that combines a Vision Transformer with a Siamese architecture. On the encoder side, a Siamese multi-level Vision Transformer performs spatial feature extraction and global context modeling, while Haar wavelet downsampling layers compress feature map size to reduce the loss of detail. During feature decoding, a full-scale feature connection mechanism makes full use of deep and shallow features from different sources. Experimental results show the proposed model outperforms current mainstream models in segmentation accuracy and accurately captures the boundaries and details of changed targets.
Keywords: remote sensing change detection; Siamese architecture; Vision Transformer; Haar wavelet downsampling; full-scale feature connection
12. KPA-ViT: Key Part-Level Attention Vision Transformer for Foreign Body Classification on Coal Conveyor Belt
Authors: Haoxuanye Ji, Zhiliang Chen, Pengfei Jiang, Ziyue Wang, Ting Yu, Wei Zhang. Computers, Materials & Continua, 2026, Issue 3, pp. 656-671.
Foreign body classification on coal conveyor belts is a critical component of intelligent coal mining systems. Previous approaches have primarily utilized convolutional neural networks (CNNs) to effectively integrate spatial and semantic information. However, the performance of CNN-based methods remains limited in classification accuracy, primarily due to insufficient exploration of local image characteristics. Unlike CNNs, the Vision Transformer (ViT) captures discriminative features by modeling relationships between local image patches. However, such methods typically require a large number of training samples to perform effectively. In the context of foreign body classification on coal conveyor belts, the limited availability of training samples hinders the full exploitation of the ViT's capabilities. To address this issue, we propose an efficient approach, termed Key Part-level Attention Vision Transformer (KPA-ViT), which incorporates key local information into the transformer architecture to enrich the training information. It comprises three main components: a key-point detection module, a key local mining module, and an attention module. To extract key local regions, a key-point detection strategy is first employed to identify the positions of key points. Subsequently, the key local mining module extracts the relevant local features based on these detected points. Finally, an attention module composed of self-attention and cross-attention blocks is introduced to integrate global and key part-level information, thereby enhancing the model's ability to learn discriminative features. Compared to recent transformer-based frameworks such as ViT, Swin-Transformer, and EfficientViT, the proposed KPA-ViT achieves performance improvements of 9.3%, 6.6%, and 2.8%, respectively, on the CUMT-BelT dataset, demonstrating its effectiveness.
Keywords: foreign body classification; global and part-level key information; coal conveyor belt; vision transformer (ViT); self- and cross-attention
13. Rice Leaf Disease Image Recognition Based on an Improved Vision Transformer (Cited by 1)
Authors: 朱周华, 周怡纳, 侯智杰, 田成源. 《电子测量技术》 (PKU Core), 2025, Issue 10, pp. 153-160.
Intelligent recognition of rice leaf diseases is important in modern agricultural production. Because the traditional Vision Transformer network lacks inductive bias and struggles to capture local image detail, an improved Vision Transformer model is proposed. By introducing intrinsic inductive bias, the model strengthens its ability to capture multi-scale context and both local and global dependencies while reducing the demand for large-scale datasets. In addition, the multilayer perceptron module in the Vision Transformer is replaced with a Kolmogorov-Arnold network structure, improving the model's extraction of complex features and its interpretability. Experimental results show the model performs excellently on rice leaf disease recognition, reaching 98.62% accuracy, a 6.2% improvement over the original ViT and a significant gain in recognition performance.
Keywords: rice leaf disease; image recognition; Vision Transformer network; inductive bias; local features
14. Gear Remaining Useful Life Prediction Based on a Residual-Attention TCN and Vision Transformer
Authors: 胡爱军, 李晨阳, 邢磊, 周卓浩, 向玲. 《航空动力学报》 (PKU Core), 2025, Issue 12, pp. 14-24.
Gear system operation is affected by multiple factors with long-term temporal dependencies and with differences between local and global features. To capture the temporal dependencies in the data and adaptively adjust attention to features, a temporal convolutional network with a residual convolutional block attention mechanism (RCMTCN) is proposed. Introducing residual connections into the convolutional block attention module lets the model attend to both the original input and the attention-weighted information, improving its perception of local information. On this basis, a Vision Transformer (ViT) model, which effectively captures global information, is combined with the RCMTCN to predict the remaining useful life (RUL) of gears. The fusion exploits the strengths of both in local feature extraction and global attention on time-series data, improving perception of multidimensional features. The model is validated on gear degradation datasets under two operating conditions, trained on pitting-fault data and tested on both pitting and tooth-breakage faults. Experimental results show that, compared with other methods, the proposed approach extracts key feature information more fully, with score-function values of 0.8898 on the pitting fault and 0.8587 on the tooth-breakage fault, demonstrating good adaptability across operating conditions and fault types.
Keywords: gear; remaining useful life; temporal convolutional network; attention mechanism; Vision Transformer model
15. Application of the Vision Transformer Model to Tongue Image Classification in Traditional Chinese Medicine
Authors: 周坚和, 王彩雄, 李炜, 周晓玲, 张丹璇, 吴玉峰. 《广西科技大学学报》, 2025, Issue 5, pp. 89-98.
Tongue diagnosis, an important and routine examination in traditional Chinese medicine (TCM) inspection, plays an indispensable role in TCM clinical diagnosis. To overcome the reliance of traditional tongue diagnosis on subjective experience and the limited classification performance of convolutional neural network (CNN) models, this paper proposes a Vision Transformer (ViT) deep learning model built on a high-quality tongue image classification dataset, optimizing feature extraction through pretraining and fine-tuning and using data augmentation to address class imbalance. Experimental results show that, across six key tongue-feature classification tasks, the model's accuracy on five (coating color 85.6%, ecchymosis 98.0%, texture 99.6%, tongue color 96.6%, cracks 87.8%) significantly exceeds existing CNN methods (e.g., ResNet50: 78.0%, 91.0%, 92.0%, 68.0%, and 80.1%, respectively), validating the model's effectiveness and application potential for breaking through traditional performance bottlenecks and improving the reliability of intelligent TCM clinical diagnosis.
Keywords: tongue diagnosis; Vision Transformer (ViT); deep learning; medical image classification
16. Enhanced Plant Species Identification through Metadata Fusion and Vision Transformer Integration
Authors: Hassan Javed, Labiba Gillani Fahad, Syed Fahad Tahir, Mehdi Hassan, Hani Alquhayz. Computers, Materials & Continua, 2025, Issue 11, pp. 3981-3996.
Accurate plant species classification is essential for many applications, such as biodiversity conservation, ecological research, and sustainable agricultural practices. Traditional morphological classification methods are inherently slow, labour-intensive, and prone to inaccuracies, especially when distinguishing between species exhibiting visual similarities or high intra-species variability. To address these limitations and to overcome the constraints of image-only approaches, we introduce a novel Artificial Intelligence-driven framework. This approach integrates robust Vision Transformer (ViT) models for advanced visual analysis with a multi-modal data fusion strategy, incorporating contextual metadata such as precise environmental conditions, geographic location, and phenological traits. This combination of visual and ecological cues significantly enhances classification accuracy and robustness, proving especially vital in complex, heterogeneous real-world environments. The proposed model achieves an impressive test accuracy of 97.27% and a Mean Reciprocal Rank (MRR) of 0.9842, demonstrating strong generalization capabilities. Furthermore, efficient utilization of high-performance GPU resources (RTX 3090, 18 GB memory) ensures scalable processing of high-dimensional data. Comparative analysis consistently confirms that our metadata fusion approach substantially improves classification performance, particularly for morphologically similar species, and through principled self-supervised and transfer learning from ImageNet, the model adapts efficiently to new species, ensuring enhanced generalization. This comprehensive approach holds profound practical implications for precise conservation initiatives, rigorous ecological monitoring, and advanced agricultural management.
Keywords: vision transformers (ViTs); transformers; machine learning; deep learning; plant species classification; multi-organ
17. Transformers for Multi-Modal Image Analysis in Healthcare
Authors: Sameera V. Mohd Sagheer, Meghana K. H., P. M. Ameer, Muneer Parayangat, Mohamed Abbas. Computers, Materials & Continua, 2025, Issue 9, pp. 4259-4297.
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of the patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle to handle multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities. Additionally, it shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles linked to transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques have been integrated to boost scalability and efficiency. Initially, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Methods such as model pruning, quantization, and knowledge distillation have been applied to reduce the parameter count and enhance inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations. For further deployment optimization, researchers have implemented hardware-aware acceleration strategies, including the use of TensorRT and ONNX-based model compression, to ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, ensuring viability even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
Keywords: multi-modal image analysis; medical imaging; deep learning; image segmentation; disease detection; multi-modal fusion; vision transformers (ViTs); precision medicine; clinical decision support
18. Vision Transformers with Hierarchical Attention (Cited by 4)
Authors: Yun Liu, Yu-Huan Wu, Guolei Sun, Le Zhang, Ajad Chhatkuli, Luc Van Gool. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 670-683.
This paper tackles the high computational/space complexity associated with multi-head self-attention (MHSA) in vanilla vision transformers. To this end, we propose hierarchical MHSA (H-MHSA), a novel approach that computes self-attention in a hierarchical fashion. Specifically, we first divide the input image into patches as commonly done, and each patch is viewed as a token. Then, the proposed H-MHSA learns token relationships within local patches, serving as local relationship modeling. Then, the small patches are merged into larger ones, and H-MHSA models the global dependencies for the small number of merged tokens. At last, the local and global attentive features are aggregated to obtain features with powerful representation capacity. Since we only calculate attention for a limited number of tokens at each step, the computational load is reduced dramatically. Hence, H-MHSA can efficiently model global relationships among tokens without sacrificing fine-grained information. With the H-MHSA module incorporated, we build a family of hierarchical-attention-based transformer networks, namely HAT-Net. To demonstrate the superiority of HAT-Net in scene understanding, we conduct extensive experiments on fundamental vision tasks, including image classification, semantic segmentation, object detection, and instance segmentation. Therefore, HAT-Net provides a new perspective for vision transformers. Code and pretrained models are available at https://github.com/yun-liu/HAT-Net.
Keywords: vision transformer; hierarchical attention; global attention; local attention; scene understanding
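The hierarchical scheme the abstract above describes (local attention within small patch groups, then global attention over merged tokens, then aggregation) can be sketched in a few lines of NumPy. The group size, token count, single attention head, and mean-pooling merge are simplifying assumptions for illustration, not the actual HAT-Net configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Single-head scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def hierarchical_attention(tokens, group=4):
    """Toy hierarchical self-attention in the spirit of H-MHSA."""
    n, d = tokens.shape
    groups = tokens.reshape(n // group, group, d)
    # Step 1: local relationship modeling within each small patch group.
    local = np.stack([attention(g, g, g) for g in groups]).reshape(n, d)
    # Step 2: merge small patches (here: mean-pool each group), then model
    # global dependencies over the small number of merged tokens.
    merged = groups.mean(axis=1)
    global_out = attention(merged, merged, merged)
    # Step 3: aggregate the local and global attentive features.
    return local + np.repeat(global_out, group, axis=0)

out = hierarchical_attention(np.random.default_rng(0).standard_normal((16, 8)))
print(out.shape)  # (16, 8)
```

Because attention is computed only inside groups of 4 tokens and then over 4 merged tokens, no single attention matrix is 16x16, which is the source of the complexity reduction the paper claims.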
19. Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers (Cited by 13)
Authors: Bo Dong, Wenhai Wang, Deng-Ping Fan, Jinpeng Li, Huazhu Fu, Ling Shao. CAAI Artificial Intelligence Research, 2023, Issue 1, pp. 1-15.
Most polyp segmentation methods use convolutional neural networks (CNNs) as their backbone, leading to two key issues when exchanging information between the encoder and decoder: (1) taking into account the differences in contribution between different-level features, and (2) designing an effective mechanism for fusing these features. Unlike existing CNN-based methods, we adopt a transformer encoder, which learns more powerful and robust representations. In addition, considering the image acquisition influence and elusive properties of polyps, we introduce three standard modules, including a cascaded fusion module (CFM), a camouflage identification module (CIM), and a similarity aggregation module (SAM). Among these, the CFM is used to collect the semantic and location information of polyps from high-level features; the CIM is applied to capture polyp information disguised in low-level features; and the SAM extends the pixel features of the polyp area with high-level semantic position information to the entire polyp area, thereby effectively fusing cross-level features. The proposed model, named Polyp-PVT, effectively suppresses noise in the features and significantly improves their expressive capabilities. Extensive experiments on five widely adopted datasets show that the proposed model is more robust to various challenging situations (e.g., appearance changes, small objects, and rotation) than existing representative methods. The proposed model is available at https://github.com/DengPingFan/Polyp-PVT.
Keywords: polyp segmentation; pyramid vision transformer; colonoscopy; computer vision
20. A Hybrid Approach for Pavement Crack Detection Using Mask R-CNN and Vision Transformer Model (Cited by 2)
Authors: Shorouq Alshawabkeh, Li Wu, Daojun Dong, Yao Cheng, Liping Li. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 561-577.
Detecting pavement cracks is critical for road safety and infrastructure management. Traditional methods, relying on manual inspection and basic image processing, are time-consuming and prone to errors. Recent deep-learning (DL) methods automate crack detection, but many still struggle with variable crack patterns and environmental conditions. This study aims to address these limitations by introducing the Masker Transformer, a novel hybrid deep learning model that integrates the precise localization capabilities of Mask Region-based Convolutional Neural Network (Mask R-CNN) with the global contextual awareness of Vision Transformer (ViT). The research focuses on leveraging the strengths of both architectures to enhance segmentation accuracy and adaptability across different pavement conditions. We evaluated the performance of the Masker Transformer against other state-of-the-art models such as U-Net, Transformer U-Net (TransUNet), U-Net Transformer (UNETr), Swin U-Net Transformer (Swin-UNETr), You Only Look Once version 8 (YOLOv8), and Mask R-CNN using two benchmark datasets: Crack500 and DeepCrack. The findings reveal that the Masker Transformer significantly outperforms the existing models, achieving the highest Dice Similarity Coefficient (DSC), precision, recall, and F1-score across both datasets. Specifically, the model attained a DSC of 80.04% on Crack500 and 91.37% on DeepCrack, demonstrating superior segmentation accuracy and reliability. The high precision and recall rates further substantiate its effectiveness in real-world applications, suggesting that the Masker Transformer can serve as a robust tool for automated pavement crack detection, potentially replacing more traditional methods.
Keywords: pavement crack segmentation; transportation; deep learning; vision transformer; Mask R-CNN; image segmentation