Journal Articles
263 articles found
Advancing Breast Cancer Molecular Subtyping: A Comparative Study of Convolutional Neural Networks and Vision Transformers on Mammograms
1
Authors: Chee Chin Lim, Hui Wen Tiu, Qi Wei Oung, Chiew Chea Lau, Xiao Jian Tan. Computers, Materials & Continua, 2026, Issue 3, pp. 1287-1308 (22 pages)
…critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via the immunohistochemistry (IHC) test is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, including two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, particularly due to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.
Keywords: artificial intelligence; breast cancer classification; convolutional neural network; deep learning; hyperparameter tuning; mammography; medical imaging; molecular subtypes; vision transformer
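Entry 1's recipe (a pretrained ViT backbone, layer freezing, and learning rate adjustment) follows a standard PyTorch fine-tuning pattern. Below is a minimal sketch of that pattern using torchvision's ViT-B/16 with ImageNet weights as a stand-in for the paper's CLIP-pretrained model; which layers to unfreeze and the 1e-4 learning rate are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# ImageNet-pretrained ViT-B/16 (stand-in for ViT_Base_Patch16_Clip_224).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Layer-freezing strategy: freeze the whole backbone...
for p in model.parameters():
    p.requires_grad = False

# ...then unfreeze the last encoder block so high-level features can adapt.
for p in model.encoder.layers[-1].parameters():
    p.requires_grad = True

# New head for the four molecular subtypes
# (HER2-enriched, Luminal, Normal-like, Triple Negative).
model.heads.head = nn.Linear(model.heads.head.in_features, 4)

# Fine-tune only the trainable parameters with a small (assumed) learning rate.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```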
Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images
2
Authors: Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah. Visual Computing for Industry, Biomedicine, and Art, 2025, Issue 1, pp. 1-27 (27 pages)
The vision transformer (ViT) architecture, with its attention mechanism based on multi-head attention layers, has been widely adopted in various computer-aided diagnosis tasks due to its effectiveness in processing medical image information. ViTs are notably recognized for their complex architecture, which requires high-performance GPUs or CPUs for efficient model training and deployment in real-world medical diagnostic devices. This renders them more intricate than convolutional neural networks (CNNs). This complexity is particularly challenging in the context of histopathology image analysis, where the images are both limited and complex. In response to these challenges, this study proposes TokenMixer, a hybrid architecture that combines the strengths of CNNs and ViTs. This hybrid architecture aims to enhance feature extraction and classification accuracy with shorter training time and fewer parameters by minimizing the number of input patches employed during training, while tokenizing input patches with convolutional layers and processing them through transformer encoder layers across all network layers for fast and accurate breast cancer tumor subtype classification. The TokenMixer mechanism is inspired by the ConvMixer and TokenLearner models. First, the ConvMixer model dynamically generates spatial attention maps using convolutional layers, enabling the extraction of patches from input images to minimize the number of input patches used in training. Second, the TokenLearner model extracts relevant regions from the selected input patches, tokenizes them to improve feature extraction, and trains all tokenized patches in a transformer encoder network. We evaluated the TokenMixer model on the BreakHis public dataset, comparing it with ViT-based and other state-of-the-art methods. Our approach achieved impressive results for both binary and multi-class classification of breast cancer subtypes across various magnification levels (40×, 100×, 200×, 400×). The model demonstrated accuracies of 97.02% for binary classification and 93.29% for multi-classification, with decision times of 391.71 and 1173.56 s, respectively. These results highlight the potential of our hybrid deep ViT-CNN architecture for advancing tumor classification in histopathological images. The source code is accessible at: https://github.com/abimouloud/TokenMixer
Keywords: breast cancer; convolutional vision transformer; histopathological images; multi-classification; BreakHis
Unsupervised Time-Series Signal Analysis with Autoencoders and Vision Transformers: A Review of Architectures and Applications
3
Authors: Hossein Ahmadi, Sajjad Emdadi Mahdimahalleh, Arman Farahat, Banafsheh Saffari. Journal of Intelligent Learning Systems and Applications, 2025, Issue 2, pp. 77-111 (35 pages)
The rapid growth of unlabeled time-series data in domains such as wireless communications, radar, biomedical engineering, and the Internet of Things (IoT) has driven advancements in unsupervised learning. This review synthesizes recent progress in applying autoencoders and vision transformers for unsupervised signal analysis, focusing on their architectures, applications, and emerging trends. We explore how these models enable feature extraction, anomaly detection, and classification across diverse signal types, including electrocardiograms, radar waveforms, and IoT sensor data. The review highlights the strengths of hybrid architectures and self-supervised learning, while identifying challenges in interpretability, scalability, and domain generalization. By bridging methodological innovations and practical applications, this work offers a roadmap for developing robust, adaptive models for signal intelligence.
Keywords: unsupervised learning; autoencoders; vision transformers; time-series analysis; signal processing; representation learning; anomaly detection; wireless signals; biomedical signals; radar; IoT
TP-ViT: truncated uniform-log2 quantizer and progressive bit-decline reconstruction for vision Transformer quantization
4
Authors: Xichuan Zhou, Sihuan Zhao, Rui Ding, Jiayu Shi, Jing Nie, Lihui Chen, Haijun Liu. Frontiers of Information Technology & Electronic Engineering, 2026, Issue 1, pp. 47-58 (12 pages)
Vision Transformers (ViTs) have achieved remarkable success across various artificial intelligence-based computer vision applications. However, their demanding computational and memory requirements pose significant challenges for deployment on resource-constrained edge devices. Although post-training quantization (PTQ) provides a promising solution by reducing model precision with minimal calibration data, aggressive low-bit quantization typically leads to substantial performance degradation. To address this challenge, we present the truncated uniform-log2 quantizer and progressive bit-decline reconstruction method for vision Transformer quantization (TP-ViT). It is an innovative PTQ framework specifically designed for ViTs, featuring two key technical contributions: (1) a truncated uniform-log2 quantizer, a novel quantization approach that effectively handles outlier values in post-Softmax activations, significantly reducing quantization errors; (2) a bit-decline optimization strategy, which employs transition weights to gradually reduce bit precision while maintaining model performance under extreme quantization conditions. Comprehensive experiments on image classification, object detection, and instance segmentation tasks demonstrate TP-ViT's superior performance compared to state-of-the-art PTQ methods, particularly in challenging 3-bit quantization scenarios. Our framework achieves a notable 6.18-percentage-point improvement in top-1 accuracy for ViT-small under 3-bit quantization. These results validate TP-ViT's robustness and general applicability, paving the way for more efficient deployment of ViT models in computer vision applications on edge hardware.
Keywords: vision transformers; post-training quantization; block reconstruction; image classification; object detection; instance segmentation
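For context on entry 4: plain log2 quantization of post-Softmax attention (values in (0, 1], heavily concentrated near zero) is the known baseline that TP-ViT's truncated uniform-log2 quantizer builds on. The sketch below shows only that baseline; the truncation and uniform-region handling of outliers are specific to the paper and not reproduced here.

```python
import torch

def log2_quantize(attn, n_bits=3):
    """Plain log2 quantization of post-Softmax values: approximate each
    probability as 2^(-q) with integer q in [0, 2^n_bits - 1]. A baseline
    sketch only, not TP-ViT's truncated uniform-log2 quantizer."""
    qmax = 2 ** n_bits - 1
    # Quantize the (negative) exponent instead of the value itself.
    q = torch.clamp(torch.round(-torch.log2(attn.clamp(min=1e-8))), 0, qmax)
    return 2.0 ** (-q)

attn = torch.softmax(torch.randn(1, 4, 8, 8), dim=-1)  # fake attention map
deq = log2_quantize(attn, n_bits=3)
print((attn - deq).abs().max())  # worst-case quantization error
```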
A study of the diagnostic role of a Vision Transformer-based colonoscopy image recognition model in colon diseases
5
Authors: 张婷, 徐伟超, 许亚培, 王子康, 夏悦桐, 刘秋华, 杜姚, 才艳茹, 杨倩. 《时珍国医国药》 (Peking University Core Journal), 2026, Issue 5, pp. 987-992 (6 pages)
Objective: To investigate the diagnostic role of the Vision Transformer (ViT), an artificial intelligence diagnostic system, in colon diseases through analysis of clinical endoscopic imaging data. Methods: 3000 standard white-light colonoscopy images were retrospectively collected from 1082 patients with histologically confirmed colon diseases (colon polyps, colitis, and colon cancer). The processed dataset for the three disease classes was split in a 7:2:1 ratio: within each class, 70% of the images were randomly selected as the training set (Train), 20% as the test set (Test), and 10% as the validation set (Predict). The ViT model was then used to classify the images. Results: On the test set, the model's classification accuracy for colonoscopy images was 99.61% for colon polyps, 99.67% for colitis, and 100.00% for colon cancer. Conclusion: ViT achieves high diagnostic accuracy in detecting colon diseases. The model can help primary-care hospitals improve the accuracy of colon disease diagnosis and assist junior endoscopists in recognizing colon diseases, demonstrating reliable clinical application value.
Keywords: colon diseases; Vision Transformer; classification and recognition; clinical application
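The per-class 7:2:1 split described in entry 5 maps directly onto PyTorch's random_split; the sketch below uses a hypothetical single-class pool of 1000 images for illustration.

```python
import torch
from torch.utils.data import random_split

# Hypothetical 7:2:1 split for one disease class, as entry 5 describes.
n = 1000  # images in this class (illustrative, not the study's count)
train_n, test_n = int(0.7 * n), int(0.2 * n)
train, test, predict = random_split(
    range(n), [train_n, test_n, n - train_n - test_n],
    generator=torch.Generator().manual_seed(42),  # reproducible split
)
print(len(train), len(test), len(predict))  # 700 200 100
```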
Research progress of convolutional neural networks and Vision Transformer in glioma
6
Authors: 杨浩辉, 徐涛, 王伟, 安良良, 敖用芳, 朱家宝. 《磁共振成像》 (Peking University Core Journal), 2026, Issue 1, pp. 168-174 (7 pages)
Glioma poses major challenges for conventional diagnosis and treatment owing to its marked heterogeneity, strong invasiveness, and poor prognosis. Deep learning offers a new path toward precise diagnosis and treatment, with the convolutional neural network (CNN) and the Vision Transformer (ViT) as the core tools. CNNs, through hierarchical convolution operations, have a natural advantage in extracting local features (e.g., tumor margins and texture details), whereas ViT, based on self-attention, excels at global context modeling (e.g., cross-regional tumor heterogeneity and multimodal associations). Fusion strategies that integrate fine-grained local features with global contextual information show clear advantages against clinical difficulties such as blurred glioma boundaries and heterogeneous cross-modal data. This article reviews progress on both architectures in key clinical tasks including glioma detection and segmentation, pathological grading, molecular subtyping, and prognosis assessment, covering their principles, standalone applications, and fusion strategies. It also discusses current challenges, such as heavy reliance on annotated data and limited model interpretability, and looks ahead to future directions, including lightweight architectures, self-supervised learning, and multi-omics fusion, to provide a systematic reference for intelligent glioma diagnosis.
Keywords: glioma; deep learning; convolutional neural network; Vision Transformer; magnetic resonance imaging
A method for recognizing standard fetal cranial ultrasound planes based on a conditional generative adversarial network and Vision Transformer
7
Authors: 李惠莲, 林艺榕, 刘中华, 柳培忠. 《临床超声医学杂志》, 2026, Issue 2, pp. 164-169 (6 pages)
Fetal cranial ultrasound is a crucial part of routine prenatal screening, and accurate recognition of standard planes is essential for assessing fetal brain development. However, variability in ultrasound image quality and the complexity of plane acquisition make accurate recognition challenging. This paper proposes a method for recognizing standard fetal cranial ultrasound planes based on a conditional generative adversarial network (CGAN) and Vision Transformer. The CGAN augments the original data, generating additional standard-plane and non-standard-plane images to address data scarcity; meanwhile, a YOLOv9 model automatically crops the skull region in the ultrasound images, removing irrelevant information and keeping the model focused on the key region. In the classification stage, a Vision Transformer normalizes and resizes all input images, with data augmentation techniques such as random horizontal or vertical flipping, contrast adjustment, center cropping, and saturation adjustment. Results show that, compared with the current best-performing CSwin Transformer approach, the proposed method performs strongly on standard fetal cranial plane recognition, achieving precision, recall, F1-score, and accuracy of 92.5%, 92.3%, 92.4%, and 93.3%, respectively. The method offers a clear advantage in recognition accuracy and provides effective technical support for clinical ultrasound examination.
Keywords: conditional generative adversarial network; Vision Transformer; cranial ultrasound; fetus; standard plane recognition method
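The augmentation pipeline listed in entry 7 (random horizontal/vertical flips, contrast adjustment, center cropping, saturation adjustment) maps directly onto torchvision transforms; a plausible version follows, with all sizes and magnitudes as assumptions rather than the paper's settings.

```python
from torchvision import transforms

# Assumed torchvision version of entry 7's augmentation list.
train_tf = transforms.Compose([
    transforms.Resize((256, 256)),             # normalize input size
    transforms.RandomHorizontalFlip(p=0.5),    # random horizontal flip
    transforms.RandomVerticalFlip(p=0.5),      # random vertical flip
    transforms.ColorJitter(contrast=0.2, saturation=0.2),  # contrast/saturation
    transforms.CenterCrop(224),                # center crop to ViT input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
```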
Vision transformers for estimating irradiance using data scarce sky images
8
Authors: David Hamlyn, Sunny Chaudhary, Tasmiat Rahman. Energy and AI, 2025, Issue 3, pp. 619-630 (12 pages)
Accurate estimation of diffuse horizontal irradiance (DHI) is critical for optimising photovoltaic system performance and energy forecasting, yet remains challenging in regions lacking comprehensive ground-based instrumentation. Recent advancements using Vision Transformers (ViTs) trained on extensive sky image datasets have shown promise in replacing costly irradiance measurement equipment, but the scarcity of long-term, high-quality sky imagery significantly restricts practical implementation. Addressing this critical gap, this study proposes a novel dual-framework approach designed for data-scarce scenarios. First, calculated atmospheric parameters, including extraterrestrial irradiance and cyclic time encodings, are integrated to represent sky conditions without utilising any instrumentation. Next, a sequential pipeline initially predicts synthetic global horizontal irradiance (GHI) and uses it as a feature to refine DHI estimation. Finally, a dual-parallel architecture simultaneously processes raw and overlay-enhanced fisheye sky images. Overlays are generated through unsupervised, physics-informed cloud segmentation to highlight dynamic sky features. Empirical validation is performed using data from the Chilbolton Observatory, chosen for its temperate climate and frequent cloud variability. To simulate data-scarce conditions, models are trained on a single month (e.g., January) and evaluated across a temporally disjoint, full-year test set. Under this setup, the sequential and dual-parallel frameworks achieve RMSE values within 2-3 W/m² and 1-6 W/m², respectively, of a state-of-the-art ViT trained on the complete dataset. By combining physics-informed modelling with unsupervised segmentation, the proposed method provides a scalable and cost-effective solution for DHI estimation, advancing solar resource assessment in data-constrained environments.
Keywords: computer vision; machine learning; solar irradiance; sky imaging; vision transformer (ViT)
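The "cyclic time encodings" in entry 8 are presumably sin/cos pairs over daily and yearly periods, a common way to make time features continuous across midnight and year boundaries; the exact features used in the paper are not specified, so the sketch below is an assumption.

```python
import numpy as np

def cyclic_time_features(hour_of_day, day_of_year):
    """Sin/cos encodings of time of day and day of year; one plausible
    form of entry 8's 'cyclic time encodings' (an assumption, not the
    paper's exact feature set)."""
    return np.array([
        np.sin(2 * np.pi * hour_of_day / 24),
        np.cos(2 * np.pi * hour_of_day / 24),
        np.sin(2 * np.pi * day_of_year / 365.25),
        np.cos(2 * np.pi * day_of_year / 365.25),
    ])

# 13:30 on the summer solstice: hour and day wrap smoothly at boundaries.
print(cyclic_time_features(hour_of_day=13.5, day_of_year=172))
```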
An intelligent blast furnace tuyere monitoring model based on Vision Transformer and its application
9
Authors: 王浩男, 韩明博, 但家云, 李强. 《钢铁研究学报》 (Peking University Core Journal), 2026, Issue 1, pp. 25-37 (13 pages)
The peephole at the tuyere in the lower part of a blast furnace allows real-time observation of key smelting-state information, such as combustion characteristics in the raceway and the state of pulverized coal injection, from which important parameters such as gas flow distribution and hearth activity can be judged. To address the subjectivity and time lag of tuyere monitoring, this work builds TI-ViT, an intelligent blast furnace tuyere monitoring model based on unstructured big data of tuyere images and the Vision Transformer architecture. First, the collected tuyere images are preprocessed, and a dataset of typical furnace conditions is formed through feature analysis and labeling. A TI-ViT tuyere image recognition model is then constructed on the Vision Transformer architecture. Finally, the performance of TI-ViT is evaluated, focusing on how model depth affects accuracy, parameter count, training time, and runtime, with comparisons against conventional convolutional neural network models. Validation shows that TI-ViT reaches 97.7% accuracy, a 9.1% improvement over CNN-based models, with an inference time of only 15.75 ms per image. A "Smart Eye" system developed from this model has been deployed in the field, achieving 95.2% recognition accuracy. The system enables real-time monitoring, recognition, and early warning of blast furnace tuyeres, helps steel enterprises reduce the cost of monitoring and diagnosing abnormal tuyere states, and offers a new direction for intelligent blast furnace ironmaking.
Keywords: blast furnace tuyere; computer vision; Vision Transformer; image recognition; blast furnace ironmaking
Gait-ViT: A cross-view gait recognition method based on Vision Transformer
10
Authors: 沈澍, 王森, 黄苏岩, 张秉睿. 《小型微型计算机系统》 (Peking University Core Journal), 2026, Issue 3, pp. 646-652 (7 pages)
Gait recognition, a remote biometric technology, has broad application prospects in medical rehabilitation, criminal investigation, and public security. In recent years, with the rapid development of deep learning, gait recognition methods have been moving from traditional convolutional neural networks (CNNs) to the more advanced Transformer architecture. Although CNNs perform well in image processing tasks, their ability to focus on key image regions is limited, whereas attention mechanisms can learn more discriminative features by attending to local regions of an image. This paper therefore proposes Gait-ViT, a Vision Transformer model incorporating an attention mechanism for gait recognition. The method first divides the gait silhouette into small patches and converts them into a patch sequence; position and class embeddings then rearrange and encode the positional information of the sequence; finally, the vector sequence is fed to the Vision Transformer for prediction. Gait-ViT achieves recognition accuracies of 98.1% and 91.2% on the public CASIA-B and OU-MVLP gait datasets, respectively, validating the effectiveness of the proposed model.
Keywords: gait recognition; Vision Transformer; convolutional neural network; feature extraction
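Entry 10's preprocessing (silhouette to patch sequence, then class and position embeddings) is the standard ViT tokenization step. A minimal PyTorch sketch follows; the 64×64 silhouette size, 8×8 patches, and 256-dim embedding are illustrative assumptions, not Gait-ViT's published dimensions.

```python
import torch
import torch.nn as nn

class PatchSequence(nn.Module):
    """Silhouette -> patch sequence with class and position embeddings,
    as the Gait-ViT abstract describes (sizes here are assumptions)."""
    def __init__(self, img_size=64, patch=8, dim=256):
        super().__init__()
        n = (img_size // patch) ** 2                              # patches per image
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # patchify + embed
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))           # class token
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))       # position embedding

    def forward(self, x):                                  # x: (B, 1, 64, 64)
        t = self.proj(x).flatten(2).transpose(1, 2)        # (B, 64, 256) patch tokens
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, t], dim=1) + self.pos       # ready for the encoder

tokens = PatchSequence()(torch.rand(2, 1, 64, 64))
print(tokens.shape)  # torch.Size([2, 65, 256])
```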
A rolling bearing fault diagnosis method based on an Effective Diagnosis Vision Transformer network
11
Authors: 罗志勇, 李明周, 董鑫. 《重庆邮电大学学报(自然科学版)》 (Peking University Core Journal), 2026, Issue 1, pp. 146-155 (10 pages)
To address incomplete feature extraction and low diagnostic efficiency in rolling bearing fault diagnosis, an Effective Diagnosis Vision Transformer (EDViT) network is proposed. A kurtosis-based weighted fusion strategy merges sensor information; a short-time Fourier transform converts the fused signal into time-frequency images; EDViT's dual-attention convolution module and dual-branch patch vision transformation module are then applied in turn to extract local and global features, and a classifier performs fault classification. Experiments on the Case Western Reserve University bearing dataset show that the EDViT model offers excellent feature extraction capability, fast convergence, and high diagnostic accuracy. Comparisons with other methods demonstrate strong generalization and robustness.
Keywords: Effective Diagnosis Vision Transformer (EDViT) network; rolling bearing; fault diagnosis
A Hybrid Deep Learning Approach Using Vision Transformer and U-Net for Flood Segmentation
12
Authors: Cyreneo Dofitas Jr, Yong-Woon Kim, Yung-Cheol Byun. Computers, Materials & Continua, 2026, Issue 2, pp. 1209-1227 (19 pages)
Recent advances in deep learning have significantly improved flood detection and segmentation from aerial and satellite imagery. However, conventional convolutional neural networks (CNNs) often struggle in complex flood scenarios involving reflections, occlusions, or indistinct boundaries due to limited contextual modeling. To address these challenges, we propose a hybrid flood segmentation framework that integrates a Vision Transformer (ViT) encoder with a U-Net decoder, enhanced by a novel Flood-Aware Refinement Block (FARB). The FARB module improves boundary delineation and suppresses noise by combining residual smoothing with spatial-channel attention mechanisms. We evaluate our model on a UAV-acquired flood imagery dataset, demonstrating that the proposed ViTUNet+FARB architecture outperforms existing CNN and Transformer-based models in terms of accuracy and mean Intersection over Union (mIoU). Detailed ablation studies further validate the contribution of each component, confirming that the FARB design significantly enhances segmentation quality. Owing to its better performance and computational efficiency, the proposed framework is well-suited for flood monitoring and disaster response applications, particularly in resource-constrained environments.
Keywords: flood detection; vision transformer (ViT); U-Net; segmentation; image processing; deep learning; artificial intelligence
A Hybrid Vision Transformer with Attention Architecture for Efficient Lung Cancer Diagnosis
13
Authors: Abdu Salam, Fahd M. Aldosari, Donia Y. Badawood, Farhan Amin, Isabel de la Torre, Gerardo Mendez Mezquita, Henry Fabian Gongora. Computers, Materials & Continua, 2026, Issue 4, pp. 1129-1147 (19 pages)
Lung cancer remains a major global health challenge, with early diagnosis crucial for improved patient survival. Traditional diagnostic techniques, including manual histopathology and radiological assessments, are prone to errors and variability. Deep learning methods, particularly Vision Transformers (ViT), have shown promise for improving diagnostic accuracy by effectively extracting global features. However, ViT-based approaches face challenges related to computational complexity and limited generalizability. This research proposes the DualSet ViT-PSO-SVM framework, integrating a ViT with dual attention mechanisms, Particle Swarm Optimization (PSO), and Support Vector Machines (SVM), aiming for efficient and robust lung cancer classification across multiple medical image datasets. The study utilized three publicly available datasets: LIDC-IDRI, LUNA16, and TCIA, encompassing computed tomography (CT) scans and histopathological images. Data preprocessing included normalization, augmentation, and segmentation. Dual attention mechanisms enhanced ViT's feature extraction capabilities. PSO optimized feature selection, and SVM performed classification. Model performance was evaluated on individual and combined datasets, benchmarked against CNN-based and standard ViT approaches. The DualSet ViT-PSO-SVM significantly outperformed existing methods, achieving superior accuracy rates of 97.85% (LIDC-IDRI), 98.32% (LUNA16), and 96.75% (TCIA). Cross-dataset evaluations demonstrated strong generalization capabilities and stability across similar imaging modalities. The proposed framework effectively bridges advanced deep learning techniques with clinical applicability, offering a robust diagnostic tool for lung cancer detection, reducing complexity, and improving diagnostic reliability and interpretability.
Keywords: deep learning; artificial intelligence; healthcare; medical imaging; vision transformer
Application of an attention-optimized Vision Transformer to Cordyceps grade recognition
14
Author: 刘惠文. 《消费电子》, 2026, Issue 4, pp. 248-250 (3 pages)
In the digital era, deep learning has driven innovation in image recognition, yet research on Cordyceps grade recognition still relies mainly on manual experience, which is inefficient and subjective. This paper applies a Vision Transformer (ViT) model to grade recognition of Cordyceps images. It first outlines the theoretical basis of visual perception, attention mechanisms, and grade classification, and adjusts and optimizes the ViT from the perspectives of both the attention mechanism and the model structure. On this basis, 5-fold cross-validation is performed with the PyTorch framework on a dataset of 5000 images. Experimental results show that the model achieves a precision of 95.2%, recall of 94.5%, and F1-score of 94.8%, providing technical support for the intelligent development of the Cordyceps industry.
Keywords: Vision Transformer; Cordyceps grade recognition; image classification; deep learning; computer vision
Brief application notes for vision transformer (ViT) and convolutional neural network (CNN) in medical imaging
15
Authors: Wei Kitt Wong, Melinda Melinda. Medical Data Mining, 2026, Issue 2, pp. 34-42 (9 pages)
In contemporary computer vision, convolutional neural networks (CNNs) and vision transformers (ViTs) represent the two primary architectural paradigms for image recognition. While both approaches have been widely adopted in medical imaging applications, they operate based on fundamentally different computational principles. This report attempts to provide brief application notes on ViTs and CNNs, particularly focusing on scenarios that guide the selection of one architecture over the other in practical medical implementations. Generally, CNNs rely on convolutional kernels, localized receptive fields, and weight sharing, enabling efficient hierarchical feature extraction. These properties contribute to strong performance in detecting spatially constrained patterns such as textures, edges, and anatomical boundaries, while maintaining relatively low computational requirements. ViTs, on the other hand, decompose images into smaller segments referred to as tokens and employ self-attention mechanisms to model relationships across the entire image. This global modeling capability allows ViTs to capture long-range dependencies that may be difficult for convolution-based architectures to learn. However, ViTs typically achieve optimal performance when trained on extremely large datasets or when supported by extensive pretraining, as their reduced inductive bias requires greater data exposure to learn robust representations. This report briefly examines the architectural structure, underlying mathematical foundations, and relative performance characteristics of CNNs and ViTs, drawing upon recent findings from contemporary research. Emphasis is placed on understanding how differences in data availability, computational resources, and task requirements influence model effectiveness across medical imaging domains. Most importantly, the report serves as a concise application guide for practitioners seeking informed implementation decisions between these two influential deep learning frameworks.
Keywords: convolutional neural network; vision transformer; comparative study; medical imaging
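The architectural contrast entry 15 draws, local convolutional receptive fields versus global self-attention over tokens, can be seen directly in code. The sketch below uses illustrative shapes only (a 3×3 convolution against multi-head attention over 196 patch tokens).

```python
import torch
import torch.nn as nn

# CNN paradigm: a convolution mixes only a local 3x3 neighborhood.
x_img = torch.rand(1, 3, 224, 224)
local = nn.Conv2d(3, 64, kernel_size=3, padding=1)   # localized receptive field
feat = local(x_img)                                  # (1, 64, 224, 224)

# ViT paradigm: every token attends to every other token (global field).
tokens = torch.rand(1, 196, 768)                     # 14x14 patches as tokens
global_mix = nn.MultiheadAttention(768, num_heads=12, batch_first=True)
out, attn = global_mix(tokens, tokens, tokens)       # each token sees all 196
print(feat.shape, out.shape, attn.shape)             # attn: (1, 196, 196)
```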
A Siamese multi-stage Vision Transformer method for change detection in high-resolution remote sensing images
16
Author: 黄英杰. 《测绘与空间地理信息》, 2026, Issue 2, pp. 123-126, 130 (5 pages)
To address incomplete feature capture and insufficient use of deep and shallow features in existing remote sensing change detection models, which limit segmentation accuracy, a remote sensing image change detection model combining Vision Transformer with a Siamese architecture is proposed. On the encoder side, a Siamese multi-stage Vision Transformer performs spatial feature extraction and global context modeling, while Haar wavelet downsampling layers compress feature map sizes and reduce the loss of detail features. During feature decoding, a full-scale feature connection mechanism makes full use of deep and shallow features from different sources. Experimental results show that the proposed model outperforms current mainstream models in segmentation accuracy and accurately captures the boundaries and details of changed targets.
Keywords: remote sensing change detection; Siamese architecture; Vision Transformer; Haar wavelet downsampling; full-scale feature connection
KPA-ViT: Key Part-Level Attention Vision Transformer for Foreign Body Classification on Coal Conveyor Belt
17
Authors: Haoxuanye Ji, Zhiliang Chen, Pengfei Jiang, Ziyue Wang, Ting Yu, Wei Zhang. Computers, Materials & Continua, 2026, Issue 3, pp. 656-671 (16 pages)
Foreign body classification on coal conveyor belts is a critical component of intelligent coal mining systems. Previous approaches have primarily utilized convolutional neural networks (CNNs) to effectively integrate spatial and semantic information. However, the performance of CNN-based methods remains limited in classification accuracy, primarily due to insufficient exploration of local image characteristics. Unlike CNNs, the Vision Transformer (ViT) captures discriminative features by modeling relationships between local image patches. However, such methods typically require a large number of training samples to perform effectively. In the context of foreign body classification on coal conveyor belts, the limited availability of training samples hinders the full exploitation of ViT's capabilities. To address this issue, we propose an efficient approach, termed Key Part-level Attention Vision Transformer (KPA-ViT), which incorporates key local information into the transformer architecture to enrich the training information. It comprises three main components: a key-point detection module, a key local mining module, and an attention module. To extract key local regions, a key-point detection strategy is first employed to identify the positions of key points. Subsequently, the key local mining module extracts the relevant local features based on these detected points. Finally, an attention module composed of self-attention and cross-attention blocks is introduced to integrate global and key part-level information, thereby enhancing the model's ability to learn discriminative features. Compared to recent transformer-based frameworks such as ViT, Swin-Transformer, and EfficientViT, the proposed KPA-ViT achieves performance improvements of 9.3%, 6.6%, and 2.8%, respectively, on the CUMT-BelT dataset, demonstrating its effectiveness.
Keywords: foreign body classification; global and part-level key information; coal conveyor belt; vision transformer (ViT); self and cross attention
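Entry 17's attention module mixes self-attention over global patch tokens with cross-attention from key part-level tokens. The sketch below shows one plausible wiring of that idea; the dimensions, token counts, and the pooling/fusion step are all assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

# Assumed shapes: 196 global patch tokens, 16 part-level tokens from key points.
dim, heads = 384, 6
global_tokens = torch.rand(2, 196, dim)   # all image patches
part_tokens = torch.rand(2, 16, dim)      # tokens around detected key points

self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

g, _ = self_attn(global_tokens, global_tokens, global_tokens)  # global mixing
p, _ = cross_attn(part_tokens, g, g)      # part tokens query the global context
fused = torch.cat([g.mean(1), p.mean(1)], dim=-1)  # pooled joint representation
print(fused.shape)  # torch.Size([2, 768])
```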
Image recognition of rice leaf diseases based on an improved Vision Transformer (cited by 1)
18
Authors: 朱周华, 周怡纳, 侯智杰, 田成源. 《电子测量技术》 (Peking University Core Journal), 2025, Issue 10, pp. 153-160 (8 pages)
Intelligent recognition of rice leaf diseases is of great significance in modern agricultural production. To address the traditional Vision Transformer network's lack of inductive bias, which makes it difficult to capture local image detail effectively, an improved Vision Transformer model is proposed. By introducing intrinsic inductive bias, the model strengthens its ability to model multi-scale context and local-global dependencies while reducing its need for large-scale datasets. In addition, the multilayer perceptron module in the Vision Transformer is replaced by a Kolmogorov-Arnold Network structure, improving the model's ability to extract complex features and its interpretability. Experimental results show that the proposed model performs excellently on rice leaf disease recognition, achieving an accuracy of 98.62%, a 6.2% improvement over the original ViT model and a significant gain in recognition performance for rice leaf diseases.
Keywords: rice leaf diseases; image recognition; Vision Transformer network; inductive bias; local features
Gear remaining useful life prediction based on a residual attention TCN and Vision Transformer
19
Authors: 胡爱军, 李晨阳, 邢磊, 周卓浩, 向玲. 《航空动力学报》 (Peking University Core Journal), 2025, Issue 12, pp. 14-24 (11 pages)
The operating condition of a gear system is affected by multiple factors that exhibit long-term temporal dependencies and differ between local and global features. To capture the temporal dependencies in the data effectively and adaptively adjust attention to features, a temporal convolutional network with a residual convolutional block attention mechanism (RCMTCN) is proposed. By introducing residual connections into the convolutional block attention mechanism, the model attends to both the original input and the attention-weighted information, improving its perception of local information. On this basis, a Vision Transformer (ViT) model, which effectively captures global information in the data, is combined with RCMTCN to predict the remaining useful life (RUL) of gears. The fusion fully exploits the strengths of both in local feature extraction and global attention for time-series data, improving perception of multi-dimensional features. Finally, the model is validated on gear performance degradation datasets from two operating conditions: pitting fault data are used for training, and pitting and tooth-breakage faults are tested separately. Experimental results show that, compared with other methods, the proposed method extracts key feature information more fully, achieving a scoring-function value of 0.8898 on pitting faults and 0.8587 on tooth-breakage faults, demonstrating good adaptability across operating conditions and fault types.
Keywords: gear; remaining useful life; temporal network; attention mechanism; Vision Transformer model
Application of the Vision Transformer model to tongue image classification in traditional Chinese medicine tongue diagnosis
20
Authors: 周坚和, 王彩雄, 李炜, 周晓玲, 张丹璇, 吴玉峰. 《广西科技大学学报》, 2025, Issue 5, pp. 89-98 (10 pages)
Tongue diagnosis, an important and routine examination within inspection in traditional Chinese medicine (TCM), plays an indispensable role in TCM clinical diagnosis. To overcome the reliance of traditional tongue diagnosis on subjective experience and the limited classification performance of convolutional neural network (CNN) models, this paper proposes a Vision Transformer (ViT) deep learning model built on a high-quality tongue image classification dataset, optimizing feature extraction through pretraining and fine-tuning strategies and applying data augmentation to address class imbalance. Experimental results show that across six key tongue-feature classification tasks, the model's accuracy on five of them (coating color 85.6%, ecchymosis 98.0%, texture 99.6%, tongue color 96.6%, cracks 87.8%) clearly surpasses existing CNN methods (e.g., ResNet50 achieves 78.0%, 91.0%, 92.0%, 68.0%, and 80.1%, respectively), validating the model's effectiveness and application potential for overcoming traditional performance bottlenecks and improving the reliability of intelligent TCM clinical diagnosis.
Keywords: tongue diagnosis; Vision Transformer (ViT); deep learning; medical image classification