Journal Articles
37,020 articles found
1. Total score of the computer vision syndrome questionnaire predicts refractive errors and binocular vision anomalies
Authors: Mosaad Alhassan, Tasneem Samman, Hatoun Badukhen, Muhamad Alrashed, Balsam Alabdulkader, Essam Almutleb, Tahani Alqahtani, Ali Almustanyir. International Journal of Ophthalmology (English edition), 2026, Issue 1, pp. 90-96 (7 pages).
AIM: To evaluate the efficacy of the total computer vision syndrome questionnaire (CVS-Q) score as a predictive tool for identifying individuals with symptomatic binocular vision anomalies and refractive errors. METHODS: A total of 141 healthy computer users underwent comprehensive clinical visual function assessments, including evaluations of refractive errors, accommodation (amplitude of accommodation, positive relative accommodation, negative relative accommodation, accommodative accuracy, and accommodative facility), and vergence (phoria, positive and negative fusional vergence, near point of convergence, and vergence facility). Total CVS-Q scores were recorded to explore potential associations between symptom scores and the aforementioned clinical visual function parameters. RESULTS: The cohort included 54 males (38.3%) with a mean age of 23.9±0.58y and 87 age-matched females (61.7%) with a mean age of 23.9±0.53y. The multiple regression model was statistically significant [R²=0.60, F=13.28, degrees of freedom (DF)=17122, P<0.001]. This indicates that 60% of the variance in total CVS-Q scores (reflecting reported symptoms) could be explained by four clinical measurements: amplitude of accommodation, positive relative accommodation, exophoria at distance and near, and positive fusional vergence at near. CONCLUSION: The total CVS-Q score is a valid and reliable tool for predicting the presence of various nonstrabismic binocular vision anomalies and refractive errors in symptomatic computer users.
Keywords: computer vision syndrome; refractive errors; accommodation; vergence; binocular vision; symptoms
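The reported model is a multiple linear regression of the total CVS-Q score on the clinical measures. A minimal sketch of that kind of analysis with statsmodels; the column names and toy values below are placeholders, not the study's dataset:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per participant; columns are illustrative only.
df = pd.DataFrame({
    "cvs_q_total": [12, 7, 19, 4, 15, 9],
    "amplitude_of_accommodation": [9.5, 11.0, 7.5, 12.0, 8.0, 10.5],
    "positive_relative_accommodation": [-2.0, -2.5, -1.5, -3.0, -1.75, -2.25],
    "near_exophoria": [6, 2, 10, 1, 8, 4],
    "positive_fusional_vergence_near": [18, 25, 12, 30, 15, 22],
})

X = sm.add_constant(df.drop(columns="cvs_q_total"))  # predictors plus intercept
y = df["cvs_q_total"]

model = sm.OLS(y, X).fit()
print(model.summary())   # R², F statistic, degrees of freedom, coefficient p-values
```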
2. From microstructure to performance optimization: Innovative applications of computer vision in materials science
Authors: Chunyu Guo, Xiangyu Tang, Yu’e Chen, Changyou Gao, Qinglin Shan, Heyi Wei, Xusheng Liu, Chuncheng Lu, Meixia Fu, Enhui Wang, Xinhong Liu, Xinmei Hou, Yanglong Hou. International Journal of Minerals, Metallurgy and Materials, 2026, Issue 1, pp. 94-115 (22 pages).
The rapid advancement of computer vision (CV) technology has transformed traditional approaches to material microstructure analysis. This review outlines the history of CV and explores the applications of deep-learning (DL)-driven CV in four key areas of materials science: microstructure-based performance prediction, microstructure information generation, microstructure defect detection, and crystal structure-based property prediction. CV has significantly reduced the cost of the traditional experimental methods used in material performance prediction. Moreover, recent progress in generating microstructure images and detecting microstructural defects with CV has increased the efficiency and reliability of material performance assessments. DL-driven CV models can accelerate the design of new materials with optimized performance by integrating predictions based on both crystal and microstructural data, thereby enabling the discovery of next-generation materials. Finally, the review provides insights into the rapid interdisciplinary developments in materials science and future prospects.
Keywords: microstructure; deep learning; computer vision; performance prediction; image generation
3. Functional outcome and patient satisfaction 5y after laser vision correction
Authors: Ran Gao, Yu Han, Jie Qin, Yu-Shan Xu, Yu Li, Xiao-Tong Lyu, Feng-Ju Zhang. International Journal of Ophthalmology (English edition), 2026, Issue 1, pp. 123-131 (9 pages).
AIM: To investigate the association between functional outcomes and postoperative patient satisfaction 5y after small incision lenticule extraction (SMILE) and femtosecond laser-assisted in situ keratomileusis (FS-LASIK). METHODS: This was a cross-sectional study. Patients underwent basic ophthalmic examinations, axial length measurement, wide-field fundus photography, and accommodation function testing. Behavioral habits data were collected using a self-administered questionnaire, and visual symptoms were assessed with the Quality of Vision (QoV) questionnaire. Postoperative satisfaction was also recorded. RESULTS: In total, 410 subjects [820 eyes, 160 males (39.02%) and 250 females (60.98%)] who had undergone SMILE or FS-LASIK 5y earlier were enrolled. The mean (standard deviation, SD) age of all patients was 29.83y (6.69). The mean (SD) preoperative manifest SE was -5.80 (2.04) diopters (D; range: -0.88 to -13.75). Patient satisfaction at 5y after SMILE or FS-LASIK was 91.70%. Patients were categorized into a dissatisfied group and a satisfied group. Significant differences were observed between the two groups in age (P=0.012), sex (P=0.021), preoperative degree of myopia (P=0.049), postoperative visual symptoms (frequency, P=0.043; severity, P<0.001; bothersomeness, P=0.018), difficulty driving at night (P=0.001), and accommodative amplitude (AMP, P=0.020). Multivariate analysis confirmed that female sex (P=0.024), severity of visual symptoms (P=0.009), and difficulty driving at night (P=0.006) were significantly associated with lower satisfaction. The dissatisfied group showed higher rates of starbursts, double or multiple images, and high myopia, but lower age. The frequency, severity, and bothersomeness of distortion decreased with increasing age. CONCLUSION: Patient satisfaction 5y after SMILE and FS-LASIK is high and stable. Difficulty driving at night, sex, and severity of visual symptoms are important factors influencing patient satisfaction. Special attention should be paid to younger, highly myopic female patients, particularly those with starbursts and double or multiple images. It is crucial to monitor postoperative visual outcomes and provide patients with comprehensive preoperative counseling to enhance long-term satisfaction.
Keywords: patient satisfaction; myopia; vision; small incision lenticule extraction; femtosecond laser-assisted in situ keratomileusis
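The multivariate analysis described here is the usual logistic regression of dissatisfaction on candidate factors. A generic sketch of such an analysis with statsmodels; the variable names and synthetic data are illustrative assumptions, not the study's records:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: 1 = dissatisfied, 0 = satisfied (purely illustrative).
rng = np.random.default_rng(0)
n = 200
female = rng.integers(0, 2, n)
symptom_severity = rng.normal(2.0, 1.0, n)
night_driving_difficulty = rng.integers(0, 2, n)
logit = -3.0 + 0.8 * female + 0.6 * symptom_severity + 1.0 * night_driving_difficulty
dissatisfied = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, symptom_severity, night_driving_difficulty]))
fit = sm.Logit(dissatisfied, X).fit(disp=0)
print(fit.summary(xname=["const", "female", "symptom_severity", "night_driving"]))
print("Odds ratios:", np.exp(fit.params[1:]))
```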
4. Rice leaf disease image recognition based on an improved Vision Transformer
Authors: 朱周华, 周怡纳, 侯智杰, 田成源. 《电子测量技术》 (PKU Core Journal), 2025, Issue 10, pp. 153-160 (8 pages).
Intelligent recognition of rice leaf diseases is of great significance for modern agricultural production. To address the problem that the conventional Vision Transformer network lacks inductive bias and therefore struggles to capture local detail features in images, an improved Vision Transformer model is proposed. By introducing intrinsic inductive bias, the model strengthens its ability to capture multi-scale context as well as local and global dependencies, while reducing the need for large-scale datasets. In addition, the multilayer perceptron module in the Vision Transformer is replaced with a Kolmogorov-Arnold network structure, improving the model's ability to extract complex features and its interpretability. Experimental results show that the proposed model performs excellently on the rice leaf disease recognition task, reaching an accuracy of 98.62%, a 6.2% improvement over the original ViT model and a significant gain in recognition performance.
Keywords: rice leaf disease; image recognition; Vision Transformer network; inductive bias; local features
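The described modification swaps the MLP inside each Transformer encoder block for a different feed-forward module. A minimal PyTorch sketch of that substitution pattern; the `SimpleFeedForward` stand-in below is a placeholder, not the Kolmogorov-Arnold network used in the paper:

```python
import torch
import torch.nn as nn

class SimpleFeedForward(nn.Module):
    """Placeholder feed-forward module; the paper substitutes a
    Kolmogorov-Arnold network here, which is not reproduced."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

class EncoderBlock(nn.Module):
    """Pre-norm ViT-style block with a pluggable feed-forward module."""
    def __init__(self, dim=192, heads=3, ff=None):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = ff or SimpleFeedForward(dim, 4 * dim)

    def forward(self, x):                        # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.ff(self.norm2(x))
        return x

tokens = torch.randn(2, 197, 192)                # e.g. 196 patches + class token
print(EncoderBlock()(tokens).shape)              # torch.Size([2, 197, 192])
```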
5. Application of the Vision Transformer model to tongue image classification in traditional Chinese medicine
Authors: 周坚和, 王彩雄, 李炜, 周晓玲, 张丹璇, 吴玉峰. 《广西科技大学学报》, 2025, Issue 5, pp. 89-98 (10 pages).
Tongue diagnosis, an important and routine part of inspection in traditional Chinese medicine (TCM), plays an indispensable role in TCM clinical diagnosis. To overcome the reliance of traditional tongue diagnosis on subjective experience and the limited classification performance of convolutional neural network (CNN) models, this paper builds on a high-quality tongue image classification dataset and proposes a Vision Transformer (ViT) deep learning model. Feature extraction is optimized through a pretraining and fine-tuning strategy, and data augmentation is applied to address class imbalance. Experimental results show that, across six key tongue feature classification tasks, the model's accuracy on five of them (coating color 85.6%, ecchymosis 98.0%, texture 99.6%, tongue color 96.6%, cracks 87.8%) is significantly better than that of existing CNN methods (e.g., ResNet50 achieves 78.0%, 91.0%, 92.0%, 68.0%, and 80.1%, respectively), confirming the model's effectiveness and application potential for overcoming traditional performance bottlenecks and improving the reliability of intelligent TCM clinical diagnosis.
Keywords: tongue diagnosis; Vision Transformer (ViT); deep learning; medical image classification
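Pretraining plus fine-tuning a ViT for a new classification task typically means loading pretrained weights and replacing the classification head. A minimal sketch using torchvision (assuming torchvision ≥ 0.13); the class count, augmentations, and hyperparameters are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

num_classes = 5   # hypothetical number of tongue-feature classes

# Load an ImageNet-pretrained ViT-B/16 and replace its classification head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)

# Simple augmentation pipeline to counter class imbalance and limited data.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()
# ...wrap an ImageFolder dataset with train_tf and run a standard training loop.
```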
6. Steel Surface Defect Detection Using Learnable Memory Vision Transformer
Authors: Syed Tasnimul Karim Ayon, Farhan Md. Siraj, Jia Uddin. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 499-520 (22 pages).
This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token to Token ViT, ViT without memory, and Parallel ViT. Leveraging a widely used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and understanding. These techniques mitigated overfitting, stabilized training, and improved generalization. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as Token to Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token to Token ViT, ViT without memory, and Parallel ViT, respectively. The findings highlight LMViT's ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also demonstrate improved performance in capturing complex features over CNNs, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial applications. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often misclassify. This study not only demonstrates LMViT's potential for real-world defect detection but also underscores the promise of other transformer-based architectures such as Token to Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are key. Future research may focus on enhancing LMViT's computational efficiency for deployment in real-time quality control systems.
Keywords: Learnable Memory Vision Transformer (LMViT); convolutional neural networks (CNN); metal surface defect detection; deep learning; computer vision; image classification; learnable memory; gradient clipping; label smoothing; t-SNE visualization
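Two of the listed training stabilizers, label smoothing and gradient clipping, are one-liners in PyTorch. A minimal sketch of a training step using both; the model, data loader, and hyperparameters are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, optimizer, device="cpu", max_grad_norm=1.0):
    """One epoch with label smoothing and gradient clipping (illustrative)."""
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)   # softened targets
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        # Clip the global gradient norm to stabilize transformer training.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
        optimizer.step()
```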
7. Causes and factors associated with vision impairment in the elderly population in Mangxin town, Kashgar region, Xinjiang, China
Authors: Lingling Chen, Ruilian Liao, Yuanyuan Liu, Ling Jin, Jun Fu, Xun Wang, Hongwen Jiang, Lin Ding, Qianyun Chen. Eye Science, 2025, Issue 1, pp. 12-24 (13 pages).
Objective: This study aimed to investigate the prevalence, causes, and influencing factors of vision impairment in the elderly population aged 60 years and above in Mangxin Town, Kashgar region, Xinjiang, China. Located in a region characterized by intense ultraviolet radiation and arid climatic conditions, Mangxin Town presents unique environmental challenges that may exacerbate ocular health issues. Despite the global emphasis on addressing vision impairment among aging populations, there remains a paucity of updated, region-specific data in Xinjiang, necessitating this comprehensive assessment to inform targeted interventions. Methods: A cross-sectional study was conducted from May to June 2024, involving 1,311 elderly participants (76.76% participation rate) out of a total eligible population of 1,708 individuals aged ≥60 years. Participants underwent detailed ocular examinations, including assessments of uncorrected visual acuity (UVA) and best-corrected visual acuity (BCVA) using standard logarithmic charts, slit-lamp biomicroscopy, optical coherence tomography (OCT, Topcon DRI OCT Triton), fundus photography, and intraocular pressure measurement (Canon TX-20 Tonometer). A multidisciplinary team of 10 ophthalmologists and 2 local village doctors, rigorously trained in standardized protocols, ensured consistent data collection. Demographic, lifestyle, and medical history data were collected via questionnaires. Statistical analyses, performed using STATA 16, included multivariate logistic regression to identify risk factors, with significance defined as P<0.05. Results: The overall prevalence of vision impairment was 13.21% (95%CI: 11.37%-15.04%), with low vision at 11.76% (95%CI: 10.01%-13.50%) and blindness at 1.45% (95%CI: 0.80%-2.10%). Cataract emerged as the leading cause, responsible for 68.20% of cases, followed by glaucoma (5.80%), optic atrophy (5.20%), and age-related macular degeneration (2.90%). Vision impairment prevalence escalated significantly with age: 7.74% in the 60-69 age group, 17.79% in 70-79, and 33.72% in those ≥80. Males exhibited higher prevalence than females (15.84% vs. 10.45%, P=0.004). Multivariate analysis revealed age ≥80 years (OR=6.43, 95%CI: 3.79-10.90), male sex (OR=0.53, 95%CI: 0.34-0.83), and daily exercise (OR=0.44, 95%CI: 0.20-0.95) as significant factors. History of eye disease showed a non-significant trend toward increased risk (OR=1.49, P=0.107). Education level, income, and smoking status showed no significant associations. Conclusions: This study underscores cataract as the predominant cause of vision impairment in Mangxin Town's elderly population, with age and sex as critical determinants. The findings align with global patterns but highlight region-specific challenges, such as environmental factors contributing to cataract prevalence. Public health strategies should prioritize improving access to cataract surgery, enhancing grassroots ophthalmic infrastructure, and integrating portable screening technologies for early detection of fundus diseases. Additionally, promoting health education on UV protection and lifestyle modifications, such as regular exercise, may mitigate risks. Future research should expand to broader regions in Xinjiang, employ advanced diagnostic tools for complex conditions like glaucoma, and explore longitudinal trends to refine intervention strategies. These efforts are vital to reducing preventable blindness and improving quality of life for aging populations in underserved areas.
Keywords: low vision; blindness; vision impairment; elderly; Xinjiang; cataract
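The reported prevalence intervals are consistent with a normal-approximation 95% CI for a proportion, p ± 1.96·√(p(1−p)/n). A quick check in Python using the headline figures (n = 1,311, overall prevalence 13.21%); this assumes the Wald formula, which the paper does not state explicitly:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = wald_ci(0.1321, 1311)
print(f"{lo:.2%} - {hi:.2%}")   # ≈ 11.4% - 15.0%, in line with the reported 11.37%-15.04%
```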
8. AARPose: Real-time and accurate drogue pose measurement based on monocular vision for autonomous aerial refueling
Authors: Shuyuan WEN, Yang GAO, Bingrui HU, Zhongyu LUO, Zhenzhong WEI, Guangjun ZHANG. Chinese Journal of Aeronautics, 2025, Issue 6, pp. 552-572 (21 pages).
Real-time and accurate drogue pose measurement during docking is basic and critical for Autonomous Aerial Refueling (AAR). Vision measurement is the most practicable technique, but its accuracy and robustness are easily affected by the limited computing power of airborne equipment, complex aerial scenes, and partial occlusion. To address these challenges, we propose a novel drogue keypoint detection and pose measurement algorithm based on monocular vision and realize real-time processing on airborne embedded devices. First, a lightweight network is designed with structural re-parameterization to reduce computational cost and improve inference speed, and a sub-pixel keypoint prediction head with corresponding loss functions is adopted to improve keypoint detection accuracy. Second, a closed-form solution of the drogue pose is computed based on double spatial circles, followed by nonlinear refinement based on Levenberg-Marquardt optimization. Both virtual and physical simulation experiments were used to test the proposed method. In the virtual simulation, the mean pixel error of the proposed method is 0.787 pixels, significantly better than that of other methods. In the physical simulation, the mean relative measurement error is 0.788%, and the mean processing time is 13.65 ms on embedded devices.
Keywords: autonomous aerial refueling; vision measurement; deep learning; real-time; lightweight; accurate; monocular vision; drogue pose measurement
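The nonlinear refinement step minimizes the reprojection error of known drogue points with Levenberg-Marquardt. A generic sketch of that pattern with SciPy; the camera intrinsics, 3D circle points, and poses below are synthetic placeholders, not the paper's setup:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
rng = np.random.default_rng(0)

# Synthetic 3D points on a circle of radius 0.3 m in the drogue plane.
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
pts3d = np.stack([0.3 * np.cos(theta), 0.3 * np.sin(theta), np.zeros_like(theta)], axis=1)

def project(pose, pts):
    """Project 3D points with pose = [rotation vector (3), translation (3)]."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts @ R.T + pose[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

true_pose = np.array([0.05, -0.1, 0.02, 0.1, -0.05, 4.0])
keypoints = project(true_pose, pts3d) + rng.normal(0, 0.5, (16, 2))  # "detected" keypoints

def residuals(pose):
    return (project(pose, pts3d) - keypoints).ravel()

init = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 3.5])        # stands in for the closed-form solution
refined = least_squares(residuals, init, method="lm")   # Levenberg-Marquardt refinement
print(refined.x)
```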
9. Long-Term Vision in a Rapidly Changing World
Author: JOHN QUELCH. China Today, 2025, Issue 8, pp. 43-45 (3 pages).
China’s five-year plans crystallize a governance model that merges long-term strategic vision with adaptive execution. As China prepares to unveil its 15th Five-Year Plan in 2026, policymakers, investors, and scholars around the world are watching closely. For over 70 years, these plans have guided the country’s economic and social development.
Keywords: economic and social development; long-term vision; China economic development; five-year plans; adaptive execution; strategic vision; governance model
10. Automated Concrete Bridge Damage Detection Using an Efficient Vision Transformer-Enhanced Anchor-Free YOLO
Authors: Xiaofei Yang, Enrique del Rey Castillo, Yang Zou, Liam Wotherspoon, Jianxi Yang, Hao Li. Engineering, 2025, Issue 8, pp. 311-326 (16 pages).
Deep learning techniques have recently been the most popular method for automatically detecting bridge damage captured by unmanned aerial vehicles (UAVs). However, their wider application to real-world scenarios is hindered by three challenges: ① defect scale variance, motion blur, and strong illumination significantly affect the accuracy and reliability of damage detectors; ② existing commonly used anchor-based damage detectors struggle to generalize effectively to harsh real-world scenarios; and ③ convolutional neural networks (CNNs) lack the capability to model long-range dependencies across the entire image. This paper presents an efficient Vision Transformer-enhanced anchor-free YOLO (you only look once) method to address these challenges. First, a concrete bridge damage dataset was established, augmented with motion blur and varying brightness. Four key enhancements were then applied to an anchor-based YOLO method: ① four detection heads were introduced to alleviate the multi-scale damage detection issue; ② decoupled heads were employed to address the conflict between the classification and bounding box regression tasks inherent in the original coupled head design; ③ an anchor-free mechanism was incorporated to reduce computational complexity and improve generalization to real-world scenarios; and ④ a novel Vision Transformer block, C3MaxViT, was added to enable CNNs to model long-range dependencies. These enhancements were integrated into an advanced anchor-based YOLOv5l algorithm, and the proposed Vision Transformer-enhanced anchor-free YOLO method was then compared against cutting-edge damage detection methods. The experimental results demonstrated the effectiveness of the proposed method, with increases of 8.1% in mean average precision at an intersection-over-union threshold of 0.5 (mAP50) and 8.4% in mAP@[0.5:0.05:0.95], respectively. Furthermore, extensive ablation studies revealed that the four detection heads, decoupled head design, anchor-free mechanism, and C3MaxViT contributed improvements of 2.4%, 1.2%, 2.6%, and 1.9% in mAP50, respectively.
Keywords: computer vision; deep learning techniques; Vision Transformer; object detection; bridge visual inspection
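A decoupled head splits classification and box regression into separate convolutional branches rather than predicting both from one shared convolution. A minimal PyTorch sketch of the idea; channel counts, class numbers, and the output layout are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Separate conv branches for classification and box regression."""
    def __init__(self, in_ch=256, num_classes=4):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, in_ch, 1)
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, num_classes, 1),      # per-cell class scores
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, 4 + 1, 1),            # box offsets plus objectness
        )

    def forward(self, feat):
        feat = self.stem(feat)
        return self.cls_branch(feat), self.reg_branch(feat)

cls, reg = DecoupledHead()(torch.randn(1, 256, 20, 20))
print(cls.shape, reg.shape)   # (1, 4, 20, 20) (1, 5, 20, 20)
```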
11. Vision care and the sustainable development goals: a brief review and suggested research agenda
Authors: Nathan Congdon, Brad Wong, Xinxing Guo, Graeme MacKenzie. Eye Science, 2025, Issue 2, pp. 103-110 (8 pages).
Blindness affected 45 million people globally in 2021, and moderate to severe vision loss a further 295 million.[1] The most common causes, cataract and uncorrected refractive error, are generally the easiest to treat, and are among the most cost-effective procedures in all of medicine and international development.[1-2] Thus, vision impairment is both extremely common and, in principle, readily manageable.
Keywords: vision care; cataract; cost-effective procedures; uncorrected refractive error; blindness; moderate to severe vision loss; sustainable development goals
12. Value of a Vision Transformer deep learning model in prostate cancer identification
Authors: 李梦娟, 金龙, 尹胜男, 计一丁, 丁宁. 《中国医学计算机成像杂志》 (PKU Core Journal), 2025, Issue 3, pp. 396-401 (6 pages).
Objective: To explore the value of a Vision Transformer (ViT) deep learning model for identifying prostate cancer (PCa). Methods: Imaging data from 480 patients who underwent magnetic resonance imaging (MRI) were retrospectively analyzed. The TotalSegmentator model was used to automatically segment the prostate region, and three ViT models were built: one based on T2-weighted imaging (T2WI), one based on the apparent diffusion coefficient (ADC) map, and one combining both. Results: For PCa identification, the combined model achieved areas under the receiver operating characteristic (ROC) curve (AUC) of 0.961 in the training set and 0.980 in the test set, outperforming the ViT models built on a single imaging sequence. Among the single-sequence models, the ADC-based model performed better than the T2WI-based model. Decision curve analysis showed that the combined model provided greater clinical benefit. Conclusion: The ViT deep learning model offers high diagnostic accuracy and potential value for prostate cancer identification.
Keywords: Vision Transformer; deep learning; prostate cancer; automatic segmentation; magnetic resonance imaging
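Decision curve analysis compares models by net benefit, NB = TP/n − FP/n · pt/(1 − pt), across threshold probabilities pt. A generic sketch of that computation; the predicted probabilities and labels below are synthetic, not the study's data:

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit curve for decision curve analysis."""
    n = len(y_true)
    out = []
    for pt in thresholds:
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)                                # synthetic labels
p = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 300), 0, 1)     # synthetic model probabilities

thresholds = np.linspace(0.05, 0.95, 19)
print(net_benefit(y, p, thresholds)[:5])                   # compare such curves across models
```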
13. Construction and application of a convolution-enhanced Vision Mamba model (Cited: 1)
Authors: 俞焕友, 范静, 黄凡. 《计算机技术与发展》, 2025, Issue 8, pp. 45-52 (8 pages).
To address the limitations of the Vision Mamba (Vim) model, this paper proposes an improved model, Convolutional Vision Mamba (CvM). The model discards the image patching and positional encoding mechanisms in Vim and replaces them with convolution operations, enabling more efficient processing of global visual information. It also optimizes Vim's position embedding module to address its inherently high computational and memory costs. CvM is then applied to medical image classification using datasets of blood cell images, brain tumor images, chest CT scans, pathological myopia fundus images, and pneumonia X-ray images. Experimental results show that, compared with Vim and five other neural network models, CvM achieves higher accuracy and has clear advantages in memory footprint and parameter count. Ablation experiments show that depthwise separable convolution uses fewer parameters and less GPU memory than standard convolution, while significantly improving accuracy on medical image classification tasks such as blood cell and brain tumor images. These results demonstrate the advantages and feasibility of the CvM model.
Keywords: deep learning; Vision Mamba; convolutional neural network; depthwise separable convolution; medical image classification
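Depthwise separable convolution factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1×1 pointwise convolution, which is where the parameter savings come from. A small PyTorch sketch comparing parameter counts; the channel sizes are arbitrary examples:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    """Depthwise conv (groups=in_ch) followed by a 1x1 pointwise conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
    )

def count_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)
separable = depthwise_separable(64, 128)
print(count_params(standard))    # 64*128*3*3 = 73728
print(count_params(separable))   # 64*3*3 + 64*128 = 8768
```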
14. A Hybrid Approach for Pavement Crack Detection Using Mask R-CNN and Vision Transformer Model (Cited: 2)
Authors: Shorouq Alshawabkeh, Li Wu, Daojun Dong, Yao Cheng, Liping Li. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 561-577 (17 pages).
Detecting pavement cracks is critical for road safety and infrastructure management. Traditional methods, relying on manual inspection and basic image processing, are time-consuming and prone to errors. Recent deep-learning (DL) methods automate crack detection, but many still struggle with variable crack patterns and environmental conditions. This study aims to address these limitations by introducing the Masker Transformer, a novel hybrid deep learning model that integrates the precise localization capabilities of Mask Region-based Convolutional Neural Network (Mask R-CNN) with the global contextual awareness of Vision Transformer (ViT). The research focuses on leveraging the strengths of both architectures to enhance segmentation accuracy and adaptability across different pavement conditions. We evaluated the performance of the Masker Transformer against other state-of-the-art models such as U-Net, Transformer U-Net (TransUNet), U-Net Transformer (UNETr), Swin U-Net Transformer (Swin-UNETr), You Only Look Once version 8 (YOLOv8), and Mask R-CNN using two benchmark datasets: Crack500 and DeepCrack. The findings reveal that the Masker Transformer significantly outperforms the existing models, achieving the highest Dice Similarity Coefficient (DSC), precision, recall, and F1-score across both datasets. Specifically, the model attained a DSC of 80.04% on Crack500 and 91.37% on DeepCrack, demonstrating superior segmentation accuracy and reliability. The high precision and recall rates further substantiate its effectiveness in real-world applications, suggesting that the Masker Transformer can serve as a robust tool for automated pavement crack detection, potentially replacing more traditional methods.
Keywords: pavement crack segmentation; transportation; deep learning; Vision Transformer; Mask R-CNN; image segmentation
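The Dice Similarity Coefficient used for evaluation is DSC = 2|A∩B| / (|A| + |B|) computed over predicted and ground-truth crack masks. A small NumPy sketch of that metric together with precision, recall, and F1; the masks here are toy arrays, not benchmark data:

```python
import numpy as np

def mask_metrics(pred, gt):
    """Dice, precision, recall, and F1 for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return dice, precision, recall, f1

pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(mask_metrics(pred, gt))   # all four ≈ 0.667 for this toy pair
```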
15. Remote sensing image classification based on an improved Vision Transformer (Cited: 1)
Authors: 李宗轩, 冷欣, 章磊, 陈佳凯. 《林业机械与木工设备》, 2025, Issue 6, pp. 31-35 (5 pages).
Remote sensing image classification can quickly and effectively map forest area distribution, supporting forestry resource management and monitoring. The Vision Transformer (ViT), with its strong ability to capture global information, is widely used in remote sensing image classification. However, during shallow feature extraction ViT redundantly captures other local features and fails to capture key features effectively, and splitting the image into patches can lose edge and other detail information, which harms classification accuracy. To address these problems, an improved Vision Transformer is proposed: a Super Token Attention (STA) mechanism is introduced to strengthen the extraction of key feature information and reduce computational redundancy, and Haar wavelet downsampling is added to reduce the loss of detail information while enhancing the capture of local and global information at different image scales. Experiments on the AID dataset achieve an overall accuracy of 92.98%, demonstrating the effectiveness of the proposed method.
Keywords: remote sensing image classification; Vision Transformer; Haar wavelet downsampling; Super Token Attention (STA) mechanism
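Haar wavelet downsampling halves spatial resolution while preserving detail by splitting each 2×2 block into a low-frequency component (LL) and three high-frequency components (LH, HL, HH) and stacking them as channels. A minimal PyTorch sketch of one common formulation of this operation, not necessarily the paper's exact implementation:

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """2D Haar transform as a downsampling layer: (B, C, H, W) -> (B, 4C, H/2, W/2)."""
    def forward(self, x):
        a = x[..., 0::2, 0::2]    # top-left of each 2x2 block
        b = x[..., 0::2, 1::2]    # top-right
        c = x[..., 1::2, 0::2]    # bottom-left
        d = x[..., 1::2, 1::2]    # bottom-right
        ll = (a + b + c + d) / 2  # low-frequency approximation
        lh = (a - b + c - d) / 2  # detail along width
        hl = (a + b - c - d) / 2  # detail along height
        hh = (a - b - c + d) / 2  # diagonal detail
        return torch.cat([ll, lh, hl, hh], dim=1)

x = torch.randn(1, 3, 224, 224)
print(HaarDownsample()(x).shape)   # torch.Size([1, 12, 112, 112])
```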
16. Mixed-type wafer map defect pattern recognition based on Vision Transformer
Authors: 李攀, 娄莉. 《现代信息科技》, 2025, Issue 19, pp. 26-30 (5 pages).
Wafer testing is an important step in chip production, and recognizing and classifying wafer map defect patterns plays a key role in improving front-end manufacturing processes. In actual production, several defect types can occur simultaneously, forming mixed defect patterns. Traditional deep learning methods have low recognition rates for mixed-type wafer map defects, so this paper proposes a defect recognition method based on the Vision Transformer. The method uses multi-head self-attention to encode the global features of the wafer map, achieving efficient recognition of mixed-type wafer defect maps. Experimental results on a mixed-type defect dataset show that the method outperforms existing deep learning models, with an average accuracy of 96.2%.
Keywords: computer vision; wafer map; defect recognition; Vision Transformer
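The global encoding step is multi-head self-attention over patch embeddings of the wafer map. A minimal sketch using PyTorch's built-in attention module; the patch size, embedding width, and head count are illustrative choices, not the paper's configuration:

```python
import torch
import torch.nn as nn

patch, dim, heads = 8, 64, 4
wafer_map = torch.randn(1, 1, 64, 64)                  # single-channel wafer map

# Patch embedding: non-overlapping patches via a strided convolution.
embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
tokens = embed(wafer_map).flatten(2).transpose(1, 2)   # (1, 64 tokens, dim)

attn = nn.MultiheadAttention(dim, heads, batch_first=True)
encoded, _ = attn(tokens, tokens, tokens)              # global token mixing
print(encoded.shape)                                   # torch.Size([1, 64, 64])
```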
17. A facial paralysis recognition method fusing SOLOv2 and Vision Transformer
Authors: 庄哲笼, 丁有伟, 胡孔法, 陈科宏, 陈功. 《南京中医药大学学报》 (PKU Core Journal), 2025, Issue 10, pp. 1399-1406 (8 pages).
Objective: To help patients and clinicians diagnose the condition more quickly and support early detection, diagnosis, and treatment, this study establishes an accurate and timely intelligent auxiliary diagnosis method for facial paralysis. Methods: A method fusing SOLOv2 and Vision Transformer is proposed. The collected facial paralysis images are first segmented by a SOLOv2 model with a replaced backbone to remove interfering regions, and the results are then fed into a Vision Transformer model for classification training. This segment-then-classify approach improves the classification of facial paralysis images. Results: On the MEEI facial paralysis dataset, the method achieves an accuracy of 0.982, a recall of 0.982, and an F1-score of 0.981, improvements of 2%, 4%, and 4% over the baseline model, respectively. Conclusion: The facial paralysis classification model fusing SOLOv2 and Vision Transformer achieves higher recognition accuracy than methods without segmentation, providing a new approach for facial paralysis diagnosis.
Keywords: image segmentation; image classification; attention mechanism; facial paralysis; diagnosis; SOLOv2-Vision Transformer
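The segment-then-classify pipeline amounts to masking out background with the segmentation output before feeding the image to the classifier. A schematic PyTorch sketch of that wiring; the `segmenter` and `classifier` below are placeholder modules standing in for SOLOv2 and the ViT, not the paper's networks:

```python
import torch
import torch.nn as nn

def segment_then_classify(image, segmenter, classifier, threshold=0.5):
    """Mask the image with the predicted foreground before classification."""
    with torch.no_grad():
        mask = (segmenter(image) > threshold).float()   # (B, 1, H, W) foreground mask
    masked = image * mask                               # suppress background pixels
    return classifier(masked)                           # class logits

# Placeholder networks standing in for SOLOv2 and the ViT classifier.
segmenter = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 2))

logits = segment_then_classify(torch.randn(4, 3, 224, 224), segmenter, classifier)
print(logits.shape)   # torch.Size([4, 2])
```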
18. Application and evaluation of a Vision-LSTM model in ultrasound diagnosis of TI-RADS 4b thyroid nodules
Authors: 张鑫茹, 李扬, 孙萌, 聂玮, 马喆. 《山东大学学报(医学版)》 (PKU Core Journal), 2025, Issue 11, pp. 68-74 (7 pages).
Objective: To investigate the diagnostic accuracy of artificial intelligence (AI) based on a Vision-LSTM model for ultrasound diagnosis of Thyroid Imaging Reporting and Data System category 4b (TI-RADS 4b) thyroid nodules, and to evaluate its feasibility for supporting clinical decision-making. Methods: Ultrasound imaging data of 401 TI-RADS 4b thyroid nodules from our hospital were collected and used to train and validate the Vision-LSTM model. The model's diagnoses were compared with those of junior and senior physicians to evaluate diagnostic accuracy and stability, and model performance was quantified with metrics including the area under the curve (AUC) and the precision-recall (PR) curve. Results: In independent validation, the Vision-LSTM model's AUC (0.88) and accuracy (89.4%) were significantly higher than those of junior physicians (AUC: 0.624) and comparable to those of senior physicians (AUC: 0.787), demonstrating its potential as a diagnostic aid. The AI model accurately identified complex features in ultrasound images and produced consistent diagnoses, showing high accuracy and reliability. Conclusion: AI based on the Vision-LSTM model can significantly improve the efficiency and accuracy of diagnosing TI-RADS 4b thyroid nodules, providing effective support for physicians and reducing their workload.
Keywords: Thyroid Imaging Reporting and Data System; thyroid nodule; Vision-LSTM model; diagnostic accuracy; artificial intelligence
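AUC and PR-curve evaluation of a binary classifier is straightforward with scikit-learn. A minimal sketch on synthetic scores; the labels and probabilities below are illustrative only, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                                      # 1 = malignant (synthetic)
y_score = np.clip(y_true * 0.4 + rng.normal(0.4, 0.2, 200), 0, 1)     # model probabilities

print("ROC AUC:", roc_auc_score(y_true, y_score))
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print("Average precision (PR AUC):", average_precision_score(y_true, y_score))
```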
19. Geometric parameter identification of bridge precast box girder sections based on deep learning and computer vision (Cited: 1)
Authors: JIA Jingwei, NI Youhao, MAO Jianxiao, XU Yinfei, WANG Hao. Journal of Southeast University (English Edition), 2025, Issue 3, pp. 278-285 (8 pages).
To overcome the limitations of low efficiency and reliance on manual processes in the measurement of geometric parameters for bridge prefabricated components, a method based on deep learning and computer vision is developed to identify the geometric parameters. The study utilizes a common precast element for highway bridges as the research subject. First, edge feature points of the bridge component section are extracted from images of the precast component cross-sections by combining the Canny operator with mathematical morphology. Subsequently, a deep learning model is developed to identify the geometric parameters of the precast components, using the extracted edge coordinates from the images as input and the predefined control parameters of the bridge section as output. A dataset is generated by varying the control parameters and noise levels for model training. Finally, field measurements are conducted to validate the accuracy of the developed method. The results indicate that the developed method effectively identifies the geometric parameters of bridge precast components, with an error rate maintained within 5%.
Keywords: bridge precast components; section geometry parameters; size identification; computer vision; deep learning
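Edge extraction by combining the Canny operator with mathematical morphology is a standard OpenCV pattern. A minimal sketch of that step; the file path, thresholds, and kernel size are placeholder choices, not the paper's parameters:

```python
import cv2
import numpy as np

# Placeholder path; replace with an actual cross-section photograph.
gray = cv2.imread("section.jpg", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)                  # Canny edge map

# Morphological closing bridges small gaps in the detected edges.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Edge pixel coordinates (row, col): the kind of input fed to the parameter-regression model.
edge_points = np.argwhere(closed > 0)
print(edge_points.shape)
```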
20. Adaptive optoelectronic transistor for intelligent vision system (Cited: 1)
Authors: Yiru Wang, Shanshuo Liu, Hongxin Zhang, Yuchen Cao, Zitong Mu, Mingdong Yi, Linghai Xie, Haifeng Ling. Journal of Semiconductors, 2025, Issue 2, pp. 53-70 (18 pages).
Recently, adaptive optoelectronic devices have become one of the main research directions for developing neuromorphic visual systems, attracting extensive attention in the pursuit of optoelectronic transistors with high performance and flexible functionalities. In this review, based on a description of the biological adaptive functions that are favorable for dynamically perceiving, filtering, and processing information in a varying environment, we summarize representative strategies for achieving these adaptabilities in optoelectronic transistors, including adaptation for detecting information, adaptive synaptic weight change, and history-dependent plasticity. The key points of the corresponding strategies are comprehensively discussed, and the applications of these adaptive optoelectronic transistors, including adaptive color detection, signal filtering, extending the response range of light intensity, and improving learning efficiency, are illustrated separately. Lastly, the challenges faced in developing adaptive optoelectronic transistors for artificial vision systems are discussed. The description of biological adaptive functions and the corresponding inspired neuromorphic devices is expected to provide insights for the design and application of next-generation artificial visual systems.
Keywords: adaptive optoelectronic transistor; neuromorphic computing; artificial vision