Journal Articles
3 articles found.
1. Scheme Based on Multi-Level Patch Attention and Lesion Localization for Diabetic Retinopathy Grading (cited 1 time)
Authors: Zhuoqun Xia, Hangyu Hu, Wenjing Li, Qisheng Jiang, Lan Pu, Yicong Shu, Arun Kumar Sangaiah. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 409-430 (22 pages).
Early screening of diabetic retinopathy (DR) plays an important role in preventing irreversible blindness. Existing research has failed to fully exploit effective DR lesion information in fundus images. Moreover, traditional attention schemes have not considered the impact of lesion-type differences on grading, resulting in unreasonable extraction of important lesion features. This paper therefore proposes a DR diagnosis scheme that integrates a multi-level patch attention generator (MPAG) and a lesion localization module (LLM). First, the MPAG predicts patches of different sizes and generates a weighted attention map based on the prediction scores and the types of lesions contained in the patches, fully accounting for the impact of lesion-type differences on grading and solving the problem that lesion attention maps cannot otherwise be refined and adapted to the final DR diagnosis task. Second, the LLM generates a global attention map based on localization. Finally, the weighted attention map and the global attention map are combined with the fundus image to fully exploit effective DR lesion information and increase the classification network's attention to lesion details. Extensive experiments on the public DDR dataset demonstrate the effectiveness of the proposed method, which obtains an accuracy of 0.8064.
Keywords: DDR dataset; diabetic retinopathy; lesion localization; multi-level patch attention mechanism
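The abstract gives no implementation details, but the final fusion step it describes (weighting the fundus image with a patch-level attention map and a localization-based global attention map before classification) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the class name, the elementwise combination of the two maps, and the residual-style re-weighting are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

class AttentionWeightedGrader(nn.Module):
    """Hypothetical fusion of two attention maps with a fundus image before grading."""
    def __init__(self, backbone: nn.Module, num_grades: int = 5):
        super().__init__()
        self.backbone = backbone               # any CNN feature extractor (assumed)
        self.head = nn.LazyLinear(num_grades)  # DR grade classifier

    def forward(self, fundus, patch_attn, global_attn):
        # fundus:      (B, 3, H, W) fundus image
        # patch_attn:  (B, 1, H, W) weighted attention map from the patch branch (MPAG)
        # global_attn: (B, 1, H, W) localization-based global attention map (LLM)
        attn = patch_attn * global_attn        # combine the two maps (assumed elementwise)
        x = fundus * (1.0 + attn)              # re-weight lesion regions in the input
        feats = self.backbone(x).flatten(1)
        return self.head(feats)
```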
2. Deep Global Multiple-Scale and Local Patches Attention Dual-Branch Network for Pose-Invariant Facial Expression Recognition
Authors: Chaoji Liu, Xingqiao Liu, Chong Chen, Kang Zhou. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 405-440 (36 pages).
Pose-invariant facial expression recognition (FER) is an active but challenging research topic in computer vision. In particular, when diverse observation angles are involved, the trained parameter models become inconsistent from one view to another. This study develops a deep global multiple-scale and local patches attention (GMS-LPA) dual-branch network for pose-invariant FER to weaken the influence of pose variation and self-occlusion on recognition accuracy. The designed GMS-LPA network contains four main parts: the feature extraction module, the global multiple-scale (GMS) module, the local patches attention (LPA) module, and the model-level fusion module. The feature extraction module extracts and normalizes texture information to the same size. The GMS module extracts deep global features with different receptive fields, reducing the sensitivity of deeper convolution layers to pose variation and self-occlusion. The LPA module forces the network to focus on local salient features, which lowers the effect of pose variation and self-occlusion on recognition results. The extracted features are then fused with a model-level strategy to improve recognition accuracy. Extensive experiments were conducted on four public databases, and the recognition results demonstrate the feasibility and validity of the proposed methods.
Keywords: pose-invariant FER; global multiple-scale (GMS); local patches attention (LPA); model-level fusion
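A rough illustration of the dual-branch idea described above: a global multiple-scale branch aggregates features at several receptive fields, a local patches attention branch re-weights patch regions, and the two are fused at the model level. Everything in the sketch (layer sizes, the dilation-based multi-scale design, the averaged-logit fusion, and the assumption that the shared stem outputs 256-channel feature maps) is an assumption for illustration, not the paper's actual modules.

```python
import torch
import torch.nn as nn

class GMSBranch(nn.Module):
    """Global multiple-scale branch: parallel convs with different dilations (assumed design)."""
    def __init__(self, in_ch=256, out_ch=256):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
    def forward(self, x):
        return torch.stack([p(x) for p in self.paths]).sum(0)

class LPABranch(nn.Module):
    """Local patches attention branch: score coarse patches and re-weight the feature map."""
    def __init__(self, in_ch=256, patch=4):
        super().__init__()
        self.pool = nn.AvgPool2d(patch)
        self.score = nn.Conv2d(in_ch, 1, 1)
    def forward(self, x):
        w = torch.sigmoid(self.score(self.pool(x)))           # one weight per patch
        w = nn.functional.interpolate(w, size=x.shape[-2:])   # back to feature-map size
        return x * w

class GMSLPANet(nn.Module):
    """Hypothetical dual-branch network with model-level fusion of the two branch outputs."""
    def __init__(self, stem: nn.Module, num_classes=7):
        super().__init__()
        self.stem, self.gms, self.lpa = stem, GMSBranch(), LPABranch()
        self.head_g = nn.LazyLinear(num_classes)
        self.head_l = nn.LazyLinear(num_classes)
    def forward(self, x):
        f = self.stem(x)                           # shared texture features (assumed 256 channels)
        zg = self.head_g(self.gms(f).mean((2, 3))) # global multiple-scale logits
        zl = self.head_l(self.lpa(f).mean((2, 3))) # local patches attention logits
        return (zg + zl) / 2                       # model-level fusion (assumed: averaged logits)
```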
3. ParMamba: A Parallel Architecture Using CNN and Mamba for Brain Tumor Classification
Authors: Gaoshuai Su, Hongyang Li, Huafeng Chen. Computer Modeling in Engineering & Sciences, 2025, No. 3, pp. 2527-2545 (19 pages).
Brain tumors, among the most lethal diseases with low survival rates, require early detection and accurate diagnosis to enable effective treatment planning. While deep learning architectures, particularly Convolutional Neural Networks (CNNs), have shown significant performance improvements over traditional methods, they struggle to capture the subtle pathological variations between different brain tumor types. Recent attention-based models have attempted to address this by focusing on global features, but they come with high computational costs. To address these challenges, this paper introduces a novel parallel architecture, ParMamba, which integrates Convolutional Attention Patch Embedding (CAPE) and the ConvMamba block, comprising a CNN, Mamba, and a channel enhancement module. The design of the ConvMamba block enhances the model's ability to capture both local features and long-range dependencies, improving the detection of subtle differences between tumor types. The channel enhancement module refines feature interactions across channels. Additionally, CAPE is employed as a downsampling layer that extracts both local and global features, further improving classification accuracy. Experimental results on two publicly available brain tumor datasets show that ParMamba achieves classification accuracies of 99.62% and 99.35%, outperforming existing methods. Notably, ParMamba surpasses vision transformers (ViT) by 1.37% in accuracy, with a throughput improvement of over 30%. These results demonstrate that ParMamba delivers superior performance while operating faster than traditional attention-based methods.
Keywords: brain tumor classification; convolutional neural networks; channel enhancement module; convolutional attention patch embedding; Mamba; ParMamba
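The parallel local/global structure described in this abstract can be caricatured as a block in which a CNN path and a sequence (Mamba-style) path run side by side, with a channel-attention stage refining the fused result. The sketch below is a hypothetical approximation: `seq_block` stands in for any (B, L, C) -> (B, L, C) sequence model (for example a Mamba layer from the `mamba_ssm` package, or `nn.Identity()` for a quick shape check), and the summation fusion and SE-style channel enhancement are assumptions, not the paper's exact design.

```python
import torch.nn as nn

class ConvMambaBlock(nn.Module):
    """Hypothetical parallel CNN + sequence-model block with channel re-weighting."""
    def __init__(self, ch: int, seq_block: nn.Module):
        super().__init__()
        self.local = nn.Sequential(                 # CNN path: local features
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch),
            nn.BatchNorm2d(ch), nn.GELU(),
        )
        self.seq = seq_block                        # Mamba-style path: long-range dependencies
        self.enhance = nn.Sequential(               # channel enhancement (SE-style, assumed)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.GELU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C) for the sequence path
        global_path = self.seq(tokens).transpose(1, 2).reshape(b, c, h, w)
        fused = self.local(x) + global_path         # parallel branches, summed (assumed fusion)
        return x + fused * self.enhance(fused)      # residual add + channel re-weighting
```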