Funding: This work was supported in part by the project Research on the Application of Multimodal Artificial Intelligence in Diagnosis and Treatment of Type 2 Diabetes under Grant No. 2020SK50910, and in part by the Hunan Provincial Natural Science Foundation of China under Grant 2023JJ60020.
Abstract: Early screening of diabetic retinopathy (DR) plays an important role in preventing irreversible blindness. Existing research has not fully exploited the effective DR lesion information in fundus images, and traditional attention schemes do not consider the impact of lesion type differences on grading, which leads to unreasonable extraction of important lesion features. Therefore, this paper proposes a DR diagnosis scheme that integrates a multi-level patch attention generator (MPAG) and a lesion localization module (LLM). First, the MPAG predicts patches of different sizes and generates a weighted attention map based on the prediction scores and the types of lesions contained in the patches; this fully accounts for the impact of lesion type differences on grading and solves the problem that lesion attention maps cannot otherwise be further refined and adapted to the final DR diagnosis task. Second, the LLM generates a localization-based global attention map. Finally, the weighted attention map and the global attention map are used to weight the fundus image, which fully exploits the effective DR lesion information and increases the classification network's attention to lesion details. Extensive experiments on the public DDR dataset demonstrate the effectiveness of the proposed method, which achieves an accuracy of 0.8064.
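As a reading aid, the following PyTorch snippet sketches the final weighting step described above, in which the patch-level weighted attention map from the MPAG and the localization-based global attention map from the LLM modulate the fundus image before grading. The shapes, the stand-in backbone, the sigmoid gating, and the five-grade output are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionWeightedGrading(nn.Module):
    """Re-weights a fundus image with two attention maps, then grades DR severity."""

    def __init__(self, num_grades: int = 5):
        super().__init__()
        # Stand-in feature extractor; the paper's actual classification backbone is not specified here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, num_grades)

    def forward(self, fundus, patch_attn, global_attn):
        # fundus:      (B, 3, H, W) fundus image
        # patch_attn:  (B, 1, H, W) lesion-type-weighted map, assumed upsampled from the MPAG
        # global_attn: (B, 1, H, W) localization-based map from the LLM
        attn = torch.sigmoid(patch_attn) * torch.sigmoid(global_attn)
        weighted = fundus * (1.0 + attn)              # emphasize lesion regions, keep global context
        feats = F.adaptive_avg_pool2d(self.backbone(weighted), 1).flatten(1)
        return self.head(feats)

# Usage with random tensors (224x224 inputs and 5 DR grades are assumptions):
x = torch.randn(2, 3, 224, 224)
pa, ga = torch.randn(2, 1, 224, 224), torch.randn(2, 1, 224, 224)
logits = AttentionWeightedGrading()(x, pa, ga)        # -> shape (2, 5)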
Funding: This work was supported by the National Natural Science Foundation of China (No. 31872399) and the Advantage Discipline Construction Project (PAPD, No. 6-2018) of Jiangsu University.
Abstract: Pose-invariant facial expression recognition (FER) is an active but challenging research topic in computer vision. In particular, when diverse observation angles are involved, the trained parameter models become inconsistent from one view to another. This study develops a deep global multiple-scale and local patches attention (GMS-LPA) dual-branch network for pose-invariant FER to weaken the influence of pose variation and self-occlusion on recognition accuracy. The designed GMS-LPA network contains four main parts: the feature extraction module, the global multiple-scale (GMS) module, the local patches attention (LPA) module, and the model-level fusion module. The feature extraction module extracts texture information and normalizes it to the same size. The GMS module extracts deep global features with different receptive fields, reducing the sensitivity of deeper convolution layers to pose variation and self-occlusion. The LPA module forces the network to focus on local salient features, which lowers the effect of pose variation and self-occlusion on recognition results. Subsequently, the extracted features are fused with a model-level strategy to improve recognition accuracy. Extensive experiments were conducted on four public databases, and the recognition results demonstrate the feasibility and validity of the proposed methods.
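The dual-branch, model-level fusion described above can be pictured with a minimal PyTorch sketch: a global multiple-scale branch with different receptive fields and a local patch-attention branch run in parallel on shared features and are fused by concatenation before classification. All layer choices, the concatenation fusion, and the seven-class output are assumptions made for illustration and do not reproduce the GMS-LPA authors' exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalMultiScaleBranch(nn.Module):
    """Parallel convolutions with different receptive fields, pooled and concatenated."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.paths = nn.ModuleList([nn.Conv2d(in_ch, 32, k, padding=k // 2) for k in (1, 3, 5)])

    def forward(self, x):
        feats = [F.adaptive_avg_pool2d(torch.relu(p(x)), 1) for p in self.paths]
        return torch.cat(feats, dim=1).flatten(1)            # (B, 96)

class LocalPatchAttentionBranch(nn.Module):
    """Pools the feature map into a grid of patch descriptors and attends over them."""
    def __init__(self, in_ch=64, grid=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(grid)
        self.score = nn.Linear(in_ch, 1)

    def forward(self, x):
        p = self.pool(x).flatten(2).transpose(1, 2)          # (B, grid*grid, C)
        w = torch.softmax(self.score(p), dim=1)              # attention weights over patches
        return (w * p).sum(dim=1)                            # (B, C)

class GMSLPASketch(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
        self.gms = GlobalMultiScaleBranch()
        self.lpa = LocalPatchAttentionBranch()
        self.classifier = nn.Linear(96 + 64, num_classes)    # model-level fusion by concatenation

    def forward(self, x):
        f = self.stem(x)                                     # shared texture features
        return self.classifier(torch.cat([self.gms(f), self.lpa(f)], dim=1))

# Usage: seven expression classes and 224x224 face crops are assumptions.
logits = GMSLPASketch()(torch.randn(2, 3, 224, 224))         # -> shape (2, 7)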
Funding: This work was supported by the Outstanding Youth Science and Technology Innovation Team Project of Colleges and Universities in Hubei Province (Grant No. T201923), the Key Science and Technology Project of Jingmen (Grant Nos. 2021ZDYF024 and 2022ZDYF019), and the Cultivation Project of Jingchu University of Technology (Grant No. PY201904).
Abstract: Brain tumors, among the most lethal diseases with low survival rates, require early detection and accurate diagnosis to enable effective treatment planning. While deep learning architectures, particularly Convolutional Neural Networks (CNNs), have shown significant performance improvements over traditional methods, they struggle to capture the subtle pathological variations between different brain tumor types. Recent attention-based models have attempted to address this by focusing on global features, but they come with high computational costs. To address these challenges, this paper introduces a novel parallel architecture, ParMamba, which integrates Convolutional Attention Patch Embedding (CAPE) and the ConvMamba block, consisting of a CNN, Mamba, and a channel enhancement module. The design of the ConvMamba block enhances the model's ability to capture both local features and long-range dependencies, improving the detection of subtle differences between tumor types, while the channel enhancement module refines feature interactions across channels. Additionally, CAPE is employed as a downsampling layer that extracts both local and global features, further improving classification accuracy. Experimental results on two publicly available brain tumor datasets show that ParMamba achieves classification accuracies of 99.62% and 99.35%, outperforming existing methods. Notably, ParMamba surpasses Vision Transformers (ViT) by 1.37% in accuracy, with a throughput improvement of over 30%. These results demonstrate that ParMamba delivers superior performance while operating faster than traditional attention-based methods.
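A rough, self-contained PyTorch sketch of the parallel idea behind a ConvMamba-style block is given below: a depthwise-convolution path captures local features, a sequence path over flattened tokens models long-range dependencies, and an SE-style step stands in for the channel enhancement module. The sequence path uses nn.GRU purely as a placeholder for the Mamba selective state-space layer, so this illustrates the block's structure only and is not the paper's implementation.

import torch
import torch.nn as nn

class ChannelEnhancement(nn.Module):
    """SE-style channel reweighting, an assumed form of the channel enhancement module."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):                                    # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                      # squeeze to (B, C), then excite
        return x * w[:, :, None, None]

class ConvMambaSketch(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.local = nn.Sequential(                          # local branch: depthwise + pointwise conv
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch), nn.Conv2d(ch, ch, 1), nn.GELU())
        self.seq = nn.GRU(ch, ch, batch_first=True)          # stand-in for the Mamba selective SSM layer
        self.enhance = ChannelEnhancement(ch)
        self.norm = nn.BatchNorm2d(ch)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                # (B, H*W, C) token sequence
        global_out, _ = self.seq(tokens)                     # long-range path over the sequence
        global_out = global_out.transpose(1, 2).reshape(b, c, h, w)
        fused = self.local(x) + global_out                   # parallel local + long-range paths
        return self.norm(x + self.enhance(fused))            # channel refinement + residual

# Usage: one block keeps the spatial resolution and channel count unchanged.
y = ConvMambaSketch()(torch.randn(2, 64, 28, 28))            # -> shape (2, 64, 28, 28)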