Journal articles
1,596 articles found
Super-Resolution Generative Adversarial Network with Pyramid Attention Module for Face Generation
1
Authors: Parvathaneni Naga Srinivasu, G.JayaLakshmi, +4 more authors: Sujatha Canavoy Narahari, Victor Hugo C.de Albuquerque, Muhammad Attique Khan, Hee-Chan Cho, Byoungchol Chang. Computers, Materials & Continua, 2025, No. 10, pp. 2117-2139 (23 pages)
The generation of high-quality, realistic face images has emerged as a key field of research in computer vision. This paper proposes a robust approach that combines a Super-Resolution Generative Adversarial Network (SRGAN) with a Pyramid Attention Module (PAM) to enhance the quality of deep face generation. The SRGAN framework is designed to improve the resolution of generated images, addressing common challenges such as blurriness and a lack of intricate details. The Pyramid Attention Module complements this process by focusing on multi-scale feature extraction, enabling the network to capture finer details and complex facial features more effectively. The proposed method was trained and evaluated over 100 epochs on the CelebA dataset, demonstrating consistent improvements in image quality and a marked decrease in generator and discriminator losses, reflecting the model's capacity to learn and synthesize high-quality images effectively given adequate computational resources. Experimental outcomes demonstrate that the SRGAN model with the PAM module outperforms comparable models, yielding an aggregate discriminator loss of 0.055 for real images, 0.043 for fake images, and a generator loss of 10.58 after training for 100 epochs. The model yielded a structural similarity index measure of 0.923, outperforming the other models considered in this study.
Keywords: Artificial intelligence, generative adversarial network, pyramid attention module, face generation, deep learning
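The abstract does not give implementation details of the Pyramid Attention Module; below is a minimal, hypothetical PyTorch sketch of one common way to build multi-scale (pyramid) attention: pool the feature map at several scales, transform each level, upsample, and fuse the levels into a single attention map that re-weights the input. Module and parameter names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAttention(nn.Module):
    """Illustrative multi-scale (pyramid) attention; not the paper's exact design."""
    def __init__(self, channels, pool_sizes=(1, 2, 4)):
        super().__init__()
        # One lightweight conv branch per pyramid level
        self.branches = nn.ModuleList([nn.Conv2d(channels, channels, kernel_size=1) for _ in pool_sizes])
        self.pool_sizes = pool_sizes
        self.fuse = nn.Conv2d(channels * len(pool_sizes), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        levels = []
        for size, conv in zip(self.pool_sizes, self.branches):
            p = F.adaptive_avg_pool2d(x, size)          # pool to a coarse grid
            p = conv(p)                                  # per-level transform
            p = F.interpolate(p, size=(h, w), mode="bilinear", align_corners=False)
            levels.append(p)
        attn = torch.sigmoid(self.fuse(torch.cat(levels, dim=1)))  # attention map in (0, 1)
        return x * attn                                  # re-weight input features

feat = torch.randn(2, 64, 32, 32)
print(PyramidAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```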
Enhanced Cutaneous Melanoma Segmentation in Dermoscopic Images Using a Dual U-Net Framework with Multi-Path Convolution Block Attention Module and SE-Res-Conv
2
Authors: Kun Lan, Feiyang Gao, +2 more authors: Xiaoliang Jiang, Jianzhen Cheng, Simon Fong. Computers, Materials & Continua, 2025, No. 9, pp. 4805-4824 (20 pages)
With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, significant challenges remain in accurately segmenting melanomas in dermoscopic images because of objects that can interfere with human observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net network framework for skin melanoma segmentation. In our proposed architecture, we introduce several innovative components that aim to enhance the performance and capabilities of the traditional U-Net. First, we establish a novel framework that links two simplified U-Nets, enabling more comprehensive information exchange and feature integration throughout the network. Second, after cascading the second U-Net, we introduce a skip connection between the decoder and encoder networks and incorporate a modified receptive field block (MRFB), which is designed to capture multi-scale spatial information. Third, to further enhance the feature representation capabilities, we add a multi-path convolution block attention module (MCBAM) to the first two layers of the first U-Net encoder and integrate a new squeeze-and-excitation (SE) mechanism with residual connections in the second U-Net. To illustrate the performance of our proposed model, we conducted comprehensive experiments on widely recognized skin datasets. On the ISIC-2017 dataset, the IoU of our proposed model increased from 0.6406 to 0.6819 and the Dice coefficient increased from 0.7625 to 0.8023. On the ISIC-2018 dataset, the IoU of the proposed model improved from 0.7138 to 0.7709, while the Dice coefficient increased from 0.8285 to 0.8665. Furthermore, generalization experiments conducted on the jaw cyst dataset from Quzhou People's Hospital further verified the outstanding segmentation performance of the proposed model. These findings collectively affirm the potential of our approach as a valuable tool for supporting clinical decision-making in skin cancer detection, as well as for advancing research in medical image analysis.
Keywords: Dual U-Net, skin lesion segmentation, squeeze-and-excitation, modified receptive field block, multi-path convolution block attention module
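The squeeze-and-excitation (SE) mechanism with a residual connection mentioned above follows a well-known pattern: channel-wise global pooling, a two-layer bottleneck, sigmoid gating, plus a residual shortcut. The PyTorch sketch below is a generic SE-residual block for orientation, not the paper's exact SE-Res-Conv layer.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Generic squeeze-and-excitation block with a residual shortcut (illustrative)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                             # excitation: per-channel weights
        )

    def forward(self, x):
        y = self.body(x)
        y = y * self.se(y)        # re-weight channels
        return x + y              # residual connection

print(SEResBlock(64)(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```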
ANC: Attention Network for COVID-19 Explainable Diagnosis Based on Convolutional Block Attention Module (Cited: 10)
3
Authors: Yudong Zhang, Xin Zhang, Weiguo Zhu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2021, No. 6, pp. 1037-1058 (22 pages)
Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network for COVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed to avoid overfitting. Then, a convolutional block attention module (CBAM) was integrated into our model, the structure of which was fine-tuned. Finally, Grad-CAM was used to provide an explainable diagnosis. Results: The accuracies of our ANC method on the two datasets are 96.32% ± 1.06% and 96.00% ± 1.03%, respectively. Conclusions: The proposed ANC method is superior to 9 state-of-the-art approaches.
Keywords: Deep learning, convolutional block attention module, attention mechanism, COVID-19, explainable diagnosis
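For readers unfamiliar with CBAM, the module applies channel attention (global average and max pooling fed through a shared MLP) followed by spatial attention (a convolution over channel-wise average and max maps). The PyTorch sketch below follows the original CBAM design at a high level; it is illustrative and not the fine-tuned variant used in ANC.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(32)(torch.randn(2, 32, 56, 56)).shape)  # torch.Size([2, 32, 56, 56])
```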
MobileNet network optimization based on convolutional block attention module (Cited: 3)
4
Authors: ZHAO Shuxu, MEN Shiyao, YUAN Lin. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2022, No. 2, pp. 225-234 (10 pages)
Deep learning technology is widely used in computer vision. Generally, a large amount of data is used to train the model weights in deep learning so as to obtain a model with higher accuracy. However, massive data and complex model structures require more computing resources. Since people generally can only carry and use mobile and portable devices in application scenarios, neural networks face limitations in terms of computing resources, size and power consumption. Therefore, the efficient lightweight model MobileNet is used as the basic network in this study for optimization. First, the accuracy of the MobileNet model is improved by adding methods such as the convolutional block attention module (CBAM) and dilated convolution. Then, the MobileNet model is compressed by using pruning and weight quantization algorithms based on weight magnitude. Afterwards, methods such as Python crawlers and data augmentation are employed to create a garbage classification dataset. Based on the above model optimization strategy, the garbage classification mobile application is deployed on mobile phones and Raspberry Pis, allowing the garbage classification task to be completed more conveniently.
Keywords: MobileNet, convolutional block attention module (CBAM), model pruning and quantization, edge machine learning
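Magnitude-based pruning and weight quantization of the kind described above are available directly in PyTorch; the sketch below shows one hypothetical way to apply them to a torchvision MobileNet, and is not the authors' exact compression pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(weights=None)  # stand-in for the optimized MobileNet

# Magnitude-based pruning: zero out the 30% smallest-magnitude weights in each conv layer
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")     # make the pruning permanent

# Post-training dynamic quantization of the linear layers (weights stored as int8)
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    print(quantized(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```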
Traffic Sign Recognition for Autonomous Vehicle Using Optimized YOLOv7 and Convolutional Block Attention Module (Cited: 2)
5
Authors: P.Kuppusamy, M.Sanjay, +1 more author: P.V.Deepashree, C.Iwendi. Computers, Materials & Continua (SCIE, EI), 2023, No. 10, pp. 445-466 (22 pages)
The infrastructure and construction of roads are crucial for the economic and social development of a region, but traffic-related challenges like accidents and congestion persist. Artificial Intelligence (AI) and Machine Learning (ML) have been used in road infrastructure and construction, particularly with Internet of Things (IoT) devices. Object detection in computer vision also plays a key role in improving road infrastructure and addressing traffic-related problems. This study aims to use You Only Look Once version 7 (YOLOv7) with the Convolutional Block Attention Module (CBAM), one of the most optimized object-detection configurations, to detect and identify traffic signs, and to analyze effective combinations of optimizers such as Adaptive Moment Estimation (Adam), Root Mean Squared Propagation (RMSprop) and Stochastic Gradient Descent (SGD) with YOLOv7. Using a portion of the German traffic signs for training, the study investigates the feasibility of adopting smaller datasets while maintaining high accuracy. The model proposed in this study not only improves traffic safety by detecting traffic signs but also has the potential to contribute to the rapid development of autonomous vehicle systems. The results showed an impressive accuracy of 99.7% when using a batch size of 8 and the Adam optimizer. This high level of accuracy demonstrates the effectiveness of the proposed model for the image classification task of traffic sign recognition.
Keywords: Object detection, traffic sign detection, YOLOv7, convolutional block attention module, road sign detection, ADAM
Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module (Cited: 1)
6
Authors: 胡振涛, HU Chonghao, +1 more author: YANG Haoran, SHUAI Weiwei. High Technology Letters (EI, CAS), 2024, No. 1, pp. 23-30 (8 pages)
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, a multi-generator mechanism is employed among the advanced approaches available to model different domain mappings, which results in inefficient training of the neural networks and mode collapse, leading to poor diversity in the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments are performed on multiple unpaired benchmark image translation datasets, demonstrating the benefits of the proposed method over existing techniques. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation, generative adversarial network (GAN), squeeze-and-excitation (SE) mechanism, feature attention (FA) module
Simplified Inception Module Based Hadamard Attention Mechanism for Medical Image Classification
7
Authors: Yanlin Jin, Zhiming You, Ningyin Cai. Journal of Computer and Communications, 2023, No. 6, pp. 1-18 (18 pages)
Medical image classification has played an important role in the medical field, and related methods based on deep learning have become an important and powerful technique in medical image classification. In this article, we propose a simplified inception module based Hadamard attention (SI + HA) mechanism for medical image classification. Specifically, we propose a new attention mechanism: the Hadamard attention mechanism. It improves the accuracy of medical image classification without greatly increasing the complexity of the model. Meanwhile, we adopt a simplified inception module to improve the utilization of parameters. We use two medical image datasets to prove the superiority of our proposed method. On the BreakHis dataset, the AUCs of our method reach 98.74%, 98.38%, 98.61% and 97.67% under magnification factors of 40×, 100×, 200× and 400×, respectively. The accuracies reach 95.67%, 94.17%, 94.53% and 94.12% under magnification factors of 40×, 100×, 200× and 400×, respectively. On the KIMIA Path 960 dataset, the AUC and accuracy of our method reach 99.91% and 99.03%. The method is superior to currently popular methods and can significantly improve the effectiveness of medical image classification.
Keywords: Deep learning, medical image classification, attention mechanism, Inception module
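The abstract does not define the Hadamard attention mechanism precisely; the name suggests attention applied through a Hadamard (element-wise) product between a learned weight map and the feature map. Below is a minimal, speculative PyTorch sketch of that idea, not the authors' actual layer.

```python
import torch
import torch.nn as nn

class HadamardAttention(nn.Module):
    """Speculative sketch: learn an attention map and apply it element-wise (Hadamard product)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                         # per-element weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)                   # Hadamard (element-wise) re-weighting

x = torch.randn(2, 16, 28, 28)
print(HadamardAttention(16)(x).shape)  # torch.Size([2, 16, 28, 28])
```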
Residual Attention-BiConvLSTM: A New Global Ionospheric TEC Map Prediction Model (Cited: 1)
8
Authors: 王浩然, 刘海军, +5 more authors: 袁静, 乐会军, 李良超, 陈羿, 单维锋, 袁国铭. Chinese Journal of Geophysics (地球物理学报, PKU Core), 2025, No. 2, pp. 413-430 (18 pages)
Ionospheric total electron content (TEC) prediction is of great significance for improving the accuracy of global navigation satellite systems (GNSS). Existing TEC map prediction models are mainly built by sequentially stacking spatio-temporal feature extraction units. This way of building models loses fine-grained spatial features of the TEC map because multiple convolutional layers are stacked in sequence, which limits model accuracy, and the deep stacking can also cause vanishing or exploding gradients. Drawing on the idea of residual attention, this paper adds a residual attention module to the TEC map prediction model and proposes the Residual Attention-BiConvLSTM model. The residual attention module in this model extracts coarse- and fine-grained spatial features simultaneously and weights them. Comparative experiments against ConvLSTM, ConvGRU, ED-ConvLSTM and C1PG were conducted on global TEC map data. The results show that the RMSE, MAE, MAPE and R² of the proposed Residual Attention-BiConvLSTM model are better than those of the comparison models in years of both high and low solar activity. The prediction performance of the five models was also compared during a geomagnetic storm event; when the large storm occurred, the proposed model was comparable to C1PG and better than the other three comparison models. This work provides a new approach to building ionospheric TEC map prediction models.
Keywords: ionospheric TEC map prediction, residual attention module, Residual Attention-BiConvLSTM, spatio-temporal prediction model
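Residual attention is commonly implemented by letting an attention branch modulate a trunk branch while a residual path preserves the original signal, e.g. out = x * (1 + M(x)). The PyTorch sketch below illustrates that generic pattern on 2D feature maps; it is an assumption for illustration, not the module described in this paper.

```python
import torch
import torch.nn as nn

class ResidualAttention2d(nn.Module):
    """Generic residual attention: output = x * (1 + soft mask), so attention cannot erase features."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * (1.0 + self.mask(x))   # residual attention weighting

tec_maps = torch.randn(4, 8, 71, 73)      # e.g. a batch of gridded TEC-like feature maps
print(ResidualAttention2d(8)(tec_maps).shape)  # torch.Size([4, 8, 71, 73])
```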
Double Self-Attention Based Fully Connected Feature Pyramid Network for Field Crop Pest Detection
9
Authors: Zijun Gao, Zheyi Li, +2 more authors: Chunqi Zhang, Ying Wang, Jingwen Su. Computers, Materials & Continua, 2025, No. 6, pp. 4353-4371 (19 pages)
Pest detection techniques are helpful in reducing the frequency and scale of pest outbreaks; however, their application in the actual agricultural production process is still challenging owing to the problems of interspecies similarity, multi-scale variation, and background complexity of pests. To address these problems, this study proposes an FD-YOLO pest target detection model. The FD-YOLO model uses a Fully Connected Feature Pyramid Network (FC-FPN) instead of a PANet in the neck, which can adaptively fuse multi-scale information so that the model can retain small-scale target features in the deep layers, enhance large-scale target features in the shallow layers, and improve the reuse of effective features. A dual self-attention module (DSA) is then embedded in the C3 module of the neck, which captures the dependencies between the information in both spatial and channel dimensions, effectively enhancing global features. We selected 16 types of pests that widely damage field crops from the IP102 pest dataset, which were used as our dataset after data supplementation and augmentation. The experimental results showed that FD-YOLO's mAP@0.5 improved by 6.8% compared to YOLOv5, reaching 82.6%, and was 19.1%–5% better than other state-of-the-art models. This method provides an effective new approach for detecting similar or multi-scale pests in field crops.
Keywords: Pest detection, YOLOv5, feature pyramid network, transformer, attention module
Transmission Facility Detection with Feature-Attention Multi-Scale Robustness Network and Generative Adversarial Network
10
Authors: Yunho Na, Munsu Jeon, +4 more authors: Seungmin Joo, Junsoo Kim, Ki-Yong Oh, Min Ku Kim, Joon-Young Park. Computer Modeling in Engineering & Sciences, 2025, No. 7, pp. 1013-1044 (32 pages)
This paper proposes an automated detection framework for transmission facilities using a feature-attention multi-scale robustness network (FAMSR-Net) with high-fidelity virtual images. The proposed framework exhibits three key characteristics. First, virtual images of the transmission facilities generated using StyleGAN2-ADA are co-trained with real images. This enables the neural network to learn various features of transmission facilities to improve detection performance. Second, the convolutional block attention module is deployed in FAMSR-Net to effectively extract features from images and construct multi-dimensional feature maps, enabling the neural network to perform precise object detection in various environments. Third, an effective bounding box optimization method called Scylla-IoU is deployed in FAMSR-Net, considering the intersection over union, center point distance, angle, and shape of the bounding box. This enables power facilities of various sizes to be detected accurately. Extensive experiments demonstrated that FAMSR-Net outperforms other neural networks in detecting power facilities. FAMSR-Net also achieved the highest detection accuracy when virtual images of the transmission facilities were co-trained in the training phase. The proposed framework is effective for the scheduled operation and maintenance of transmission facilities because an optical camera is currently the most promising tool for unmanned aerial vehicles. This ultimately contributes to improved inspection efficiency, reduced maintenance risks, and more reliable power delivery across extensive transmission facilities.
Keywords: Object detection, virtual image, transmission facility, convolutional block attention module, Scylla-IoU
MMIF: Multimodal Medical Image Fusion Network Based on Multi-Scale Hybrid Attention
11
Authors: Jianjun Liu, Yang Li, +2 more authors: Xiaoting Sun, Xiaohui Wang, Hanjiang Luo. Computers, Materials & Continua, 2025, No. 11, pp. 3551-3568 (18 pages)
Multimodal image fusion plays an important role in image analysis and applications. Multimodal medical image fusion combines contrast features from two or more input imaging modalities to represent the fused information in a single image. One of the critical clinical applications of medical image fusion is fusing anatomical and functional modalities for rapid diagnosis of malignant tissues. This paper proposes a multimodal medical image fusion network (MMIF-Net) based on multiscale hybrid attention. The method first decomposes the original image to obtain the low-rank and salient parts. Then, to utilize features at different scales, we add a multiscale mechanism that uses three filters of different sizes to extract features in the encoding network. A hybrid attention module is also introduced to obtain more image details. Finally, the fused images are reconstructed by the decoding network. We conducted experiments with clinical brain computed tomography/magnetic resonance images. The experimental results show that the proposed multiscale hybrid attention-based fusion method performs better than other advanced fusion methods.
Keywords: Medical image fusion, multiscale mechanism, hybrid attention module, encoded network
AG-GCN: Vehicle Re-Identification Based on Attention-Guided Graph Convolutional Network
12
Authors: Ya-Jie Sun, Li-Wei Qiao, Sai Ji. Computers, Materials & Continua, 2025, No. 7, pp. 1769-1785 (17 pages)
Vehicle re-identification involves matching images of vehicles across varying camera views. The diversity of camera locations along different roadways leads to significant intra-class variation and only minimal inter-class similarity in the collected vehicle images, which increases the complexity of re-identification tasks. To tackle these challenges, this study proposes AG-GCN (Attention-Guided Graph Convolutional Network), a novel framework integrating several pivotal components. Initially, AG-GCN embeds a lightweight attention module within the ResNet-50 structure to learn feature weights automatically, thereby improving the global representation of vehicle features by highlighting salient features and suppressing extraneous ones. Moreover, AG-GCN adopts a graph-based structure to encapsulate deep local features. A graph convolutional network then amalgamates these features to model the relationships among vehicle-related characteristics. Subsequently, we amalgamate feature maps from both the attention and graph-based branches for a more comprehensive representation of vehicle features. The framework then gauges feature similarities and ranks them, thus enhancing the accuracy of vehicle re-identification. Comprehensive qualitative and quantitative analyses on two publicly available datasets verify the efficacy of AG-GCN in addressing intra-class and inter-class variability issues.
Keywords: Vehicle re-identification, lightweight attention module, global features, local features, graph convolution network
Multimodal medical image fusion based on mask optimization and parallel attention mechanism
13
Authors: DI Jing, LIANG Chan, +1 more author: GUO Wenqing, LIAN Jing. Journal of Measurement Science and Instrumentation, 2025, No. 1, pp. 26-36 (11 pages)
Medical image fusion technology is crucial for improving the detection accuracy and treatment efficiency of diseases, but existing fusion methods suffer from problems such as blurred texture details, low contrast, and an inability to fully extract the fused image information. Therefore, a multimodal medical image fusion method based on mask optimization and a parallel attention mechanism is proposed to address these issues. First, the entire image is converted into a binary mask, and a contour feature map is constructed to maximize the contour feature information of the image, together with a triple-path network for extracting and optimizing image texture detail features. Second, a contrast enhancement module and a detail preservation module are proposed to enhance the overall brightness and texture details of the image. Afterwards, a parallel attention mechanism is constructed using channel features and spatial feature changes to fuse images and enhance the salient information of the fused images. Finally, a decoupling network composed of residual networks is set up to optimize the information between the fused image and the source images so as to reduce information loss in the fused image. Compared with nine recent state-of-the-art methods, the seven objective evaluation indicators of our method improve by 6%−31%, indicating that this method can obtain fusion results with clearer texture details, higher contrast, and smaller pixel differences between the fused image and the source images. It is superior to the other comparison algorithms in both subjective and objective indicators.
Keywords: multimodal medical image fusion, binary mask, contrast enhancement module, parallel attention mechanism, decoupling network
Marine organism classification method based on hierarchical multi-scale attention mechanism
14
Authors: XU Haotian, CHENG Yuanzhi, +1 more author: ZHAO Dong, XIE Peidong. Optoelectronics Letters, 2025, No. 6, pp. 354-361 (8 pages)
We propose a hierarchical multi-scale attention mechanism-based model in response to the low accuracy and inefficient manual classification of existing oceanic biological image classification methods. Firstly, the hierarchical efficient multi-scale attention (H-EMA) module is designed for lightweight feature extraction, achieving outstanding performance at a relatively low cost. Secondly, an improved EfficientNetV2 block is used to better integrate information from different scales and enhance inter-layer message passing. Furthermore, introducing the convolutional block attention module (CBAM) enhances the model's perception of critical features, optimizing its generalization ability. Lastly, Focal Loss is introduced to adjust the weights of complex samples to address the issue of imbalanced categories in the dataset, further improving the model's performance. The model achieved 96.11% accuracy on the intertidal marine organism dataset of the Nanji Islands and 84.78% accuracy on the CIFAR-100 dataset, demonstrating strong generalization ability that meets the demands of oceanic biological image classification.
Keywords: integrate information from different scales, hierarchical multi-scale attention, lightweight feature extraction, focal loss, EfficientNetV2, marine organism classification, oceanic biological image classification methods, convolutional block attention module
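Focal loss down-weights well-classified examples so that training focuses on hard, minority-class samples; a compact PyTorch sketch of the standard multi-class form is given below for reference (the γ and α values are illustrative defaults, not the paper's settings).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multi-class focal loss: FL = -alpha * (1 - p_t)^gamma * log(p_t)."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of the true class
    pt = log_pt.exp()
    loss = -alpha * (1.0 - pt) ** gamma * log_pt                # down-weight easy examples
    return loss.mean()

logits = torch.randn(8, 100)                 # e.g. 100 classes as in CIFAR-100
targets = torch.randint(0, 100, (8,))
print(focal_loss(logits, targets).item())
```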
Traffic Flow Prediction Based on an Attention-Conv1D-2Bi-LSTM Model
15
Authors: 张瑜, 刘德斌, +1 more author: 戴志敏, 杨子兰. Computer Simulation (计算机仿真), 2025, No. 2, pp. 181-186 (6 pages)
In intelligent transportation, real-time and accurate traffic flow prediction is crucial for citizens' travel and for government management. To address the poor performance of existing intelligent traffic prediction, a traffic flow prediction model based on an attention mechanism, one-dimensional convolution, and a two-layer bidirectional long short-term memory network is proposed. The model combines a 1D convolution module and a two-layer bidirectional LSTM module to extract the spatio-temporal features of traffic flow and its periodic, temporally dependent patterns, while an attention mechanism is introduced to weight the influence of traffic flow at different time steps. Experimental results show that the proposed model outperforms the comparison models, indicating that it improves traffic flow prediction accuracy to a certain extent.
Keywords: attention mechanism, 1D convolution module, recurrent neural network, traffic prediction model
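A common way to assemble the kind of model the abstract describes is a 1D convolution for local temporal features, a two-layer bidirectional LSTM for sequence dependencies, and a learned attention weighting over time steps before the prediction head. The PyTorch sketch below illustrates that generic pattern; layer sizes and the attention form are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class AttnConv1dBiLSTM(nn.Module):
    """Illustrative traffic-flow predictor: Conv1D -> 2-layer BiLSTM -> temporal attention -> regression."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)   # attention score per time step
        self.head = nn.Linear(2 * hidden, 1)    # predict the next traffic-flow value

    def forward(self, x):                        # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, time, 32)
        h, _ = self.lstm(h)                                           # (batch, time, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)                       # attention over time steps
        context = (w * h).sum(dim=1)                                  # weighted temporal summary
        return self.head(context)

model = AttnConv1dBiLSTM()
flow = torch.randn(16, 12, 1)   # 16 samples, 12 past time steps, 1 feature
print(model(flow).shape)        # torch.Size([16, 1])
```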
A Zinc Flotation Froth Image Segmentation Algorithm Based on an Improved I-Attention U-Net (Cited: 5)
16
Authors: 唐朝晖, 郭俊岑, +2 more authors: 张虎, 谢永芳, 钟宇泽. Journal of Hunan University (Natural Sciences) (湖南大学学报(自然科学版); EI, CAS, CSCD, PKU Core), 2023, No. 2, pp. 12-22 (11 pages)
To address the difficulty of accurately segmenting froth images owing to their high complexity, this paper proposes a new I-Attention U-Net network for froth image segmentation. The algorithm takes the U-Net as its backbone and replaces the first convolution-pooling layer with an Inception module to extract multi-scale, multi-level shallow feature information from froth images. A pyramid pooling module is introduced to improve segmentation by summing feature maps at different scales, and the self-attention gating unit is improved so that the attention unit is better suited to flotation froth image segmentation, strengthening the importance of deep features and reinforcing the learning of froth boundaries of different sizes. The results show that the proposed algorithm achieves a Jaccard coefficient of 91.73% and a Dice coefficient of 95.66%, improvements of 1.59% and 0.88%, respectively, over other comparable segmentation algorithms. The model segments zinc flotation froth images well, alleviating under-segmentation and over-segmentation and laying a foundation for subsequent froth feature extraction. In addition, the method has a short detection time and few model parameters, can be deployed on industrial field computers, and has practical application value.
Keywords: froth flotation, froth image segmentation, U-Net, Inception module, enhanced attention mechanism
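The attention gating referred to above follows the general Attention U-Net idea: a gating signal from a coarser decoder level and the skip-connection features are projected, combined additively, and turned into a spatial mask that re-weights the skip features. The PyTorch sketch below is a generic attention gate of this kind, assumed for illustration rather than the paper's improved unit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Generic additive attention gate for U-Net skip connections (illustrative)."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip features (high resolution); g: gating signal from a coarser decoder level
        g_up = F.interpolate(g, size=x.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.w_x(x) + self.w_g(g_up))))
        return x * attn                    # suppress irrelevant skip-connection responses

skip = torch.randn(1, 64, 64, 64)
gate = torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 64, 64])
```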
Attention Res-Unet: An Efficient Shadow Detection Algorithm (Cited: 12)
17
Authors: 董月, 冯华君, +2 more authors: 徐之海, 陈跃庭, 李奇. Journal of Zhejiang University (Engineering Science) (浙江大学学报(工学版); EI, CAS, CSCD, PKU Core), 2019, No. 2, pp. 373-381, 406 (10 pages)
The presence of shadow pixels in an image introduces uncertainty into the image content and is harmful to computer vision tasks, so shadow detection is often used as a preprocessing step for computer vision algorithms. A new shadow detection network structure is proposed that improves network performance by combining the semantic information contained in the input image with the correlations between pixels. A pretrained deep network, ResNeXt101, is used as the feature extraction front end to extract the semantic information of the image, and the network structure is built following the design of U-Net to complete the upsampling of the feature layers. A non-local operation is applied before the output layer to provide global information for each pixel and establish pixel-to-pixel relationships. An attention generation module and an attention fusion module are designed to further improve detection accuracy. The method was validated on the SBU and UCF shadow detection datasets. Experimental results show that the visual quality and objective metrics of the proposed method are better than those of the previous best methods, with the average detection error rate reduced by 14.4% and 14.9% on the two datasets, respectively.
Keywords: shadow detection, feature extraction, semantic information, pixel correlation, non-local operation, attention mechanism, convolutional neural network (CNN)
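The non-local operation mentioned here computes, for each position, a weighted sum over all positions so that every pixel can see global context (as in non-local neural networks). The PyTorch sketch below is a standard embedded-Gaussian non-local block for 2D feature maps; it is a generic reference sketch, not this paper's exact layer.

```python
import torch
import torch.nn as nn

class NonLocalBlock2d(nn.Module):
    """Embedded-Gaussian non-local block: each position attends to all positions."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                      # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)        # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)             # pairwise pixel affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection

print(NonLocalBlock2d(64)(torch.randn(1, 64, 24, 24)).shape)  # torch.Size([1, 64, 24, 24])
```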
An attention-based prototypical network for forest fire smoke few-shot detection (Cited: 3)
18
Authors: Tingting Li, Haowei Zhu, +1 more author: Chunhe Hu, Junguo Zhang. Journal of Forestry Research (SCIE, CAS, CSCD), 2022, No. 5, pp. 1493-1504 (12 pages)
Almost all existing deep learning methods rely on a large amount of annotated data, so they are inappropriate for forest fire smoke detection with limited data. In this paper, a novel hybrid attention-based few-shot learning method, named Attention-Based Prototypical Network, is proposed for forest fire smoke detection. Specifically, the feature extraction network, which incorporates a convolutional block attention module, can extract high-level and discriminative features and further decrease the false alarm rate resulting from suspected smoke areas. Moreover, we design a meta-learning module to alleviate the overfitting issue caused by limited smoke images, and the meta-learning network enables effective detection by comparing the distance between the class prototypes of support images and the features of query images. A series of experiments on forest fire smoke datasets and the miniImageNet dataset show that the proposed method is superior to state-of-the-art few-shot learning approaches.
Keywords: Forest fire smoke detection, few-shot learning, channel attention module, spatial attention module, prototypical network
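In a prototypical network, each class prototype is the mean embedding of its support images, and a query is classified by its (negative) distance to each prototype. The PyTorch sketch below shows that core computation with a placeholder embedding network; the architecture and episode sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(              # placeholder embedding network (the paper's extractor uses CBAM)
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def prototypical_logits(support, support_labels, query, n_classes):
    """Classify queries by negative squared distance to class prototypes."""
    z_s, z_q = embed(support), embed(query)                      # (n_support, d), (n_query, d)
    prototypes = torch.stack(
        [z_s[support_labels == c].mean(dim=0) for c in range(n_classes)]
    )                                                            # (n_classes, d)
    return -torch.cdist(z_q, prototypes) ** 2                    # higher = closer prototype

# toy 2-way, 5-shot episode with 4 query images
support = torch.randn(10, 3, 84, 84)
labels = torch.tensor([0] * 5 + [1] * 5)
query = torch.randn(4, 3, 84, 84)
print(prototypical_logits(support, labels, query, n_classes=2).shape)  # torch.Size([4, 2])
```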
Bilateral U-Net semantic segmentation with spatial attention mechanism (Cited: 3)
19
Authors: Guangzhe Zhao, Yimeng Zhang, +1 more author: Maoning Ge, Min Yu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 2, pp. 297-307 (11 pages)
Aiming at the problem that existing models have a poor segmentation effect on imbalanced datasets with small-scale samples, a bilateral U-Net network model with a spatial attention mechanism is designed. The model uses the lightweight MobileNetV2 as the backbone network for hierarchical feature extraction and proposes an Attentive Pyramid Spatial Attention (APSA) module which, compared with the Attenuated Spatial Pyramid module, can increase the receptive field and enhance the information. Finally, a context fusion prediction branch that fuses high-semantic and low-semantic prediction results is added, and the model effectively improves the segmentation accuracy of small datasets. Experimental results on the CamVid dataset show that, compared with some existing semantic segmentation networks, the algorithm achieves a better segmentation effect and higher segmentation accuracy, with an mIOU of 75.85%. Moreover, to verify the generality of the model and the effectiveness of the APSA module, experiments were conducted on the VOC 2012 dataset, where the APSA module improved mIOU by about 12.2%.
Keywords: attention mechanism, receptive field, semantic fusion, semantic segmentation, spatial attention module, U-Net
Gear Pitting Measurement by Multi-Scale Splicing Attention U-Net (Cited: 3)
20
Authors: Yi Qin, Dejun Xi, +1 more author: Weiwei Chen, Yi Wang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2023, No. 2, pp. 140-154 (15 pages)
The judgment of gear failure is based on the pitting area ratio of the gear. Traditional gear pitting calculation methods mainly rely on manual visual inspection. This approach is greatly affected by human factors such as the working experience, training level and fatigue of the inspection personnel, so the detection results may be biased. Non-contact computer vision measurement can carry out non-destructive testing and monitoring under the working conditions of the machine and has high detection accuracy. To improve the measurement accuracy of gear pitting, a novel multi-scale splicing attention U-Net (MSSA U-Net) is explored in this study. An image splicing module is first proposed for concatenating the output feature maps of multiple convolutional layers into a splicing feature map with more semantic information. Then, an attention module is applied to select the key features of the splicing feature map. Given that MSSA U-Net adequately uses multi-scale semantic features, it has better segmentation performance on irregular small objects than U-Net and Attention U-Net. On the basis of the designed visual detection platform and MSSA U-Net, a methodology for measuring the area ratio of gear pitting is proposed. Experimental results on three datasets show that MSSA U-Net is superior to existing typical image segmentation methods and can accurately segment different levels of pitting owing to its strong segmentation ability. Therefore, the proposed methodology can be effectively applied to measuring the pitting area ratio and determining the level of gear pitting.
Keywords: Gear pitting, image segmentation, attention module, computer vision, quantitative detection
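Once the pitting and tooth-surface regions have been segmented, the pitting area ratio reduces to a pixel count; the short sketch below shows that final measurement step under the assumption that the network outputs binary masks for the pitted region and the tooth surface (mask names are hypothetical).

```python
import numpy as np

def pitting_area_ratio(pitting_mask: np.ndarray, tooth_mask: np.ndarray) -> float:
    """Ratio of pitted pixels to tooth-surface pixels, from binary segmentation masks."""
    pitted = np.count_nonzero(pitting_mask & tooth_mask)   # pitting inside the tooth surface
    tooth = np.count_nonzero(tooth_mask)
    return pitted / tooth if tooth else 0.0

# toy example with hypothetical 0/1 masks from the segmentation network
tooth = np.ones((128, 128), dtype=bool)
pitting = np.zeros((128, 128), dtype=bool)
pitting[40:60, 40:80] = True
print(f"pitting area ratio: {pitting_area_ratio(pitting, tooth):.3f}")  # 0.049
```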