Journal Literature
142 articles found
Occluded Gait Emotion Recognition Based on Multi-Scale Suppression Graph Convolutional Network
1
Authors: Yuxiang Zou, Ning He, Jiwu Sun, Xunrui Huang, Wenhua Wang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1255-1276 (22 pages)
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data are occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: the Joint Interpolation Module (JI Module), the Multi-scale Temporal Convolution Network (MS-TCN), and the Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using K-Nearest Neighbors (KNN) interpolation. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for temporal occlusion of gait information. The SGCN extracts more non-prominent gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated with STEP-Gen technology, and ELMB, consisting of 3924 gaits, of which 1835 are labeled with the emotions "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieves accuracies of 0.900 and 0.896, respectively, comparable to other state-of-the-art methods. Furthermore, on the occluded datasets it mitigates the performance degradation caused by occlusion far better than competing methods, with significantly higher accuracy.
Keywords: KNN interpolation; multi-scale temporal convolution; suppression graph convolutional network; gait emotion recognition; human skeleton
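The JI Module above fills in spatially occluded joints from nearby visible ones. Below is a minimal NumPy sketch of that general idea, assuming the previous frame supplies a query position for each missing joint; the function name and the neighbour-displacement update are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def knn_fill_occluded(curr, visible, prev, k=3):
    """Sketch of KNN-based completion of occluded skeleton joints.

    curr:    (J, 3) joint coordinates of the current frame
             (entries for occluded joints are unreliable)
    visible: (J,) boolean mask, True where the joint was observed
    prev:    (J, 3) coordinates from the last fully visible frame,
             used as query positions for occluded joints
    """
    filled = curr.copy()
    vis_idx = np.flatnonzero(visible)
    for j in np.flatnonzero(~visible):
        # k visible joints closest to where joint j was last seen
        d = np.linalg.norm(prev[vis_idx] - prev[j], axis=1)
        nearest = vis_idx[np.argsort(d)[:k]]
        # move joint j by the average displacement of its neighbours
        filled[j] = prev[j] + (curr[nearest] - prev[nearest]).mean(axis=0)
    return filled

prev = np.random.rand(16, 3)                 # 16-joint skeleton, previous frame
curr = prev + 0.05 * np.random.randn(16, 3)  # current frame
mask = np.ones(16, dtype=bool); mask[[4, 9]] = False   # joints 4 and 9 occluded
print(knn_fill_occluded(curr, mask, prev)[[4, 9]])
```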
MSSTGCN: Multi-Head Self-Attention and Spatial-Temporal Graph Convolutional Network for Multi-Scale Traffic Flow Prediction
2
Authors: Xinlu Zong, Fan Yu, Zhen Chen, Xue Xia. Computers, Materials & Continua, 2025, Issue 2, pp. 3517-3537 (21 pages)
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which makes precise prediction difficult. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multi-scale traffic flow prediction is proposed. Firstly, to capture the hidden periodicity of traffic flow, the data are divided into hourly, daily, and weekly periods. Secondly, a graph attention residual layer is constructed to learn global spatial features across regions, while local spatial-temporal dependence is captured by a T-GCN module. Thirdly, a transformer layer is introduced to learn long-term temporal dependence. A position embedding mechanism labels the position of every traffic sequence, so the multi-head self-attention mechanism can recognize sequence order and allocate weights to different time nodes. Experimental results on four real-world datasets show that MSSTGCN outperforms the baseline methods and can be successfully adapted to traffic prediction tasks.
Keywords: graph convolutional network; traffic flow prediction; multi-scale traffic flow; spatial-temporal model
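The transformer layer described above relies on a position embedding so that multi-head self-attention can recognize sequence order. A minimal PyTorch sketch of that pairing follows; the module name, feature width, and learned position table are assumptions for illustration, not the MSSTGCN definition.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Learned position embedding + multi-head self-attention over a
    sequence of traffic features, with a residual connection."""
    def __init__(self, d_model=64, n_heads=4, max_len=288):
        super().__init__()
        self.pos = nn.Embedding(max_len, d_model)              # position table
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                      # x: (batch, time, d_model)
        t = torch.arange(x.size(1), device=x.device)
        h = x + self.pos(t)                                    # inject sequence order
        out, _ = self.attn(h, h, h)                            # weights per time node
        return self.norm(x + out)

x = torch.randn(8, 12, 64)                # 8 sequences, 12 time steps, 64 features
print(TemporalSelfAttention()(x).shape)   # torch.Size([8, 12, 64])
```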
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification (Cited by 2)
3
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational cost. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, resulting in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining performance comparable to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
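The DDSC layer above pairs a dilated depthwise convolution (wide receptive field, few parameters) with a 1x1 pointwise convolution that mixes channels. The PyTorch block below is a generic sketch of that pattern with assumed channel counts and dilation rate, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Dilated depthwise conv (per-channel) followed by a pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2            # keep spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DepthwiseDilatedSeparableConv(32, 64)(x).shape)   # torch.Size([1, 64, 56, 56])
```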
M2ANet: Multi-branch and multi-scale attention network for medical image segmentation
4
Authors: Wei Xue, Chuanghui Chen, Xuan Qi, Jian Qin, Zhen Tang, Yongsheng He. Chinese Physics B, 2025, Issue 8, pp. 547-559 (13 pages)
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, because CNNs cannot effectively capture global information from images, they can easily lose contours and textures in segmentation results. The transformer model, in contrast, can effectively capture long-range dependencies in the image, and combining a CNN with a transformer can extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. In the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module in which adaptive average pooling and position encoding enhance contextual features, and multi-head attention is then introduced to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly with respect to preserving contours and textures.
Keywords: medical image segmentation; convolutional neural network; multi-branch attention; multi-scale feature fusion
Magnetic Resonance Image Super-Resolution Based on GAN and Multi-Scale Residual Dense Attention Network
5
Authors: GUAN Chunling, YU Suping, XU Wujun, FAN Hong. Journal of Donghua University (English Edition), 2025, Issue 4, pp. 435-441 (7 pages)
The application of image super-resolution (SR) has brought significant assistance to the medical field, helping doctors make more precise diagnoses. However, relying solely on a convolutional neural network (CNN) for image SR may lead to issues such as blurry details and excessive smoothness. To address these limitations, we propose an algorithm based on the generative adversarial network (GAN) framework. In the generator network, three convolutions of different sizes connected by a residual dense structure are used to extract detailed features, and an attention mechanism combining channel and spatial information concentrates the computing power on crucial areas. In the discriminator network, InstanceNorm is used to normalize tensors, which speeds up training while retaining feature information. The experimental results demonstrate that our algorithm achieves a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) than other methods, resulting in improved visual quality.
Keywords: magnetic resonance (MR); image super-resolution (SR); attention mechanism; generative adversarial network (GAN); multi-scale convolution
Deep Multi-Scale and Attention-Based Architectures for Semantic Segmentation in Biomedical Imaging
6
Authors: Majid Harouni, Vishakha Goyal, Gabrielle Feldman, Sam Michael, Ty C. Voss. Computers, Materials & Continua, 2025, Issue 10, pp. 331-366 (36 pages)
Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational efficiency. Key architectural components such as convolution operations, shallow and deep blocks, skip connections, and hybrid encoders are examined for their roles in enhancing spatial representation and semantic consistency. We further discuss the importance of hierarchical and instance-aware segmentation and annotation in interpreting complex biological scenes and multiplexed medical images. By bridging methodological developments with diverse application domains, this paper outlines current trends and future directions for semantic segmentation, emphasizing its critical role in facilitating annotation, diagnosis, and discovery in biomedical research.
Keywords: biomedical semantic segmentation; multi-scale feature fusion; fine- and coarse-scale features; convolution operations; shallow and deep blocks; skip connections
Enhancing Classroom Behavior Recognition with Lightweight Multi-Scale Feature Fusion
7
Authors: Chuanchuan Wang, Ahmad Sufril Azlan Mohamed, Xiao Yang, Hao Zhang, Xiang Li, Mohd Halim Bin Mohd Noor. Computers, Materials & Continua, 2025, Issue 10, pp. 855-874 (20 pages)
Classroom behavior recognition is an active research topic that plays a vital role in assessing and improving the quality of classroom teaching. However, existing methods struggle to achieve high recognition accuracy on datasets with problems such as blurred scenes and inconsistent objects. To address this challenge, we propose an effective, lightweight object detector called the RFNet model (YOLO-FR). Specifically, for efficient multi-scale feature extraction, a feature pyramid shared convolution (FPSC) module is designed to improve feature extraction by applying convolutional layers with varying dilation rates to the input image in the backbone. Secondly, to address multi-scale variability in the scene, we design the Rep Ghost fusion Cross Stage Partial and Efficient Layer Aggregation Network (RGCSPELAN) to further improve network performance while reducing computation and parameter count. Experiments are conducted on the SCB dataset3 and the STBD-08 dataset. The results indicate that, compared to the baseline model, the RFNet model increases mean average precision (mAP@50) from 69.6% to 71.0% on the SCB dataset3 and from 91.8% to 93.1% on the STBD-08 dataset. RFNet reaches a precision of 68.6%, surpassing the baseline method (YOLOv11) by 3.3%, and achieves the smallest model size (4.9 M) on the SCB dataset3. Finally, comparison with other algorithms confirms that RFNet accurately detects student behavior in complex classroom environments and is well suited for real-time, efficient recognition of classroom behaviors.
Keywords: classroom action recognition; YOLO-FR; feature pyramid shared convolution; Rep Ghost cross stage partial efficient layer aggregation network (RGCSPELAN)
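The FPSC module above gathers multi-scale features by running convolutions at varying dilation rates while sharing parameters. The sketch below reuses a single 3x3 kernel at several dilations and sums the branches; the weight-sharing scheme and summation fusion are assumptions for illustration, not the published YOLO-FR design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDilatedConv(nn.Module):
    """One 3x3 kernel applied at several dilation rates, then fused."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.dilations = dilations
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        outs = [F.conv2d(x, self.weight, padding=d, dilation=d)
                for d in self.dilations]     # same weights, growing receptive field
        return F.relu(self.bn(sum(outs)))

x = torch.randn(1, 16, 64, 64)
print(SharedDilatedConv(16, 32)(x).shape)    # torch.Size([1, 32, 64, 64])
```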
A multi-scale convolutional auto-encoder and its application in fault diagnosis of rolling bearings (Cited by 12)
8
Authors: Ding Yunhao, Jia Minping. Journal of Southeast University (English Edition) (EI, CAS), 2019, Issue 4, pp. 417-423 (7 pages)
Aiming at the difficulty of fault identification caused by manual extraction of fault features of rotating machinery, a one-dimensional multi-scale convolutional auto-encoder fault diagnosis model is proposed, based on the standard convolutional auto-encoder. In this model, parallel convolutional and deconvolutional kernels of different scales are used to extract features from the input signal and reconstruct it; the feature map extracted by the multi-scale convolutional kernels is then used as the input of the classifier; and finally the parameters of the whole model are fine-tuned using labeled data. Experiments on one set of simulated fault data and two sets of rolling bearing fault data are conducted to validate the proposed method. The results show that the model achieves 99.75%, 99.3%, and 100% diagnostic accuracy, respectively. In addition, the diagnostic accuracy and reconstruction error of the one-dimensional multi-scale convolutional auto-encoder are compared with those of traditional machine learning, convolutional neural networks, and a traditional convolutional auto-encoder. The final results show that the proposed model has a better recognition effect on rolling bearing fault data.
Keywords: fault diagnosis; deep learning; convolutional auto-encoder; multi-scale convolutional kernel; feature extraction
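The auto-encoder above extracts features with parallel convolution kernels of different scales and reconstructs the raw signal with deconvolution. Below is a compact PyTorch sketch of a one-dimensional multi-scale convolutional auto-encoder, assuming a single encoder/decoder stage and made-up channel counts rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleConvAE(nn.Module):
    """Parallel Conv1d encoders with different kernel sizes; a shared
    ConvTranspose1d decoder reconstructs the 1D input signal."""
    def __init__(self, channels=8, kernels=(3, 7, 15)):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Conv1d(1, channels, k, stride=2, padding=k // 2) for k in kernels)
        self.decoder = nn.ConvTranspose1d(channels * len(kernels), 1,
                                          kernel_size=4, stride=2, padding=1)

    def forward(self, x):                         # x: (batch, 1, length)
        feats = torch.cat([torch.relu(enc(x)) for enc in self.encoders], dim=1)
        recon = self.decoder(feats)               # reconstruction for AE training
        return feats, recon                       # feats later feed a classifier

x = torch.randn(4, 1, 1024)
feats, recon = MultiScaleConvAE()(x)
print(feats.shape, recon.shape)                   # (4, 24, 512) (4, 1, 1024)
```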
Land cover classification from remote sensing images based on multi-scale fully convolutional network (Cited by 17)
9
Authors: Rui Li, Shunyi Zheng, Chenxi Duan, Libo Wang, Ce Zhang. Geo-Spatial Information Science (SCIE, EI, CSCD), 2022, Issue 2, pp. 278-294 (17 pages)
Although the Convolutional Neural Network (CNN) has shown great potential for land cover classification, the frequently used single-scale convolution kernel limits the scope of information extraction. Therefore, we propose a Multi-Scale Fully Convolutional Network (MSFCN) with a multi-scale convolutional kernel as well as a Channel Attention Block (CAB) and a Global Pooling Module (GPM) in this paper to exploit discriminative representations from two-dimensional (2D) satellite images. Meanwhile, to explore the ability of the proposed MSFCN on spatio-temporal images, we extend MSFCN to three dimensions using a three-dimensional (3D) CNN, capable of harnessing each land cover category's time-series interactions from the reshaped spatio-temporal remote sensing images. To verify the effectiveness of the proposed MSFCN, we conduct experiments on two spatial datasets and two spatio-temporal datasets. The proposed MSFCN achieves 60.366% on the WHDLD dataset and 75.127% on the GID dataset in terms of the mIoU index, while the figures for the two spatio-temporal datasets are 87.753% and 77.156%. Extensive comparative experiments and ablation studies demonstrate the effectiveness of the proposed MSFCN.
Keywords: spatio-temporal remote sensing images; multi-scale fully convolutional network; land cover classification
Multi-Scale Convolutional Gated Recurrent Unit Networks for Tool Wear Prediction in Smart Manufacturing (Cited by 3)
10
Authors: Weixin Xu, Huihui Miao, Zhibin Zhao, Jinxin Liu, Chuang Sun, Ruqiang Yan. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 3, pp. 130-145 (16 pages)
As an integrated application of modern information technologies and artificial intelligence, Prognostic and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the symbolic applications of PHM technology in modern manufacturing systems and industry. In this paper, a multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to address raw sensory data for tool wear prediction. At the bottom of MCGRU, six parallel and independent branches with different kernel sizes are designed to form a multi-scale convolutional neural network, which augments the adaptability to features of different time scales. These features of different scales extracted from raw data are then fed into a Deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of the MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.
Keywords: tool wear prediction; multi-scale convolutional neural networks; gated recurrent unit
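MCGRU stacks parallel convolution branches with different kernel sizes and feeds the concatenated features into a gated recurrent unit. The sketch below wires that up with three branches instead of six and arbitrary sizes; it illustrates the structure rather than reproducing the paper's network.

```python
import torch
import torch.nn as nn

class MultiScaleConvGRU(nn.Module):
    """Parallel Conv1d branches -> concatenation -> GRU -> wear regression."""
    def __init__(self, in_ch=1, branch_ch=16, kernels=(3, 5, 7), hidden=64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(in_ch, branch_ch, k, padding=k // 2),
                          nn.ReLU()) for k in kernels)
        self.gru = nn.GRU(branch_ch * len(kernels), hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # regression: tool wear value

    def forward(self, x):                         # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        out, _ = self.gru(feats.transpose(1, 2))  # (batch, time, features)
        return self.head(out[:, -1])              # predict from the last step

x = torch.randn(2, 1, 200)                        # raw sensory signal
print(MultiScaleConvGRU()(x).shape)               # torch.Size([2, 1])
```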
Pedestrian attribute classification with multi-scale and multi-label convolutional neural networks
11
Authors: Zhu Jianqing, Zeng Huanqiang, Zhang Yuzhao, Zheng Lixin, Cai Canhui. High Technology Letters (EI, CAS), 2018, Issue 1, pp. 53-61 (9 pages)
Pedestrian attribute classification from a pedestrian image captured in surveillance scenarios is challenging due to diverse clothing appearances, varied poses, and different camera views. A multi-scale and multi-label convolutional neural network (MSMLCNN) is proposed to predict multiple pedestrian attributes simultaneously. The pedestrian attribute classification problem is first transformed into a multi-label problem comprising multiple binary attributes to be classified. The multi-label problem is then solved by fully connecting all binary attributes to multi-scale features with logistic regression functions. The multi-scale features are obtained by concatenating feature maps produced from multiple pooling layers of the MSMLCNN at different scales. Extensive experimental results show that the proposed MSMLCNN outperforms state-of-the-art pedestrian attribute classification methods by a large margin.
Keywords: pedestrian attribute classification; multi-scale features; multi-label classification; convolutional neural network (CNN)
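The multi-label formulation above attaches an independent logistic output to the concatenated multi-scale features, one per binary attribute. A minimal PyTorch sketch of such a head follows; the feature dimension and attribute count are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class MultiLabelAttributeHead(nn.Module):
    """One logit per binary attribute on shared features; sigmoid + BCE
    turn attribute prediction into parallel logistic regressions."""
    def __init__(self, feat_dim=512, num_attributes=35):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_attributes)
        self.loss = nn.BCEWithLogitsLoss()

    def forward(self, feats, targets=None):
        logits = self.fc(feats)
        if targets is not None:                   # training: return the loss
            return self.loss(logits, targets)
        return torch.sigmoid(logits)              # inference: per-attribute probs

feats = torch.randn(4, 512)                       # concatenated pooled features
labels = torch.randint(0, 2, (4, 35)).float()
head = MultiLabelAttributeHead()
print(head(feats, labels).item(), head(feats).shape)
```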
MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention
12
Authors: Changchen ZHAO, Hongsheng WANG, Yuanjing FENG. Virtual Reality & Intelligent Hardware, 2023, Issue 2, pp. 124-141 (18 pages)
Background: The use of remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact manner has been an active research topic in recent years. Existing methods are primarily based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in a single-scale space can be easily separated in a multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and do not emphasize temporal information, which is crucial in pulse extraction problems, resulting in insufficient spatiotemporal feature modeling. Methods: We propose a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution (SSTC) and dimension separable attention (DSAT). First, to solve the problem of a single-scale ROI, we constructed a multi-scale feature space for initial signal separation. Second, SSTC and DSAT were designed for efficient spatiotemporal correlation modeling, which increases the information interaction between the long-span time and space dimensions and places more emphasis on temporal features. Results: The signal-to-noise ratio (SNR) of the proposed network reached 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms. Conclusions: Fusing multi-scale signals yielded better results than methods based only on single-scale signals. The proposed SSTC and dimension separable attention mechanism contribute to more accurate pulse signal extraction.
Keywords: remote photoplethysmography; heart rate; separable spatiotemporal convolution; dimension separable attention; multi-scale; neural network
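Separable spatiotemporal convolution factorizes a 3D convolution into a spatial step and a temporal step, giving temporal structure its own parameters, which matches the emphasis on temporal features above. The block below is a generic (2+1)D-style sketch with assumed sizes, not the paper's SSTC module.

```python
import torch
import torch.nn as nn

class SeparableSpatioTemporalConv(nn.Module):
    """3D conv factorized into a spatial (1,k,k) and a temporal (k,1,1) conv."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k),
                                 padding=(0, k // 2, k // 2), bias=False)
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1),
                                  padding=(k // 2, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                         # x: (batch, ch, time, H, W)
        return self.act(self.bn(self.temporal(self.spatial(x))))

clip = torch.randn(1, 3, 16, 64, 64)              # 16-frame RGB facial clip
print(SeparableSpatioTemporalConv(3, 32)(clip).shape)  # (1, 32, 16, 64, 64)
```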
Rolling Bearing Fault Diagnosis Method Based on MDAM-GhostCNN
13
Authors: Guo Junfeng, Tan Baohong, Wang Zhiming. 北京航空航天大学学报 (PKU Core), 2025, Issue 4, pp. 1172-1184 (13 pages)
To address the insufficient feature extraction, high computational complexity, and low recognition accuracy under variable operating conditions of traditional fault diagnosis methods, a rolling bearing fault diagnosis method based on a mixed-domain attention mechanism (MDAM) and GhostCNN is proposed. A Markov transition field (MTF) converts the bearing vibration signal into a two-dimensional feature map with temporal correlation; GhostCNN is constructed to exploit the computational economy of Ghost convolution; and an MDAM is designed so that the network fully captures feature information in both the channel and spatial dimensions, modeling inter-channel dependencies while attending to spatial feature information. The resulting MDAM-GhostCNN model is trained on the MTF feature maps and outputs the diagnosis results. Experiments on the Case Western Reserve University and Jiangnan University (JNU) bearing datasets, including versions with added noise, show that under variable operating conditions the model achieves higher recognition accuracy, noise robustness, and generalization performance.
Keywords: rolling bearing; fault diagnosis; Markov transition field; Ghost convolution; attention mechanism
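Ghost convolution, which the abstract above uses to slim the CNN, produces part of the output channels with an ordinary convolution and the rest with cheap depthwise operations on those channels. The sketch below follows the commonly published GhostNet formulation with a ratio of 2; the hyper-parameters in MDAM-GhostCNN may differ.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Half the output channels from a standard ("primary") conv, the other
    half from a cheap 3x3 depthwise conv applied to the primary output."""
    def __init__(self, in_ch, out_ch, kernel_size=1, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 16, 32, 32)                    # e.g. features of an MTF image
print(GhostConv(16, 64)(x).shape)                 # torch.Size([1, 64, 32, 32])
```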
Chinese named entity recognition with multi-network fusion of multi-scale lexical information (Cited by 1)
14
Authors: Yan Guo, Hong-Chen Liu, Fu-Jiang Liu, Wei-Hua Lin, Quan-Sen Shao, Jun-Shun Su. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, Issue 4, pp. 53-80 (28 pages)
Named entity recognition (NER) is an important part of knowledge extraction and one of the main tasks in constructing knowledge graphs. In today's Chinese named entity recognition (CNER) task, the BERT-BiLSTM-CRF model is widely used and often yields notable results. However, recognizing each entity with high accuracy remains challenging. Many entities do not appear as single words but as parts of complex phrases, making accurate recognition difficult using word embedding information alone, because the intricate lexical structure often impacts performance. To address this issue, we propose an improved Bidirectional Encoder Representations from Transformers (BERT) character-word conditional random field (CRF) (BCWC) model. It incorporates a pre-trained word embedding model using the skip-gram with negative sampling (SGNS) method, alongside traditional BERT embeddings. By comparing datasets segmented with different word segmentation tools, we obtain enhanced word embedding features for the segmented data. These features are then processed using multi-scale convolution and iterated dilated convolutional neural networks (IDCNNs) with varying expansion rates to capture features at multiple scales and extract diverse contextual information. Additionally, a multi-attention mechanism is employed to fuse word and character embeddings. Finally, CRFs are applied to learn sequence constraints and optimize entity label annotations. A series of experiments on three public datasets demonstrates that the proposed method outperforms recent advanced baselines. BCWC addresses the challenge of recognizing complex entities by combining character-level and word-level embedding information, thereby improving the accuracy of CNER. Such a model has potential in applications requiring more precise knowledge extraction, such as knowledge graph construction and information retrieval, particularly in domain-specific natural language processing tasks that demand high entity recognition precision.
Keywords: bi-directional long short-term memory (BiLSTM); Chinese named entity recognition (CNER); iterated dilated convolutional neural network (IDCNN); multi-network integration; multi-scale lexical features
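The IDCNN component above stacks 1D convolutions with increasing dilation so each token sees progressively wider context. Below is a minimal PyTorch sketch of such a dilated stack over token embeddings; the dimensions and depth are assumptions, and the CRF layer is omitted.

```python
import torch
import torch.nn as nn

class IteratedDilatedConv(nn.Module):
    """1D convs with dilation rates 1, 2, 4: the receptive field grows
    quickly while the sequence length stays fixed."""
    def __init__(self, emb_dim=128, hidden=128, dilations=(1, 2, 4)):
        super().__init__()
        layers, in_ch = [], emb_dim
        for d in dilations:
            layers += [nn.Conv1d(in_ch, hidden, 3, padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = hidden
        self.convs = nn.Sequential(*layers)

    def forward(self, embeddings):                # (batch, seq_len, emb_dim)
        h = self.convs(embeddings.transpose(1, 2))
        return h.transpose(1, 2)                  # (batch, seq_len, hidden)

tokens = torch.randn(2, 50, 128)                  # e.g. fused char + word embeddings
print(IteratedDilatedConv()(tokens).shape)        # torch.Size([2, 50, 128])
```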
Point Cloud Classification Based on Ghost Convolution and Adaptive Attention (Cited by 1)
15
Authors: Shu Mi, Wang Zhangang. 现代电子技术 (PKU Core), 2025, Issue 6, pp. 106-112 (7 pages)
The Point Cloud Transformer network shows excellent feature-learning ability in extracting local features of 3D point clouds through its multi-level self-attention mechanism. However, the multi-level self-attention layers demand substantial computation and memory, and the distinctions and correlations between levels and between channels during feature fusion are not fully considered. To address these problems, a lightweight feature-enhanced fusion classification network based on the Point Cloud Transformer, EFF-LPCT, is proposed. EFF-LPCT rebuilds the original network with one-dimensional Ghost convolution to reduce computational complexity and memory requirements; adaptive branch weights are introduced to fuse multi-scale features across attention levels; and multiple channel attention modules enhance cross-channel interaction to improve classification performance. Experiments on the ModelNet40 dataset show that EFF-LPCT reaches 93.3% accuracy while reducing floating-point operations by 1.11 GFLOPs and the parameter count by 0.86×10^6 compared with the Point Cloud Transformer.
Keywords: point cloud classification; Transformer network; Ghost convolution; feature-enhanced fusion module; ECA channel attention; feature learning
Image Denoising Using Dual Convolutional Neural Network with Skip Connection (Cited by 1)
16
Authors: Mengnan Lü, Xianchun Zhou, Zhiting Du, Yuze Chen, Binxin Tang. Instrumentation, 2024, Issue 3, pp. 74-85 (12 pages)
In recent years, deep convolutional neural networks have shown superior performance in image denoising. However, deep network structures often come with a large number of model parameters, leading to high training costs and long inference times, limiting their practical application in denoising tasks. This paper proposes a new dual convolutional denoising network with skip connections (DECDNet), which achieves an ideal balance between denoising effect and network complexity. The proposed DECDNet consists of a noise estimation network, a multi-scale feature extraction network, a dual convolutional neural network, and dual attention mechanisms. The noise estimation network is used to estimate the noise level map, and the multi-scale feature extraction network is combined to improve the model's flexibility in obtaining image features. The dual convolutional neural network branch design includes convolution and dilated convolution interactive connections, with the lower branch consisting of dilated convolution layers, and both branches using skip connections. Experiments show that compared with other models, the proposed DECDNet achieves superior PSNR and SSIM values at all compared noise levels, especially at higher noise levels, showing robustness to images with higher noise levels. It also demonstrates better visual effects, maintaining a balance between denoising and detail preservation.
Keywords: image denoising; convolutional neural network; skip connections; multi-scale feature extraction network; noise estimation network
Two Stages Segmentation Algorithm of Breast Tumor in DCE-MRI Based on Multi-Scale Feature and Boundary Attention Mechanism
17
Authors: Bing Li, Liangyu Wang, Xia Liu, Hongbin Fan, Bo Wang, Shoudi Tong. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1543-1561 (19 pages)
Nuclear magnetic resonance imaging of breasts often presents complex backgrounds. Breast tumors exhibit varying sizes, uneven intensity, and indistinct boundaries. These characteristics can lead to challenges such as low accuracy and incorrect segmentation during tumor segmentation. Thus, we propose a two-stage breast tumor segmentation method leveraging multi-scale features and boundary attention mechanisms. Initially, the breast region of interest is extracted to isolate the breast area from surrounding tissues and organs. Subsequently, we devise a fusion network incorporating multi-scale features and boundary attention mechanisms for breast tumor segmentation. We incorporate multi-scale parallel dilated convolution modules into the network, enhancing its capability to segment tumors of various sizes through multi-scale convolution and novel fusion techniques. Additionally, attention and boundary detection modules are included to augment the network's capacity to locate tumors by capturing nonlocal dependencies in both spatial and channel domains. Furthermore, a hybrid loss function with boundary weight is employed to address sample class imbalance issues and enhance the network's boundary maintenance capability through additional loss. The method was evaluated using breast data from 207 patients at Ruijin Hospital, resulting in a 6.64% increase in Dice similarity coefficient compared to the benchmark U-Net. Experimental results demonstrate the superiority of the method over other segmentation techniques, with fewer model parameters.
Keywords: dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI); breast tumor segmentation; multi-scale dilated convolution; boundary attention; hybrid loss function with boundary weight
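The hybrid loss above combines a region term with a boundary-weighted term so that errors near tumor contours cost more. The sketch below shows one way to build such a loss in PyTorch (soft Dice plus BCE with extra weight on a morphological boundary band); the weighting scheme and coefficients are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_loss(logits, target, boundary_gain=4.0, eps=1e-6):
    """Soft Dice + BCE where pixels on the mask boundary get extra weight.

    logits, target: (batch, 1, H, W); target is a binary tumor mask.
    """
    prob = torch.sigmoid(logits)

    # region term: soft Dice over the whole mask
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + eps) / (prob.sum((1, 2, 3)) + target.sum((1, 2, 3)) + eps)

    # boundary band: dilation minus erosion of the ground-truth mask
    dilated = F.max_pool2d(target, 3, stride=1, padding=1)
    eroded = -F.max_pool2d(-target, 3, stride=1, padding=1)
    boundary = dilated - eroded                       # 1 on the contour band

    weight = 1.0 + boundary_gain * boundary           # up-weight contour pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, weight=weight)
    return dice.mean() + bce

logits = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.7).float()
print(boundary_weighted_loss(logits, mask).item())
```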
A Lightweight Traffic Sign Detection Method Combining ECA and Ghost Convolution
18
Authors: Deng Xiaohui, Zhou Mina. 测绘与空间地理信息, 2025, Issue 7, pp. 136-139, 142 (5 pages)
For road traffic sign detection on embedded hardware platforms, a lightweight object detection method is proposed. Ghost convolution kernels serve as the feature extraction operators in the backbone, and an efficient channel attention (ECA) module is introduced to encourage the model to learn more positive-sample features. All output feature maps of the backbone are aggregated and then resampled into two feature maps for detection output, making maximum use of the limited features. Experimental results show that the improved model clearly outperforms comparable lightweight detection models in accuracy and, after compression, can be deployed on embedded hardware for real-time detection output, offering practical value in application scenarios such as intelligent traffic management and autonomous driving.
Keywords: traffic sign detection; lightweight model; Ghost convolution; attention mechanism
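Efficient channel attention (ECA), used above to highlight positive-sample features, re-weights channels with a 1D convolution over the globally pooled channel descriptor instead of a bottleneck MLP. The sketch below follows the widely used ECA formulation with a fixed kernel size of 3; its exact placement in the detector is an assumption.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Global average pool -> 1D conv across channels -> sigmoid gate."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                             # x: (batch, C, H, W)
        y = x.mean(dim=(2, 3))                        # (batch, C) descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)      # local cross-channel mixing
        gate = self.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * gate                               # re-weighted feature map

x = torch.randn(1, 64, 20, 20)
print(ECA()(x).shape)                                 # torch.Size([1, 64, 20, 20])
```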
Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network (Cited by 8)
19
Authors: Long Sun, Zhenbing Liu, Xiyan Sun, Licheng Liu, Rushi Lan, Xiaonan Luo. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 7, pp. 1271-1280 (10 pages)
The tradeoff between efficiency and model size of the convolutional neural network (CNN) is an essential issue for applying CNN-based algorithms to diverse real-world tasks. Although deep learning-based methods have achieved significant improvements in image super-resolution (SR), current CNN-based techniques mainly contain massive parameters and high computational complexity, limiting their practical applications. In this paper, we present a fast and lightweight framework, named weighted multi-scale residual network (WMRN), for a better tradeoff between SR performance and computational efficiency. With the modified residual structure, depthwise separable convolutions (DS Convs) are employed to improve the efficiency of convolutional operations. Furthermore, several weighted multi-scale residual blocks (WMRBs) are stacked to enhance the multi-scale representation capability. In the reconstruction subnetwork, a group of Conv layers is introduced to filter feature maps and reconstruct the final high-quality image. Extensive experiments were conducted to evaluate the proposed model, and comparative results with several state-of-the-art algorithms demonstrate the effectiveness of WMRN.
Keywords: convolutional neural network (CNN); lightweight framework; multi-scale; super-resolution
SAR Image Ship Target Detection Based on Ghost-YOLOv5s (Cited by 2)
20
Authors: Zhang Huimin, Huang Weijia, Li Feng. 火力与指挥控制 (CSCD, PKU Core), 2024, Issue 4, pp. 24-30 (7 pages)
For ship target detection in spaceborne synthetic aperture radar (SAR) images, a Ghost-convolution-based detection method, Ghost-YOLOv5s, is proposed to balance model size and detection accuracy. Ghost convolution is introduced into the neck of YOLOv5s to reduce model parameters and compress model size; an efficient channel attention (ECA) mechanism is embedded in the neck's C3 modules to highlight important features and maintain high detection performance; and the SIoU loss replaces the original CIoU loss to reduce the deviation between predicted and ground-truth boxes and improve detection accuracy. Experiments on the SSDD remote sensing dataset show that, compared with YOLOv5s, the improved model reduces the parameter count by 6.28% and the model size by 6.21% while reaching 98.21% detection accuracy, achieving a balance between model size and detection accuracy.
Keywords: synthetic aperture radar; deep learning; Ghost convolution; attention mechanism