Journal Articles
1,486 articles found
Occluded Gait Emotion Recognition Based on Multi-Scale Suppression Graph Convolutional Network
1
Authors: Yuxiang Zou, Ning He, Jiwu Sun, Xunrui Huang, Wenhua Wang 《Computers, Materials & Continua》 SCIE EI, 2025, No. 1, pp. 1255-1276 (22 pages)
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using K-Nearest Neighbors (KNN) interpolation. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for temporal occlusion of gait information. The SGCN extracts more non-prominent gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, 1835 of which are labeled with emotions such as "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, attaining performance comparable to other state-of-the-art methods. Furthermore, on the occlusion datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, and its accuracy is significantly higher than that of other methods.
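The JI Module described above fills in spatially occluded joints with KNN interpolation. Below is a minimal NumPy sketch of that idea, assuming a (frames × joints × 3) skeleton array with occluded joints marked as NaN; the function name, the choice of reference point, and k are illustrative, not the authors' implementation.

```python
import numpy as np

def knn_interpolate_joints(skeleton, k=3):
    """Fill NaN joints frame by frame using the k nearest visible joints.

    skeleton: array of shape (T, J, 3); occluded joints are NaN.
    Returns a copy with each occluded joint replaced by the mean of its
    k nearest (Euclidean) visible neighbours in the same frame.
    """
    filled = skeleton.copy()
    for t in range(skeleton.shape[0]):
        frame = skeleton[t]
        visible = ~np.isnan(frame).any(axis=1)
        missing = np.where(~visible)[0]
        if visible.sum() == 0:
            continue  # nothing to interpolate from in this frame
        vis_coords = frame[visible]
        for j in missing:
            # reference point: the same joint in the previous frame if known,
            # otherwise the centroid of the visible joints in this frame
            if t > 0 and not np.isnan(skeleton[t - 1, j]).any():
                ref = skeleton[t - 1, j]
            else:
                ref = vis_coords.mean(axis=0)
            dists = np.linalg.norm(vis_coords - ref, axis=1)
            nearest = np.argsort(dists)[:k]
            filled[t, j] = vis_coords[nearest].mean(axis=0)
    return filled

# toy usage: 2 frames, 4 joints, one occluded joint
skel = np.random.rand(2, 4, 3)
skel[1, 2] = np.nan
print(knn_interpolate_joints(skel)[1, 2])
```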
Keywords: KNN interpolation; multi-scale temporal convolution; suppression graph convolutional network; gait emotion recognition; human skeleton
MSSTGCN: Multi-Head Self-Attention and Spatial-Temporal Graph Convolutional Network for Multi-Scale Traffic Flow Prediction
2
Authors: Xinlu Zong, Fan Yu, Zhen Chen, Xue Xia 《Computers, Materials & Continua》 2025, No. 2, pp. 3517-3537 (21 pages)
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multi-scale traffic flow prediction is proposed. Firstly, to capture the hidden periodicity of traffic flow, the data are divided into three kinds of periods: hourly, daily, and weekly. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions, while local spatial-temporal dependence is captured by a T-GCN module. Thirdly, a transformer layer is introduced to learn long-term temporal dependence, and a position embedding mechanism is introduced to label position information for all traffic sequences. Thus, the multi-head self-attention mechanism can recognize the sequence order and allocate weights to different time nodes. Experimental results on four real-world datasets show that MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
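The abstract pairs a position embedding with multi-head self-attention so the model can recognize sequence order and weight different time steps. A minimal PyTorch sketch of that pairing, assuming a batch of traffic sequences shaped (batch, time, features); the class name and layer sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Learned position embedding followed by multi-head self-attention."""
    def __init__(self, d_model=64, n_heads=4, max_len=288):
        super().__init__()
        self.pos = nn.Embedding(max_len, d_model)              # position embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):                                      # x: (B, T, d_model)
        t = torch.arange(x.size(1), device=x.device)
        x = x + self.pos(t)                                     # inject order information
        out, weights = self.attn(x, x, x)                       # weights: attention over time steps
        return out, weights

x = torch.randn(8, 12, 64)                 # 8 sequences, 12 time steps
out, w = TemporalSelfAttention()(x)
print(out.shape, w.shape)                  # torch.Size([8, 12, 64]) torch.Size([8, 12, 12])
```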
Keywords: graph convolutional network; traffic flow prediction; multi-scale traffic flow; spatial-temporal model
Multi-Head Attention Enhanced Parallel Dilated Convolution and Residual Learning for Network Traffic Anomaly Detection
3
Authors: Guorong Qi, Jian Mao, Kai Huang, Zhengxian You, Jinliang Lin 《Computers, Materials & Continua》 2025, No. 2, pp. 2159-2176 (18 pages)
Abnormal network traffic, as a frequent security risk, requires a series of techniques to categorize and detect it. Existing network traffic anomaly detection still faces challenges: the inability to fully extract local and global features, the lack of effective mechanisms to capture complex interactions between features, and, when the receptive field is enlarged to obtain deeper feature representations, a reliance on increasing network depth that leads to a significant increase in computational resource consumption, affecting the efficiency and performance of detection. To address these issues, this paper first proposes a network traffic anomaly detection model based on parallel dilated convolution and residual learning (Res-PDC). To better explore the interactive relationships between features, the traffic samples are converted into two-dimensional matrices. A module combining parallel dilated convolutions and residual learning (res-pdc) was designed to extract local and global traffic features at different scales. By utilizing res-pdc modules with different dilation rates, we can effectively capture spatial features at different scales and explore feature dependencies spanning wider regions without increasing computational resources. Secondly, to focus on and integrate the information in different feature subspaces and further enhance the interactions among features, multi-head attention is added to Res-PDC, resulting in the final model: multi-head attention enhanced parallel dilated convolution and residual learning (MHA-Res-PDC) for network traffic anomaly detection. Finally, comparisons with other machine learning and deep learning algorithms are conducted on the NSL-KDD and CIC-IDS-2018 datasets. The experimental results demonstrate that the proposed method can effectively improve detection performance.
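A minimal sketch of a parallel-dilated-convolution residual block in the spirit of the res-pdc module described above, assuming the traffic records have already been reshaped into single-channel 2-D matrices; the class name, channel count, and dilation rates are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ParallelDilatedResBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates plus a residual path."""
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)  # 1x1 fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)  # features at several scales
        return self.act(x + self.fuse(multi_scale))                    # residual connection

x = torch.randn(4, 32, 16, 16)                  # e.g., traffic samples as 2-D feature maps
print(ParallelDilatedResBlock()(x).shape)       # torch.Size([4, 32, 16, 16])
```

Widening the receptive field through dilation rather than extra layers is exactly the trade-off the abstract emphasizes: the same 3×3 kernels see larger regions at no additional depth.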
Keywords: network traffic anomaly detection; multi-head attention; parallel dilated convolution; residual learning
Residual-enhanced graph convolutional networks with hypersphere mapping for anomaly detection in attributed networks
4
Authors: Wasim Khan, Afsaruddin Mohd, Mohammad Suaib, Mohammad Ishrat, Anwar Ahamed Shaikh, Syed Mohd Faisal 《Data Science and Management》 2025, No. 2, pp. 137-146 (10 pages)
In the burgeoning field of anomaly detection within attributed networks, traditional methodologies often encounter the intricacies of network complexity, particularly in capturing nonlinearity and sparsity. This study introduces an innovative approach that synergizes the strengths of graph convolutional networks with advanced deep residual learning and a unique residual-based attention mechanism, thereby creating a more nuanced and efficient method for anomaly detection in complex networks. The heart of our model lies in the integration of graph convolutional networks that capture complex structural relationships within the network data. This is further bolstered by deep residual learning, which is employed to model intricate nonlinear connections directly from input data. A pivotal innovation in our approach is the incorporation of a residual-based attention mechanism. This mechanism dynamically adjusts the importance of nodes based on their residual information, thereby significantly enhancing the sensitivity of the model to subtle anomalies. Furthermore, we introduce a novel hypersphere mapping technique in the latent space to distinctly separate normal and anomalous data. This mapping is the key to our model's ability to pinpoint anomalies with greater precision. An extensive experimental setup was used to validate the efficacy of the proposed model. Using attributed social network datasets, we demonstrate that our model not only competes with but also surpasses existing state-of-the-art methods in anomaly detection. The results show the exceptional capability of our model to handle the multifaceted nature of real-world networks.
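The hypersphere mapping places normal nodes near a centre in latent space so anomalies can be scored by their distance from it. Below is a minimal sketch of that scoring idea, in the style of one-class hypersphere objectives such as Deep SVDD; the encoder is a stand-in linear network, not the paper's graph convolutional model, and the function names are illustrative.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))  # stand-in for the GCN encoder

def hypersphere_center(z_normal):
    """Centre of the hypersphere = mean embedding of (assumed) normal nodes."""
    return z_normal.mean(dim=0)

def anomaly_score(z, center):
    """Squared distance from the centre; larger = more anomalous."""
    return ((z - center) ** 2).sum(dim=1)

x = torch.randn(100, 16)              # node attributes (toy data)
z = encoder(x)
c = hypersphere_center(z[:80])        # fit the centre on presumed-normal nodes
scores = anomaly_score(z, c)
print(scores.topk(5).indices)          # indices of the most anomalous nodes
```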
Keywords: anomaly detection; deep learning; hypersphere learning; residual modeling; graph convolution network; attention mechanism
A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
5
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya 《Computers, Materials & Continua》 2025, No. 2, pp. 3419-3441 (23 pages)
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contains critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represents an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results show that this DRes-CNN-based imputation method outperforms current state-of-the-art models, establishing DRes-CNN as a reliable solution for addressing missing data.
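The improvement percentages quoted above follow directly from the reported RMSE values, as a relative reduction in error; a quick check:

```python
# Relative RMSE reduction of DRes-CNN versus the two baselines quoted in the abstract
rmse_dres_cnn, rmse_ll_cnn, rmse_unet = 0.00006, 0.00075, 0.00073

improvement_ll   = (rmse_ll_cnn - rmse_dres_cnn) / rmse_ll_cnn   # ≈ 0.920 -> ~92% over LL-CNN
improvement_unet = (rmse_unet   - rmse_dres_cnn) / rmse_unet     # ≈ 0.918 -> ~91% over U-Net
print(f"{improvement_ll:.1%}, {improvement_unet:.1%}")
```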
Keywords: data imputation; missing data; deep learning; deep residual convolutional neural network
Magnetic Resonance Image Super-Resolution Based on GAN and Multi-Scale Residual Dense Attention Network
6
Authors: GUAN Chunling, YU Suping, XU Wujun, FAN Hong 《Journal of Donghua University (English Edition)》 2025, No. 4, pp. 435-441 (7 pages)
The application of image super-resolution (SR) has brought significant assistance to the medical field, aiding doctors in making more precise diagnoses. However, solely relying on a convolutional neural network (CNN) for image SR may lead to issues such as blurry details and excessive smoothness. To address these limitations, we proposed an algorithm based on the generative adversarial network (GAN) framework. In the generator network, three different sizes of convolutions connected by a residual dense structure were used to extract detailed features, and an attention mechanism combining channel and spatial information was applied to concentrate the computing power on crucial areas. In the discriminator network, InstanceNorm was used to normalize tensors, which sped up the training process while retaining feature information. The experimental results demonstrate that our algorithm achieves a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) than other methods, resulting in improved visual quality.
Keywords: magnetic resonance (MR); image super-resolution (SR); attention mechanism; generative adversarial network (GAN); multi-scale convolution
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification (Cited by 2)
7
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube 《Journal of Computer and Communications》 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
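A DDSC layer as described above pairs a depthwise convolution (with dilation, to widen the receptive field) with a pointwise 1×1 convolution. A minimal PyTorch sketch of such a layer; the class name, channel counts, and dilation rate are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depthwise 3x3 convolution with dilation, followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)   # mixes channels cheaply
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(2, 32, 56, 56)
print(DepthwiseDilatedSeparableConv(32, 64)(x).shape)   # torch.Size([2, 64, 56, 56])
```

Splitting the spatial filtering (depthwise) from the channel mixing (pointwise) is what keeps the parameter count low, while the dilation enlarges the context each filter sees without extra layers.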
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
M2ANet: Multi-branch and multi-scale attention network for medical image segmentation
8
Authors: Wei Xue, Chuanghui Chen, Xuan Qi, Jian Qin, Zhen Tang, Yongsheng He 《Chinese Physics B》 2025, No. 8, pp. 547-559 (13 pages)
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, because CNNs cannot effectively capture global information from images, they can easily lose contours and textures in segmentation results. Note that the transformer model can effectively capture long-range dependencies in an image, and combining the CNN and the transformer can effectively extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. Specifically, in the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which we adopt adaptive average pooling and position encoding to enhance contextual features, and then introduce multi-head attention to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly in the context of preserving contours and textures.
Keywords: medical image segmentation; convolutional neural network; multi-branch attention; multi-scale feature fusion
Influence of Welding Residual Stress on the Structural Behaviour of Large-Span Steel Tube Arch Rib
9
Authors: Chunling Yan, Renzhang Yan, Zhenxiu Zhan, Xiyang Chen, Yu Han 《Structural Durability & Health Monitoring》 2025, No. 4, pp. 1037-1056 (20 pages)
The steel tube arch rib in a large-span concrete-filled steel tube arch bridge has a large span and diameter, which also leads to a larger weld seam scale. Large-scale welding seams inevitably cause more pronounced welding residual stress (WRS). To study the influence of the WRS from large-scale weld seams on the mechanical properties of the steel tube arch rib during arch rib splicing, experimental research and numerical simulation of the WRS in arch rib splicing were conducted in this paper, based on the Guangxi Pingnan Third Bridge, the world's largest-span concrete-filled steel tube arch bridge, and the distribution pattern of the WRS at the arch rib splicing joint was obtained. Subsequently, the WRS was introduced into the mechanical performance analysis of the joints and the structure to analyze its effects. The findings reveal that the distribution of the WRS in the arch rib is strongly influenced by the rib plate: the axial WRS in the heat-affected zone is primarily tensile, while the circumferential WRS alternates between tensile and compressive stresses along the circumferential direction of the main tube. Under the influence of the WRS, the ultimate bearing capacity of the joint is reduced by 29.4%, the initial axial stiffness is reduced by 4.32%, and the vertical deformation of the arch rib structure is increased by 4.7%.
Keywords: large-span steel tube arch; large-scale welding; welding residual stress; multi-scale models; splicing joint
CT-MFENet: Context Transformer and Multi-Scale Feature Extraction Network via Global-Local Features Fusion for Retinal Vessels Segmentation
10
Authors: SHAO Dangguo, YANG Yuanbiao, MA Lei, YI Sanli 《Journal of Shanghai Jiaotong University (Science)》 2025, No. 4, pp. 668-682 (15 pages)
Segmentation of the retinal vessels in the fundus is crucial for diagnosing ocular diseases. Retinal vessel images often suffer from category imbalance and large scale variations, which ultimately result in incomplete vessel segmentation and poor continuity. In this study, we propose CT-MFENet to address these issues. First, the context transformer (CT) integrates contextual feature information, which helps establish connections between pixels and resolves the problem of incomplete vessel continuity. Second, multi-scale dense residual networks are used instead of a traditional CNN to address inadequate local feature extraction when the model encounters vessels at multiple scales. In the decoding stage, we introduce a local-global fusion module, which enhances the localization of vascular information and reduces the semantic gap between high- and low-level features. To address the class imbalance in retinal images, we propose a hybrid loss function that enhances the model's ability to segment topological structures. We conducted experiments on the publicly available DRIVE, CHASEDB1, STARE, and IOSTAR datasets. The experimental results show that our CT-MFENet performs better than most existing methods, including the baseline U-Net.
Keywords: retinal vessel segmentation; context transformer (CT); multi-scale dense residual; hybrid loss function; global-local fusion
3D Enhanced Residual CNN for Video Super-Resolution Network
11
Authors: Weiqiang Xin, Zheng Wang, Xi Chen, Yufeng Tang, Bing Li, Chunwei Tian 《Computers, Materials & Continua》 2025, No. 11, pp. 2837-2849 (13 pages)
Deep convolutional neural networks (CNNs) have demonstrated remarkable performance in video super-resolution (VSR). However, the ability of most existing methods to recover fine details in complex scenes is often hindered by the loss of shallow texture information during feature extraction. To address this limitation, we propose a 3D Convolutional Enhanced Residual Video Super-Resolution Network (3D-ERVSNet). This network employs a forward and backward bidirectional propagation module (FBBPM) that aligns features across frames using explicit optical flow from a lightweight SPyNet. By incorporating an enhanced residual structure (ERS) with skip connections, shallow and deep features are effectively integrated, enhancing texture restoration. Furthermore, a 3D convolution module (3DCM) is applied after the backward propagation module to implicitly capture spatio-temporal dependencies. The architecture synergizes these components: the FBBPM extracts aligned features, the ERS fuses hierarchical representations, and the 3DCM refines temporal coherence. Finally, a deep feature aggregation module (DFAM) fuses the processed features, and a pixel-upsampling module (PUM) reconstructs the high-resolution (HR) video frames. Comprehensive evaluations on the REDS, Vid4, UDM10, and Vim4 benchmarks demonstrate strong performance, including 30.95 dB PSNR/0.8822 SSIM on REDS and 32.78 dB/0.8987 on Vim4. 3D-ERVSNet achieves significant gains over baselines while maintaining high efficiency, with only 6.3M parameters and a 77 ms/frame runtime (i.e., 20× faster than RBPN). The network's effectiveness stems from its task-specific asymmetric design that balances explicit alignment and implicit fusion.
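The 3DCM applies 3-D convolutions over stacked frames to capture spatio-temporal dependencies implicitly. A minimal sketch of such a module operating on a (batch, channels, frames, height, width) tensor; the class name, layer sizes, and the residual wrapping are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatioTemporal3DConv(nn.Module):
    """Two 3-D convolutions over (channels, frames, H, W) with a residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):                 # x: (B, C, T, H, W)
        return x + self.body(x)           # residual refinement of the aggregated frame features

x = torch.randn(1, 64, 5, 32, 32)          # 5 consecutive low-resolution frame features
print(SpatioTemporal3DConv()(x).shape)      # torch.Size([1, 64, 5, 32, 32])
```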
Keywords: video super-resolution; 3D convolution; enhanced residual CNN; spatio-temporal feature extraction
Deep Multi-Scale and Attention-Based Architectures for Semantic Segmentation in Biomedical Imaging
12
Authors: Majid Harouni, Vishakha Goyal, Gabrielle Feldman, Sam Michael, Ty C. Voss 《Computers, Materials & Continua》 2025, No. 10, pp. 331-366 (36 pages)
Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational efficiency. Key architectural components such as convolution operations, shallow and deep blocks, skip connections, and hybrid encoders are examined for their roles in enhancing spatial representation and semantic consistency. We further discuss the importance of hierarchical and instance-aware segmentation and annotation in interpreting complex biological scenes and multiplexed medical images. By bridging methodological developments with diverse application domains, this paper outlines current trends and future directions for semantic segmentation, emphasizing its critical role in facilitating annotation, diagnosis, and discovery in biomedical research.
Keywords: biomedical semantic segmentation; multi-scale feature fusion; fine- and coarse-scale features; convolution operations; shallow and deep blocks; skip connections
Deep residual systolic network for massive MIMO channel estimation by joint training strategies of mixed-SNR and mixed-scenarios
13
Authors: SUN Meng, JING Qingfeng, ZHONG Weizhi 《Journal of Systems Engineering and Electronics》 2025, No. 4, pp. 903-913 (11 pages)
The fifth-generation (5G) communication requires a highly accurate estimation of the channel state information (CSI) to take advantage of the massive multiple-input multiple-output (MIMO) system. However, traditional channel estimation methods do not always yield reliable estimates. This paper addresses the problem with a method based on a deep residual shrinkage network (DRSN). First, the channel estimation approach, which exploits the DRSN's ability to learn from noise-containing data, is introduced. Then, the DRSN is trained to denoise the result of least-squares (LS) channel estimation on the pilot subcarriers, where the initially estimated subcarrier channel matrix is treated as a three-dimensional tensor forming the DRSN input. Afterward, a mixed signal-to-noise ratio (SNR) training-data strategy is proposed, based on the learning ability of the DRSN under different SNRs. Moreover, a joint mixed-scenario training strategy is carried out to test the multi-scenario robustness of the DRSN. Numerical results indicate that the DRSN method outperforms spatial-frequency-temporal convolutional neural networks (SF-CNN) with similar computational complexity and, with a limited dataset, outperforms the minimum mean squared error (MMSE) estimator over the full SNR range. Moreover, the DRSN approach is robust across different propagation environments.
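The pipeline starts from a least-squares estimate at the pilot subcarriers, which is then handed to the DRSN as a real-valued tensor. A minimal NumPy sketch of that front end under assumed pilot layout and antenna/pilot dimensions (the function names and shapes are illustrative, and the DRSN itself is not reproduced here):

```python
import numpy as np

def ls_pilot_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at pilot subcarriers: H_LS = Y / X (element-wise)."""
    return y_pilot / x_pilot

def to_drsn_tensor(h_ls):
    """Stack real and imaginary parts as channels -> (2, n_rx, n_pilots) input tensor."""
    return np.stack([h_ls.real, h_ls.imag], axis=0)

n_rx, n_pilots = 8, 64                                   # toy antenna/pilot dimensions
x_pilot = np.exp(1j * np.pi / 4) * np.ones(n_pilots)     # known pilot symbols
h_true = (np.random.randn(n_rx, n_pilots) + 1j * np.random.randn(n_rx, n_pilots)) / np.sqrt(2)
noise = 0.1 * (np.random.randn(n_rx, n_pilots) + 1j * np.random.randn(n_rx, n_pilots))
y_pilot = h_true * x_pilot + noise                       # received pilot observations

h_ls = ls_pilot_estimate(y_pilot, x_pilot)               # noisy LS estimate the DRSN would denoise
print(to_drsn_tensor(h_ls).shape)                        # (2, 8, 64)
```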
Keywords: massive multiple-input multiple-output (MIMO); channel estimation; deep residual shrinkage network (DRSN); deep convolutional neural network (CNN)
A multi-scale convolutional auto-encoder and its application in fault diagnosis of rolling bearings (Cited by 12)
14
Authors: Ding Yunhao, Jia Minping 《Journal of Southeast University (English Edition)》 EI CAS, 2019, No. 4, pp. 417-423 (7 pages)
Aiming at the difficulty of fault identification caused by manual extraction of fault features of rotating machinery, a one-dimensional multi-scale convolutional auto-encoder fault diagnosis model is proposed based on the standard convolutional auto-encoder. In this model, parallel convolutional and deconvolutional kernels of different scales are used to extract features from the input signal and reconstruct the input signal; the feature map extracted by the multi-scale convolutional kernels is then used as the input of the classifier; and finally the parameters of the whole model are fine-tuned using labeled data. Experiments on one set of simulated fault data and two sets of rolling bearing fault data are conducted to validate the proposed method. The results show that the model achieves 99.75%, 99.3%, and 100% diagnostic accuracy, respectively. In addition, the diagnostic accuracy and reconstruction error of the one-dimensional multi-scale convolutional auto-encoder are compared with those of traditional machine learning, convolutional neural networks, and a traditional convolutional auto-encoder. The final results show that the proposed model achieves better recognition of rolling bearing fault data.
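A one-dimensional multi-scale convolutional auto-encoder of the kind described above uses parallel kernels of different lengths for encoding and mirrored deconvolutions for reconstruction. A minimal PyTorch sketch; the class name, kernel sizes, and channel counts are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class MultiScaleConvAE(nn.Module):
    """Parallel 1-D convolutions of different kernel sizes + mirrored deconvolution decoder."""
    def __init__(self, kernel_sizes=(3, 7, 15), channels=8):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Conv1d(1, channels, k, padding=k // 2) for k in kernel_sizes
        ])
        self.decoders = nn.ModuleList([
            nn.ConvTranspose1d(channels, 1, k, padding=k // 2) for k in kernel_sizes
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                            # x: (B, 1, L) vibration signal
        features = [self.act(enc(x)) for enc in self.encoders]
        recon = sum(dec(f) for f, dec in zip(features, self.decoders)) / len(features)
        return torch.cat(features, dim=1), recon     # features feed a classifier; recon for the AE loss

x = torch.randn(4, 1, 1024)                          # toy vibration segments
feats, recon = MultiScaleConvAE()(x)
print(feats.shape, recon.shape)                      # torch.Size([4, 24, 1024]) torch.Size([4, 1, 1024])
```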
Keywords: fault diagnosis; deep learning; convolutional auto-encoder; multi-scale convolutional kernel; feature extraction
Land cover classification from remote sensing images based on multi-scale fully convolutional network (Cited by 17)
15
Authors: Rui Li, Shunyi Zheng, Chenxi Duan, Libo Wang, Ce Zhang 《Geo-Spatial Information Science》 SCIE EI CSCD, 2022, No. 2, pp. 278-294 (17 pages)
Although the Convolutional Neural Network (CNN) has shown great potential for land cover classification, the frequently used single-scale convolution kernel limits the scope of information extraction. Therefore, we propose a Multi-Scale Fully Convolutional Network (MSFCN) with a multi-scale convolutional kernel as well as a Channel Attention Block (CAB) and a Global Pooling Module (GPM) in this paper to exploit discriminative representations from two-dimensional (2D) satellite images. Meanwhile, to explore the ability of the proposed MSFCN for spatio-temporal images, we expand our MSFCN to three dimensions using three-dimensional (3D) CNN, capable of harnessing each land cover category's time-series interactions from the reshaped spatio-temporal remote sensing images. To verify the effectiveness of the proposed MSFCN, we conduct experiments on two spatial datasets and two spatio-temporal datasets. The proposed MSFCN achieves 60.366% on the WHDLD dataset and 75.127% on the GID dataset in terms of the mIoU index, while the figures for the two spatio-temporal datasets are 87.753% and 77.156%. Extensive comparative experiments and ablation studies demonstrate the effectiveness of the proposed MSFCN.
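The mIoU figures quoted above average per-class intersection-over-union across land cover categories. A minimal NumPy sketch of how that index is computed from a confusion matrix (the class count and random labels here are purely illustrative):

```python
import numpy as np

def mean_iou(conf):
    """mIoU from a confusion matrix conf[i, j] = pixels of true class i predicted as j."""
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=1) + conf.sum(axis=0) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)   # skip absent classes
    return np.nanmean(iou)

pred = np.random.randint(0, 6, size=10_000)    # toy predictions for 6 land cover classes
gt = np.random.randint(0, 6, size=10_000)      # toy ground truth
conf = np.zeros((6, 6), dtype=np.int64)
np.add.at(conf, (gt, pred), 1)                 # accumulate the confusion matrix
print(f"mIoU = {mean_iou(conf):.3f}")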
Keywords: spatio-temporal remote sensing images; multi-scale fully convolutional network; land cover classification
Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network (Cited by 8)
16
Authors: Long Sun, Zhenbing Liu, Xiyan Sun, Licheng Liu, Rushi Lan, Xiaonan Luo 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2021, No. 7, pp. 1271-1280 (10 pages)
The tradeoff between efficiency and model size of the convolutional neural network (CNN) is an essential issue for applications of CNN-based algorithms to diverse real-world tasks. Although deep learning-based methods have achieved significant improvements in image super-resolution (SR), current CNN-based techniques mainly contain massive parameters and have high computational complexity, limiting their practical applications. In this paper, we present a fast and lightweight framework, named the weighted multi-scale residual network (WMRN), for a better tradeoff between SR performance and computational efficiency. With the modified residual structure, depthwise separable convolutions (DS Convs) are employed to improve the efficiency of convolutional operations. Furthermore, several weighted multi-scale residual blocks (WMRBs) are stacked to enhance the multi-scale representation capability. In the reconstruction subnetwork, a group of Conv layers is introduced to filter feature maps to reconstruct the final high-quality image. Extensive experiments were conducted to evaluate the proposed model, and the comparative results with several state-of-the-art algorithms demonstrate the effectiveness of WMRN.
Keywords: convolutional neural network (CNN); lightweight framework; multi-scale; super-resolution
Multi-Scale Convolutional Gated Recurrent Unit Networks for Tool Wear Prediction in Smart Manufacturing (Cited by 3)
17
Authors: Weixin Xu, Huihui Miao, Zhibin Zhao, Jinxin Liu, Chuang Sun, Ruqiang Yan 《Chinese Journal of Mechanical Engineering》 SCIE EI CAS CSCD, 2021, No. 3, pp. 130-145 (16 pages)
As an integrated application of modern information technologies and artificial intelligence, Prognostics and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the symbolic applications of PHM technology in modern manufacturing systems and industry. In this paper, a multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to address raw sensory data for tool wear prediction. At the bottom of the MCGRU, six parallel and independent branches with different kernel sizes are designed to form a multi-scale convolutional neural network, which augments the adaptability to features of different time scales. These features of different scales extracted from raw data are then fed into a Deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of the MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.
Keywords: tool wear prediction; multi-scale convolutional neural networks; gated recurrent unit
Microphone Array-Based Sound Source Localization Using Convolutional Residual Network (Cited by 1)
18
Authors: Ziyi Wang, Xiaoyan Zhao, Hongjun Rong, Ying Tong, Jingang Shi 《Journal of New Media》 2022, No. 3, pp. 145-153 (9 pages)
Microphone array-based sound source localization (SSL) is widely used in a variety of applications such as video conferencing, robot hearing, speech enhancement, and speech recognition. Traditional SSL methods cannot achieve satisfactory performance in adverse noisy and reverberant environments. To improve localization performance, a novel SSL algorithm using a convolutional residual network (CRN) is proposed in this paper. Spatial features, including the time differences of arrival (TDOAs) between microphone pairs and the steered response power-phase transform (SRP-PHAT) spatial spectrum, are extracted in each Gammatone sub-band. The spatial features of the different sub-bands within a frame are combined into a feature matrix as the input of the CRN, and the proposed algorithm employs the CRN to fuse these spatial features. Since the CRN introduces a residual structure on top of the convolutional network, it reduces the difficulty of the training procedure and accelerates the convergence of the model. A CRN model is learned from training data covering various reverberation and noise environments to establish the mapping between the input features and the sound azimuth. Simulations show that, compared with methods using traditional deep neural networks, the proposed algorithm achieves better localization performance in the SSL task and generalizes better to untrained noise and reverberation conditions.
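One of the spatial features listed above is the TDOA between a microphone pair, commonly estimated with a PHAT-weighted generalized cross-correlation. A minimal NumPy sketch of GCC-PHAT for one microphone pair, computed full-band here rather than per Gammatone sub-band purely for illustration; the function name and toy signals are not from the paper.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs, max_tau=None):
    """Estimate the delay of sig relative to ref using PHAT-weighted cross-correlation."""
    n = 2 * max(len(sig), len(ref))
    X = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))   # cross-spectrum
    X /= np.abs(X) + 1e-12                                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(X, n)
    max_shift = n // 2 if max_tau is None else int(max_tau * fs)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs           # delay in seconds

fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(800)                  # broadband reference signal
sig = np.roll(ref, 8)                           # simulate an 8-sample (0.5 ms) delay
print(f"estimated TDOA = {gcc_phat_tdoa(sig, ref, fs) * 1000:.3f} ms")
```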
Keywords: convolutional residual network; microphone array; spatial features; sound source localization
Residual Convolution Long Short-Term Memory Network for Machines Remaining Useful Life Prediction and Uncertainty Quantification (Cited by 1)
19
Authors: Wenting Wang, Yaguo Lei, Tao Yan, Naipeng Li, Asoke K. Nandi 《Journal of Dynamics, Monitoring and Diagnostics》 2022, No. 1, pp. 2-8 (7 pages)
Recently, deep learning (DL) has been widely used in the field of remaining useful life (RUL) prediction. Among various DL technologies, the recurrent neural network (RNN) and its variants, e.g., the long short-term memory (LSTM) network, have gained extensive attention for their ability to capture temporal dependence. Although existing RNN-based methods have demonstrated their effectiveness for RUL prediction, they still suffer from two limitations: 1) it is difficult for the RNN to directly extract degradation features from original monitoring data, and 2) most RNN-based prognostics methods are unable to quantify RUL uncertainty. To address these limitations, this paper proposes a new prognostics method named the residual convolution LSTM (RC-LSTM) network. In the RC-LSTM, a new ResNet-based convolution LSTM (Res-ConvLSTM) layer is stacked with a convolution LSTM (ConvLSTM) layer to extract degradation representations from monitoring data. Then, under the assumption that the RUL follows a normal distribution, an appropriate output layer is constructed to quantify the uncertainty of the prediction results. Finally, the effectiveness and superiority of the RC-LSTM are verified using monitoring data from accelerated bearing degradation tests.
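The abstract assumes the RUL follows a normal distribution and builds an output layer that quantifies uncertainty. A minimal sketch of one way such a head can look, predicting a mean and a variance and training with the Gaussian negative log-likelihood; the class name and feature extractor are stand-ins, not the RC-LSTM itself.

```python
import torch
import torch.nn as nn

class GaussianRULHead(nn.Module):
    """Predict mean and variance of the RUL so the prediction carries uncertainty."""
    def __init__(self, in_features=64):
        super().__init__()
        self.mean = nn.Linear(in_features, 1)
        self.log_var = nn.Linear(in_features, 1)    # log-variance keeps the variance positive

    def forward(self, h):
        return self.mean(h), self.log_var(h).exp()

features = torch.randn(16, 64)                      # stand-in for features from the recurrent backbone
target_rul = torch.rand(16, 1) * 100                # toy RUL targets
mu, var = GaussianRULHead()(features)
loss = nn.GaussianNLLLoss()(mu, target_rul, var)    # negative log-likelihood of N(mu, var)
loss.backward()
print(float(loss))
```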
Keywords: deep learning; residual convolution LSTM network; remaining useful life prediction; uncertainty quantification
Pedestrian attribute classification with multi-scale and multi-label convolutional neural networks
20
Authors: Zhu Jianqing, Zeng Huanqiang, Zhang Yuzhao, Zheng Lixin, Cai Canhui 《High Technology Letters》 EI CAS, 2018, No. 1, pp. 53-61 (9 pages)
Pedestrian attribute classification from pedestrian images captured in surveillance scenarios is challenging due to diverse clothing appearances, varied poses, and different camera views. A multi-scale and multi-label convolutional neural network (MSMLCNN) is proposed to predict multiple pedestrian attributes simultaneously. The pedestrian attribute classification problem is first transformed into a multi-label problem comprising multiple binary attributes to be classified. Then, the multi-label problem is solved by fully connecting all binary attributes to multi-scale features through logistic regression functions. Moreover, the multi-scale features are obtained by concatenating the feature maps produced by multiple pooling layers of the MSMLCNN at different scales. Extensive experimental results show that the proposed MSMLCNN outperforms state-of-the-art pedestrian attribute classification methods by a large margin.
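The abstract turns attribute prediction into a multi-label problem by attaching a logistic-regression output for every binary attribute to the concatenated multi-scale features. A minimal sketch of such a head; the class name, feature dimension, and attribute count are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiLabelAttributeHead(nn.Module):
    """One sigmoid (logistic regression) output per binary pedestrian attribute."""
    def __init__(self, feature_dim=512, num_attributes=35):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_attributes)

    def forward(self, multi_scale_features):
        return torch.sigmoid(self.fc(multi_scale_features))   # independent attribute probabilities

features = torch.randn(8, 512)                   # concatenated multi-scale pooled features (toy)
labels = torch.randint(0, 2, (8, 35)).float()    # binary attribute annotations
probs = MultiLabelAttributeHead()(features)
loss = nn.BCELoss()(probs, labels)               # one binary cross-entropy term per attribute
print(probs.shape, float(loss))
```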
Keywords: pedestrian attribute classification; multi-scale features; multi-label classification; convolutional neural network (CNN)