Journal Articles
8,050 articles found
1. Self-FAGCFN: Graph-Convolution Fusion Network Based on Feature Fusion and Self-Supervised Feature Alignment for Pneumonia and Tuberculosis Diagnosis
Authors: Junding Sun, Wenhao Tang, Lei Zhao, Chaosheng Tang, Xiaosheng Wu, Zhaozhao Xu, Bin Pu, Yudong Zhang. Journal of Bionic Engineering, 2025, Issue 4, pp. 2012-2029 (18 pages)
Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources. Recently, Deep Learning (DL) has been widely used in pulmonary disease diagnosis, such as pneumonia and tuberculosis. However, traditional feature fusion methods often suffer from feature disparity, information loss, redundancy, and increased complexity, hindering the further extension of DL algorithms. To solve this problem, we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment (Self-FAGCFN) to address the limitations of traditional feature fusion methods in deep learning-based medical image classification for respiratory diseases such as pneumonia and tuberculosis. The network integrates Convolutional Neural Networks (CNNs) for robust feature extraction from two-dimensional grid structures and Graph Convolutional Networks (GCNs) within a Graph Neural Network branch to capture features based on graph structure, focusing on significant node representations. Additionally, an Attention-Embedding Ensemble Block is included to capture critical features from GCN outputs. To ensure effective feature alignment between pre- and post-fusion stages, we introduce a feature alignment loss that minimizes disparities. Moreover, to address the limitations of the proposed methods, such as inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset, we develop a Feature-Centroid Fusion (FCF) strategy and a Multi-Level Feature-Centroid Update (MLFCU) algorithm, respectively. Extensive experiments on the public datasets LungVision and Chest-Xray demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis, highlighting its potential for practical medical applications.
Keywords: Feature fusion; Self-supervised feature alignment; Convolutional neural networks; Graph convolutional networks; Class imbalance; Feature-centroid fusion
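To make the general idea of this abstract concrete, a minimal PyTorch sketch of a CNN branch plus a graph-convolution branch fused by concatenation, with an L2 "alignment" loss between pre- and post-fusion features, is shown below. All module names, dimensions, and the exact form of the loss are illustrative assumptions, not the Self-FAGCFN implementation.

```python
# Minimal sketch (not the authors' code): a CNN branch and a graph-convolution
# branch whose features are fused by concatenation, plus an L2 alignment loss
# that penalizes disparity between pre-fusion and post-fusion representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphConv(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):   # h: (B, N, in_dim), a_hat: (N, N) normalized adjacency
        return F.relu(self.lin(a_hat @ h))

class FusionNet(nn.Module):
    def __init__(self, num_classes=3, node_dim=32, num_nodes=16):
        super().__init__()
        self.cnn = nn.Sequential(                     # grid-structure branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, 128))
        self.gcn = SimpleGraphConv(node_dim, 128)     # graph-structure branch
        self.fuse = nn.Linear(128 + 128, 128)
        self.head = nn.Linear(128, num_classes)

    def forward(self, img, nodes, a_hat):
        f_cnn = self.cnn(img)                         # (B, 128)
        f_gcn = self.gcn(nodes, a_hat).mean(dim=1)    # (B, 128) pooled over nodes
        fused = F.relu(self.fuse(torch.cat([f_cnn, f_gcn], dim=1)))
        # alignment loss: keep the fused feature close to both pre-fusion features
        align = F.mse_loss(fused, f_cnn) + F.mse_loss(fused, f_gcn)
        return self.head(fused), align

# toy forward pass with random data
net = FusionNet()
img, nodes, a_hat = torch.randn(2, 1, 64, 64), torch.randn(2, 16, 32), torch.eye(16)
logits, align_loss = net(img, nodes, a_hat)
total_loss = F.cross_entropy(logits, torch.tensor([0, 1])) + 0.1 * align_loss
```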
2. Multi-relation spatiotemporal graph residual network model with multi-level feature attention: A novel approach for landslide displacement prediction
Authors: Ziqian Wang, Xiangwei Fang, Wengang Zhang, Xuanming Ding, Luqi Wang, Chao Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 7, pp. 4211-4226 (16 pages)
Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves the prediction performance of landslide displacement through spatiotemporal fusion. The model integrates internal seepage factors as data feature enhancements with external triggering factors, allowing for accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities in landslides. When applying this model to predict the displacement of the Liangshuijing landslide, we demonstrate that the MFA-MRSTGRN model surpasses traditional models, such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models, in terms of various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
Keywords: Landslide displacement prediction; Spatiotemporal fusion; Dynamic graph; Data feature enhancement; Multi-level feature attention
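For reference, the four evaluation metrics quoted above (MAE, RMSE, MAPE, R²) are typically computed as in the short numpy sketch below; the displacement values are made up and are not from the paper.

```python
# Sketch of the four regression metrics quoted above, on made-up displacement values (mm).
import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err) / np.abs(y_true))          # assumes no zero targets
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mape, r2

y_true = np.array([102.3, 105.1, 109.8, 114.2, 120.5])    # observed displacement
y_pred = np.array([101.0, 106.0, 108.9, 115.6, 119.2])    # model prediction
print(regression_metrics(y_true, y_pred))
```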
3. DMF: A Deep Multimodal Fusion-Based Network Traffic Classification Model
Authors: Xiangbin Wang, Qingjun Yuan, Weina Niu, Qianwei Meng, Yongjuan Wang, Chunxiang Gu. Computers, Materials & Continua, 2025, Issue 5, pp. 2267-2285 (19 pages)
With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves classification accuracies of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Keywords: Deep fusion; intrusion detection; multimodal learning; network traffic classification
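The abstract contrasts late fusion with fusing modalities at every feature-extraction stage. A minimal PyTorch sketch of that stage-wise idea for two 1-D "modality" streams follows; the layer sizes, the 1x1-convolution fusers, and the residual feedback are illustrative assumptions rather than the DMF architecture.

```python
# Illustrative sketch of stage-wise (rather than late) fusion of two modality streams:
# after every convolution stage, the two branches exchange information through a
# shared fusion layer. Dimensions and names are made up.
import torch
import torch.nn as nn

class StageFusion(nn.Module):
    def __init__(self, channels=(16, 32, 64), num_classes=10):
        super().__init__()
        self.stages_a, self.stages_b, self.fusers = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_c = 1
        for c in channels:
            self.stages_a.append(nn.Sequential(nn.Conv1d(in_c, c, 3, padding=1), nn.ReLU()))
            self.stages_b.append(nn.Sequential(nn.Conv1d(in_c, c, 3, padding=1), nn.ReLU()))
            self.fusers.append(nn.Conv1d(2 * c, c, 1))   # 1x1 conv mixes the two branches
            in_c = c
        self.head = nn.Linear(channels[-1], num_classes)

    def forward(self, xa, xb):
        for sa, sb, fuse in zip(self.stages_a, self.stages_b, self.fusers):
            xa, xb = sa(xa), sb(xb)
            mixed = fuse(torch.cat([xa, xb], dim=1))     # fuse at this stage
            xa, xb = xa + mixed, xb + mixed              # feed fused context back to both branches
        return self.head(mixed.mean(dim=-1))

model = StageFusion()
logits = model(torch.randn(4, 1, 128), torch.randn(4, 1, 128))   # e.g. payload bytes vs. packet lengths
```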
4. Enhanced Multi-Object Dwarf Mongoose Algorithm for Optimization Stochastic Data Fusion Wireless Sensor Network Deployment
Authors: Shumin Li, Qifang Luo, Yongquan Zhou. Computer Modeling in Engineering & Sciences, 2025, Issue 2, pp. 1955-1994 (40 pages)
Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment problems uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have now modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely applied in actual sensor networks for information collection. The deployment problem of SDFWSN is modeled as a multi-objective optimization problem. The network life cycle, spatiotemporal coverage, detection rate, and false alarm rate of SDFWSN are used as optimization objectives to optimize the deployment of network nodes. This paper proposes an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) to solve the deployment problem of SDFWSN. First, to overcome the shortcomings of the DMOA algorithm, such as its slow convergence and tendency to get stuck in local optima, an encircling and hunting strategy is introduced into the original algorithm to produce the EDMOA algorithm. The EDMOA algorithm is then extended to the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify the effectiveness of the proposed algorithm, EMODMOA was tested on the CEC 2020 benchmark and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and the optimization results of the objective functions show that the proposed algorithm outperforms the other algorithms in the SDFWSN deployment results. To further demonstrate the superiority of the algorithm, simulations of diverse test cases were also performed, with good results.
Keywords: Stochastic data fusion wireless sensor networks; network deployment; spatiotemporal coverage; dwarf mongoose optimization algorithm; multi-objective optimization
5. Predictions of complete fusion cross-sections of ⁶,⁷Li, ⁹Be, and ¹⁰B using a Bayesian neural network method
Authors: Kai-Xuan Cheng, Rong-Xing He, Chun-Yuan Qiao, Chun-Wang Ma. Nuclear Science and Techniques, 2025, Issue 10, pp. 169-175 (7 pages)
A machine learning approach based on Bayesian neural networks was developed to predict the complete fusion cross-sections of weakly bound nuclei. The method was trained and validated using 475 experimental data points from 39 reaction systems induced by ⁶,⁷Li, ⁹Be, and ¹⁰B. The constructed Bayesian neural network demonstrated a high degree of accuracy in evaluating complete fusion cross-sections. By comparing the predicted cross-sections with those obtained from a single-barrier penetration model, the suppression effect of ⁶,⁷Li and ⁹Be with a stable nucleus was systematically analyzed. In the cases of ⁶Li and ⁷Li, less suppression was predicted for relatively light-mass targets than for heavy-mass targets, and a notably distinct dependence relationship was identified, suggesting that the predominant breakup mechanisms might change in different target mass regions. In addition, minimum suppression factors were predicted to occur near target nuclei with closed neutron shells.
Keywords: Fusion reaction; Weakly bound nuclei; Machine learning; Bayesian neural network
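The abstract does not specify the network's construction, so the sketch below shows only the generic ingredient it names: a mean-field Bayesian neural network for regression, written in PyTorch with learned weight means and log-variances, the reparameterization trick, and a KL penalty toward a standard-normal prior. The toy inputs and the KL weight are arbitrary assumptions.

```python
# Minimal mean-field Bayesian neural network sketch (not the authors' model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_f, in_f))
        self.w_logvar = nn.Parameter(torch.full((out_f, in_f), -5.0))
        self.b = nn.Parameter(torch.zeros(out_f))

    def forward(self, x):
        std = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + std * torch.randn_like(std)       # reparameterization trick
        return F.linear(x, w, self.b)

    def kl(self):                                         # KL(q || N(0, 1)) summed over weights
        return 0.5 * torch.sum(self.w_mu**2 + self.w_logvar.exp() - self.w_logvar - 1.0)

class BNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1, self.l2 = BayesLinear(2, 32), BayesLinear(32, 1)
    def forward(self, x):
        return self.l2(torch.tanh(self.l1(x)))
    def kl(self):
        return self.l1.kl() + self.l2.kl()

# toy regression data (stand-ins for reaction-system inputs and cross-sections)
x = torch.rand(64, 2)
y = (x[:, :1] * x[:, 1:]).sqrt() + 0.05 * torch.randn(64, 1)
model = BNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(model(x), y) + 1e-4 * model.kl()
    loss.backward()
    opt.step()
# predictive mean and uncertainty from repeated stochastic forward passes
samples = torch.stack([model(x) for _ in range(50)])
pred_mean, pred_std = samples.mean(0), samples.std(0)
```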
6. MMIF: Multimodal Medical Image Fusion Network Based on Multi-Scale Hybrid Attention
Authors: Jianjun Liu, Yang Li, Xiaoting Sun, Xiaohui Wang, Hanjiang Luo. Computers, Materials & Continua, 2025, Issue 11, pp. 3551-3568 (18 pages)
Multimodal image fusion plays an important role in image analysis and applications. Multimodal medical image fusion helps to combine contrast features from two or more input imaging modalities to represent the fused information in a single image. One of the critical clinical applications of medical image fusion is to fuse anatomical and functional modalities for rapid diagnosis of malignant tissues. This paper proposes a multimodal medical image fusion network (MMIF-Net) based on multiscale hybrid attention. The method first decomposes the original image to obtain the low-rank and significant parts. Then, to utilize features at different scales, we add a multiscale mechanism that uses three filters of different sizes to extract features in the encoding network. A hybrid attention module is also introduced to obtain more image details. Finally, the fused images are reconstructed by the decoding network. We conducted experiments with clinical images from brain computed tomography/magnetic resonance. The experimental results show that the proposed multiscale hybrid attention-based fusion network outperforms other advanced fusion methods.
Keywords: Medical image fusion; multiscale mechanism; hybrid attention module; encoded network
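A minimal PyTorch sketch of the multiscale idea described above is given below: three parallel convolutions with different kernel sizes whose concatenated output passes through a simple channel-attention gate. The layer sizes, the squeeze-and-excitation-style gate, and the naive additive fusion at the end are illustrative assumptions, not the MMIF-Net design.

```python
# Sketch of a multiscale block with a simple channel-attention gate (illustrative only).
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_c=1, out_c=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_c, out_c, k, padding=k // 2) for k in (3, 5, 7))
        self.attn = nn.Sequential(                      # squeeze-and-excitation style gate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(3 * out_c, 3 * out_c), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)   # (B, 3*out_c, H, W)
        gate = self.attn(feats).unsqueeze(-1).unsqueeze(-1)       # per-channel weights
        return feats * gate

block = MultiScaleBlock()
ct, mri = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
fused = block(ct) + block(mri)          # naive additive fusion of the two modality features
```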
7. Dual-channel graph convolutional network with multi-order information fusion for skeleton-based action recognition
Authors: JIANG Tao, HU Zhentao, WANG Kaige, QIU Qian, REN Xing. High Technology Letters, 2025, Issue 3, pp. 257-265 (9 pages)
Skeleton-based human action recognition focuses on identifying actions from dynamic skeletal data, which contains both temporal and spatial characteristics. However, this approach faces challenges such as viewpoint variations, low recognition accuracy, and high model complexity. Skeleton-based graph convolutional networks (GCNs) generally outperform other deep learning methods in recognition accuracy. However, they often underutilize temporal features and suffer from high model complexity, leading to increased training and validation costs, especially on large-scale datasets. This paper proposes a dual-channel graph convolutional network with multi-order information fusion (DM-AGCN) for human action recognition. The network integrates high frame rate skeleton channels to capture action dynamics and low frame rate channels to preserve static semantic information, effectively balancing temporal and spatial features. This dual-channel architecture allows for separate processing of temporal and spatial information. Additionally, DM-AGCN extracts joint keypoints and bidirectional bone vectors from skeleton sequences, and employs a three-stream graph convolutional structure to extract features that describe human movement. Experimental results on the NTU-RGB+D dataset demonstrate that DM-AGCN achieves an accuracy of 89.4% on the X-Sub benchmark and 95.8% on the X-View benchmark, while reducing model complexity to 3.68 GFLOPs (giga floating-point operations). On the Kinetics-Skeleton dataset, the model achieves a Top-1 accuracy of 37.2% and a Top-5 accuracy of 60.3%, further validating its effectiveness across different benchmarks.
Keywords: human action recognition; graph convolutional network; spatiotemporal fusion; feature extraction
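The basic building block behind GCN-based skeleton recognition is a spatial graph convolution over the joint adjacency. A toy PyTorch sketch is shown below; the 5-joint chain, the symmetric normalization, and all dimensions are made-up assumptions and not the DM-AGCN configuration.

```python
# Sketch of a single spatial graph convolution over a toy skeleton graph.
import torch
import torch.nn as nn

def normalized_adjacency(edges, num_joints):
    a = torch.eye(num_joints)                # self-loops
    for i, j in edges:                       # undirected bones
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt       # D^-1/2 (A + I) D^-1/2

class SkeletonGCNLayer(nn.Module):
    def __init__(self, in_c, out_c, a_hat):
        super().__init__()
        self.register_buffer("a_hat", a_hat)
        self.proj = nn.Linear(in_c, out_c)

    def forward(self, x):                    # x: (batch, frames, joints, in_c)
        return torch.relu(self.proj(torch.matmul(self.a_hat, x)))

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]     # a tiny 5-joint chain
layer = SkeletonGCNLayer(3, 16, normalized_adjacency(edges, 5))
out = layer(torch.randn(2, 30, 5, 3))        # 2 clips, 30 frames, 5 joints, xyz coordinates
```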
8. Cross-feature fusion speech emotion recognition based on attention mask residual network and Wav2vec 2.0
Authors: Xiaoke Li, Zufan Zhang. Digital Communications and Networks, 2025, Issue 5, pp. 1567-1577 (11 pages)
Speech Emotion Recognition (SER) has received widespread attention as a crucial way of understanding human emotional states. However, the impact of irrelevant information in speech signals and data sparsity limit the development of SER systems. To address these issues, this paper proposes a framework that incorporates the Attentive Mask Residual Network (AM-ResNet) and the self-supervised learning model Wav2vec 2.0 to obtain AM-ResNet features and Wav2vec 2.0 features, respectively, together with a cross-attention module to interact and fuse these two features. The AM-ResNet branch mainly consists of maximum amplitude difference detection, a mask residual block, and an attention mechanism. The maximum amplitude difference detection and the mask residual block act on the pre-processing and the network, respectively, to reduce the impact of silent frames, while the attention mechanism assigns different weights to unvoiced and voiced speech to reduce redundant emotional information caused by unvoiced speech. In the Wav2vec 2.0 branch, the model is introduced as a feature extractor to obtain general speech features (Wav2vec 2.0 features) through pre-training with a large amount of unlabeled speech data, which can assist the SER task and cope with data sparsity problems. In the cross-attention module, AM-ResNet features and Wav2vec 2.0 features are interacted with and fused to obtain the cross-fused features, which are used to predict the final emotion. Furthermore, multi-label learning is used to add ambiguous emotion utterances to deal with data limitations. Finally, experimental results illustrate the usefulness and superiority of the proposed framework over existing state-of-the-art approaches.
Keywords: Speech emotion recognition; Residual network; Mask; Attention; Wav2vec 2.0; Cross-feature fusion
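A compact PyTorch sketch of the cross-attention fusion step between two feature sequences (e.g., a spectrogram-branch feature and a wav2vec-style feature) follows, using the standard nn.MultiheadAttention module. The dimensions, pooling, and classification head are illustrative assumptions, not the paper's module.

```python
# Sketch of cross-attention fusion between two feature sequences (illustrative only).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_classes=4):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, feat_a, feat_b):       # (B, Ta, dim), (B, Tb, dim)
        a_enriched, _ = self.a_to_b(query=feat_a, key=feat_b, value=feat_b)
        b_enriched, _ = self.b_to_a(query=feat_b, key=feat_a, value=feat_a)
        fused = torch.cat([a_enriched.mean(1), b_enriched.mean(1)], dim=-1)
        return self.head(fused)              # emotion logits

model = CrossAttentionFusion()
logits = model(torch.randn(2, 50, 256), torch.randn(2, 120, 256))
```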
9. Low-Light Image Enhancement Based on Wavelet Local and Global Feature Fusion Network
Authors: Shun Song, Xiangqian Jiang, Dawei Zhao. Journal of Contemporary Educational Research, 2025, Issue 11, pp. 209-214 (6 pages)
A wavelet-based local and global feature fusion network (LAGN) is proposed for low-light image enhancement, aiming to enhance image details and restore colors in dark areas. This study addresses three key issues in low-light image enhancement: enhancing low-light images with LAGN to preserve image details and colors; extracting image edge information via the wavelet transform to enhance image details; and extracting local and global features of images through convolutional neural networks and a Transformer to improve image contrast. Comparisons with state-of-the-art methods on two datasets verify that LAGN achieves the best performance in terms of details, brightness, and contrast.
Keywords: Image enhancement; Feature fusion; Wavelet transform; Convolutional neural network (CNN); Transformer
10. xCViT: Improved Vision Transformer Network with Fusion of CNN and Xception for Skin Disease Recognition with Explainable AI
Authors: Armughan Ali, Hooria Shahbaz, Robertas Damaševičius. Computers, Materials & Continua, 2025, Issue 4, pp. 1367-1398 (32 pages)
Skin cancer is the most prevalent cancer globally, primarily due to extensive exposure to ultraviolet (UV) radiation. Early identification of skin cancer enhances the likelihood of effective treatment, as delays may lead to severe tumor advancement. This study proposes a novel hybrid deep learning strategy to address the complex issue of skin cancer diagnosis, with an architecture that integrates a Vision Transformer, a bespoke convolutional neural network (CNN), and an Xception module. The model was evaluated using two benchmark datasets, HAM10000 and Skin Cancer ISIC. On HAM10000, it achieves a precision of 95.46%, an accuracy of 96.74%, a recall of 96.27%, a specificity of 96.00%, and an F1-score of 95.86%. On the Skin Cancer ISIC dataset, it obtains an accuracy of 93.19%, a precision of 93.25%, a recall of 92.80%, a specificity of 92.89%, and an F1-score of 93.19%. The findings demonstrate that the proposed model is robust and trustworthy for the classification of skin lesions. In addition, the use of Explainable AI techniques, such as Grad-CAM visualizations, helps highlight the most significant lesion areas that influence the model's decisions.
Keywords: Skin lesions; vision transformer; CNN; Xception; deep learning; network fusion; explainable AI; Grad-CAM; skin cancer detection
11. Stochastic state of health estimation for lithium-ion batteries with automated feature fusion using quantum convolutional neural network
Authors: Chen Liang, Shengyu Tao, Xinghao Huang, Yezhen Wang, Bizhong Xia, Xuan Zhang. Journal of Energy Chemistry, 2025, Issue 7, pp. 205-219 (15 pages)
Accurate state of health (SOH) estimation of lithium-ion batteries is crucial for efficient, healthy, and safe operation of battery systems. Extracting meaningful aging information from highly stochastic and noisy data segments, while designing SOH estimation algorithms that efficiently handle the large-scale computational demands of cloud-based battery management systems, presents a substantial challenge. In this work, we propose a quantum convolutional neural network (QCNN) model designed for accurate, robust, and generalizable SOH estimation with minimal data and parameter requirements, compatible with quantum computing cloud platforms in the Noisy Intermediate-Scale Quantum era. First, we utilize data from 4 datasets comprising 272 cells, covering 5 chemical compositions, 4 rated parameters, and 73 operating conditions. We design 5 voltage windows as small as 0.3 V for each cell from incremental capacity peaks to generate stochastic SOH estimation scenarios. We extract 3 effective health indicator (HI) sequences and develop an automated feature fusion method using quantum rotation gate encoding, achieving an R² of 96%. Subsequently, we design a QCNN whose convolutional layer, constructed with variational quantum circuits, comprises merely 39 parameters. Additionally, we explore the impact of training set size, usage strategies, and battery materials on the model's accuracy. Finally, the QCNN with quantum convolutional layers reduces root mean squared error by 28% and achieves an R² exceeding 96% compared with three other commonly used algorithms. This work demonstrates the effectiveness of quantum encoding for automated feature fusion of HIs extracted from limited discharge data. It highlights the potential of QCNNs to improve the accuracy, robustness, and generalization of SOH estimation when dealing with stochastic and noisy data, using few parameters and a simple structure. It also suggests a new paradigm for leveraging quantum computational power in SOH estimation.
Keywords: Lithium-ion battery; State of health; Feature fusion; Quantum convolutional neural network; Quantum machine learning
12. Adaptive Fusion Neural Networks for Sparse-Angle X-Ray 3D Reconstruction
Authors: Shaoyong Hong, Bo Yang, Yan Chen, Hao Quan, Shan Liu, Minyi Tang, Jiawei Tian. Computer Modeling in Engineering & Sciences, 2025, Issue 7, pp. 1091-1112 (22 pages)
3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs. The encoder extracts 2D features, which the decoder maps into 3D space. The study utilizes a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRR) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D ground truth, using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray images (DRR images) as training data demonstrate the superior performance of AdapFusionNet compared to other fusion methods. Quantitative results show that AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during the reconstruction process. The findings demonstrate that AdapFusionNet offers significant advantages in 3D reconstruction from sparse-angle X-ray images.
Keywords: 3D reconstruction; adaptive fusion; X-ray imaging; medical imaging; deep learning; neural networks; sparse angles; autoencoder
13. Global Context Fusion Network for SAR Ship Detection
Authors: Boya Zhang, Yong Wang. Journal of Beijing Institute of Technology, 2025, Issue 6, pp. 577-589 (13 pages)
Ship detection in synthetic aperture radar (SAR) images is crucial for marine surveillance and navigation. The application of deep learning-based detection networks has achieved promising results in SAR ship detection. However, existing networks encounter challenges due to the complex backgrounds, diverse scales, and irregular distribution of ship targets. To address these issues, this article proposes a detection algorithm that integrates the global context of the images (GCF-Net). First, we construct a global feature extraction module in the backbone network of GCF-Net, which encodes features along different spatial directions. Then, we incorporate a bi-directional feature pyramid network (BiFPN) in the neck network to fuse multi-scale features selectively. Finally, we design a convolution and transformer mixed (CTM) detection head to obtain contextual information about targets and concentrate the network's attention on the most informative regions of the images. Experimental results demonstrate that the proposed method achieves more accurate detection of ship targets in SAR images.
Keywords: synthetic aperture radar (SAR); ship detection; global context fusion; convolutional neural network; feature extraction
14. Rolling Bearing Fault Detection Based on Self-Adaptive Wasserstein Dual Generative Adversarial Networks and Feature Fusion under Small Sample Conditions
Authors: Qiang Ma, Zhuopei Wei, Kai Yang, Long Tian, Zepeng Li. Structural Durability & Health Monitoring, 2025, Issue 4, pp. 1011-1035 (25 pages)
An intelligent diagnosis method based on self-adaptive Wasserstein dual generative adversarial networks and feature fusion is proposed to address problems such as insufficient sample size and incomplete fault feature extraction, which are commonly faced by rolling bearings and lead to low diagnostic accuracy. Initially, dual models of the Wasserstein deep convolutional generative adversarial network incorporating a gradient penalty (1D-2DWDCGAN) are constructed to augment the original dataset. A self-adaptive loss-threshold control training strategy is introduced, establishing a self-adaptive balancing mechanism for stable model training. Subsequently, a diagnostic model based on multidimensional feature fusion is designed, wherein complex features from various dimensions are extracted, merging the original signal waveform features, structured features, and time-frequency features into a deep composite feature representation that encompasses multiple dimensions and scales; thus, efficient and accurate small-sample fault diagnosis is facilitated. Finally, experiments on the bearing fault dataset of Case Western Reserve University and on the fault simulation experimental platform dataset of this research group show that the method effectively supplements the dataset and remarkably improves diagnostic accuracy. The diagnostic accuracy after data augmentation reached 99.94% and 99.87% in the two experimental environments, respectively. In addition, robustness analysis was conducted on the diagnostic accuracy of the proposed method under different noise backgrounds, verifying its good generalization performance.
Keywords: Deep learning; Wasserstein deep convolutional generative adversarial network; small sample learning; feature fusion; multidimensional data enhancement; small sample fault diagnosis
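The "gradient penalty" mentioned in the abstract is the standard WGAN-GP term: the critic's gradient norm is penalized on points interpolated between real and generated samples. A minimal PyTorch sketch is below; the toy MLP critic, the 64-sample vectors, and the penalty weight of 10 are illustrative assumptions, not the 1D-2DWDCGAN configuration.

```python
# Sketch of the WGAN gradient-penalty term with a toy critic (illustrative only).
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake):
    eps = torch.rand(real.size(0), 1)                    # per-sample mixing weight
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mixed).sum()
    grad, = torch.autograd.grad(score, mixed, create_graph=True)
    return ((grad.norm(2, dim=1) - 1.0) ** 2).mean()     # push gradient norm toward 1

critic = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
real_sig = torch.randn(8, 64)       # stand-in for real vibration segments
fake_sig = torch.randn(8, 64)       # stand-in for generator output
gp = gradient_penalty(critic, real_sig, fake_sig)
loss_critic = critic(fake_sig).mean() - critic(real_sig).mean() + 10.0 * gp
```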
15. Prediction of coal ash fusion temperature using constructive-pruning hybrid method for RBF networks
Authors: 丁维明, 吴小丽, 魏海坤. Journal of Southeast University (English Edition), EI CAS, 2011, Issue 2, pp. 159-163 (5 pages)
A constructive-pruning hybrid method (CPHM) for radial basis function (RBF) networks is proposed to improve the prediction accuracy of ash fusion temperatures (AFT). The CPHM incorporates the advantages of the construction algorithm and the pruning algorithm of neural networks, and the training process of the CPHM is divided into two stages: rough tuning and fine tuning. In rough tuning, new hidden units are added to the current network until some performance index is satisfied. In fine tuning, the network structure and the model parameters are further adjusted. Based on the components of coal ash, a model using the CPHM is established to predict the AFT. The results show that the CPHM prediction model is characterized by high precision, a compact network structure, strong generalization ability, and robustness.
Keywords: radial basis function (RBF) networks; function approximation; ash fusion temperature
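The constructive ("rough tuning") idea can be illustrated with a toy numpy RBF regressor that keeps adding a Gaussian hidden unit centered on the worst-fit training point until an error target is met. The center-selection rule, the fixed width, and the toy target are assumptions for illustration, not the CPHM algorithm itself.

```python
# Toy sketch of constructive RBF growth with least-squares output weights (not CPHM).
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def constructive_rbf(X, y, width=0.5, tol=1e-3, max_units=30):
    centers = X[[0]]                                       # first center: first sample
    for _ in range(max_units):
        Phi = rbf_design(X, centers, width)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # output weights by least squares
        resid = y - Phi @ w
        if np.mean(resid ** 2) < tol:
            break
        centers = np.vstack([centers, X[[np.argmax(np.abs(resid))]]])  # add unit at worst-fit point
    return centers, w, width

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                      # e.g. normalized ash components
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2                     # toy target standing in for AFT
centers, w, width = constructive_rbf(X, y)
pred = rbf_design(X, centers, width) @ w
```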
16. Three-dimensional Fusion of Spaceborne and Ground Radar Reflectivity Data Using a Neural Network-Based Approach [Cited by 5]
Authors: Leilei KOU, Zhuihui WANG, Fen XU. Advances in Atmospheric Sciences, SCIE CAS CSCD, 2018, Issue 3, pp. 346-359 (14 pages)
The spaceborne precipitation radar onboard the Tropical Rainfall Measuring Mission satellite (TRMM PR) can provide good measurement of the vertical structure of reflectivity, while ground radar (GR) has a relatively high horizontal resolution and greater sensitivity. Fusion of TRMM PR and GR reflectivity data may maximize the advantages of both instruments. In this paper, TRMM PR and GR reflectivity data are fused using a neural network (NN)-based approach. The main steps include: quality control of TRMM PR and GR reflectivity data; spatiotemporal matchup; GR calibration bias correction; conversion of TRMM PR data from Ku to S band; fusion of TRMM PR and GR reflectivity data with an NN method; interpolation of reflectivity data that are below the PR's sensitivity; blind-area compensation with a distance-weighting-based merging approach; and combination of three types of data: data from the NN method, data below the PR's sensitivity, and data within the compensated blind areas. During the NN fusion step, the TRMM PR data are taken as the targets of the training NNs, and gridded GR data after horizontal downsampling at different heights are used as the input. The trained NNs are then used to obtain 3D high-resolution reflectivity from the original GR gridded data. After 3D fusion of the TRMM PR and GR reflectivity data, a more complete and finer-scale 3D radar reflectivity dataset incorporating characteristics from both the TRMM PR and GR observations can be obtained. The fused reflectivity data are evaluated on a convective precipitation event through comparison with the high-resolution TRMM PR and GR data obtained with an interpolation algorithm.
Keywords: TRMM PR; ground radar; 3D fusion; neural network
17. Multiple Feature Fusion in Convolutional Neural Networks for Action Recognition [Cited by 5]
Authors: LI Hongyang, CHEN Jun, HU Ruimin. Wuhan University Journal of Natural Sciences, CAS CSCD, 2017, Issue 1, pp. 73-78 (6 pages)
Action recognition is important for understanding human behaviors in video, and the video representation is the basis for action recognition. This paper provides a new video representation based on convolutional neural networks (CNN). To capture human motion information in one CNN, we take both optical flow maps and gray images as input, and combine multiple convolutional features by max pooling across frames. In another CNN, we input a single color frame to capture context information. Finally, we take the top fully connected layer vectors as the video representation and train the classifiers with a linear support vector machine. The experimental results show that the representation which integrates the optical flow maps and gray images obtains more discriminative properties than those which depend on only one element. On the most challenging datasets, HMDB51 and UCF101, this video representation obtains competitive performance.
Keywords: action recognition; video; deep-learned representation; convolutional neural network; feature fusion
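The last two steps of this pipeline, max-pooling per-frame deep features over time into one clip-level vector and then training a linear SVM, can be sketched in a few lines of scikit-learn. The per-frame features below are random stand-ins for CNN activations; the class count and dimensions are made up.

```python
# Sketch of clip-level pooling plus a linear SVM (placeholder features, not real CNN outputs).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_clips, n_frames, feat_dim = 40, 25, 512
frame_feats = rng.normal(size=(n_clips, n_frames, feat_dim))   # placeholder per-frame CNN features
labels = rng.integers(0, 5, size=n_clips)                      # 5 toy action classes

clip_repr = frame_feats.max(axis=1)        # max pooling across frames -> (n_clips, feat_dim)
clf = LinearSVC(C=1.0).fit(clip_repr, labels)
print("train accuracy:", clf.score(clip_repr, labels))
```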
18. Seismic velocity inversion based on CNN-LSTM fusion deep neural network [Cited by 9]
Authors: Cao Wei, Guo Xue-Bao, Tian Feng, Shi Ying, Wang Wei-Hong, Sun Hong-Ri, Ke Xuan. Applied Geophysics, SCIE CSCD, 2021, Issue 4, pp. 499-514, 593 (17 pages)
Based on a CNN-LSTM fusion deep neural network, this paper proposes a seismic velocity model building method that can simultaneously estimate the root mean square (RMS) velocity and interval velocity from the common-midpoint (CMP) gather. In the proposed method, a convolutional neural network (CNN) encoder and two long short-term memory networks (LSTMs) are used to extract spatial and temporal features from seismic signals, respectively, and a CNN decoder is used to recover the RMS velocity and interval velocity of underground media from the various feature vectors. To address the problems of unstable gradients and easily falling into a local minimum during deep neural network training, we propose to use Kaiming normal initialization with zero negative slopes of rectified units and to adjust the network learning process by optimizing the mean square error (MSE) loss function with the introduction of a freezing factor. Experiments on the testing dataset show that the CNN-LSTM fusion deep neural network can predict both RMS velocity and interval velocity more accurately, and its inversion accuracy is superior to that of single neural network models. The predictions on complex structures and the Marmousi model are consistent with the true velocity variation trends, and the predictions on field data can effectively correct the phase axis, improving the lateral continuity of the phase axis and the quality of the stack section, indicating the effectiveness and decent generalization capability of the proposed method.
Keywords: Velocity inversion; CNN-LSTM fusion deep neural network; weight initialization; training strategy
19. Classification Fusion in Wireless Sensor Networks [Cited by 3]
Authors: LIU Chun-Ting, HUO Hong, FANG Tao, LI De-Ren, SHEN Xiao. 自动化学报 (Acta Automatica Sinica), EI CSCD, PKU Core, 2006, Issue 6, pp. 947-955 (9 pages)
In wireless sensor networks, target classification differs from that in centralized sensing systems because of the distributed detection, wireless communication, and limited resources. We study the classification problem of moving vehicles in wireless sensor networks using acoustic signals emitted from the vehicles. Three algorithms, including wavelet decomposition, weighted k-nearest-neighbor, and Dempster-Shafer theory, are combined in this paper. Finally, we use real-world experimental data to validate the classification methods. The results show that the wavelet-based feature extraction method can extract stable features from acoustic signals. By fusion with Dempster's rule, the classification performance is improved.
Keywords: Wireless sensor networks; classification fusion; wavelet decomposition; weighted k-nearest-neighbor; Dempster-Shafer theory
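Dempster's rule of combination, the fusion step named in the abstract, can be written in a few lines of plain Python. The two mass functions over the vehicle classes {car, truck} below are made-up values for illustration only.

```python
# Sketch of Dempster's rule of combination for two sensors' basic probability assignments.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = frozenset(a) & frozenset(b)
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass that falls on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# each mass function maps a hypothesis set to its basic probability assignment
m_sensor1 = {("car",): 0.6, ("truck",): 0.1, ("car", "truck"): 0.3}
m_sensor2 = {("car",): 0.5, ("truck",): 0.3, ("car", "truck"): 0.2}
print(dempster_combine(m_sensor1, m_sensor2))
```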
20. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances [Cited by 4]
Authors: QI Wen-Juan, ZHANG Peng, DENG Zi-Li. 自动化学报 (Acta Automatica Sinica), EI CSCD, PKU Core, 2014, Issue 11, pp. 2632-2642 (11 pages)
This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for clustering multi-agent sensor network systems with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest-neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of the noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present a two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens, save energy resources, and guarantee that the actual filtering error variances have a less-conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented, and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers, and that the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and the robust accuracy relations.
Keywords: Multi-agent sensor networks; clustering network; distributed fusion; sequential covariance intersection (SCI) fusion; robust Kalman filter; uncertain noise variances; measurement delay
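The basic covariance intersection step that SCI fusion builds on fuses two estimates with unknown cross-correlation by a convex combination of their information matrices, choosing the weight to minimize (here) the trace of the fused covariance. A numpy sketch with a simple grid search over the weight follows; the two toy estimates are made up, and the trace criterion and grid resolution are illustrative choices.

```python
# Sketch of pairwise covariance intersection (CI) fusion of two estimates.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):                # scalar weight omega
        P_inv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]                                # fused estimate and covariance

x1, P1 = np.array([1.0, 0.5]), np.array([[2.0, 0.3], [0.3, 1.0]])
x2, P2 = np.array([1.2, 0.4]), np.array([[1.0, 0.1], [0.1, 3.0]])
x_fused, P_fused = covariance_intersection(x1, P1, x2, P2)
```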