Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow exhibits complex spatial-temporal correlations and periodicity, which makes precise prediction difficult. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture the hidden periodicity of traffic flow, the traffic series is divided into three kinds of periods: hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions, while local spatial-temporal dependence is captured by a T-GCN module. Thirdly, a transformer layer is introduced to learn long-term temporal dependence. A position embedding mechanism labels the position of every traffic sequence, so that the multi-head self-attention mechanism can recognize sequence order and allocate weights to different time nodes. Experimental results on four real-world datasets show that MSSTGCN outperforms the baseline methods and can be successfully adapted to traffic prediction tasks.
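As an illustration of the periodic decomposition described in this abstract, the sketch below slices a traffic series into recent hourly, daily, and weekly segments that could feed separate model branches. The sampling interval, window lengths, and function names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the periodic slicing step: for a
# prediction time t, gather the most recent hour, the same slot on previous days,
# and the same slot in previous weeks. A 5-minute sampling interval is assumed.
import numpy as np

STEPS_PER_HOUR = 12                      # assumed 5-minute sampling
STEPS_PER_DAY = 24 * STEPS_PER_HOUR
STEPS_PER_WEEK = 7 * STEPS_PER_DAY

def periodic_segments(series: np.ndarray, t: int, horizon: int = 12,
                      n_days: int = 3, n_weeks: int = 2):
    """series: (T, num_nodes) traffic matrix; t: index of the prediction time."""
    hourly = series[t - STEPS_PER_HOUR:t]                        # recent hour
    daily = np.stack([series[t - d * STEPS_PER_DAY:
                             t - d * STEPS_PER_DAY + horizon]
                      for d in range(1, n_days + 1)])            # same slot, past days
    weekly = np.stack([series[t - w * STEPS_PER_WEEK:
                              t - w * STEPS_PER_WEEK + horizon]
                       for w in range(1, n_weeks + 1)])          # same slot, past weeks
    return hourly, daily, weekly

# usage: x = np.random.rand(3 * STEPS_PER_WEEK, 207)
#        h, d, w = periodic_segments(x, t=2 * STEPS_PER_WEEK)
```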
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data are occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using the K-Nearest Neighbors (KNN) interpolation method. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for temporal occlusion of gait information. The SGCN extracts less prominent human gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, 1835 of which are labeled with the emotions "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, which is comparable to other state-of-the-art methods. Furthermore, on the occluded datasets, the proposed method mitigates the performance degradation caused by occlusion, and its accuracy is significantly higher than that of the other methods.
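A minimal sketch of the KNN-style joint completion idea attributed to the JI Module, under the assumption that a missing joint is filled with the mean position of its K nearest visible joints, with neighbours ranked in a reference pose. The function names and the neighbour-ranking rule are illustrative, not the paper's exact procedure.

```python
# Hedged illustration: fill occluded joints from the K nearest visible joints.
import numpy as np

def knn_fill(frame: np.ndarray, visible: np.ndarray,
             reference: np.ndarray, k: int = 3) -> np.ndarray:
    """frame: (J, 3) joint coords with occluded rows undefined;
    visible: (J,) bool mask; reference: (J, 3) last fully visible pose."""
    filled = frame.copy()
    vis_idx = np.where(visible)[0]
    if vis_idx.size == 0:                       # nothing to interpolate from
        return filled
    for j in np.where(~visible)[0]:
        # rank visible joints by their distance to joint j in the reference pose
        d = np.linalg.norm(reference[vis_idx] - reference[j], axis=1)
        nearest = vis_idx[np.argsort(d)[:k]]
        filled[j] = frame[nearest].mean(axis=0)  # mean of K nearest visible joints
    return filled
```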
Background: The use of remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact manner has been an active research topic in recent years. Existing methods are primarily based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in a single-scale space can be easily separated in a multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and do not emphasize temporal information, which is crucial in pulse extraction, resulting in insufficient spatiotemporal feature modeling. Methods: We propose a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution (SSTC) and dimension-separable attention (DSAT). First, to solve the single-scale ROI problem, we constructed a multi-scale feature space for initial signal separation. Second, SSTC and DSAT were designed for efficient spatiotemporal correlation modeling, which increases the information interaction between the long-span time and space dimensions and places more emphasis on temporal features. Results: The signal-to-noise ratio (SNR) of the proposed network reached 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms. Conclusions: Fusing multi-scale signals yields better results than methods based only on single-scale signals, and the proposed SSTC and dimension-separable attention mechanism contribute to more accurate pulse signal extraction.
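The following sketch shows one way a separable spatiotemporal convolution can be factorized, in the spirit of the SSTC block named above: a spatial (1, k, k) convolution followed by a temporal (k, 1, 1) convolution over a video clip. Channel counts and kernel sizes are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of a factorized spatiotemporal convolution block.
import torch
import torch.nn as nn

class SeparableSTConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2))    # per-frame spatial conv
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0))        # cross-frame temporal conv
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width) video clip
        return self.act(self.temporal(self.act(self.spatial(x))))

# usage: SeparableSTConv(3, 16)(torch.randn(2, 3, 16, 64, 64)).shape
#        -> torch.Size([2, 16, 16, 64, 64])
```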
In the realm of video understanding, the demand for accurate and contextually rich video captioning has surged with the increasing volume and complexity of multimedia content. This research introduces a video captioning solution that integrates a Convolutional Bidirectional Long Short-Term Memory (BiLSTM) based Variational Sequence-to-Sequence (CBVSS) approach. The proposed framework is adept at capturing intricate temporal dependencies within video sequences, enabling more nuanced and contextually relevant descriptions of dynamic scenes. However, optimizing its parameters for improved performance remains a crucial challenge. In response, Golden Eagle Optimization (GEO), a metaheuristic optimization technique, is used to fine-tune the parameters of the Convolutional BiLSTM variational sequence-to-sequence model. The application of GEO aims to enhance the ability of CBVSS to produce more exact and contextually rich video captions. The proposed method attains an overall higher Recall of 59.75% and Precision of 63.78% across both datasets. Additionally, the proposed CBVSS method demonstrated superior performance on both datasets, achieving the highest METEOR (25.67) and CIDEr (39.87) scores on the ActivityNet dataset and outperforming all compared models on the YouCook2 dataset with METEOR (28.67) and CIDEr (43.02), highlighting its effectiveness in generating semantically rich and contextually accurate video captions.
Essential proteins are an indispensable part of cells and play an extremely significant role in genetic disease diagnosis and drug development. Therefore, the prediction of essential proteins has received extensive attention from researchers. Many centrality methods and machine learning algorithms have been proposed to predict essential proteins. Nevertheless, the topological characteristics learned by the centrality method are not comprehensive enough, resulting in low accuracy. In addition, machine learning algorithms need sufficient prior knowledge to select features, and the ability to solve imbalanced classification problems needs to be further strengthened. These two factors greatly affect the performance of predicting essential proteins. In this paper, we propose a deep learning framework based on temporal convolutional networks to predict essential proteins by integrating gene expression data and the protein-protein interaction (PPI) network. We make use of network embedding to automatically learn more abundant features of proteins in the PPI network. Gene expression data are treated as sequence data, and temporal convolutional networks are used to extract sequence features. Finally, the two types of features are integrated and fed into a multi-layer neural network to complete the final classification task. The performance of our method is evaluated by comparing with seven centrality methods, six machine learning algorithms, and two deep learning models. The results of the experiment show that our method is more effective than the comparison methods for predicting essential proteins.
To address the difficulty of fault identification caused by manual extraction of fault features in rotating machinery, a one-dimensional multi-scale convolutional auto-encoder fault diagnosis model based on the standard convolutional auto-encoder is proposed. In this model, parallel convolutional and deconvolutional kernels of different scales are used to extract features from the input signal and reconstruct it; the feature map extracted by the multi-scale convolutional kernels is then used as the input of the classifier; finally, the parameters of the whole model are fine-tuned using labeled data. Experiments on one set of simulated fault data and two sets of rolling bearing fault data are conducted to validate the proposed method. The results show that the model achieves 99.75%, 99.3%, and 100% diagnostic accuracy, respectively. In addition, the diagnostic accuracy and reconstruction error of the one-dimensional multi-scale convolutional auto-encoder are compared with traditional machine learning, convolutional neural networks, and a traditional convolutional auto-encoder. The final results show that the proposed model has a better recognition effect on rolling bearing fault data.
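To make the parallel multi-scale encoder-decoder idea concrete, the sketch below builds a one-dimensional convolutional auto-encoder with branches of different kernel sizes and a classifier on the concatenated codes. The branch sizes, the averaging of branch reconstructions, and the classifier head are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch of a 1-D multi-scale convolutional auto-encoder with a classifier.
import torch
import torch.nn as nn

class MultiScaleConvAE(nn.Module):
    def __init__(self, kernel_sizes=(8, 16, 32), channels: int = 16, n_classes: int = 4):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Conv1d(1, channels, k, stride=4, padding=k // 2) for k in kernel_sizes])
        self.decoders = nn.ModuleList(
            [nn.ConvTranspose1d(channels, 1, k, stride=4, padding=k // 2)
             for k in kernel_sizes])
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels * len(kernel_sizes), n_classes))

    def forward(self, x: torch.Tensor):
        # x: (batch, 1, length); length assumed to be a multiple of the stride (4)
        codes = [enc(x) for enc in self.encoders]                  # multi-scale features
        recon = sum(dec(c) for dec, c in zip(self.decoders, codes)) / len(codes)
        logits = self.classifier(torch.cat(codes, dim=1))          # classification head
        return recon, logits

# usage: recon, logits = MultiScaleConvAE()(torch.randn(8, 1, 1024))
```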
Although the Convolutional Neural Network (CNN) has shown great potential for land cover classification, the frequently used single-scale convolution kernel limits the scope of information extraction. Therefore, we propose a Multi-Scale Fully Convolutional Network (MSFCN) with a multi-scale convolutional kernel as well as a Channel Attention Block (CAB) and a Global Pooling Module (GPM) to exploit discriminative representations from two-dimensional (2D) satellite images. Meanwhile, to explore the ability of the proposed MSFCN on spatio-temporal images, we extend it to three dimensions using three-dimensional (3D) CNNs, capable of harnessing each land cover category's time-series interactions from the reshaped spatio-temporal remote sensing images. To verify the effectiveness of the proposed MSFCN, we conduct experiments on two spatial datasets and two spatio-temporal datasets. The proposed MSFCN achieves 60.366% on the WHDLD dataset and 75.127% on the GID dataset in terms of the mIoU index, while the figures for the two spatio-temporal datasets are 87.753% and 77.156%. Extensive comparative experiments and ablation studies demonstrate the effectiveness of the proposed MSFCN.
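As a hedged illustration of the Channel Attention Block mentioned above, the snippet below implements a squeeze-and-excitation style block: global average pooling, a small bottleneck MLP, and channel-wise rescaling of the feature map. The reduction ratio and exact layout are assumptions, not necessarily the paper's CAB design.

```python
# Hedged sketch of a channel attention block (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # per-channel statistics
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); weights broadcast over spatial dims
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
        return x * w
```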
In this paper, we investigate a spectrum-sensing system in the presence of a satellite, where the satellite works as a sensing node. Since the conventional energy detection method is sensitive to noise uncertainty, a temporal convolutional network (TCN) based spectrum-sensing method is designed to eliminate the effect of noise uncertainty and improve spectrum-sensing performance, relying on an offline training stage and an online detection stage. Specifically, in the offline training stage, spectrum data captured by the satellite are sent to the TCN deployed on the gateway for training. In the online detection stage, the well-trained TCN is used to perform real-time spectrum sensing, which upgrades spectrum-sensing performance by exploiting temporal features. Simulation results demonstrate that the proposed method achieves a higher probability of detection than the conventional energy detection (ED), the convolutional neural network (CNN), and the deep neural network (DNN). Furthermore, the proposed method has a lower computational complexity than the CNN and the DNN.
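For context on the baseline this method is compared against, the snippet below sketches conventional energy detection: the channel is declared occupied when the average sample energy exceeds a threshold tied to an assumed noise power. The threshold rule here is illustrative; its dependence on the assumed noise power is exactly the noise-uncertainty sensitivity that the learned TCN detector is designed to avoid.

```python
# Illustrative sketch of the conventional energy detector baseline.
import numpy as np

def energy_detect(samples: np.ndarray, noise_power: float,
                  threshold_factor: float = 1.5) -> bool:
    """samples: complex baseband samples captured by the sensing node."""
    test_statistic = np.mean(np.abs(samples) ** 2)     # average sample energy
    return test_statistic > threshold_factor * noise_power

# An error in the assumed noise_power shifts the effective threshold, which is
# what makes this detector sensitive to noise uncertainty.
```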
As an integrated application of modern information technologies and artificial intelligence, Prognostic and Health Management (PHM) is important for machine health monitoring. Tool wear prediction is one of the symbolic applications of PHM technology in modern manufacturing systems and industry. In this paper, a multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to process raw sensory data for tool wear prediction. At the bottom of MCGRU, six parallel and independent branches with different kernel sizes form a multi-scale convolutional neural network, which improves adaptability to features at different time scales. The features of different scales extracted from the raw data are then fed into a deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.
Since the oil production of a single well in a water-flooding reservoir varies greatly and is hard to predict, an oil production prediction method for single wells based on a temporal convolutional network (TCN) is proposed and verified. The method starts from data processing: the correspondence between water injectors and oil producers is determined according to the influence radius of the water injectors, the influence degree of a water injector on an oil producer in the month concerned is added as a model feature, and a Random Forest (RF) model is built to fill the dynamic water-flooding data. The single-well history is divided into four stages according to water cut: low, middle, high, and extra-high water cut. In each stage, a TCN-based prediction model is established, and its hyperparameters are optimized by the Sparrow Search Algorithm (SSA). Finally, the models of the four stages are integrated into one whole-life model of the well for production prediction. The application of this method in Daqing Oilfield, NE China shows that: (1) compared with conventional data processing methods, the data obtained by this processing method are closer to the actual production, and the resulting data set is more authentic and complete; (2) the TCN model has higher prediction accuracy than 11 other models such as Long Short-Term Memory (LSTM); (3) compared with conventional full-life-cycle models, the staged and integrated model significantly reduces the error of production prediction.
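A minimal sketch of the stage-splitting step described above, dividing a well's history into the four water-cut stages before training a stage-specific TCN. The cut-off values used here are illustrative assumptions; the abstract does not state them.

```python
# Illustrative stage split by water cut; thresholds are assumptions.
import numpy as np

def split_by_water_cut(water_cut: np.ndarray):
    """water_cut: monthly water cut in [0, 1]; returns one index array per stage."""
    stages = {
        "low":        np.where(water_cut < 0.2)[0],
        "middle":     np.where((water_cut >= 0.2) & (water_cut < 0.6))[0],
        "high":       np.where((water_cut >= 0.6) & (water_cut < 0.9))[0],
        "extra_high": np.where(water_cut >= 0.9)[0],
    }
    return stages   # a separate TCN would then be fitted to each stage's samples
```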
The ever-growing volume of available visual data (i.e., videos and pictures uploaded by internet users) has attracted the attention of the computer vision research community. Therefore, finding efficient solutions to extract knowledge from these sources is imperative. Recently, the BlazePose system has been released for skeleton extraction from images, oriented to mobile devices. With this skeleton graph representation in place, a Spatial-Temporal Graph Convolutional Network can be implemented to predict the action. We hypothesize that just by changing the skeleton input data to a different set of joints that offers more information about the action of interest, it is possible to increase the performance of the Spatial-Temporal Graph Convolutional Network for human action recognition (HAR) tasks. Hence, in this study, we present the first implementation of the BlazePose skeleton topology upon this architecture for action recognition. Moreover, we propose the Enhanced-BlazePose topology, which achieves better results than its predecessor. Additionally, we propose different skeleton detection thresholds that improve the accuracy even further. We reached a top-1 accuracy of 40.1% on the Kinetics dataset. For the NTU-RGB+D dataset, we achieved 87.59% and 92.1% accuracy for the Cross-Subject and Cross-View evaluation criteria, respectively.
In order to reduce the physical impairment caused by signal distortion, in this paper we investigate symbol detection with Deep Learning (DL) methods to improve bit-error performance in optical communication systems. Many DL-based methods have been applied to such systems to improve bit-error performance. Inspired by the speech-to-text approach of automatic speech recognition, this paper proposes a signal-to-symbol method based on DL and designs a receiver for symbol detection in single-polarized optical communication modes. To realize this detection method, we propose a non-causal temporal convolutional network assisted receiver that detects symbols directly from the baseband signal and integrates most modules of the receiver. Meanwhile, we adopt three training approaches for different signal-to-noise ratios. We also apply a parametric rectified linear unit to enhance the noise robustness of the proposed network. According to the simulation experiments, the bit-error-rate performance of the proposed method is close to or even superior to that of the conventional receiver and better than that of the recurrent neural network based receiver.
A lightweight multi-layer residual temporal convolutional network model (RTCN) is proposed to address the highly complex kinematics and temporal correlation of human motion. RTCN uses 1-D convolution to efficiently capture the spatial structure of human motion and extract the correlations in the human motion time series. A residual structure is applied to the proposed network model to alleviate the vanishing-gradient problem in deep networks. Experiments on the Human3.6M dataset demonstrate that the proposed method effectively reduces motion prediction errors compared with previous methods, especially for long-term prediction.
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
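The sketch below shows one plausible form of the depthwise dilated separable convolution (DDSC) layer described above: a dilated depthwise convolution that enlarges the receptive field at low cost, followed by a 1x1 pointwise convolution that mixes channels. Channel counts, the dilation rate, and the normalization choice are illustrative assumptions.

```python
# Hedged sketch of a depthwise dilated separable convolution layer.
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2, k: int = 3):
        super().__init__()
        pad = dilation * (k - 1) // 2                       # keep spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=pad,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# usage: DepthwiseDilatedSeparableConv(32, 64)(torch.randn(1, 32, 32, 32)).shape
#        -> torch.Size([1, 64, 32, 32])
```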
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution concentrates the receptive fields of the outputs on the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the differencing. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces the prediction mean squared error by 7.3% and saves runtime compared with state-of-the-art models and the vanilla TCN.
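To illustrate the difference-and-compensation idea behind DCM, the sketch below applies first-order differencing to the input window, lets a placeholder model forecast future differences, and adds the last observed level back to return to the original scale. First-order differencing and the placeholder model are assumptions; the paper's DCM may differ in detail.

```python
# Hedged sketch of difference-then-compensate forecasting on a univariate series.
import numpy as np

def forecast_with_difference(x: np.ndarray, model, horizon: int) -> np.ndarray:
    """x: (T,) input window on the original scale; model maps a differenced
    window to `horizon` predicted differences."""
    dx = np.diff(x)                          # difference: removes the level shift
    d_pred = model(dx, horizon)              # predictions of future differences
    return x[-1] + np.cumsum(d_pred)         # compensation: restore original scale

# usage with a trivial placeholder model that repeats the mean difference:
# forecast_with_difference(np.arange(10.0), lambda dx, h: np.full(h, dx.mean()), 3)
# -> array([10., 11., 12.])
```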
Thrust estimation is a significant part of aeroengine thrust control systems. Traditional estimation methods are either low in accuracy or computationally expensive. To further improve the estimation effect, a thrust estimator based on a Multi-layer Residual Temporal Convolutional Network (M-RTCN) is proposed. To address the dead Rectified Linear Unit (ReLU) problem, the proposed method uses the Gaussian Error Linear Unit (GELU) activation function instead of ReLU in the residual block. The overall architecture of the multi-layer convolutional network is then adjusted using residual connections, so that thrust estimation performance and memory consumption are further improved. Moreover, a comparison with seven other methods shows that the proposed method achieves higher estimation accuracy and faster convergence. Furthermore, six neural network models are deployed in the embedded controller of a micro-turbojet engine. Hardware-in-the-Loop (HIL) testing results demonstrate the superiority of M-RTCN in terms of estimation accuracy, memory occupation, and running time. Finally, an ignition verification is conducted to confirm the expected thrust estimation and real-time performance.
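A hedged sketch of the activation swap described above: a causal residual temporal convolution block that uses GELU instead of ReLU, so small negative pre-activations still pass a non-zero gradient and units are less prone to "dying". The layer sizes and causal-padding scheme are illustrative, not the M-RTCN configuration.

```python
# Hedged sketch of a residual temporal block with GELU activation.
import torch
import torch.nn as nn

class GELUResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        pad = dilation * (kernel_size - 1)                   # causal padding amount
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); trim the extra right-side outputs so the
        # block stays causal, then add the identity shortcut
        h = self.act(self.conv1(x)[..., :x.size(-1)])
        h = self.act(self.conv2(h)[..., :x.size(-1)])
        return x + h
```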
In the field of speech bandwidth extension, it is difficult to achieve high speech quality with shallow statistical model methods. Although the application of deep learning has greatly improved the quality of extended speech, the high model complexity makes it infeasible to run on the client. To tackle these issues, this paper proposes an end-to-end speech bandwidth extension method based on a temporal convolutional neural network, which greatly reduces the complexity of the model. In addition, a new time-frequency loss function is designed to enable narrowband speech to acquire a more accurate wideband mapping in both the time domain and the frequency domain. The experimental results show that the reconstructed wideband speech generated by the proposed method is superior to traditional heuristic-rule-based approaches and conventional neural network methods in both subjective and objective evaluation.
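The snippet below sketches one possible combined time-frequency loss of the kind described above: an L1 term on the waveform plus an L1 term on STFT log-magnitudes. The STFT settings and the weighting factor are assumptions, not the paper's exact loss design.

```python
# Hedged sketch of a combined time-domain and frequency-domain loss.
import torch

def time_frequency_loss(pred: torch.Tensor, target: torch.Tensor,
                        n_fft: int = 512, hop: int = 128,
                        alpha: float = 0.5) -> torch.Tensor:
    """pred, target: (batch, samples) wideband waveforms."""
    time_loss = torch.mean(torch.abs(pred - target))          # waveform L1
    window = torch.hann_window(n_fft, device=pred.device)
    spec_p = torch.stft(pred, n_fft, hop_length=hop, window=window,
                        return_complex=True).abs()
    spec_t = torch.stft(target, n_fft, hop_length=hop, window=window,
                        return_complex=True).abs()
    freq_loss = torch.mean(torch.abs(torch.log1p(spec_p) - torch.log1p(spec_t)))
    return time_loss + alpha * freq_loss                       # weighted combination
```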
Pedestrian attribute classification from a pedestrian image captured in surveillance scenarios is challenging due to diverse clothing appearances, varied poses, and different camera views. A multi-scale and multi-label convolutional neural network (MSMLCNN) is proposed to predict multiple pedestrian attributes simultaneously. The pedestrian attribute classification problem is first transformed into a multi-label problem comprising multiple binary attributes to be classified. The multi-label problem is then solved by fully connecting all binary attributes to multi-scale features with logistic regression functions. Moreover, the multi-scale features are obtained by concatenating the feature maps produced by multiple pooling layers of the MSMLCNN at different scales. Extensive experimental results show that the proposed MSMLCNN outperforms state-of-the-art pedestrian attribute classification methods by a large margin.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
Diabetes, a chronic disease, is caused by an increase of blood glucose concentration due to pancreatic insulin production failure or insulin resistance in the body. Predicting the trend of blood glucose levels in advance enables prompt treatment, so that blood glucose can be kept within the recommended range. Based on flash glucose monitoring data, we propose a method that combines Prophet with temporal convolutional networks (TCN) and achieves good experimental results in predicting patient blood glucose. The proposed model achieves high accuracy in both long-term and short-term blood glucose prediction, and outperforms other models in adapting to non-stationary data and detecting periodic changes.