Funding: Supported in part by the Natural Science Foundation of China (Grant Nos. 51835009, 51705398), the Shaanxi Province 2020 Natural Science Basic Research Plan (Grant No. 2020JQ-042), and the Aeronautical Science Foundation (Grant No. 2019ZB070001).
Abstract: As an integrated application of modern information technologies and artificial intelligence, Prognostics and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the representative applications of PHM technology in modern manufacturing systems and industry. In this paper, a Multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to process raw sensory data for tool wear prediction. At the bottom of MCGRU, six parallel and independent branches with different kernel sizes form a multi-scale convolutional neural network, which improves adaptability to features at different time scales. The features extracted at these scales from the raw data are then fed into a deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.
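The multi-scale idea behind MCGRU's bottom layers can be sketched in a few lines: several parallel 1-D convolution branches with different kernel sizes scan the same raw signal, so each branch responds to patterns at a different time scale. This is an illustrative sketch, not the authors' code; the kernel sizes, averaging kernels, and the per-branch mean pooling are assumptions made for demonstration.

```python
def conv1d_valid(signal, kernel):
    """Plain 'valid' 1-D convolution (cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale_features(signal, kernel_sizes=(2, 3, 5)):
    """One branch per scale: convolve with an averaging kernel, then mean-pool."""
    features = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k                    # averaging kernel at this scale
        out = conv1d_valid(signal, kernel)
        features.append(sum(out) / len(out))      # global mean pooling per branch
    return features

signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
feats = multi_scale_features(signal)
print(feats)  # one pooled feature per time scale
```

In the paper the pooled branch outputs are concatenated and passed to the GRU stack; here each branch is reduced to a scalar only to keep the sketch short.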
Funding: Supported by the Shaanxi Province Key Research and Development Project (2021GY-280) and the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304).
Abstract: Micro-expressions are spontaneous, unconscious facial movements that reveal true emotions. Accurate facial movement information and effective network training methods are crucial for micro-expression recognition. However, most existing micro-expression recognition techniques focus on modeling a single category of micro-expression images or on the neural network structure alone. To address the low recognition rate and weak generalization ability of existing models, a micro-expression recognition algorithm based on a graph convolutional network (GCN) and a Transformer model is proposed. First, action unit (AU) features are detected, and the facial muscle nodes in each neighborhood are divided into three subsets for recognition. Then, graph convolution layers learn the layout of dependencies between AU nodes for micro-expression classification. Finally, the attentional features of each facial action are enriched with a Transformer model to incorporate more sequence information before the overall correlation of each region is computed. The proposed method is validated on the CASME II and CAS(ME)^2 datasets, where the recognition rate reaches 69.85%.
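A minimal sketch of the graph-convolution step this abstract relies on: each AU node's feature vector is replaced by a degree-normalized average over itself and its neighbours, then linearly transformed. The tiny 3-node AU graph, 1-D features, and identity weight matrix are illustrative assumptions, not the paper's configuration.

```python
def gcn_layer(adj, feats, weight):
    """One propagation step: row-normalised (A + I) @ feats @ weight."""
    n = len(adj)
    # add self-loops so each node keeps its own feature in the average
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    agg = []
    for i in range(n):
        deg = sum(a_hat[i])  # row-normalise: average over self + neighbours
        agg.append([sum(a_hat[i][k] * feats[k][d] for k in range(n)) / deg
                    for d in range(len(feats[0]))])
    # linear transform with the layer weight matrix
    return [[sum(row[d] * weight[d][o] for d in range(len(row)))
             for o in range(len(weight[0]))] for row in agg]

# 3 AU nodes: node 0 -- node 1 are connected, node 2 is isolated
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
feats = [[2.0], [4.0], [8.0]]
out = gcn_layer(adj, feats, [[1.0]])
print(out)  # connected nodes average toward each other; the isolated node is unchanged
```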
Funding: This paper is supported by the following funds: the National Key Research and Development Program of China (2018YFF01010100), the Basic Research Program of Qinghai Province under Grant No. 2021-ZJ-704, the Beijing Natural Science Foundation (4212001), and the Advanced Information Network Beijing Laboratory (PXM2019_014204_500029).
Abstract: Versatile Video Coding (H.266/VVC), newly released by the Joint Video Exploration Team (JVET), introduces the quad-tree plus multi-type tree (QTMT) partition structure on the basis of the quad-tree (QT) partition structure in High Efficiency Video Coding (H.265/HEVC). The more complicated coding unit (CU) partitioning process in H.266/VVC significantly improves video compression efficiency but greatly increases computational complexity compared with H.265/HEVC. This ultra-high encoding complexity has obstructed real-time applications. To solve this problem, a CU partition algorithm using a convolutional neural network (CNN) is proposed in this paper to speed up the H.266/VVC CU partition process. First, each 64×64 CU is classified as smooth-texture, mildly complex-texture, or complex-texture according to its texture characteristics. Second, a CU texture complexity classification convolutional neural network (CUTCC-CNN) is proposed to classify CUs. Finally, according to the classification results, the encoder is guided to skip different rate-distortion optimization (RDO) search processes, and the optimal CU partition is determined. Experimental results show that the proposed method reduces the average coding time by 32.2% with only a 0.55% BD-BR loss compared with VTM 10.2.
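The skip-by-class idea can be illustrated without the CNN: label a 64×64 CU by texture complexity, then let the label decide which partition searches the encoder still runs. In the paper the label comes from the CUTCC-CNN; here a simple pixel-variance rule with made-up thresholds and an assumed label-to-mode mapping stands in for it.

```python
def texture_class(block, t_smooth=25.0, t_complex=400.0):
    """Classify a flat list of luma samples by variance (thresholds are assumptions)."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    if var < t_smooth:
        return "smooth"
    return "mild" if var < t_complex else "complex"

def rdo_modes(label):
    """Partition searches to evaluate per class (illustrative mapping, not VVC's)."""
    return {"smooth": ["no_split"],
            "mild": ["no_split", "QT"],
            "complex": ["no_split", "QT", "BT", "TT"]}[label]

flat = [128] * 64                                  # perfectly flat CU -> smooth
noisy = [0 if i % 2 else 255 for i in range(64)]   # checkerboard -> complex
print(texture_class(flat), rdo_modes(texture_class(flat)))
print(texture_class(noisy))
```

Smooth CUs skip every split search, which is where the reported 32.2% encoding-time saving comes from.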
Abstract: Emotion recognition is one of the important frontier topics in human-computer interaction (HCI) and affective intelligence. However, current emotion recognition methods based on electroencephalogram (EEG) signals mainly extract static features, cannot capture the dynamic characteristics of emotions, and thus struggle to improve recognition performance. Studies that build dynamic brain functional networks from EEG commonly adopt a sliding-window approach, constructing functional connectivity networks in successive windows to form a dynamic network. However, this approach requires a subjectively chosen window length and cannot extract the connectivity pattern of the emotional state at every time point, leading to loss of temporal information and incomplete brain connectivity information. To address these problems, a dynamic linear phase measurement (dyPLM) method is proposed, which adaptively constructs an emotion-related brain network at each time point without a sliding window, characterizing the dynamic variation of emotions more precisely. In addition, a convolutional gated recurrent unit network (CNGRU) emotion recognition model is proposed, which further extracts deep features of the dynamic brain network and effectively improves recognition accuracy. On the public EEG emotion dataset DEAP (Database for Emotion Analysis using Physiological signals), the proposed method achieves a four-class accuracy of 99.71%, 3.51 percentage points higher than MFBPST-3D-DRLF. On SEED (SJTU Emotion EEG Dataset), it achieves a three-class accuracy of 99.99%, 3.32 percentage points higher than MFBPST-3D-DRLF. The experimental results demonstrate the effectiveness and practicality of the proposed dynamic brain network construction method dyPLM and the emotion recognition model CNGRU.
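The window-free idea can be sketched by contrasting two connectivity measures: given the instantaneous phases of two EEG channels, a per-time-point value (here, the cosine of the phase difference) can be read off at every single sample, whereas a classic sliding-window phase-locking value (PLV) collapses the whole window into one number. The cosine measure and the toy phases are illustrative assumptions, not the authors' exact dyPLM formula.

```python
import math

def pointwise_connectivity(phase_a, phase_b):
    """One connectivity value per time point: cos(phase difference)."""
    return [math.cos(a - b) for a, b in zip(phase_a, phase_b)]

def windowed_plv(phase_a, phase_b):
    """Contrast: classic PLV averages over the whole window, losing timing."""
    n = len(phase_a)
    re = sum(math.cos(a - b) for a, b in zip(phase_a, phase_b)) / n
    im = sum(math.sin(a - b) for a, b in zip(phase_a, phase_b)) / n
    return math.hypot(re, im)

# toy phases: in sync for 3 samples, then in anti-phase for 3 samples
pa = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
pb = [0.0, 0.1, 0.2, 0.3 + math.pi, 0.4 + math.pi, 0.5 + math.pi]
dyn = pointwise_connectivity(pa, pb)
print(dyn)                    # +1 for the first half, -1 after the switch
print(windowed_plv(pa, pb))   # a single value that hides when the change happened
```

The per-sample trace shows exactly when the coupling flips; the windowed PLV for this window is near zero and cannot localize the change, which is the limitation the abstract attributes to sliding windows.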
Abstract: Aspect-based sentiment analysis aims to identify the sentiment polarity toward specific aspects in text. However, existing research still faces multiple challenges: BERT-based aspect-based sentiment analysis suffers from semantic overfitting and insufficient use of low-level semantics; the self-attention mechanism loses local information; and architectures with multiple encoding layers and multi-granularity semantics suffer from information redundancy. To this end, a multi-granular semantic aspect-based sentiment analysis model with fusion of BERT encoding layers (MSBEL) is proposed. Specifically, a pyramid attention mechanism is introduced to exploit the semantic features of each encoding layer and, combined with low-level encoders, to reduce overfitting; multi-scale gated convolutions enhance the model's ability to handle the loss of local information; and cosine attention highlights sentiment features related to the aspect words, thereby reducing information redundancy. t-SNE visualization shows that MSBEL clusters sentiment representations better than BERT. In addition, the model is compared with mainstream models on several benchmark datasets. Compared with LCF-BERT, it improves F1 on five datasets by 1.53%, 3.94%, 1.39%, 6.68%, and 5.97%, respectively; compared with SenticGCN, it improves F1 by 0.94% on average and 2.12% at most; compared with ABSA-DeBERTa, it improves F1 by 1.16% on average and 4.20% at most, verifying the model's effectiveness and superiority on aspect-based sentiment analysis tasks.
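The cosine-attention step described above can be sketched as follows: token representations are weighted by their cosine similarity to the aspect-word vector, so aspect-relevant sentiment features receive the largest attention weights. The toy 2-D vectors and the softmax-over-similarities weighting are illustrative assumptions, not MSBEL's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cosine_attention(aspect, tokens):
    """Softmax over cosine similarities to the aspect vector -> attention weights."""
    sims = [cosine(aspect, t) for t in tokens]
    exps = [math.exp(s) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

aspect = [1.0, 0.0]
tokens = [[1.0, 0.0],    # aligned with the aspect -> largest weight
          [0.0, 1.0],    # orthogonal -> middle weight
          [-1.0, 0.0]]   # opposite -> smallest weight
w = cosine_attention(aspect, tokens)
print(w)
```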
Funding: Supported by the National Science Foundation under Grant No. 62066039.
Abstract: Recently, speech enhancement methods based on Generative Adversarial Networks (GANs) have achieved good performance on time-domain noisy signals. However, GAN training suffers from problems such as convergence difficulty and mode collapse. In this work, an end-to-end speech enhancement model based on Wasserstein Generative Adversarial Networks is proposed, with several improvements made to achieve faster convergence and better generated speech quality. Specifically, in the generator's encoder, each convolution layer uses different kernel sizes to obtain speech coding information at multiple scales; a gated linear unit is introduced to alleviate the vanishing gradient problem as network depth increases; the discriminator's gradient penalty is replaced with spectral normalization to accelerate model convergence; and a hybrid penalty term composed of L1 regularization and a scale-invariant signal-to-distortion ratio is introduced into the generator's loss function to improve the quality of generated speech. Experimental results on both the TIMIT corpus and a Tibetan corpus show that the proposed model significantly improves speech quality and accelerates convergence.
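The scale-invariant signal-to-distortion ratio (SI-SDR) term in the hybrid generator loss can be sketched from its standard definition: the estimate is decomposed into a component along the clean reference (after optimal scaling) and a residual, and the ratio of target energy to residual energy is taken in decibels. This is the textbook SI-SDR; how it is weighted against the L1 term in the paper's loss is not specified here, and the toy signals are assumptions.

```python
import math

def si_sdr(reference, estimate):
    """SI-SDR in dB between a clean reference and an enhanced estimate."""
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference)
    scale = dot / ref_energy                        # optimal scaling of the reference
    target = [scale * r for r in reference]         # projection onto the reference
    noise = [e - t for e, t in zip(estimate, target)]
    return 10.0 * math.log10(sum(t * t for t in target) / sum(n * n for n in noise))

ref = [0.0, 1.0, 0.0, -1.0]
est = [0.1, 0.9, -0.1, -1.1]    # slightly distorted estimate
score = si_sdr(ref, est)
print(round(score, 2))          # higher is better; maximizing this rewards clean output
```

Because of the scaling step, the measure is invariant to the overall gain of the estimate, which is why it is preferred over plain SNR as a training target for enhancement models.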