Abstract
Emotion recognition is an important frontier topic in Human-Computer Interaction (HCI) and emotional intelligence. However, current Electroencephalogram (EEG)-based emotion recognition methods mainly extract static features and cannot capture the dynamic characteristics of emotions, which limits recognition performance. In research on constructing dynamic Brain Functional Networks (dBFNs) from EEG signals, a sliding window is usually employed: functional connectivity networks are built sequentially within successive windows to form the dynamic network. This approach, however, depends on a subjectively chosen window length and cannot extract the connectivity pattern of the emotional state at every time point, resulting in the loss of temporal information and incomplete brain-connectivity information. To address these problems, this study proposes a dynamic Phase Linearity Measurement (dyPLM) method that adaptively constructs an emotion-related brain network at each time point without sliding windows, characterizing the dynamic properties of emotions more precisely. In addition, a Convolutional Neural Gate Recurrent Unit (CNGRU) emotion recognition model is proposed to further extract deep features of the dynamic brain network and effectively improve recognition accuracy. On the public emotion recognition EEG dataset DEAP (Database for Emotion Analysis using Physiological signals), the proposed method achieves a four-class classification accuracy of 99.71%, which is 3.51 percentage points higher than MFBPST-3D-DRLF. On the SEED (SJTU Emotion EEG Dataset) dataset, the three-class classification accuracy reaches 99.99%, 3.32 percentage points higher than MFBPST-3D-DRLF. These results demonstrate the effectiveness and practicality of the proposed dyPLM network-construction method and the CNGRU recognition model.
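The core idea behind sliding-window-free dynamic networks is that instantaneous phase (e.g., via the Hilbert transform) yields a connectivity value at every sample, so no window length needs to be chosen. The paper's actual dyPLM formula is not given in this record; the sketch below is only a minimal illustration of per-time-point phase-difference connectivity, with the cosine of the phase difference as an assumed coupling measure:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase_connectivity(eeg):
    """Per-time-point phase-difference connectivity.

    eeg: (channels, samples) array. Returns a (samples, channels, channels)
    stack of cos(phase difference) matrices -- one network per sample,
    built without any sliding window (illustrative, not the paper's dyPLM).
    """
    phase = np.angle(hilbert(eeg, axis=1))        # instantaneous phase per channel
    diff = phase[:, None, :] - phase[None, :, :]  # pairwise phase differences
    return np.cos(diff).transpose(2, 0, 1)        # (T, C, C); 1.0 = phase-locked

# toy example: two perfectly coupled sinusoids plus one noise channel
t = np.linspace(0, 1, 256, endpoint=False)
x = np.vstack([np.sin(2 * np.pi * 10 * t),
               np.sin(2 * np.pi * 10 * t),
               np.random.default_rng(0).standard_normal(256)])
nets = instantaneous_phase_connectivity(x)
print(nets.shape)  # (256, 3, 3)
```

The coupled pair (channels 0 and 1) stays near 1.0 at every time point, while connectivity with the noise channel fluctuates, which is exactly the kind of time-resolved structure a windowed estimator would smear out.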
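A CNGRU-style model pairs a convolutional front end (spatial features of each network snapshot) with a gated recurrent unit (temporal dynamics across snapshots). The record does not specify the paper's architecture; the PyTorch sketch below is a hypothetical layout with illustrative layer sizes, only to show how a sequence of connectivity matrices flows through a CNN-then-GRU classifier:

```python
import torch
import torch.nn as nn

class CNGRU(nn.Module):
    """Illustrative CNN + GRU classifier over dynamic brain networks.
    Layer sizes are assumptions, not the paper's exact design."""
    def __init__(self, hidden=64, n_classes=4):
        super().__init__()
        # CNN extracts spatial features from each (C x C) network snapshot
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 8 * 4 * 4 = 128 features
        self.gru = nn.GRU(128, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, T, C, C)
        b, t, c, _ = x.shape
        feats = self.cnn(x.reshape(b * t, 1, c, c)).reshape(b, t, -1)
        _, h = self.gru(feats)            # h: (1, batch, hidden) final state
        return self.fc(h[-1])             # class logits

# 2 trials, 16 time points, 32-channel networks -> 4 emotion classes
logits = CNGRU()(torch.randn(2, 16, 32, 32))
print(logits.shape)  # torch.Size([2, 4])
```

Applying the CNN frame-by-frame and the GRU across frames mirrors the abstract's pipeline: per-time-point networks from dyPLM become the temporal sequence the recurrent layer summarizes.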
Authors
WANG Hailing, JIANG Tingwei, FANG Zhijun, GAO Yufei
(School of Information Management and Mathematics, Jiangxi University of Finance and Economics, Nanchang 330013, Jiangxi, China; School of Electronic and Electrical Engineering, Shanghai University of Engineering Technology, Shanghai 201200, China; Academy of Cyberspace Security, Zhengzhou University, Zhengzhou 450002, Henan, China)
Source
Computer Engineering (《计算机工程》), a Peking University Core journal
2026, No. 2, pp. 125-135 (11 pages)
Funding
National Natural Science Foundation of China Youth Program (62001284, 62006210)
Major Science and Technology Project of Henan Province (221100210100)
Zhengzhou University High-Level Talent Research Start-up Project (32340306)
Keywords
Electroencephalogram (EEG) signal
emotion recognition
dynamic brain network
convolutional neural network
Gate Recurrent Unit (GRU)