Emotion recognition systems are useful in human-machine interaction and intelligent medical applications. The electroencephalogram (EEG) closely reflects the central nervous system activity of the brain and, compared with other signals, is more closely associated with emotional activity, so it is essential to study emotion recognition based on EEG information. In EEG-based emotion recognition research, a common problem is that individual classification results vary greatly under the same recognition scheme, which hinders engineering applications. To improve the overall recognition rate of the emotion classification system, we propose the CSP_VAR_CNN (CVC) emotion recognition system, which classifies emotions from EEG signals using a convolutional neural network (CNN). First, the system uses common spatial patterns (CSP) to reduce the dimensionality of the EEG data; then the standardized variance (VAR) is selected as the parameter to form the emotion feature vectors; finally, a 5-layer CNN model is built to classify the EEG signals. The classification results show that this system improves the overall emotion recognition rate: the variance is reduced to 0.0067, a decrease of 64% compared with the CSP_VAR_SVM (CVS) system, while the average accuracy reaches 69.84%, which is 0.79% higher than that of the CVS system. The proposed system therefore achieves a more stable overall emotion recognition rate as well as higher accuracy.
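The CSP-then-variance front end described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the trial shapes, and the log-normalized variance formulation are assumptions, and the CNN classifier stage is omitted.

```python
import numpy as np

def csp_filters(X1, X2, n_filters=4):
    """CSP spatial filters from two classes of EEG trials (hypothetical helper).
    X1, X2: arrays of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(X):
        # trace-normalized spatial covariance, averaged over trials
        covs = [t @ t.T / np.trace(t @ t.T) for t in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in that space.
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)) @ U.T            # whitening transform
    d2, V = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(d2)                # eigenvalue extremes discriminate best
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return V[:, pick].T @ P               # (n_filters, n_channels)

def var_features(W, trials):
    """Standardized log-variance features of CSP-projected trials."""
    Z = np.einsum('fc,tcs->tfs', W, trials)        # project each trial
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))
```

A CNN (or, in the CVS baseline, an SVM) would then be trained on the rows of `var_features`. The dimensionality reduction happens because `n_filters` is much smaller than the channel count.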
With the rapid development of deep learning and artificial intelligence, affective computing, as a branch field, has attracted increasing research attention. Human emotions are diverse and are directly expressed via physiological indicators, such as electroencephalogram (EEG) signals. However, whether expression-based or EEG-based, these remain single-mode approaches to emotion recognition. Multi-mode fusion emotion recognition can improve accuracy by exploiting feature diversity and correlation. Therefore, three models were established: the single-mode EEG-long short-term memory (LSTM) model, the Facial-LSTM model that uses facial expression information to screen the EEG data, and the multi-mode LSTM-convolutional neural network (CNN) model that combines expressions and EEG. Their average classification accuracies were 86.48%, 89.42%, and 93.13%, respectively. Compared with the EEG-LSTM model, the Facial-LSTM model improved accuracy by about 3%, indicating that the expression mode helps eliminate EEG segments that contain few or no emotional features. Compared with the Facial-LSTM model, the LSTM-CNN model improved classification accuracy by 3.7%, showing that the addition of facial expressions complements the EEG features to a certain extent. Using multiple modal features for emotion recognition thus conforms to how humans express emotion, and the added feature diversity facilitates further emotion recognition research.
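The multi-mode idea above, combining an LSTM summary of the EEG sequence with a facial feature vector, can be illustrated with a late-fusion sketch. This is an assumption-laden stand-in for the paper's LSTM-CNN model: the weight layout, the plain concatenation fusion, and the `fuse_and_classify` helper are all hypothetical, and the facial feature vector is taken as already extracted.

```python
import numpy as np

def lstm_last_hidden(x, Wx, Wh, b):
    """Minimal LSTM forward pass returning the final hidden state.
    x: (T, d_in); gate weights packed row-wise as [input, forget, cell, output]."""
    H = Wh.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = Wx @ x[t] + Wh @ h + b
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
        c = f * c + i * g                 # cell state update
        h = o * np.tanh(c)                # hidden state update
    return h

def fuse_and_classify(eeg_seq, face_feat, params):
    """Late fusion: LSTM summary of EEG + facial feature vector -> class probabilities."""
    h = lstm_last_hidden(eeg_seq, params['Wx'], params['Wh'], params['b'])
    fused = np.concatenate([h, face_feat])      # simple concatenation fusion
    logits = params['Wout'] @ fused + params['bout']
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()
```

In a trained system the facial branch would itself be a CNN over face images; here it is reduced to a given vector so the fusion step stays visible.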
Emotions serve various functions. Traditional emotion recognition methods are based primarily on readily accessible facial expressions, gestures, and voice signals. However, it is often challenging to ensure that these non-physiological signals are valid and reliable in practical applications. Electroencephalogram (EEG) signals are better suited to real-time recognition than these other signals because they are difficult to camouflage. Although EEG signals are commonly used in current emotion recognition research, accuracy is low with traditional methods. Therefore, this study presents an optimized hybrid model with an attention mechanism (FFT_CLA) for EEG emotion recognition. First, the EEG signal is preprocessed via the fast Fourier transform (FFT), after which the convolutional neural network (CNN), long short-term memory (LSTM), and CNN-LSTM-attention (CLA) methods are used to extract and classify the EEG features. Finally, the experiments compare and analyze three models on the DEAP dataset, namely FFT_CNN, FFT_LSTM, and FFT_CLA, whose recognition rates are 87.39%, 88.30%, and 92.38%, respectively. The FFT_CLA model improves the accuracy of EEG emotion recognition and uses the attention mechanism to address the often-ignored differences in importance among channels and samples when extracting EEG features.
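The two distinctive steps of the FFT_CLA pipeline, FFT-based preprocessing and attention over channels, can be sketched as below. This is a hedged illustration: the band boundaries, the `band_power` and `channel_attention` helpers, and the additive-attention scoring are assumptions, not the paper's exact architecture, and the CNN-LSTM backbone between the two steps is omitted.

```python
import numpy as np

def band_power(eeg, fs, bands):
    """Per-channel mean power in each frequency band via the FFT.
    eeg: (n_channels, n_samples); returns (n_channels, n_bands)."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], 1.0 / fs)
    spec = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2     # power spectrum per channel
    return np.stack([spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                     for lo, hi in bands], axis=-1)

def channel_attention(feats, W, v):
    """Weight channels by learned relevance scores (softmax over channels)."""
    scores = np.tanh(feats @ W) @ v       # one additive-attention score per channel
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ feats, w                   # attended feature vector, channel weights
```

With conventional theta/alpha/beta/gamma bands such as `[(4, 8), (8, 13), (13, 30), (30, 45)]`, the attention weights make explicit which channels the classifier relies on, which is the often-ignored importance the abstract refers to.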
Funding: This work was supported by the National Natural Science Foundation of China (No. 61503423, H.P. Jiang). URL: http://www.nsfc.gov.cn/.
Funding: Supported by the National Natural Science Foundation of China (No. 61503423, H.P. Jiang). URL: http://www.nsfc.gov.cn/.
Funding: This work was supported by the National Natural Science Foundation of China (No. 61503423, H.P. Jiang). URL: http://www.nsfc.gov.cn/.