Funding: Supported by the National Natural Science Foundation of China (Nos. 81222021, 61172008, 81171423, and 81127003), the National Key Technology R&D Program of the Ministry of Science and Technology of China (No. 2012BAI34B02), and the Program for New Century Excellent Talents in University of the Ministry of Education of China (No. NCET-10-0618).
Abstract: Electroencephalographic (EEG)-based emotion recognition has recently received increasing attention in the field of human-computer interaction (HCI). However, a number of challenges remain in building a generalized emotion recognition model, one of which is the difficulty an EEG-based emotion classifier trained on a specific task has in handling other tasks. Little attention has been paid to this issue. The current study examines the feasibility of addressing this challenge using feature selection. Twelve healthy volunteers were emotionally elicited while performing picture-induced and video-induced tasks. First, a support vector machine (SVM) classifier was examined under within-task conditions (trained and tested on the same task) and cross-task conditions (trained on one task and tested on the other) for the picture-induced and video-induced tasks. Within-task classification performed fairly well (classification accuracy: 51.6% for the picture task and 94.4% for the video task). Cross-task classification, however, deteriorated to low levels (around 44%). When trained and tested with the most robust feature subset selected by SVM recursive feature elimination (RFE), the performance of the cross-task classifier improved significantly to above 68%. These results suggest that cross-task emotion recognition is feasible with proper methods and bring EEG-based emotion recognition models closer to being able to discriminate emotional states across tasks.
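For readers who want to experiment with the same general recipe, the sketch below shows one way to combine SVM-RFE feature selection with a cross-task train/test split using scikit-learn. All data shapes, variable names, and hyperparameters are illustrative assumptions; the study's actual EEG feature extraction and evaluation protocol are not reproduced here.

```python
# Minimal sketch of cross-task evaluation with SVM-RFE feature selection.
# Feature matrices and labels below are random placeholders standing in for
# real EEG features (e.g., band powers per channel) and emotion labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical feature matrices: rows = trials, columns = EEG features.
X_picture, y_picture = rng.normal(size=(120, 310)), rng.integers(0, 2, 120)
X_video, y_video = rng.normal(size=(120, 310)), rng.integers(0, 2, 120)

# Rank features with SVM-RFE on the training task (a linear kernel is needed
# so RFE can read per-feature weights), keeping a small robust subset.
scaler = StandardScaler().fit(X_picture)
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=30, step=10)
selector.fit(scaler.transform(X_picture), y_picture)

# Cross-task evaluation: train on the picture task, test on the video task,
# both restricted to the selected feature subset.
clf = SVC(kernel="linear", C=1.0)
clf.fit(selector.transform(scaler.transform(X_picture)), y_picture)
acc = clf.score(selector.transform(scaler.transform(X_video)), y_video)
print(f"cross-task accuracy on selected features: {acc:.3f}")
```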
Funding: Project supported by the National Natural Science Foundation of China (Nos. 62376186 and 61932009).
Abstract: Recently, large-scale pretrained models have demonstrated their benefits in various tasks. However, due to their enormous computational complexity and storage demands, it is challenging to apply large-scale models to real scenarios. Existing knowledge distillation methods mainly require the teacher model and the student model to share the same label space, which restricts their application in real scenarios. To alleviate the constraint of differing label spaces, we propose a prototype-guided cross-task knowledge distillation (ProC-KD) method that migrates the intrinsic local-level object knowledge of the teacher network to various task scenarios. First, to better learn generalized knowledge in cross-task scenarios, we present a prototype learning module that learns the invariant intrinsic local representations of objects from the teacher network. Second, for diverse downstream tasks, a task-adaptive feature augmentation module is proposed to enhance the student network's features with the learned generalized prototype representations and to guide the learning of the student network, improving its generalization ability. Experimental results on various visual tasks demonstrate the effectiveness of our approach in cross-task knowledge distillation scenarios.
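To make the two ideas named in the abstract more concrete, the following sketch shows a hypothetical prototype store with a running-mean update and a prototype-guided feature-augmentation step in PyTorch. The module names, dimensions, matching rule, and gated fusion are assumptions for illustration only; this is not the authors' ProC-KD implementation.

```python
# Hypothetical sketch of (1) maintaining class prototypes from teacher
# features via a running mean and (2) augmenting student features with the
# best-matching prototype. All design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeStore(nn.Module):
    """Keeps one running-mean prototype per teacher-side class."""
    def __init__(self, num_classes: int, dim: int, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("prototypes", torch.zeros(num_classes, dim))

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            self.prototypes[c] = (
                self.momentum * self.prototypes[c] + (1 - self.momentum) * mean_c
            )

class PrototypeGuidedAugment(nn.Module):
    """Mixes each student feature with its most similar prototype."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)    # adapts prototypes to the student space
        self.gate = nn.Linear(2 * dim, 1)  # learns how much prototype to inject

    def forward(self, student_feats: torch.Tensor, prototypes: torch.Tensor):
        # Cosine similarity between each student feature and every prototype.
        sims = F.normalize(student_feats, dim=-1) @ F.normalize(prototypes, dim=-1).T
        matched = self.proj(prototypes[sims.argmax(dim=-1)])
        alpha = torch.sigmoid(self.gate(torch.cat([student_feats, matched], dim=-1)))
        return student_feats + alpha * matched

# Toy usage with random tensors standing in for real teacher/student features.
store = PrototypeStore(num_classes=10, dim=256)
augment = PrototypeGuidedAugment(dim=256)
teacher_feats, labels = torch.randn(32, 256), torch.randint(0, 10, (32,))
store.update(teacher_feats, labels)
augmented = augment(torch.randn(32, 256), store.prototypes)
print(augmented.shape)  # torch.Size([32, 256])
```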