Funding: supported by the National Natural Science Foundation of China (Nos. 81222021, 61172008, 81171423, and 81127003), the National Key Technology R&D Program of the Ministry of Science and Technology of China (No. 2012BAI34B02), and the Program for New Century Excellent Talents in University of the Ministry of Education of China (No. NCET-10-0618).
Abstract: Electroencephalography (EEG)-based emotion recognition has recently received increasing attention in the field of human-computer interaction (HCI); however, a number of challenges remain in building a generalized emotion recognition model, one of which is the difficulty an EEG-based emotion classifier trained on a specific task has in handling other tasks. Little attention has been paid to this issue. The current study determines the feasibility of coping with this challenge using feature selection. Twelve healthy volunteers were emotionally elicited while conducting picture-induced and video-induced tasks. First, a support vector machine (SVM) classifier was examined under within-task conditions (trained and tested on the same task) and cross-task conditions (trained on one task and tested on another) for the picture-induced and video-induced tasks. Within-task classification performed fairly well (classification accuracy: 51.6% for the picture task and 94.4% for the video task). Cross-task classification, however, deteriorated to low levels (around 44%). When trained and tested with the most robust feature subset selected by SVM recursive feature elimination (SVM-RFE), the performance of the cross-task classifier improved significantly, to above 68%. These results suggest that cross-task emotion recognition is feasible with proper methods and bring EEG-based emotion recognition models closer to discriminating emotional states across arbitrary tasks.
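The SVM-RFE selection step described in this abstract can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data, not the authors' code; the feature dimensionality, trial count, and class labels are placeholder assumptions standing in for the study's real EEG features.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
# Synthetic stand-in for EEG features: 60 trials x 32 features,
# with two emotion classes (the study's real features are task-derived).
X_train = rng.normal(size=(60, 32))
y_train = rng.integers(0, 2, size=60)

# Recursive feature elimination with a linear SVM: features are ranked
# by weight magnitude and the weakest is pruned each round until only
# the most robust subset remains.
selector = RFE(LinearSVC(max_iter=5000), n_features_to_select=8, step=1)
selector.fit(X_train, y_train)

robust_idx = np.flatnonzero(selector.support_)  # indices of retained features
X_reduced = X_train[:, robust_idx]              # reduced design matrix
print(robust_idx.shape, X_reduced.shape)
```

A cross-task classifier would then be trained and tested on `X_reduced`-style matrices built from the different elicitation tasks, using only the retained feature indices.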
Funding: Project supported by the National Natural Science Foundation of China (Nos. 62376186 and 61932009).
Abstract: Recently, large-scale pretrained models have revealed their benefits in various tasks. However, due to their enormous computational complexity and storage demands, it is challenging to apply large-scale models to real scenarios. Existing knowledge distillation methods mostly require the teacher model and the student model to share the same label space, which restricts their application in real scenarios. To alleviate the constraint of different label spaces, we propose a prototype-guided cross-task knowledge distillation (ProC-KD) method to migrate the intrinsic local-level object knowledge of the teacher network to various task scenarios. First, to better learn generalized knowledge in cross-task scenarios, we present a prototype learning module that learns the invariant intrinsic local representations of objects from the teacher network. Second, for diverse downstream tasks, a task-adaptive feature augmentation module is proposed to enhance the student network's features with the learned generalized prototype representations and to guide the learning of the student network, improving its generalization ability. Experimental results on various visual tasks demonstrate the effectiveness of our approach in cross-task knowledge distillation scenarios.
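The core idea of the two modules can be sketched in a toy form: class-wise prototypes computed from teacher features, then a student feature enhanced by attending over those prototypes. The shapes, the mean-based prototype, and the softmax-attention blend below are simplified illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Teacher features for labelled local regions: 100 regions x 64 dims,
# with class ids in [0, 5) (placeholder sizes for illustration).
teacher_feats = rng.normal(size=(100, 64))
labels = rng.integers(0, 5, size=100)

# Prototype learning module (simplified): one prototype per teacher
# class, here just the mean of that class's local features.
prototypes = np.stack(
    [teacher_feats[labels == c].mean(axis=0) for c in range(5)]
)

def augment(student_feat, prototypes):
    """Task-adaptive feature augmentation (simplified): attend over the
    prototypes with the student feature as query, then add the
    prototype-guided context back as a residual enhancement."""
    scores = prototypes @ student_feat        # similarity to each prototype
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention weights
    context = weights @ prototypes            # weighted prototype context
    return student_feat + 0.5 * context       # residual blend

student_feat = rng.normal(size=64)
enhanced = augment(student_feat, prototypes)
print(prototypes.shape, enhanced.shape)
```

Because the prototypes encode class-agnostic local structure rather than teacher logits, the student's label space never needs to match the teacher's, which is the point of the cross-task setting.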
Abstract: Task allocation for multiple unmanned aerial vehicles (UAVs) in large-scale mission scenarios remains a challenging problem. Traditional heuristic algorithms can obtain satisfactory solutions at low computational complexity, but they converge slowly and rarely reach the global optimum. To address this, a genetic algorithm based on UAV chains, task chains, and a two-stage repair strategy (UTTSRGA) is proposed. UAV chains and task chains are designed into the encoding structure to quantify task execution cost, enriching the information carried by the encoding and significantly improving search efficiency. To handle the missing and duplicated tasks that arise after crossover, a two-stage repair strategy is designed: the first stage uses a random-fill mechanism to strengthen global search of the solution space, and the second stage uses an adjacency-mapping-table repair mechanism that exploits the adjacency relations between tasks to provide an evolutionary direction, effectively guiding the population to converge quickly toward the current best solution. A dynamic composite mutation strategy is further proposed, combining an adaptive mutation rate with task-chain-value-based mutation point selection, together with four functionally complementary mutation operators that jointly optimize solution quality along multiple dimensions. For the path-crossing problem in large-scale scenarios, a path optimization strategy is introduced to further refine the task allocation scheme from a practical standpoint. Experimental results show that UTTSRGA exhibits significant advantages in solution quality, convergence speed, and robustness across different task scales, especially in large-scale complex mission scenarios.
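The two-stage repair can be illustrated on a task permutation damaged by crossover. The random-fill and adjacency-map mechanisms below are a simplified stdlib sketch under an assumed chain encoding (a flat task permutation with a task-to-preferred-successor map), not the paper's implementation.

```python
import random

def repair(child, all_tasks, adjacency, rng):
    """Repair a child task chain after crossover.

    Stage 1: replace duplicated tasks with randomly chosen missing ones,
    preserving global exploration of the solution space.
    Stage 2: greedily reorder using an adjacency map (task -> preferred
    successor) to steer the chain toward good task orderings.
    """
    seen = set()
    missing = [t for t in all_tasks if t not in child]
    rng.shuffle(missing)                    # stage 1: random fill order
    fixed = []
    for t in child:
        if t in seen:                       # duplicate left by crossover
            t = missing.pop()               # replace with a missing task
        seen.add(t)
        fixed.append(t)
    # Stage 2: pull each task's preferred successor next to it.
    for i in range(len(fixed) - 1):
        want = adjacency.get(fixed[i])
        if want is not None and want in fixed[i + 1:]:
            j = fixed.index(want, i + 1)
            fixed[i + 1], fixed[j] = fixed[j], fixed[i + 1]
    return fixed

rng = random.Random(0)
tasks = list(range(8))
child = [0, 1, 1, 3, 4, 4, 6, 0]   # duplicates 0, 1, 4; missing 2, 5, 7
adjacency = {1: 2, 4: 5}           # hypothetical successor preferences
repaired = repair(child, tasks, adjacency, rng)
print(repaired)
```

After repair, every task appears exactly once, so the chain is again a valid allocation that subsequent mutation operators can act on.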
Funding: supported by the National Natural Science Foundation of China under Grant 62373009.
Abstract: Indoor scene semantic segmentation is essential for enabling robots to understand and interact with their environments effectively. However, numerous challenges remain unresolved, particularly in single-robot systems, which often struggle with the complexity and variability of indoor scenes. To address these limitations, we introduce a novel multi-robot collaborative framework based on multiplex interactive learning (MPIL), in which each robot specialises in a distinct visual task within a unified multitask architecture. During training, the framework employs task-specific decoders and cross-task feature sharing to enhance collaborative optimisation. At inference time, robots operate independently with optimised models, enabling scalable, asynchronous and efficient deployment in real-world scenarios. Specifically, MPIL employs specially designed modules that integrate RGB and depth data, refine feature representations and facilitate the simultaneous execution of multiple tasks, such as instance segmentation, scene classification and semantic segmentation. By leveraging these modules, distinct agents within multi-robot systems can effectively handle specialised tasks, thereby enhancing the overall system's flexibility and adaptability. This collaborative effort maximises the strengths of each robot, resulting in a more comprehensive understanding of environments. Extensive experiments on two public benchmark datasets demonstrate MPIL's competitive performance compared with state-of-the-art approaches, highlighting the effectiveness and robustness of our multi-robot system in complex indoor environments.
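The shared-encoder/task-specific-decoder pattern underlying MPIL can be sketched in miniature. The tiny linear encoder, the per-task linear heads, and all dimensions below are illustrative placeholders for the framework's real RGB-D fusion and decoder modules.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fused RGB-D input for one image region, flattened to a 16-dim vector
# for this toy example (real inputs are full RGB and depth maps).
rgbd = rng.normal(size=16)

# Shared encoder: one projection every robot's model reuses; during
# training, all task heads feed gradients back into it.
W_enc = rng.normal(size=(32, 16)) * 0.1
shared = np.maximum(W_enc @ rgbd, 0.0)   # ReLU feature all tasks consume

# Task-specific decoders: each robot specialises in one head, but the
# heads share the encoder output, which is the cross-task feature sharing.
decoders = {
    "semantic_segmentation": rng.normal(size=(10, 32)) * 0.1,  # 10 classes
    "instance_segmentation": rng.normal(size=(6, 32)) * 0.1,   # 6 instances
    "scene_classification":  rng.normal(size=(4, 32)) * 0.1,   # 4 scene types
}

outputs = {task: W @ shared for task, W in decoders.items()}
for task, out in outputs.items():
    print(task, out.shape)
```

At deployment, each robot keeps the shared encoder plus only its own head, which is why inference can run independently and asynchronously per robot.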