Funding: Supported in part by the National Natural Science Fund for Outstanding Young Scholars of China (61922072), the National Natural Science Foundation of China (62176238, 61806179, 61876169, 61976237), the China Postdoctoral Science Foundation (2020M682347), the Training Program of Young Backbone Teachers in Colleges and Universities in Henan Province (2020GGJS006), and the Henan Provincial Young Talents Lifting Project (2021HYTP007).
Abstract: Constrained multi-objective optimization problems (CMOPs) involve both the optimization of objective functions and the satisfaction of constraints, which together challenge solvers. To solve CMOPs, constrained multi-objective evolutionary algorithms (CMOEAs) have been developed. However, most of them tend to converge to local areas due to the loss of diversity. Evolutionary multitasking (EMT) is a new paradigm for solving complex optimization problems through knowledge transfer between a source task and other related tasks. Inspired by EMT, this paper develops a new EMT-based CMOEA to solve CMOPs, in which a main task, a global auxiliary task, and a local auxiliary task are created, each optimized by its own population. The main task focuses on finding the feasible Pareto front (PF), while the global and local auxiliary tasks enhance global and local diversity, respectively. The global auxiliary task performs a global search that ignores constraints, helping the main population pass through infeasible obstacles; the local auxiliary task provides local diversity around the main population so as to exploit promising regions. Through knowledge transfer among the three tasks, the search ability of the main population is significantly improved. Experimental results on three benchmark test suites demonstrate the superior or competitive performance of the proposed CMOEA compared with other state-of-the-art CMOEAs.
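The three-task scheme described in the abstract can be sketched as follows. This is a minimal, heavily simplified illustration, not the authors' algorithm: a scalarized single objective stands in for the multi-objective machinery, a static penalty stands in for the paper's constraint handling, and all names and parameters (`emt_cmoea`, `step`, `total_violation`, the transfer sizes) are assumptions made for this sketch.

```python
import random

def total_violation(x, constraints):
    # Sum of positive parts of g_i(x) <= 0 constraints; zero means feasible.
    return sum(max(0.0, g(x)) for g in constraints)

def step(pop, score, sigma=0.1):
    # Toy (mu+mu) evolution step: Gaussian mutation plus truncation selection.
    kids = [[xi + random.gauss(0.0, sigma) for xi in random.choice(pop)]
            for _ in pop]
    return sorted(pop + kids, key=score)[:len(pop)]

def emt_cmoea(f, constraints, dim=2, pop_size=20, gens=100, penalty=1e3):
    rand = lambda: [random.uniform(-5.0, 5.0) for _ in range(dim)]
    main = [rand() for _ in range(pop_size)]   # main task: feasible optimum
    glob = [rand() for _ in range(pop_size)]   # global auxiliary: ignores constraints
    loc = [rand() for _ in range(pop_size)]    # local auxiliary: diversity near main

    main_score = lambda x: f(x) + penalty * total_violation(x, constraints)
    glob_score = f                             # unconstrained global search
    for _ in range(gens):
        main = step(main, main_score)
        glob = step(glob, glob_score)
        # The local task searches with a larger mutation step around main.
        loc = step(main + loc, main_score, sigma=0.5)[:pop_size]
        # Knowledge transfer: inject each task's best solutions into the others.
        main = sorted(main + glob[:2] + loc[:2], key=main_score)[:pop_size]
        glob = sorted(glob + main[:2], key=glob_score)[:pop_size]
    return min(main, key=main_score)
```

The transfer step mirrors the abstract's intent: solutions found by the unconstrained global task can carry the main population across infeasible obstacles, while the wider-mutation local task supplies diversity around the main population.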
Funding: Supported by the Program for Student Innovation Through Research and Training of Guizhou University under Grant No. 2023SRT071.
Abstract: The historical interaction sequences of users play a crucial role in training recommender systems that accurately predict user preferences. However, due to the arbitrariness of user behaviors, the noise present in these sequences makes predicting users' next actions challenging. To address this issue, our motivation stems from the observation that training noisy sequences and clean sequences (sequences without noise) with equal weights can degrade the model's performance. We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems. Specifically, we select subsets of users' original sequences and apply random replacements to generate artificially noised sequences. We then jointly train on these artificially noised sequences and the original sequences. Through effective reweighting, we incorporate the output of the noise recognition model into the recommender model. We evaluate our method on three datasets using a consistent base model. Experimental results demonstrate the effectiveness of the self-supervised auxiliary task in enhancing the base model's performance.
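The noise-injection and reweighting steps can be illustrated with a small sketch. The replacement ratio, the `1 - p` weighting rule, and all names (`make_noisy`, `reweighted_loss`) are illustrative assumptions; the abstract does not specify the actual noise recognition model or loss.

```python
import random

def make_noisy(seq, item_pool, replace_ratio=0.2, rng=None):
    """Return a copy of an interaction sequence with a random subset of
    positions replaced by random catalog items (an assumed reading of the
    'random replacement' step)."""
    rng = rng or random
    noisy = list(seq)
    k = max(1, int(len(seq) * replace_ratio))
    for pos in rng.sample(range(len(seq)), k):
        noisy[pos] = rng.choice(item_pool)
    return noisy

def reweighted_loss(per_seq_losses, noise_probs):
    """Down-weight sequences the noise recognition model flags as likely
    noisy (weight = 1 - predicted noise probability), then average."""
    weights = [1.0 - p for p in noise_probs]
    total = sum(w * l for w, l in zip(weights, per_seq_losses))
    return total / sum(weights)
```

In joint training, the artificially noised copies give the noise recognition model labeled positives, and its predicted noise probabilities then feed the weighted recommender loss above.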
Funding: Supported by the NSFC (Grant No. 61672281) and the Key Program of NSFC (No. 61732006).
Abstract: Multi-task learning (MTL) improves model performance by transferring and exploiting knowledge shared among tasks. Existing MTL work mainly focuses on the scenario where the label sets of multiple tasks (MTs) are identical, so that they can be used for learning across the tasks. However, the real world presents more general scenarios in which each task has only a small number of training samples and the tasks' label sets only partially overlap, or do not overlap at all. Learning such MTs is more challenging because less correlation information is available among the tasks. To this end, we propose a framework that learns these tasks by jointly leveraging both the abundant information from a learnt auxiliary big task, whose sufficiently many classes cover those of all the tasks, and the information shared among the partially overlapped tasks. In our implementation, which reuses the network architecture of the learnt auxiliary task to learn the individual tasks, the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task, accompanied by joint learning across the individual tasks. Extensive experimental results demonstrate that our proposed method is highly competitive with state-of-the-art methods.
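The label-guided pruning idea can be sketched as follows. A simple weight-magnitude criterion stands in for the paper's adaptive pruning rule, and the names (`prune_for_task`, `keep_ratio`) are assumptions for illustration only.

```python
def prune_for_task(hidden_to_output, task_labels, keep_ratio=0.5):
    """Select which hidden units of the auxiliary network to keep for one task.

    hidden_to_output: hidden_dim x num_classes weight matrix (nested lists)
                      from the big auxiliary network's last layer.
    task_labels:      indices of the classes this task actually uses.

    A hidden unit's relevance is the total weight magnitude it sends to the
    task's own output classes; the keep_ratio most relevant units survive.
    Returns a boolean keep-mask over hidden units.
    """
    relevance = [sum(abs(row[c]) for c in task_labels)
                 for row in hidden_to_output]
    k = max(1, int(len(relevance) * keep_ratio))
    keep = set(sorted(range(len(relevance)), key=relevance.__getitem__)[-k:])
    return [i in keep for i in range(len(relevance))]
```

Each task thus inherits a pruned subnetwork of the shared auxiliary model, and the per-task subnetworks are then trained jointly, as the abstract describes.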