Learning to optimize by multi-gradient for multi-objective optimization

Abstract: The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a compelling reevaluation of the design of multi-objective optimization (MOO) methods. The new generation of MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. Experimental results demonstrate that our learned optimizer outperforms hand-designed competitors in training multi-task learning neural networks.
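The "hand-designed competitors" the abstract refers to are classical multi-gradient rules that combine the per-objective gradients into a single common descent direction; ML2O's contribution is to replace such a fixed rule with a learned mapping. As context, here is a minimal sketch of the standard two-objective min-norm (MGDA-style) rule, which has a closed form for two gradients. The function name and the example gradients are illustrative, not taken from the paper.

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Two-objective multi-gradient (MGDA-style) common descent direction:
    the negative of the minimum-norm point in the convex hull of the two
    gradients, via the closed-form solution for the mixing weight."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        lam = 0.5  # gradients coincide; any convex combination works
    else:
        # argmin_{lam in [0,1]} || lam*g1 + (1-lam)*g2 ||^2
        lam = float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

# Two conflicting objective gradients
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = min_norm_direction(g1, g2)
# d is a descent direction for both objectives:
# its inner product with each gradient is non-positive
print(d, d @ g1 <= 0.0, d @ g2 <= 0.0)  # → [-0.5 -0.5] True True
```

A learned optimizer in the spirit of ML2O would replace the fixed closed-form weight above with the output of a trained network fed with the current gradients (and, per the abstract, features of the historical iteration trajectory).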
Affiliation: School of Mathematics
Source: Science China Mathematics (中国科学·数学英文版), 2026, Issue 2, pp. 539-570 (32 pages)
Funding: supported by the Major Program of National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024); National Natural Science Foundation of China (Grant Nos. 11971084 and 12171060); National Natural Science Foundation of China and Hong Kong Research Grants Council Joint Research Program (Grant No. 12261160365); the Team Project of Innovation Leading Talent in Chongqing (Grant No. CQYC20210309536); the Natural Science Foundation of Chongqing of China (Grant No. CSTB2024NSCQLZX0140); the Major Project of Science and Technology Research Program of Chongqing Education Commission of China (Grant No. KJZD-M202300504); and the Foundation of Chongqing Normal University (Grant Nos. 22XLB005 and 22XLB006).
