Journal Articles
4 articles found
1. Learning to optimize by multi-gradient for multi-objective optimization
Authors: Linxi Yang, Xinmin Yang, Liping Tang. Science China Mathematics, 2026, Issue 2, pp. 539-570 (32 pages)
The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a compelling reevaluation of the design of multi-objective optimization (MOO) methods. New-generation MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems, and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method, and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. Experimental results demonstrate that our learned optimizer outperforms hand-designed competitors on training multi-task learning neural networks.
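For context, the hand-designed rule that learned methods like ML2O aim to replace is the classic MGDA min-norm direction, which for two objectives has a closed form. The sketch below is that textbook baseline, not the paper's learned generator (which substitutes a trained network for this formula):

```python
import numpy as np

def mgda_direction(g1, g2):
    """Min-norm descent direction for two objectives (hand-designed MGDA rule).

    Finds lam in [0, 1] minimizing ||lam*g1 + (1-lam)*g2||, then returns
    the negative of that convex combination as the common descent direction.
    """
    diff = g1 - g2
    denom = diff @ diff
    # Equal gradients: any convex combination works; take the midpoint.
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)
```

With g1 = (1, 0) and g2 = (0, 1) this yields (-0.5, -0.5), a direction decreasing both objectives at once; ML2O's learned generator plays exactly this role of mapping multiple gradients to one update direction.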
Keywords: multi-objective optimization; learning to optimize; stochastic gradient method; safeguard
2. A Learning to Optimize Approach to Accelerating Distributed Optimal Power Flow Solving
Authors: Huihuang Cai, Huan Long, Zhi Wu, Wei Gu, Jingtao Zhao. Journal of Modern Power Systems and Clean Energy, 2025, Issue 6, pp. 1884-1895 (12 pages)
As the scale of power systems continues to grow, a fast and accurate distributed optimal power flow solver becomes crucial for effective dispatch. This paper presents a learning to optimize (L2O) approach to accelerating distributed optimal power flow solving. The final convergence values of the global variables and Lagrange multipliers of the alternating direction method of multipliers (ADMM) are estimated and used as its warm-start solution. A long short-term memory-variational auto-encoder (LSTM-VAE) model is developed as the core for estimating the convergence values, and an LSTM-VAE-assisted ADMM is proposed. The LSTM generates low-dimensional representations of the global variables and Lagrange multipliers, while the decoder part of the VAE reconstructs the high-dimensional asymptotic convergence values. A novel loss function is designed in the form of a quadratic-sum penalty term to incorporate the constraint violations of the Lagrange multipliers. Additionally, a two-stage training-data generation strategy is proposed to efficiently generate substantial data within a limited amount of time. The effectiveness of the LSTM-VAE-assisted ADMM is evaluated on the modified IEEE 123-bus system, a synthetic 500-bus system, and a 793-bus system.
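The warm-start idea can be made concrete with a toy consensus problem. The sketch below is a plain scalar consensus ADMM where the warm-start values for the global variable and multipliers are ordinary arguments; in the paper they would come from the LSTM-VAE's prediction (the model itself is not reproduced here):

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=50, z0=0.0, y0=None):
    """Consensus ADMM for min_x sum_i (x - a_i)^2 with local copies x_i = z.

    z0, y0 warm-start the global variable and the multipliers; in the paper
    an LSTM-VAE predicts these values, here they are plain arguments.
    """
    a = np.asarray(a, dtype=float)
    z = float(z0)
    y = np.zeros_like(a) if y0 is None else np.asarray(y0, dtype=float)
    for _ in range(iters):
        # x-update: argmin (x_i - a_i)^2 + y_i*(x_i - z) + (rho/2)*(x_i - z)^2
        x = (2.0 * a - y + rho * z) / (2.0 + rho)
        # z-update: average over agents
        z = float(np.mean(x + y / rho))
        # dual ascent on the consensus constraint x_i = z
        y = y + rho * (x - z)
    return z
```

For a = [1, 2, 3] the minimizer is the mean, 2; a warm start at z0 = 2.0 makes the very first z-update land on the optimum, which is exactly the speedup the LSTM-VAE warm start targets.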
Keywords: optimal power flow; learning to optimize; alternating direction method of multipliers (ADMM); long short-term memory; variational auto-encoder
3. Learning to optimize: A tutorial for continuous and mixed-integer optimization (Cited by: 1)
Authors: Xiaohan Chen, Jialin Liu, Wotao Yin. Science China Mathematics (SCIE, CSCD), 2024, Issue 6, pp. 1191-1262 (72 pages)
Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
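One technique named in this entry's keywords, algorithm unrolling, treats each iteration of a classical solver as a network layer with learnable parameters. A minimal sketch, assuming the simplest case of gradient descent on least squares with one step size per layer (the step sizes would be trained end-to-end; here they are just inputs):

```python
import numpy as np

def unrolled_gd(A, b, step_sizes):
    """K unrolled gradient-descent steps on ||Ax - b||^2, one layer per step.

    In L2O-style unrolling, step_sizes are trainable parameters learned
    end-to-end; here they are supplied by the caller, so this shows only
    the structure of the unrolled network, not a trained optimizer.
    """
    x = np.zeros(A.shape[1])
    for t in step_sizes:                 # each iteration acts as one "layer"
        grad = 2.0 * A.T @ (A @ x - b)
        x = x - t * grad
    return x
```

With A = I the step becomes x + 2t(b - x), so a single layer with t = 0.5 recovers b exactly; training would instead pick step sizes adapted to the distribution of (A, b) instances.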
Keywords: AI for mathematics (AI4Math); learning to optimize; algorithm unrolling; plug-and-play methods; differentiable programming; machine learning for combinatorial optimization (ML4CO)
4. Gradient-based algorithms for multi-objective bi-level optimization (Cited by: 1)
Authors: Xinmin Yang, Wei Yao, Haian Yin, Shangzhi Zeng, Jin Zhang. Science China Mathematics (SCIE, CSCD), 2024, Issue 6, pp. 1419-1438 (20 pages)
Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of the gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.
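To make the nested structure concrete: in the single-objective bilevel case, the gradient of the upper-level objective through the lower-level solution follows from the implicit function theorem. The sketch below is that standard bilevel calculus, not the gMOBA algorithm itself (which must additionally handle multiple upper-level objectives):

```python
import numpy as np

def bilevel_gradient(Fx, Fy, gyy, gyx):
    """Gradient of F(x, y*(x)), where y*(x) solves the lower-level problem
    min_y g(x, y), via the implicit function theorem: dy*/dx = -gyy^{-1} gyx.

    Fx, Fy: partial gradients of the upper-level objective F at (x, y*(x)).
    gyy, gyx: second derivatives of the lower-level objective g at y*(x).
    """
    dydx = -np.linalg.solve(gyy, gyx)   # sensitivity of y* w.r.t. x
    return Fx + dydx.T @ Fy
```

For g(x, y) = (y - 2x)^2 / 2 we have y*(x) = 2x; with F(x, y) = (x^2 + y^2) / 2 the total derivative at x = 1 is x + 2·y*(x) = 5, which the function reproduces.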
Keywords: multi-objective bi-level optimization; convergence analysis; Pareto stationary; learning to optimize