Funding: Supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024); the National Natural Science Foundation of China (Grant Nos. 11971084 and 12171060); the National Natural Science Foundation of China and Hong Kong Research Grants Council Joint Research Program (Grant No. 12261160365); the Team Project of Innovation Leading Talent in Chongqing (Grant No. CQYC20210309536); the Natural Science Foundation of Chongqing, China (Grant No. CSTB2024NSCQLZX0140); the Major Project of the Science and Technology Research Program of the Chongqing Education Commission of China (Grant No. KJZD-M202300504); and the Foundation of Chongqing Normal University (Grant Nos. 22XLB005 and 22XLB006).
Abstract: The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a compelling reevaluation of the design of multi-objective optimization (MOO) methods. The new generation of MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. Experimental results demonstrate that our learned optimizer outperforms hand-designed competitors in training multi-task learning neural networks.
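The guarding idea in this abstract can be illustrated with a minimal two-objective sketch (the function names and the two-objective restriction are assumptions for illustration, not the authors' ML2O/GML2O implementation): a hypothetical learned update direction is accepted only when it is a descent direction for every objective; otherwise the step falls back to the classical min-norm multi-gradient (MGDA) direction, which is the kind of safeguard that yields convergence toward Pareto stationarity.

```python
import numpy as np

def mgda_direction_2obj(g1, g2):
    """Min-norm convex combination of two gradients (closed-form MGDA)."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients: plain steepest descent
        return -g1
    # alpha minimizing ||alpha*g1 + (1-alpha)*g2||^2, clipped to [0, 1]
    alpha = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return -(alpha * g1 + (1.0 - alpha) * g2)

def guarded_update(g1, g2, learned_dir):
    """Accept the learned direction only if it descends on both objectives;
    otherwise fall back to the guaranteed multi-gradient direction."""
    if learned_dir @ g1 < 0.0 and learned_dir @ g2 < 0.0:
        return learned_dir
    return mgda_direction_2obj(g1, g2)
```

With orthogonal gradients, a learned direction that ascends on either objective is rejected in favor of the min-norm direction, while a direction descending on both is kept unchanged.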
Funding: Supported in part by the Jiangsu Industry Outlook and Key Technology Research Project (No. BE2023093-2) and the Young Elite Scientists Sponsorship Program by CAST (No. 2023QNRC001).
Abstract: As the scale of power systems continues to grow, a fast and accurate distributed optimal power flow solver becomes crucial for effective power system dispatch. This paper presents a learning to optimize (L2O) approach to accelerating distributed optimal power flow solving. The final convergence values of the global variables and Lagrange multipliers of the alternating direction method of multipliers (ADMM) are estimated and used as its warm-start solution. A long short-term memory-variational autoencoder (LSTM-VAE) model is developed as the core estimator of these convergence values, and an LSTM-VAE-assisted ADMM is proposed. The LSTM generates low-dimensional representations of the global variables and Lagrange multipliers, while the decoder of the VAE reconstructs their high-dimensional asymptotic convergence values. A novel loss function is designed in the form of a quadratic-sum penalty term to incorporate the constraint violations of the Lagrange multipliers. Additionally, a two-stage training data generation strategy is proposed to efficiently generate substantial data within a limited amount of time. The effectiveness of the LSTM-VAE-assisted ADMM is evaluated on the modified IEEE 123-bus system, a synthetic 500-bus system, and a 793-bus system.
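The warm-starting mechanism described here can be sketched on a toy consensus problem (an illustrative sketch, not the paper's power-flow formulation; the predicted values below are supplied by hand rather than by an LSTM-VAE): initializing the global variable and the scaled multipliers near their asymptotic convergence values lets ADMM terminate in far fewer iterations than a cold start.

```python
import numpy as np

def consensus_admm(a, b, rho=1.0, z0=0.0, u0=None, tol=1e-8, max_iter=500):
    """Consensus ADMM for min_z sum_i 0.5*a_i*(z - b_i)^2, split into local
    variables x_i with the coupling constraint x_i = z (scaled-multiplier form)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    z = z0
    u = np.zeros_like(a) if u0 is None else np.asarray(u0, float)
    for k in range(1, max_iter + 1):
        x = (a * b + rho * (z - u)) / (a + rho)   # local x-minimizations
        z_new = np.mean(x + u)                    # global (consensus) update
        u = u + x - z_new                         # scaled dual update
        if abs(z_new - z) < tol and np.max(np.abs(x - z_new)) < tol:
            return z_new, k
        z = z_new
    return z, max_iter
```

For a = [1, 3], b = [0, 4] the optimum is z* = 3 with scaled multipliers (-3, 3); starting from those predicted values, the solver stops almost immediately, while a cold start needs tens of iterations.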
Abstract: Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, harnessing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, rapidly estimate solutions, or even reshape the optimization problem itself to make it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
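The "accelerate optimization algorithms" theme admits a minimal sketch (an illustrative toy under assumed names, not taken from the tutorial): the simplest learned optimizer tunes a single hyperparameter, the step size, by minimizing the loss reached after a fixed number of unrolled gradient descent steps, averaged over a training distribution of problem instances.

```python
import numpy as np

def unrolled_loss(step, a, x0, T=20):
    """Loss after T gradient descent steps on f(x) = 0.5*a*x^2 (gradient a*x)."""
    x = x0
    for _ in range(T):
        x = x - step * a * x
    return 0.5 * a * x * x

def learn_step_size(curvatures, x0=1.0, grid=np.linspace(0.01, 1.0, 100)):
    """Pick the step size minimizing the average unrolled loss over a
    training distribution of problem instances (here, curvatures a_i)."""
    meta = [np.mean([unrolled_loss(s, a, x0) for a in curvatures])
            for s in grid]
    return grid[int(np.argmin(meta))]
```

Grid search stands in for the gradient-based meta-training used in practice; the learned step trades off the instances and drives all of them to near-zero loss within the unrolled horizon.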
Funding: Supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024); the National Natural Science Foundation of China (Grant Nos. 12371305 and 12222106); the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022B1515020082); and the Shenzhen Science and Technology Program (Grant No. RCYX20200714114700072).
Abstract: Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems that are common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems such as meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate its theoretical validity by establishing convergence to the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of the gMOBA algorithm. Comparative numerical results are presented to illustrate the performance of L2O-gMOBA.
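The gradient-based bi-level idea can be sketched in the simplest single-objective setting (illustrative only; gMOBA itself handles multiple upper-level objectives and avoids solving subproblems to high accuracy): when the lower-level problem has a closed-form solution y*(x), the upper-level update follows the hypergradient obtained by the chain rule through y*(x).

```python
import numpy as np

def lower_solution(x, A):
    """Closed-form lower level: y*(x) = argmin_y 0.5*||y - A x||^2 = A x."""
    return A @ x

def hypergradient(x, A, grad_Fx, grad_Fy):
    """Chain rule through the lower level: since dy*/dx = A,
    dF/dx = partial_x F + A^T partial_y F, evaluated at (x, y*(x))."""
    y = lower_solution(x, A)
    return grad_Fx(x, y) + A.T @ grad_Fy(x, y)

# Upper level F(x, y) = 0.5*||y - c||^2; hypergradient descent drives y*(x) -> c.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
c = np.array([1.0, 2.0])
grad_Fx = lambda x, y: np.zeros_like(x)
grad_Fy = lambda x, y: y - c
x = np.zeros(2)
for _ in range(200):
    x = x - 0.2 * hypergradient(x, A, grad_Fx, grad_Fy)
```

Because the lower level is solved exactly here, no approximation subproblems arise; the practical difficulty that gMOBA targets is precisely the case where y*(x) is only available through inner iterations.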