In this paper, a class of augmented Lagrangians of Di Pillo and Grippo (DGALs) was considered for solving equality-constrained problems via unconstrained minimization techniques. The relationship was further discussed between the unconstrained minimizers of DGALs on the product space of problem variables and multipliers, and the solutions of the constrained problem together with the corresponding values of the Lagrange multipliers. The resulting properties indicate more precisely that this class of DGALs consists of exact multiplier penalty functions. Therefore, a solution of the equality-constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of a DGAL on the product space of problem variables and multipliers.
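For concreteness, a commonly cited form of a Di Pillo-Grippo-type exact augmented Lagrangian for the equality-constrained problem min f(x) s.t. h(x) = 0 is sketched below; the precise definition and scaling used in the paper may differ, so this should be read only as an illustration of the structure.

\[
S_c(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} h(x) \;+\; \frac{c}{2}\,\|h(x)\|^{2}
\;+\; \frac{\eta}{2}\,\big\|\nabla h(x)^{\top}\nabla_x L(x,\lambda)\big\|^{2},
\qquad L(x,\lambda) \;=\; f(x)+\lambda^{\top}h(x),
\]

where c > 0 is the penalty parameter and η > 0 weights the term penalizing the residual of the multiplier optimality condition; it is this last term that allows a single joint minimization over (x, λ) to deliver both a solution and its multipliers.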
Tensor robust principal component analysis (TRPCA) aims to separate a low-rank tensor and a sparse tensor from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications in computer vision and pattern recognition. In this paper, we propose a new model to deal with the TRPCA problem by an alternating minimization algorithm along with two adaptive rank-adjusting strategies. For the underlying low-rank tensor, we simultaneously perform low-rank matrix factorizations of its matricizations along all modes; for the underlying sparse tensor, a soft-threshold shrinkage scheme is applied. Our method can be used to handle the separation between either an exact or an approximate low-rank tensor and a sparse one. We establish the subsequence convergence of our algorithm in the sense that any limit point of the iterates satisfies the KKT conditions. When the iteration stops, the output is modified by applying a high-order SVD to obtain an exactly low-rank final result, as the accurate rank has been computed. Numerical experiments demonstrate that our method achieves better results than the compared methods.
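As an illustration of the separation task described above, the generic TRPCA decomposition and the soft-threshold shrinkage operator mentioned in the abstract can be sketched as follows; the paper's factorization-based model parameterizes the low-rank part differently, so this is only the standard template.

\[
\min_{\mathcal{L},\,\mathcal{S}} \; r(\mathcal{L}) \;+\; \lambda\,\|\mathcal{S}\|_{1}
\quad \text{s.t.} \quad \mathcal{L}+\mathcal{S}=\mathcal{X},
\qquad
\big(\mathrm{shrink}_{\tau}(\mathcal{S})\big)_{i_1\cdots i_N}
=\operatorname{sign}(s_{i_1\cdots i_N})\,\max\big(|s_{i_1\cdots i_N}|-\tau,\,0\big),
\]

where \(\mathcal{X}\) is the observed tensor, \(r(\cdot)\) is a low-rank-promoting surrogate (here realized through low-rank factorizations of all mode matricizations), and the shrinkage operator is the update applied elementwise to the sparse part.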
An exact augmented Lagrangian function for nonlinear nonconvex programming problems with inequality constraints was discussed. Under suitable hypotheses, the relationship was established between the local unconstrained minimizers of the augmented Lagrangian function on the space of problem variables and the local minimizers of the original constrained problem. Furthermore, under some assumptions, the relationship was also established between the global solutions of the augmented Lagrangian function on some compact subset of the space of problem variables and the global solutions of the constrained problem. Therefore, from the theoretical point of view, a solution of the inequality-constrained problem and the corresponding values of the Lagrange multipliers can be found by the well-known method of multipliers, which resorts to the unconstrained minimization of the augmented Lagrangian function presented.
This paper formulates a two-dimensional strip packing problem as a nonlinear programming (NLP) problem and establishes the first-order optimality conditions for the NLP problem. A numerical algorithm for solving this NLP problem is given to find exact solutions to strip-packing problems involving up to 10 items. Approximate solutions can be found for larger problems by decomposing the set of items into small blocks, with the proposed numerical algorithm applied to each block. Numerical results show that the approximate solutions to large problems obtained by this method are superior to those produced by the NFDH, FFDH, and BFDH approaches.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k^2), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained ℓ1-2 minimization problem are presented to demonstrate the effectiveness of AALM.
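For reference, the classical ALM iteration for the linearly constrained problem min f(x) s.t. Ax = b, whose O(1/k) rate the paper improves upon, reads as follows; the accelerated scheme presumably augments this with a Nesterov-type extrapolation of the iterates, and the exact update rule should be taken from the paper.

\[
x^{k+1} \;=\; \operatorname*{arg\,min}_{x}\; f(x)+\langle \lambda^{k},\,Ax-b\rangle+\frac{\beta}{2}\,\|Ax-b\|^{2},
\qquad
\lambda^{k+1} \;=\; \lambda^{k}+\beta\,(Ax^{k+1}-b),
\]

with penalty parameter β > 0 and multiplier sequence {λ^k}.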
In this paper, we consider online convex optimization (OCO) with time-varying loss and constraint functions. Specifically, the decision-maker chooses sequential decisions based only on past information; meanwhile, the loss and constraint functions are revealed over time. We first develop a class of model-based augmented Lagrangian methods (MALM) for time-varying functionally constrained OCO (without feedback delay). Under standard assumptions, we establish sublinear regret and sublinear constraint violation of MALM. Furthermore, we extend MALM to deal with time-varying functionally constrained OCO with delayed feedback, in which the feedback information of the loss and constraint functions is revealed to the decision-maker with delays. Without additional assumptions, we also establish sublinear regret and sublinear constraint violation for the delayed version of MALM. Finally, numerical results for several examples of constrained OCO, including online network resource allocation, online logistic regression, and online quadratically constrained quadratic programming, are presented to demonstrate the efficiency of the proposed algorithms.
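The two performance measures referred to above are usually defined along the following lines, with f_t and g_t denoting the loss and constraint functions revealed at round t; the paper's exact definitions (e.g., the choice of comparator) may differ slightly.

\[
\mathrm{Reg}(T)\;=\;\sum_{t=1}^{T} f_{t}(x_{t})\;-\;\min_{x\in\mathcal{X}}\sum_{t=1}^{T} f_{t}(x),
\qquad
\mathrm{Vio}(T)\;=\;\Big\|\Big[\sum_{t=1}^{T} g_{t}(x_{t})\Big]_{+}\Big\|,
\]

and "sublinear" means that both quantities grow as o(T), so the time-averaged regret and constraint violation vanish as T grows.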
In this paper, we analyze the convergence properties of a stochastic augmented Lagrangian method for solving stochastic convex programming problems with inequality constraints. Approximation models for the stochastic convex programming problems are constructed from stochastic observations of the real objective and constraint functions. Based on the relations between solutions of the primal problem and solutions of the dual problem, the convergence of the algorithm is proved from the perspective of the dual problem. Without assumptions on how these random models are generated, when the estimates are merely sufficiently accurate approximations of the real objective and constraint functions with sufficiently high, but fixed, probability, the method converges globally to the optimal solution almost surely. In addition, sufficiently accurate random models are given under different noise assumptions. We also report numerical results that show the good performance of the algorithm on different convex programming problems with several random models.
In this paper, an approximate augmented Lagrangian function for nonlinear semidefinite programs is introduced. Some basic properties of the approximate augmented Lagrangian function, such as monotonicity and convexity, are discussed. Necessary and sufficient conditions for approximate strong duality results are derived. Conditions for an approximate exact penalty representation in the framework of the augmented Lagrangian are given. Under certain conditions, it is shown that any limit point of a sequence of stationary points of approximate augmented Lagrangian problems is a KKT point of the original semidefinite program, and that a sequence of optimal solutions to the augmented Lagrangian problems converges to a solution of the original semidefinite program.
This paper is concerned with a novel deep learning method for variational problems with essential boundary conditions. To this end, we first reformulate the original problem into a minimax problem corresponding to a feasible augmented Lagrangian, which can be solved by the augmented Lagrangian method in an infinite-dimensional setting. Based on this, by expressing the primal and dual variables with two individual deep neural network functions, we present an augmented Lagrangian deep learning method for which the parameters are trained by the stochastic optimization method together with a projection technique. Compared to the traditional penalty method, the new method admits two main advantages: i) the choice of the penalty parameter is flexible and robust, and ii) the numerical solution is more accurate for the same magnitude of computational cost. As typical applications, we apply the new approach to solve elliptic problems and (nonlinear) eigenvalue problems with essential boundary conditions, and numerical experiments are presented to show the effectiveness of the new method.
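To make the reformulation concrete, a hedged sketch of the kind of minimax problem meant here is given below for a variational problem min J(u) subject to the essential boundary condition u = g on ∂Ω; the feasible augmented Lagrangian used in the paper may contain additional terms.

\[
\min_{u}\;\max_{\lambda}\;\;
\mathcal{L}_{\rho}(u,\lambda)\;=\;J(u)\;+\;\int_{\partial\Omega}\lambda\,(u-g)\,\mathrm{d}s\;+\;\frac{\rho}{2}\int_{\partial\Omega}(u-g)^{2}\,\mathrm{d}s,
\]

where both the primal variable u and the multiplier λ are represented by separate deep neural networks and the saddle point is approached by stochastic gradient descent-ascent with a projection step on the network parameters.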
Recently, dictionary learning (DL) based methods have been introduced to compressed sensing magnetic resonance imaging (CS-MRI), and they outperform pre-defined analytic sparse priors. However, a single-scale dictionary trained directly from image patches is incapable of representing image features from a multi-scale, multi-directional perspective, which affects the reconstruction performance. In this paper, incorporating the superior multi-scale properties of the uniform discrete curvelet transform (UDCT) with the data-matching adaptability of trained dictionaries, we propose a flexible sparsity framework that allows sparser representation and captures prominent hierarchical essential features of magnetic resonance (MR) images. Multi-scale decomposition is implemented using UDCT due to its prominent properties of low redundancy ratio, hierarchical data structure, and ease of implementation. Each sub-dictionary of the different sub-bands is trained independently to form the multi-scale dictionaries. Corresponding to this new sparsity model, we modify the constrained split augmented Lagrangian shrinkage algorithm (C-SALSA) into a patch-based C-SALSA (PB C-SALSA) to solve the constrained optimization problem of regularized image reconstruction. Experimental results demonstrate that the trained sub-dictionaries at different scales, enforcing sparsity at multiple scales, can be efficiently used for MRI reconstruction to obtain satisfactory results with a further reduced undersampling rate. Multi-scale UDCT dictionaries potentially outperform both single-scale trained dictionaries and multi-scale analytic transforms. The proposed sparsity model achieves a sparser representation of the reconstructed data, which results in fast convergence of the reconstruction using PB C-SALSA. Simulation results demonstrate that the proposed method outperforms conventional CS-MRI methods in maintaining intrinsic properties, eliminating aliasing, reducing unexpected artifacts, and removing noise. It achieves reconstruction performance comparable to state-of-the-art methods even under substantially high undersampling factors.
In this paper, an augmented Lagrangian Uzawa iterative method is developed and analyzed for solving a class of double saddle-point systems with a semidefinite (2,2) block. Convergence of the iterative method is proved under the assumption that the double saddle-point problem has a unique solution. An application of the iterative method to the double saddle-point systems arising from the distributed Lagrange multiplier/fictitious domain (DLM/FD) finite element method for solving elliptic interface problems is also presented, in which the existence and uniqueness of the solution to the double saddle-point system are guaranteed by the analysis of the DLM/FD finite element method. Numerical experiments are conducted to validate the theoretical results and to study the performance of the proposed iterative method.
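For orientation, the augmented Lagrangian Uzawa iteration for an ordinary (single) saddle-point system is sketched below; the double saddle-point case treated in the paper has a 3×3 block structure with a semidefinite (2,2) block, and its iteration extends this template.

\[
\begin{pmatrix} A & B^{\top}\\ B & 0 \end{pmatrix}
\begin{pmatrix} u\\ p \end{pmatrix}
=\begin{pmatrix} f\\ g \end{pmatrix}:
\qquad
\big(A+rB^{\top}B\big)\,u^{k+1}\;=\;f+rB^{\top}g-B^{\top}p^{k},
\qquad
p^{k+1}\;=\;p^{k}+\alpha\,\big(Bu^{k+1}-g\big),
\]

where r > 0 is the augmentation parameter and α > 0 is the Uzawa step size; the augmentation improves the conditioning of the primal solve and enlarges the admissible range of α.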
An augmented Lagrangian trust region method with a bi-object strategy is proposed for solving nonlinear equality constrained optimization, which falls in between penalty-type methods and penalty-free ones. At each iteration, a trial step is computed by minimizing a quadratic approximation model of the augmented Lagrangian function within a trust region. The model is a standard trust region subproblem for unconstrained optimization and hence can be solved efficiently by many existing methods. To choose the penalty parameter, an auxiliary trust region subproblem related to the constraint violation is introduced. It turns out that the penalty parameter need not be monotonically increasing and will not tend to infinity. A bi-object strategy, which is related to the objective function and the measure of constraint violation, is utilized to decide whether the trial step will be accepted or not. Global convergence of the method is established under mild assumptions. Numerical experiments are reported, which illustrate the efficiency of the algorithm in various difficult situations.
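The trial-step computation described above amounts to a standard trust region subproblem of roughly the following form, where L_A denotes the augmented Lagrangian; the precise model (for instance, the choice of the Hessian approximation B_k) is specified in the paper.

\[
\min_{d}\;\; q_{k}(d)\;=\;\nabla_{x} L_{A}(x_{k},\lambda_{k};\sigma_{k})^{\top} d\;+\;\tfrac{1}{2}\,d^{\top} B_{k}\, d
\quad \text{s.t.} \quad \|d\|\;\le\;\Delta_{k},
\]

with trust region radius Δ_k and penalty parameter σ_k; because this is an unconstrained-optimization trust region subproblem, standard solvers such as dogleg, Steihaug-CG, or Moré-Sorensen apply directly.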
In this paper, we provide a gentle introduction to recent advances in augmented Lagrangian methods for solving large-scale convex matrix optimization problems (cMOP). Specifically, we review two types of sufficient conditions for ensuring the quadratic growth conditions of a class of constrained convex matrix optimization problems regularized by nonsmooth spectral functions. Under a mild quadratic growth condition on the dual of cMOP, we further discuss the R-superlinear convergence of the Karush-Kuhn-Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex matrix optimization problems. Implementation details of the ALM for solving core convex matrix optimization problems are also provided.
The augmented Lagrangian function and the corresponding augmented Lagrangian method are constructed for solving a class of minimax optimization problems with equality constraints. We prove that, under the linear independence constraint qualification and the second-order sufficient optimality conditions for the lower-level problem and for the minimax problem, for a given multiplier vector μ, the rate of convergence of the augmented Lagrangian method is linear with respect to ||μ - μ*|| and the ratio constant is proportional to 1/c when the ratio ||μ - μ*||/c is small enough, where c is the penalty parameter that exceeds a threshold c_* > 0 and μ* is the multiplier corresponding to a local minimizer. Moreover, we prove that the sequence of multiplier vectors generated by the augmented Lagrangian method has at least Q-linear convergence if the sequence of penalty parameters (c_k) is bounded, and that the convergence rate is superlinear if (c_k) increases to infinity. Finally, we use a direct approach to establish the rate of convergence of the augmented Lagrangian method for the minimax problem with a quadratic objective function and linear equality constraints.
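In the standard multiplier-method setting that the abstract refers to, the multiplier update and the type of rate estimate being proved look roughly as follows; the constants and the exact minimax structure are those of the paper.

\[
\mu^{k+1}\;=\;\mu^{k}+c_{k}\,h\big(x(\mu^{k},c_{k})\big),
\qquad
\big\|\mu^{k+1}-\mu^{*}\big\|\;\le\;\frac{M}{c_{k}}\,\big\|\mu^{k}-\mu^{*}\big\|
\quad\text{for all } c_{k}\ge c_{*},
\]

where x(μ, c) denotes a local minimizer of the augmented Lagrangian for fixed (μ, c) and M > 0 is a constant; boundedness of (c_k) then gives Q-linear convergence of the multipliers, while c_k → ∞ gives superlinear convergence.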
One of the surface mining methods is open-pit mining, by which a pit is dug downwards from the earth's surface to extract ore or waste. In the mining industry, one of the most significant difficulties is long-term production scheduling (LTPS) of open-pit mines. Deterministic and uncertainty-based approaches are identified as the main strategies that have been widely used to cope with this problem. Within the last few years, many researchers have paid considerable attention to a less costly class of computational methods, i.e., meta-heuristic methods, to solve the mine design and production scheduling problem. Although the optimality of the final solution cannot be guaranteed, these methods are able to produce sufficiently good solutions with relatively low computational costs. In the present paper, two hybrid models, one combining augmented Lagrangian relaxation (ALR) with particle swarm optimization (PSO) and one combining ALR with the bat algorithm (BA), are suggested to solve the LTPS problem under grade uncertainty. The ALR method is applied to the LTPS problem to improve its performance and accelerate convergence. Moreover, the Lagrangian coefficients are updated by using PSO and BA. The presented models have been compared with the outcomes of the ALR-genetic algorithm, the ALR-traditional sub-gradient method, and the conventional method without the Lagrangian approach. The results indicate that ALR is a more efficient approach that can solve a large-scale problem and produce a valid solution; hence, it is more effective than the conventional method. Furthermore, the time and cost of computation are reduced by the proposed hybrid strategies. The CPU time of the ALR-BA method is about 7.4% higher than that of the ALR-PSO approach.
This paper aims to propose a topology optimization method for generating porous structures comprising multiple materials. The mathematical optimization formulation is established under constraints on the individual volume fraction of each constituent phase or the total mass, as well as the local volume fraction of all phases. The original optimization problem with numerous constraints is converted into a box-constrained optimization problem by incorporating all constraints into the augmented Lagrangian function, avoiding the parameter dependence of the conventional aggregation process. Furthermore, the local volume percentage can be precisely satisfied. The effects of the global mass bound, the influence radius, and the local volume percentage on the final designs are explored through numerical examples. The numerical results also reveal that porous structures strike a balance between the bulk design and the periodic design in terms of the resulting compliance. All results, including those for irregular structures and multiple volume fraction constraints, demonstrate that the proposed method can provide an efficient solution for multi-material infill structures.
Hydrothermal scheduling in the electric power market becomes difficult because of the introduction of competition and the variety of constraints to be considered. An augmented Lagrangian approach is adopted to solve the problem, which adds to the standard Lagrangian function a quadratic penalty term without changing its dual property and reduces the oscillation in the iterations. According to the theory of large-system coordination and decomposition, the problem is divided into a hydro sub-problem and a thermal sub-problem, which are coordinated by updating the Lagrangian multipliers, and the optimal solution is then obtained. Our results for a test system show that the augmented Lagrangian approach makes the problem converge to the optimal solution quickly.
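The augmented Lagrangian referred to above has the usual form obtained by adding a quadratic penalty on the coupling constraints to the standard Lagrangian; a schematic version is

\[
L_{c}(x,\lambda)\;=\;f(x)\;+\;\lambda^{\top} g(x)\;+\;\frac{c}{2}\,\|g(x)\|^{2},
\]

where f collects the hydro and thermal costs, g(x) = 0 represents the coupling constraints between the sub-problems, and c > 0 is the penalty weight; for fixed λ the problem decomposes into a hydro sub-problem and a thermal sub-problem, and λ is updated in an outer coordination loop. (Identifying g with the coupling constraints is an illustrative assumption; the paper specifies the exact constraint set.)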
In this work, we propose a second-order model for image denoising by employing a novel potential function recently developed in Zhu (J Sci Comput 88:46, 2021) for the design of a regularization term. Due to this new second-order derivative based regularizer, the model is able to alleviate the staircase effect and preserve image contrast. The augmented Lagrangian method (ALM) is utilized to minimize the associated functional, and convergence analysis is established for the proposed algorithm. Numerical experiments are presented to demonstrate the features of the proposed model.
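Schematically, the second-order model has the following structure, with f the noisy image, φ the new potential function of Zhu (2021) acting on second-order derivative information, and λ > 0 a fidelity weight; the exact argument of φ and the splitting used by the ALM are as specified in the paper.

\[
\min_{u}\;\;\int_{\Omega}\phi\big(\nabla^{2}u\big)\,\mathrm{d}x\;+\;\frac{\lambda}{2}\int_{\Omega}\big(u-f\big)^{2}\,\mathrm{d}x,
\]

and the ALM introduces an auxiliary variable for the second-order derivative together with a multiplier, alternating between subproblems for u, the auxiliary variable, and the multiplier.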
The manuscript presents an augmented Lagrangian fast projected gradient method (ALFPGM) with an improved working set selection scheme, pWSS, a decomposition-based algorithm for training support vector classification machines (SVM). The manuscript describes the ALFPGM algorithm, provides numerical results for training SVMs on large data sets, and compares the training times of ALFPGM and the Sequential Minimal Optimization (SMO) algorithm from the Scikit-learn library. The numerical results demonstrate that ALFPGM with the improved working set selection scheme is capable of training an SVM with tens of thousands of training examples in a fraction of the training time of some widely adopted SVM tools.
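The optimization problem being trained is the standard SVM dual, a box- and equality-constrained quadratic program; a hedged sketch is given below, where the augmented Lagrangian handles the single equality constraint while fast projected gradient steps handle the box constraints on the selected working set.

\[
\min_{\alpha\in\mathbb{R}^{n}}\;\;\tfrac{1}{2}\,\alpha^{\top} Q\,\alpha\;-\;\mathbf{1}^{\top}\alpha
\quad\text{s.t.}\quad y^{\top}\alpha=0,\;\; 0\le \alpha_{i}\le C,
\qquad
Q_{ij}=y_{i}y_{j}\,K(x_{i},x_{j}),
\]

with labels y_i ∈ {-1, +1}, kernel K, and regularization parameter C; pWSS selects the working set of dual variables updated at each decomposition step.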
In this paper, we investigate dual problems for nonconvex set-valued vector optimization via an abstract subdifferential. We first introduce a generalized augmented Lagrangian function induced by a coupling vector-valued function for the set-valued vector optimization problem and construct the related set-valued dual map and dual optimization problem on the basis of weak efficiency, using the concepts of the supremum and infimum of a set. We then establish weak and strong duality results under this augmented Lagrangian and present sufficient conditions for exact penalization via an abstract subdifferential of the objective map. Finally, we define the sub-optimal path related to the dual problem and show that every cluster point of this sub-optimal path is a primal optimal solution of the original optimization problem. In addition, we consider a generalized vector variational inequality as an application of the abstract subdifferential.