This paper mainly focuses on the velocity-constrained consensus problem of discrete-time heterogeneous multi-agent systems with nonconvex constraints and arbitrarily switching topologies, where each agent has first-order or second-order dynamics. To solve this problem, a distributed algorithm is proposed based on a contraction operator. By employing the properties of stochastic matrices, it is shown that all agents' position states converge to a common point and the second-order agents' velocity states remain in their nonconvex constraint sets and converge to zero, as long as the joint communication topology contains a directed spanning tree. Finally, numerical simulation results are provided to verify the effectiveness of the proposed algorithm.
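The abstract does not spell out the contraction operator, but in this line of work it is typically a magnitude-scaling map: shrink the desired velocity toward the origin (assumed to lie in the constraint set) until it enters the nonconvex set, so only the magnitude changes. The sketch below is a minimal Python illustration under that assumption; the membership test, grid resolution, and example set are illustrative, not taken from the paper.

```python
import numpy as np

def constrain(v, in_set, num_steps=100):
    """Contraction-type constraint operator: shrink v toward the origin
    (assumed to belong to the constraint set) until membership holds, so the
    direction of v is preserved and only its magnitude is reduced."""
    for k in range(num_steps, -1, -1):
        candidate = (k / num_steps) * v
        if in_set(candidate):
            return candidate
    return np.zeros_like(v)

# Illustrative nonconvex velocity set: a disk of radius 2 with an open
# annular band removed (the origin is still inside the set).
in_set = lambda v: np.linalg.norm(v) <= 2.0 and not (0.5 < np.linalg.norm(v) < 1.0)

desired_velocity = np.array([3.0, 0.0])       # violates the constraint
print(constrain(desired_velocity, in_set))    # scaled back to about [1.98, 0]
```

Because only the magnitude is reduced, the operator never reverses the direction of the intended velocity update, which is the property such consensus analyses typically rely on.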
This paper addresses the distributed nonconvex optimization problem where both the global cost function and the local inequality constraint functions are nonconvex. To tackle this issue, the p-power transformation and penalty function techniques are introduced to reformulate the nonconvex optimization problem. This ensures that the Hessian matrix of the augmented Lagrangian function becomes locally positive definite when appropriate control parameters are chosen. A multi-timescale primal-dual method is then devised based on the Karush-Kuhn-Tucker (KKT) points of the reformulated nonconvex problem to attain convergence. Lyapunov theory guarantees the method's stability over an undirected and connected communication network. Finally, two nonconvex optimization problems are presented to demonstrate the efficacy of the developed method.
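As a rough single-agent illustration of the primal-dual idea (not the paper's p-power transformation, its distributed structure, or its exact timescale separation), the sketch below runs gradient descent on an augmented Lagrangian for one nonconvex cost with one inequality constraint, with a faster primal step and a slower multiplier update; the toy cost, constraint, and step sizes are assumptions.

```python
import numpy as np

f  = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2            # nonconvex cost
df = lambda x: np.array([4.0*x[0]*(x[0]**2 - 1.0), 2.0*x[1]])
g  = lambda x: 1.0 - x[0] - x[1]                       # inequality constraint g(x) <= 0
dg = lambda x: np.array([-1.0, -1.0])

rho = 10.0                         # penalty weight in the augmented Lagrangian
eta_x, eta_lam = 1e-2, 5e-2        # fast primal step, slower dual step
x, lam = np.array([-0.5, 0.5]), 0.0

for k in range(5000):
    mult = max(0.0, lam + rho * g(x))          # shifted multiplier for the inequality
    x = x - eta_x * (df(x) + mult * dg(x))     # primal descent on the augmented Lagrangian
    lam = max(0.0, lam + eta_lam * g(x))       # projected dual ascent

print(x, lam)   # x settles near the constrained minimizer (1, 0)
```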
A cautious projection BFGS method is proposed for solving nonconvex unconstrained optimization problems. The global convergence of this method, as well as a stronger general convergence result, can be proven without a gradient Lipschitz continuity assumption, which is more in line with practical problems than the existing modified BFGS methods and the traditional BFGS method. Under some additional conditions, the presented method has a superlinear convergence rate, so it can be regarded as an extension and supplement of BFGS-type methods with the projection technique. Finally, the effectiveness and application prospects of the proposed method are verified by numerical experiments.
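The paper's projection step is not reproduced here, but the "cautious" ingredient is standard: the BFGS pair (s, y) is used only when a curvature test y's >= eps*||s||^2 holds, which keeps the inverse-Hessian approximation positive definite on nonconvex problems. A minimal sketch under that assumption, with an Armijo backtracking line search and the Rosenbrock function as an illustrative test:

```python
import numpy as np

def cautious_bfgs_update(H, s, y, eps=1e-6):
    """Apply the BFGS update to the inverse-Hessian approximation H only when
    the cautious curvature condition holds; otherwise keep H unchanged."""
    sy = float(s @ y)
    if sy < eps * float(s @ s):
        return H                              # skip: curvature information too weak
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def armijo(f, x, g, d, alpha=1.0, beta=0.5, c=1e-4):
    while f(x + alpha * d) > f(x) + c * alpha * float(g @ d) and alpha > 1e-12:
        alpha *= beta
    return alpha

def cautious_bfgs(f, grad, x0, iters=100):
    x = np.asarray(x0, float)
    H, g = np.eye(len(x)), grad(x)
    for _ in range(iters):
        d = -H @ g                            # quasi-Newton direction
        a = armijo(f, x, g, d)
        x_new = x + a * d
        g_new = grad(x_new)
        H = cautious_bfgs_update(H, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2      # Rosenbrock
grad = lambda x: np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                           200.0*(x[1] - x[0]**2)])
print(cautious_bfgs(f, grad, [-1.2, 1.0]))    # approaches the minimizer (1, 1)
```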
Impulse noise removal is an important task in image restoration. In this paper, we introduce a general nonsmooth nonconvex model for recovering images degraded by blur and impulsive noise, which can easily incorporate prior information such as a box constraint or low rank. To deal with the nonconvex problem, we employ the proximal linearized minimization algorithm, and we use the alternating direction method of multipliers to solve the resulting subproblem. Furthermore, based on the assumption that the objective function satisfies the Kurdyka-Lojasiewicz property, we prove the global convergence of the proposed algorithm. Numerical experiments demonstrate that our method outperforms both the l1-TV and nonconvex TV models in terms of subjective and objective quality measurements.
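The core of proximal linearized minimization is easy to state: for an objective H(x) + g(x) with smooth H and nonsmooth (possibly nonconvex) g, linearize H at the current point, add a quadratic proximal term, and the step reduces to a proximal map of g. The sketch below shows that step on a toy sparse-recovery instance with an l0 penalty, whose proximal map is hard thresholding; the problem data and parameters are assumptions, and the ADMM inner solver the paper uses for its image subproblem is not reproduced.

```python
import numpy as np

def hard_threshold(v, tau):
    """Proximal map of the nonconvex penalty tau*||x||_0:
    zero every entry whose magnitude is at most sqrt(2*tau)."""
    out = v.copy()
    out[np.abs(v) <= np.sqrt(2.0 * tau)] = 0.0
    return out

def proximal_linearized_min(grad_H, prox_g, x0, L, iters=300):
    """min_x H(x) + g(x): linearize H at x_k, keep an (L/2)||x - x_k||^2 term,
    so each iteration is a single proximal (here hard-threshold) step."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - grad_H(x) / L, 1.0 / L)
    return x

# Toy instance: H(x) = 0.5*||A x - b||^2 with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 5.0
b = A @ x_true + 0.05 * rng.standard_normal(60)
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad H
lam = 5.0                                      # weight of the l0 penalty
x_hat = proximal_linearized_min(lambda x: A.T @ (A @ x - b),
                                lambda v, t: hard_threshold(v, lam * t),
                                np.zeros(100), L)
print("nonzeros:", np.count_nonzero(x_hat),
      "error:", round(float(np.linalg.norm(x_hat - x_true)), 3))
```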
In this paper, our focus lies on a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. The most classical algorithm for such problems, the alternating direction method of multipliers (ADMM), still requires, to the best of current knowledge, a gradient Lipschitz continuity assumption on the objective function to ensure overall convergence; however, many practical applications do not satisfy this smoothness condition. In this study, we establish the convergence of a variant Bregman ADMM for the problem with coupling terms, which circumvents the need for global Lipschitz continuity of the gradient. We show that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function satisfies the Kurdyka-Lojasiewicz inequality and certain assumptions hold. In addition, we analyze the convergence rate of the algorithm.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions using only local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters converges linearly to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
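To make the consensus-plus-local-gradient structure concrete, here is a deliberately simplified decentralized SGD sketch: each agent averages with its neighbors through a doubly stochastic mixing matrix and then takes a noisy local gradient step. It omits the paper's dual variables, and the toy losses are convex quadratics only so that the answer (the average of the local targets) is easy to check; all names and parameters are assumptions.

```python
import numpy as np

def decentralized_sgd(grads, W, d, T=500, lr=0.05, rng=None):
    """Simplified decentralized SGD: every agent first mixes its iterate with
    its neighbors' iterates through the doubly stochastic matrix W, then takes
    a local stochastic gradient step.  (The paper's algorithm additionally
    maintains dual variables; they are omitted here.)"""
    rng = rng or np.random.default_rng(0)
    n = len(grads)
    X = np.zeros((n, d))                        # one row of parameters per agent
    for t in range(T):
        X = W @ X                               # consensus (mixing) step
        for i in range(n):
            X[i] -= lr * grads[i](X[i], rng)    # local stochastic gradient step
    return X.mean(axis=0)

# Toy setup: agent i holds f_i(x) = 0.5*||x - c_i||^2 with additive gradient
# noise; the global minimizer of the sum is the average of the c_i.
n, d = 4, 3
C = np.arange(n * d, dtype=float).reshape(n, d)
grads = [lambda x, rng, c=C[i]: (x - c) + 0.1 * rng.standard_normal(d)
         for i in range(n)]
W = np.full((n, n), 1.0 / n)                    # uniform (fully connected) mixing
print(decentralized_sgd(grads, W, d), "target:", C.mean(axis=0))
```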
In this paper, we consider the convergence of the generalized alternating direction method of multipliers (GADMM) for solving a linearly constrained nonconvex minimization model whose objective contains coupled functions. Under the assumption that the augmented Lagrangian function satisfies the Kurdyka-Lojasiewicz inequality, we prove that the sequence generated by the GADMM converges to a critical point of the augmented Lagrangian function when the penalty parameter in the augmented Lagrangian function is sufficiently large. Moreover, we also present some sufficient conditions guaranteeing the sublinear and linear convergence rates of the algorithm.
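For readers unfamiliar with the iteration being analyzed, below is a minimal over-relaxed (generalized) ADMM on a convex separable toy problem where both subproblems have closed forms; the convex choice is only so that the limit (the midpoint of a and c) can be verified by hand, whereas the paper's analysis concerns nonconvex coupled objectives. The relaxation factor alpha and all data are illustrative assumptions.

```python
import numpy as np

# Over-relaxed ("generalized") ADMM for the separable toy problem
#   min_x,y  0.5*||x - a||^2 + 0.5*||y - c||^2   s.t.  x - y = 0,
# whose solution is x = y = (a + c)/2.
a, c = np.array([4.0, 0.0]), np.array([0.0, 2.0])
rho, alpha = 1.0, 1.5                 # penalty parameter and relaxation factor
x, y, u = np.zeros(2), np.zeros(2), np.zeros(2)   # u is the scaled dual variable

for k in range(100):
    x = (a + rho * (y - u)) / (1.0 + rho)        # x-subproblem (closed form)
    x_hat = alpha * x + (1.0 - alpha) * y        # over-relaxation of the x-block
    y = (c + rho * (x_hat + u)) / (1.0 + rho)    # y-subproblem (closed form)
    u = u + x_hat - y                            # scaled dual update

print(x, y)   # both approach (a + c)/2 = (2, 1)
```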
This paper studies the stability problems of a class of nonconvex differential inclusions. First, a basic stability result is obtained by virtue of locally Lipschitz continuous Lyapunov functions. Moreover, a generalized invariance principle and related attraction conditions are proposed and proved to overcome the technical difficulties due to the absence of convexity. In the technical analysis, a novel set-valued derivative is proposed to deal with nonsmooth systems and nonsmooth Lyapunov functions. Additionally, the obtained results are consistent with the existing ones in the case of convex differential inclusions with regular Lyapunov functions. Finally, illustrative examples are given to show the effectiveness of the methods.
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model that recovers the low-rank matrix from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved that this algorithm has a stronger theoretical guarantee than the existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, where the images recovered by the proposed method are less noisy than those of the two compared methods.
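The abstract only says the model is solved by minimizing over the variables alternately; a common concrete instance of that idea is the factorization M ≈ U Vᵀ with ridge-regularized alternating least squares, sketched below on a synthetic denoising problem. This is a generic illustration under assumed parameters, not the paper's specific nonconvex model.

```python
import numpy as np

def low_rank_denoise(Y, r, lam=0.5, iters=50, rng=None):
    """Alternating minimization for  min_{U,V} ||Y - U V^T||_F^2
    + lam*(||U||_F^2 + ||V||_F^2):  each factor update is a ridge
    least-squares problem with a closed-form solution."""
    rng = rng or np.random.default_rng(0)
    m, n = Y.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        U = Y @ V @ np.linalg.inv(V.T @ V + lam * np.eye(r))
        V = Y.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(r))
    return U @ V.T

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 50))   # rank-4 ground truth
Y = M + 0.3 * rng.standard_normal(M.shape)                        # noisy observation
M_hat = low_rank_denoise(Y, r=4)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M),   # recovery error ...
      np.linalg.norm(Y - M) / np.linalg.norm(M))       # ... vs. raw noise level
```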
In this article, we study the generalized Riemann problem for a scalar nonconvex Chapman-Jouguet combustion model in a neighborhood of the origin (t > 0) in the (x, t) plane. We focus our attention on the perturbation of the initial binding energy. The solutions are obtained constructively under the entropy conditions. It can be found that the solutions are essentially different from the corresponding Riemann solutions in some cases. In particular, two important phenomena are observed: the transition from detonation to deflagration followed by a shock, which appears in the numerical simulations [7, 27], and the transition from deflagration to detonation (DDT), which is one of the core problems in gas dynamic combustion.
In this paper, the simplest scalar nonconvex ZND combustion model with viscosity is considered. The existence of the global solution of the Riemann problem for this combustion model is obtained by using the fixed point theorem.
In recent years, utilizing low-rank prior information to reconstruct a signal from a small number of measurements has attracted much attention. In this paper, a generalized nonconvex low-rank (GNLR) algorithm for magnetic resonance imaging (MRI) reconstruction is proposed, which reconstructs the image from highly under-sampled k-space data. In the algorithm, a nonconvex surrogate function replacing the conventional nuclear norm is utilized to enhance the low-rank property inherent in the reconstructed image. An alternating direction method of multipliers (ADMM) is applied to solve the resulting nonconvex model. Extensive experimental results demonstrate that the proposed method can consistently recover MR images efficiently, and that it outperforms current state-of-the-art approaches in terms of higher peak signal-to-noise ratio (PSNR) and lower high-frequency error norm (HFEN) values.
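Inside such ADMM schemes, the low-rank subproblem typically reduces to shrinking singular values under the chosen nonconvex surrogate. The sketch below uses a log-sum-style weighting, lam/(sigma + eps), so small (noise-dominated) singular values are removed while large ones are barely biased, in contrast with the uniform shrinkage of the nuclear norm; the surrogate, weights and parameters are assumptions, and the k-space data-consistency step of the full algorithm is omitted.

```python
import numpy as np

def nonconvex_sv_shrink(X, lam, eps=1.0):
    """Generalized singular-value thresholding for a log-sum surrogate of the
    rank: each singular value is shrunk by lam/(sigma + eps), so small values
    are driven to zero while large ones are almost untouched."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam / (s + eps), 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 6)) @ rng.standard_normal((6, 40))   # rank-6 "image"
Y = M + 0.3 * rng.standard_normal(M.shape)                        # noisy observation
X_hat = nonconvex_sv_shrink(Y, lam=30.0)

print(np.round(np.linalg.svd(Y, compute_uv=False)[:8], 1))        # noisy spectrum
print(np.round(np.linalg.svd(X_hat, compute_uv=False)[:8], 1))    # cleaned spectrum
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M),              # recovery error ...
      np.linalg.norm(Y - M) / np.linalg.norm(M))                  # ... vs. noise level
```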
This paper addresses the distributed optimization problem of discrete-time multi-agent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution, while their control inputs remain constrained to their own nonconvex regions. It is worth noting that the mechanism is designed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on the dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the theoretical results.
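A schematic and much simplified version of such an iteration is "mix with neighbors, then apply a constrained gradient step as the control input", where the input is the gradient step shrunk until it lies in the agent's nonconvex input set. The scalar example below uses a fixed doubly stochastic mixing matrix instead of switching graphs and omits the paper's switching mechanism and dynamic transformation; it is only meant to show that shrinking the input preserves its direction, so the network can still drift to the team optimum. All sets and parameters are assumptions.

```python
import numpy as np

def constrain(u, in_set, steps=100):
    """Shrink u toward zero until it lies in the nonconvex input set."""
    for k in range(steps, -1, -1):
        cand = (k / steps) * u
        if in_set(cand):
            return cand
    return np.zeros_like(u)

# Each agent i minimizes f_i(x) = 0.5*(x - c_i)^2; the team objective is
# sum_i f_i, minimized at mean(c).  Control inputs must stay in a nonconvex
# set (here: magnitude <= 1 but not in the open band (0.3, 0.6)).
c = np.array([0.0, 2.0, 4.0, 10.0])
in_set = lambda u: abs(u) <= 1.0 and not (0.3 < abs(u) < 0.6)
A = np.full((4, 4), 0.25)                      # doubly stochastic mixing matrix
x = np.array([5.0, -3.0, 8.0, 0.0])

for k in range(1, 2001):
    alpha = 1.0 / k                            # diminishing step size
    grad = x - c
    u = np.array([constrain(-alpha * grad[i], in_set) for i in range(4)])
    x = A @ x + u                              # consensus + constrained input

print(x, "target:", c.mean())
```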
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
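The memoryless idea is that the quasi-Newton matrix is rebuilt at every iteration from only the most recent pair (s, y) applied to the identity, so no matrix has to be stored across iterations. The sketch below implements that plain memoryless BFGS direction with a standard Armijo backtracking; the Perry-Shanno self-scaling factor and the paper's new inexact line search are deliberately omitted, and the test function is an assumption.

```python
import numpy as np

def memoryless_qn_direction(g, s, y):
    """Memoryless BFGS direction: apply a single BFGS update, built from the
    latest step s = x_k - x_{k-1} and gradient change y = g_k - g_{k-1}, to the
    identity matrix and return d = -H g.  (The Perry-Shanno method additionally
    self-scales this matrix; that scaling is omitted here.)"""
    sy = float(s @ y)
    if sy <= 1e-12:                               # safeguard: fall back to steepest descent
        return -g
    rho = 1.0 / sy
    I = np.eye(len(g))
    H = (I - rho * np.outer(s, y)) @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    return -H @ g

def armijo(f, x, g, d, alpha=1.0, beta=0.5, c=1e-4):
    while f(x + alpha * d) > f(x) + c * alpha * float(g @ d) and alpha > 1e-12:
        alpha *= beta
    return alpha

f = lambda x: (x[0]**2 - 1.0)**2 + 0.5 * x[1]**2            # nonconvex test function
grad = lambda x: np.array([4.0*x[0]*(x[0]**2 - 1.0), x[1]])
x = np.array([2.0, 2.0]); g = grad(x); d = -g
for _ in range(50):
    a = armijo(f, x, g, d)
    x_new = x + a * d
    g_new = grad(x_new)
    d = memoryless_qn_direction(g_new, x_new - x, g_new - g)
    x, g = x_new, g_new
print(x)   # converges to one of the local minimizers (+/-1, 0)
```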
Recently there has been a considerable amount of work on the existence of T-periodic solutions for Hamiltonian systems with singular potentials (see [1]-[7], [10], [11], [13], [14]). In this paper we study the existence of T-periodic solutions for nonconservative second-order dynamical systems.
In this paper, we are mainly devoted to solving fixed point problems on more general nonconvex sets via an interior point homotopy method. Under suitable conditions, a constructive proof is given for the existence of fixed points, which leads to an implementable globally convergent algorithm.
Nonconvex penalties, including the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP), enjoy the properties of unbiasedness, continuity and sparsity, and ridge regression can deal with the collinearity problem. Combining the strengths of nonconvex penalties and ridge regression (abbreviated as NPR), we study the oracle property of the NPR estimator in high-dimensional settings with highly correlated predictors, where the dimensionality of the covariates p_n is allowed to increase exponentially with the sample size n. Simulation studies and a real data example are presented to verify the performance of the NPR method.
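The two penalties mentioned here are simple piecewise formulas, sketched below with their usual default shape parameters (a = 3.7 for SCAD, gamma = 3 for MCP, both assumptions); unlike the lasso penalty, both flatten out for large arguments, which is the source of the (near-)unbiasedness claimed above. The ridge combination used by the NPR estimator is not reproduced.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty (Fan & Li): behaves like the lasso near zero, then
    flattens so large coefficients are left (nearly) unbiased."""
    t = np.abs(np.asarray(t, float))
    small  = t <= lam
    middle = (t > lam) & (t <= a * lam)
    return np.where(small, lam * t,
           np.where(middle, (2*a*lam*t - t**2 - lam**2) / (2*(a - 1)),
                    lam**2 * (a + 1) / 2))

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty: tapers from linear to flat over [0, gamma*lam]."""
    t = np.abs(np.asarray(t, float))
    return np.where(t <= gamma * lam, lam*t - t**2/(2*gamma), gamma*lam**2/2)

t = np.linspace(0.0, 5.0, 6)
print(scad_penalty(t, lam=1.0))   # constant beyond a*lam = 3.7
print(mcp_penalty(t, lam=1.0))    # constant beyond gamma*lam = 3.0
```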
In this paper, we consider the so-called k-coloring problem in the general case. Firstly, a special quadratic 0-1 program is constructed to formulate the k-coloring problem. Secondly, by using the equivalence between this quadratic 0-1 program and its relaxed problem, the k-coloring problem is converted into a class of (continuous) nonconvex quadratic programs, and several theoretical results are also introduced. Thirdly, a linear programming approximation algorithm is presented and verified for this class of nonconvex quadratic programs. Finally, test problems used to examine the algorithm are constructed and sufficient computational experiments are reported.
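A standard quadratic 0-1 formulation of k-coloring (quite possibly the kind of construction meant here, though the paper's exact formulation is not given in the abstract) assigns a binary indicator x[v, c] to each vertex-color pair, forces one color per vertex, and minimizes the number of monochromatic edges, which is zero exactly when a proper k-coloring exists. The brute-force check below is only to expose the formulation on a tiny graph:

```python
import itertools
import numpy as np

def coloring_conflicts(x, edges):
    """Quadratic 0-1 objective: x[v, c] = 1 iff vertex v gets color c.
    The value counts monochromatic edges, so a proper coloring <=> value 0."""
    return sum(float(x[u] @ x[v]) for u, v in edges)

def brute_force_k_coloring(n, edges, k):
    """Enumerate assignments (tiny graphs only) to expose the formulation:
    minimize the quadratic objective subject to one color per vertex."""
    best_val, best_x = np.inf, None
    for colors in itertools.product(range(k), repeat=n):
        x = np.zeros((n, k))
        x[np.arange(n), colors] = 1.0          # the one-color-per-vertex constraints
        val = coloring_conflicts(x, edges)
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# A 5-cycle needs 3 colors: the minimum is positive for k = 2, zero for k = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(brute_force_k_coloring(5, edges, k=2)[0])   # 1.0  -> no proper 2-coloring
print(brute_force_k_coloring(5, edges, k=3)[0])   # 0.0  -> a proper 3-coloring exists
```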
The alternating direction method of multipliers (ADMM) and its symmetric version are efficient for minimizing two-block separable problems with linear constraints. However, both ADMM and symmetric ADMM have limited versatility across various fields due to the requirement that the gradients of the differentiable functions be globally Lipschitz continuous, a condition that is typically difficult to satisfy in nonconvex optimization problems. Recently, a novel Bregman ADMM was proposed for two-block nonconvex optimization problems with linear constraints; it not only eliminates the need for global Lipschitz continuity of the gradient, but also reduces to the classical ADMM as a special case. Building on this, we propose a symmetric Bregman alternating direction method of multipliers, which reduces to the symmetric ADMM and the Bregman ADMM, and thus further to the classical ADMM. Moreover, when solving separable nonconvex optimization problems, it does not require global Lipschitz continuity of the gradients of the differentiable functions. Furthermore, we demonstrate that under the Kurdyka-Lojasiewicz inequality and certain conditions, the iterative sequence generated by our algorithm converges to a critical point of the problem. In addition, we examine the convergence rate of the algorithm.
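To show what the "symmetric" structure refers to, the sketch below runs a symmetric ADMM with two dual updates (step sizes r and s) and adds a squared-Euclidean proximal term to each subproblem, the degenerate choice of Bregman kernel under which the scheme collapses back to a proximal symmetric ADMM. A strongly convex separable toy is used so both subproblems have closed forms and the limit can be checked by hand; the kernels, step sizes, and data are assumptions, not the paper's algorithm.

```python
import numpy as np

# Symmetric ADMM with (squared-Euclidean) Bregman proximal terms for
#   min_x,y  0.5*||x - a||^2 + 0.5*||y - c||^2   s.t.  x - y = 0.
a, c = np.array([4.0, 0.0]), np.array([0.0, 2.0])
rho, tau = 1.0, 0.5          # penalty parameter and Bregman (proximal) weight
r, s = 0.8, 0.8              # the two dual step sizes of the symmetric scheme
x, y, lam = np.zeros(2), np.zeros(2), np.zeros(2)

for k in range(200):
    x = (a - lam + rho * y + tau * x) / (1.0 + rho + tau)      # x-subproblem
    lam = lam + r * rho * (x - y)                              # first dual update
    y = (c + lam + rho * x + tau * y) / (1.0 + rho + tau)      # y-subproblem
    lam = lam + s * rho * (x - y)                              # second dual update

print(x, y, lam)   # x = y -> (a + c)/2 = (2, 1),  lam -> (a - c)/2 = (2, -1)
```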