Funding: Project (No. 50806046) supported by the National Natural Science Foundation of China.
Abstract: We propose the flat-type permanent magnet linear alternator (LA) for free-piston linear alternators (FPLAs) instead of the tubular one. Using the finite element method (FEM), we compare these two kinds of LA. The FEM results show that the flat-type permanent magnet LA has higher efficiency and larger specific output power than the tubular one, and is therefore more suitable for FPLAs, and that the alternator design can be optimized with respect to the permanent magnet length as well as the air gap.
Abstract: Let K be a proper cone in R^n, let A be an n×n real matrix that satisfies AK ⊆ K, let b be a given vector of K, and let λ be a given positive real number. The following two linear equations are considered in this paper: (i) (λI_n − A)x = b, x ∈ K, and (ii) (A − λI_n)x = b, x ∈ K. We obtain several equivalent conditions for the solvability of the first equation.
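For intuition, here is a small numerical sketch (my illustration, not from the paper): take K to be the nonnegative orthant, a proper cone, so that any entrywise nonnegative A satisfies AK ⊆ K. When λ exceeds the spectral radius of A, the inverse (λI_n − A)^(−1) = Σ_k A^k / λ^(k+1) is entrywise nonnegative, so equation (i) is solvable in K for every b ∈ K.

```python
import numpy as np

# Illustration: K = R^2_+ (the nonnegative orthant), A entrywise
# nonnegative so that A K is contained in K.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # nonnegative, spectral radius 1
b = np.array([1.0, 1.0])     # b lies in K
lam = 2.0                    # lam > spectral radius of A

# Solve equation (i): (lam*I_n - A) x = b, and check that x stays in K.
x = np.linalg.solve(lam * np.eye(2) - A, b)
assert np.all(x >= 0)        # the solution lies in K, as the series argument predicts
```

For λ at or below the spectral radius this argument breaks down, which is why the paper's equivalent solvability conditions are nontrivial.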
Funding: Supported by the National Natural Science Foundation of China (Nos. 61303264, 61202482, and 61202488), the Guangxi Cooperative Innovation Center of Cloud Computing and Big Data (No. YD16505), and the Distinguished Young Scientist Promotion of the National University of Defense Technology.
Abstract: We consider a wide range of non-convex regularized minimization problems, where the non-convex regularization term is composed with a linear function, as arises in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts. The computational challenge lies in the fact that the proximal mapping associated with non-convex regularization is not easily obtained due to the imposed linear composition. Fortunately, the problem structure allows one to introduce an auxiliary variable and reformulate the problem as an optimization with linear constraints, which can be solved using the Linearized Alternating Direction Method of Multipliers (LADMM). Despite the success of LADMM in practice, it remains unknown whether LADMM is convergent when solving such non-convex compositely regularized problems. In this paper, we first present a detailed convergence analysis of the LADMM algorithm for solving a non-convex compositely regularized optimization problem with a large class of non-convex penalties. Furthermore, we propose an Adaptive LADMM (AdaLADMM) algorithm with a line-search criterion. Experimental results on different genres of datasets validate the efficacy of the proposed algorithm.
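The splitting described above can be sketched on a deliberately simple convex instance (the paper's setting allows non-convex penalties; the quadratic loss, the l1 penalty, and all parameter values here are illustrative assumptions): introduce z = Ax, then alternate a linearized x-step, an exact proximal z-step, and a dual update.

```python
import numpy as np

def soft(v, t):
    # proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladmm(A, c, reg, rho=1.0, iters=500):
    # Sketch of linearized ADMM for min_x 0.5||x - c||^2 + reg*||A x||_1.
    # Reformulate with an auxiliary variable z = A x:
    #   min 0.5||x - c||^2 + reg*||z||_1   s.t.   A x - z = 0.
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    mu = rho * np.linalg.norm(A, 2) ** 2   # linearization constant >= rho*||A||^2
    for _ in range(iters):
        # x-step: the augmented quadratic is linearized at the current x,
        # leaving a closed-form proximal step on 0.5||x - c||^2
        g = rho * A.T @ (A @ x - z + u)
        x = (c - g + mu * x) / (1.0 + mu)
        # z-step: exact prox of the separated l1 term
        z = soft(A @ x + u, reg / rho)
        # dual ascent on the multiplier for A x - z = 0
        u = u + A @ x - z
    return x
```

With A = I the composite problem reduces to a plain lasso, so the iterate should approach the soft-thresholded vector, e.g. `ladmm(np.eye(2), np.array([1.0, 0.05]), reg=0.1)` ≈ [0.9, 0].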
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12125108, 11971466, 11991021, 11991020, 12021001 and 12288201), the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (Grant No. ZDBS-LY-7022), and the CAS (Chinese Academy of Sciences) AMSS (Academy of Mathematics and Systems Science)-PolyU (The Hong Kong Polytechnic University) Joint Laboratory of Applied Mathematics.
Abstract: The proximal alternating linearized minimization (PALM) method is well suited to solving block-structured optimization problems, which are ubiquitous in real applications. In cases where the subproblems do not have closed-form solutions, e.g., due to complex constraints, infeasible subsolvers are indispensable, giving rise to an infeasible inexact PALM (PALM-I). Numerous efforts have been devoted to analyzing the feasible PALM, while little attention has been paid to the PALM-I; its usage thus lacks a theoretical guarantee. The essential difficulty of the analysis lies in the nonmonotonicity of the objective value induced by the infeasibility. In the present work, we study the convergence properties of the PALM-I. In particular, we construct a surrogate sequence to surmount the nonmonotonicity issue and devise an implementable inexact criterion. Based on these, we establish the stationarity of any accumulation point and, moreover, show the iterate convergence and the asymptotic convergence rates under the assumption of the Łojasiewicz property. The prominent advantages of the PALM-I in CPU time are illustrated via numerical experiments on problems arising from quantum physics and three-dimensional anisotropic frictional contact.
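To fix ideas, here is a minimal sketch of the feasible two-block PALM scheme that the paper takes as its starting point (the nonnegative factorization instance, the step rule, and all constants are illustrative assumptions, not the paper's quantum-physics or contact problems): each block takes a gradient step on the smooth coupling term followed by a closed-form prox, here the projection onto the nonnegative orthant.

```python
import numpy as np

def palm_nmf(M, r=1, iters=500, gamma=1.1):
    # Feasible PALM sketch for  min 0.5||M - X Y||_F^2  s.t.  X >= 0, Y >= 0.
    # Alternate proximal-gradient steps on the blocks X and Y; the prox of
    # each indicator function is the (exact, feasible) nonnegative projection.
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.random((m, r))
    Y = rng.random((r, n))
    for _ in range(iters):
        # block X: gradient of the coupling term, step 1/(gamma * L_X)
        Lx = max(np.linalg.norm(Y @ Y.T, 2), 1e-8)   # Lipschitz const. in X
        X = np.maximum(X - (X @ Y - M) @ Y.T / (gamma * Lx), 0.0)
        # block Y: same scheme with the roles swapped
        Ly = max(np.linalg.norm(X.T @ X, 2), 1e-8)   # Lipschitz const. in Y
        Y = np.maximum(Y - X.T @ (X @ Y - M) / (gamma * Ly), 0.0)
    return X, Y
```

The PALM-I analyzed in the paper replaces these exact prox steps with inexact, possibly infeasible subsolvers, which is what destroys the objective monotonicity that the feasible scheme above enjoys.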
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11301055 and 11401315), the Natural Science Foundation of Jiangsu Province (Grant No. BK2009397), and the Fundamental Research Funds for the Central Universities (Grant No. ZYGX2013J103).
Abstract: In this paper, we develop a novel alternating linearization method for solving convex minimization problems whose objective function is the sum of two separable functions. The motivation of the paper is to extend the recent work of Goldfarb et al. (2013) to cope with more generic convex minimization. In the proposed method, both the separable objective functions and the auxiliary penalty terms are linearized. Provided that the separable objective functions belong to C^{1,1}(R^n), we prove the O(1/ε) arithmetical complexity of the new method. Some preliminary numerical simulations involving image processing and compressive sensing are conducted.
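The alternating linearization idea in the style of Goldfarb et al. (2013), which the paper extends, can be sketched on a toy problem (the two quadratics, their prox formulas, and the step size below are illustrative assumptions, not the paper's experiments): each half-step linearizes one of the two separable functions at the current point and takes a proximal step on the other.

```python
import numpy as np

# Toy instance: f(x) = 0.5||x - a||^2 and g(x) = 0.5||x - b||^2,
# whose sum is minimized at (a + b) / 2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
mu = 0.5  # prox/step parameter

def prox_f(v):
    # prox of mu * f: argmin_x 0.5||x - a||^2 + ||x - v||^2 / (2*mu)
    return (v + mu * a) / (1.0 + mu)

def prox_g(v):
    return (v + mu * b) / (1.0 + mu)

grad_f = lambda x: x - a
grad_g = lambda x: x - b

x = np.zeros(2)
y = np.zeros(2)
for _ in range(100):
    x = prox_f(y - mu * grad_g(y))   # linearize g at y, prox step on f
    y = prox_g(x - mu * grad_f(x))   # linearize f at x, prox step on g
# both sequences approach the minimizer (a + b) / 2 = [0.5, 1.0]
```

The method of the paper additionally linearizes the auxiliary penalty terms, so that each subproblem reduces to a (cheaper) gradient-type step even when the prox maps are not available in closed form.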
Funding: Supported by the National Natural Science Foundation of China (No. 62276004), the major key project of Pengcheng Laboratory, China (No. PCL2021A12), and Qualcomm.
Abstract: Sparse subspace clustering (SSC), a seminal clustering method, has demonstrated remarkable performance by effectively exploiting data sparsity. However, it is not without limitations. Key among these is the difficulty of incremental learning with the original SSC, which entails a computationally demanding recalculation that constrains its scalability to large datasets. Moreover, the conventional SSC framework treats dictionary construction, affinity matrix learning, and clustering as separate stages, potentially leading to dictionaries and affinity matrices that are suboptimal for clustering. To address these challenges, we present a novel clustering approach, called SSCNet, which leverages differentiable programming. Specifically, we redefine and generalize the optimization procedure of the linearized alternating direction method of multipliers (ADMM), framing it as a multi-block deep neural network in which each block corresponds to one linearized ADMM iteration step; this reformulation is used to solve the SSC problem. We then use a shallow spectral embedding network as an unambiguous and differentiable module to approximate the eigenvalue decomposition. Finally, we incorporate a self-supervised structure to mitigate the non-differentiability inherent in k-means and obtain the final clustering results. In essence, we assign distinct objectives to the different modules and jointly optimize all module parameters using stochastic gradient descent. Owing to the efficiency of this optimization process, SSCNet can easily be applied to large-scale datasets. Experimental evaluations on several benchmarks confirm that our method outperforms traditional state-of-the-art approaches.
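The unrolling idea can be sketched as follows (a hand-written illustration, not SSCNet itself: the relaxed SSC objective, the fixed rather than learned per-layer parameters, and the tiny dataset are all my assumptions). Each "layer" performs one linearized ADMM step on min λ||C||_1 + 0.5||X − XC||_F^2 with diag(C) = 0; in the learned version, the per-layer thresholds and steps would be trained end to end by stochastic gradient descent.

```python
import numpy as np

def soft(V, t):
    # elementwise soft-thresholding, the prox of t * ||.||_1
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def unrolled_ladmm_ssc(X, lam=0.01, rho=1.0, n_layers=300):
    # Stack of identical "layers", each one linearized ADMM step for
    #   min lam*||C||_1 + 0.5*||X - X C||_F^2   s.t.   diag(C) = 0,
    # with the splitting C = Z. A trained network would make the
    # parameters of each layer distinct and learnable.
    n = X.shape[1]
    G = X.T @ X                        # Gram matrix of the data columns
    step = np.linalg.norm(G, 2) + rho  # linearization constant
    C = np.zeros((n, n)); Z = np.zeros((n, n)); U = np.zeros((n, n))
    for _ in range(n_layers):
        grad = G @ C - G + rho * (C - Z + U)   # linearized C-update
        C = C - grad / step
        Z = soft(C + U, lam / rho)             # prox step on the l1 term
        np.fill_diagonal(Z, 0.0)               # enforce diag(C) = 0
        U = U + C - Z                          # dual update
    return Z                                   # self-expressive coefficients
```

On two orthogonal one-dimensional subspaces, the recovered coefficients connect only points within the same subspace, which is exactly the structure the downstream spectral embedding module relies on.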