Journal Articles
6 articles found
1. Flat-type permanent magnet linear alternator: a suitable device for a free piston linear alternator (Cited by: 4)
Authors: Qing-feng LI, Jin XIAO, Zhen HUANG
Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2009, No. 3, pp. 345-352
Abstract: We propose the flat-type permanent magnet linear alternator (LA) for free piston linear alternators (FPLAs) in place of the tubular type. Using the finite element method (FEM), we compare the two kinds of LAs. The FEM results show that the flat-type permanent magnet LA has higher efficiency and larger specific output power than the tubular one, and is therefore more suitable for FPLAs; they also show that the alternator design can be optimized with respect to the permanent magnet length as well as the air gap.
Keywords: free piston (FP), linear alternator (LA), internal combustion engine (ICE), finite element method (FEM)
2. Linear equations over cones, Collatz-Wielandt numbers and alternating sequences
Author: Bitshun Tam
Numerical Mathematics: A Journal of Chinese Universities (English Series) (SCIE), 2000, No. S1, p. 11
Abstract: Let K be a proper cone in R^n, let A be an n×n real matrix satisfying AK ⊆ K, let b be a given vector in K, and let λ be a given positive real number. The following two linear equations are considered in this paper: (i) (λI_n − A)x = b, x ∈ K, and (ii) (A − λI_n)x = b, x ∈ K. We obtain several equivalent conditions for the solvability of the first equation.
Keywords: linear equations over cones, Collatz-Wielandt numbers, alternating sequences
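Equation (i) from this abstract can be checked numerically in one special case: take the cone K to be the nonnegative orthant of R^n, so that x ∈ K just means x ≥ 0. The sketch below (my illustration, not from the paper) uses SciPy's non-negative least squares: a zero residual certifies that the equation is solvable over the cone.

```python
import numpy as np
from scipy.optimize import nnls

# Equation (i): (lam*I_n - A) x = b with x in K, specialized to
# K = nonnegative orthant. A is entrywise nonnegative, so AK ⊆ K holds.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
lam = 1.0                      # here lam exceeds the spectral radius of A
b = np.array([1.0, 2.0])       # b lies in K

M = lam * np.eye(2) - A
# nnls minimizes ||M x - b|| subject to x >= 0; residual ~ 0 means
# the equation is exactly solvable over the cone.
x, residual = nnls(M, b)
print(x, residual)
```

For a general proper cone the membership constraint is no longer a simple sign condition, and one would need a conic solver instead.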
3. Adaptive linearized alternating direction method of multipliers for non-convex compositely regularized optimization problems (Cited by: 5)
Authors: Linbo Qiao, Bofeng Zhang, Xicheng Lu, Jinshu Su
Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, No. 3, pp. 328-341
Abstract: We consider a wide range of non-convex regularized minimization problems in which the non-convex regularization term is composed with a linear map, as arises in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts. The computational challenge is that the proximal mapping associated with the non-convex regularizer is not easily obtained because of the linear composition. Fortunately, the problem structure allows one to introduce an auxiliary variable and reformulate the problem as a linearly constrained optimization, which can be solved by the Linearized Alternating Direction Method of Multipliers (LADMM). Despite the practical success of LADMM, it has remained unknown whether LADMM converges when solving such non-convex compositely regularized problems. In this work, we first present a detailed convergence analysis of LADMM for non-convex compositely regularized optimization with a large class of non-convex penalties. We then propose an Adaptive LADMM (AdaLADMM) algorithm with a line-search criterion. Experimental results on datasets of different genres validate the efficacy of the proposed algorithm.
Keywords: adaptive linearized alternating direction method of multipliers, non-convex compositely regularized optimization, capped-ℓ1 regularized logistic regression
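The reformulation this abstract describes (introduce z = Ax, then alternate updates, linearizing the x-subproblem so no system in AᵀA has to be solved) can be sketched on a small convex toy problem. This is plain, non-adaptive LADMM, not the paper's non-convex setting or its AdaLADMM line search; the problem, step sizes, and names are my own illustration.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladmm(c, A, lam, rho=1.0, iters=3000):
    """Linearized ADMM for min_x 0.5||x - c||^2 + lam * ||A x||_1,
    split as f(x) = 0.5||x - c||^2, g(z) = lam*||z||_1, with Ax = z.
    The x-subproblem is replaced by a single gradient step on the
    augmented Lagrangian (the 'linearization')."""
    n, m = A.shape[1], A.shape[0]
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    # step size below 1/L, L = 1 + rho*||A||_2^2 (gradient Lipschitz const.)
    tau = 1.0 / (1.0 + rho * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        grad = (x - c) + rho * A.T @ (A @ x - z + u)
        x = x - tau * grad
        z = soft(A @ x + u, lam / rho)     # exact prox of g
        u = u + A @ x - z                  # dual (multiplier) update
    return x

# Sanity check: with A = I the problem has the closed-form
# solution soft(c, lam).
c = np.array([2.0, -0.3, 1.0])
x = ladmm(c, np.eye(3), lam=0.5)
print(np.round(x, 3))
```

With a non-convex penalty in place of ||z||_1 only the z-prox changes, which is exactly why the convergence question the paper settles is nontrivial.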
4. The convergence properties of infeasible inexact proximal alternating linearized minimization (Cited by: 1)
Authors: Yukuan Hu, Xin Liu
Science China Mathematics (SCIE, CSCD), 2023, No. 10, pp. 2385-2410
Abstract: The proximal alternating linearized minimization (PALM) method is well suited to block-structured optimization problems, which are ubiquitous in real applications. When subproblems have no closed-form solutions, e.g., because of complex constraints, infeasible subsolvers are indispensable, giving rise to an infeasible inexact PALM (PALM-I). Numerous efforts have been devoted to analyzing the feasible PALM, while little attention has been paid to the PALM-I, so its usage lacks a theoretical guarantee. The essential difficulty of the analysis lies in the non-monotonicity of the objective values induced by the infeasibility. In the present work we study the convergence properties of the PALM-I. In particular, we construct a surrogate sequence to surmount the non-monotonicity issue and devise an implementable inexactness criterion. Based on these, we establish the stationarity of any accumulation point and, moreover, show iterate convergence and asymptotic convergence rates under the Lojasiewicz property. The advantages of the PALM-I in CPU time are illustrated by numerical experiments on problems arising from quantum physics and three-dimensional anisotropic frictional contact.
Keywords: proximal alternating linearized minimization, infeasibility, non-monotonicity, surrogate sequence, inexact criterion, iterate convergence, asymptotic convergence rate
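The feasible, exact PALM scheme that this paper contrasts with PALM-I can be illustrated on nonnegative matrix factorization, a standard block-structured example (my choice, not taken from the paper): each block takes a gradient step with step size 1/L for that block's Lipschitz constant, followed by the prox of the nonnegativity indicator, i.e. projection onto the nonnegative orthant.

```python
import numpy as np

def palm_nmf(M, r, iters=2000, seed=0):
    """PALM sketch for min_{X>=0, Y>=0} 0.5 * ||M - X Y||_F^2.
    Here the prox (projection onto X >= 0) is exact and always
    feasible, i.e. this is the feasible PALM, not the PALM-I."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, r))
    Y = rng.random((r, n))
    for _ in range(iters):
        # Lipschitz constant of grad_X (0.5||M - XY||^2) is ||Y Y^T||_2
        Lx = max(np.linalg.norm(Y @ Y.T, 2), 1e-8)
        X = np.maximum(X - (1.0 / Lx) * (X @ Y - M) @ Y.T, 0.0)
        # Lipschitz constant of grad_Y is ||X^T X||_2
        Ly = max(np.linalg.norm(X.T @ X, 2), 1e-8)
        Y = np.maximum(Y - (1.0 / Ly) * X.T @ (X @ Y - M), 0.0)
    return X, Y

# Exactly rank-2 nonnegative data, so the residual should become small.
rng = np.random.default_rng(1)
M = rng.random((8, 2)) @ rng.random((2, 10))
X, Y = palm_nmf(M, r=2)
rel_err = np.linalg.norm(M - X @ Y) / np.linalg.norm(M)
print(rel_err)
```

The PALM-I setting of the paper arises when the projection step itself can only be approximated, which is what breaks the objective monotonicity above.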
5. A proximal alternating linearization method for minimizing the sum of two convex functions
Authors: ZHANG WenXing, CAI XingJu, JIA ZeHui
Science China Mathematics (SCIE, CSCD), 2015, No. 10, pp. 2225-2244
Abstract: In this paper, we develop a novel alternating linearization method for convex minimization problems whose objective is the sum of two separable functions. The motivation is to extend the recent work of Goldfarb et al. (2013) to more generic convex minimization. In the proposed method, both the separable objective functions and the auxiliary penalty terms are linearized. Provided that the separable objective functions belong to C^{1,1}(R^n), we prove an O(1/ε) arithmetical complexity for the new method. Preliminary numerical simulations on image processing and compressive sensing are reported.
Keywords: alternating linearization method, arithmetical complexity, proximal, separable, image processing
6. SSCNet: learning-based subspace clustering
Authors: Xingyu Xie, Jianlong Wu, Guangcan Liu, Zhouchen Lin
Visual Intelligence, 2024, No. 1, pp. 116-131
Abstract: Sparse subspace clustering (SSC), a seminal clustering method, has demonstrated remarkable performance by effectively exploiting data sparsity. However, it is not without limitations. Chief among these is the difficulty of incremental learning with the original SSC, which requires a computationally demanding recalculation that constrains its scalability to large datasets. Moreover, the conventional SSC framework treats dictionary construction, affinity matrix learning and clustering as separate stages, potentially yielding dictionaries and affinity matrices that are suboptimal for clustering. To address these challenges, we present a novel clustering approach, called SSCNet, which leverages differentiable programming. Specifically, we redefine and generalize the optimization procedure of the linearized alternating direction method of multipliers (ADMM), framing it as a multi-block deep neural network in which each block corresponds to one linearized ADMM iteration; this reformulation is used to address the SSC problem. We then use a shallow spectral embedding network as an unambiguous and differentiable module that approximates the eigenvalue decomposition. Finally, we incorporate a self-supervised structure to mitigate the non-differentiability of k-means and obtain the final clustering results. In essence, we assign distinct objectives to the different modules and jointly optimize all module parameters by stochastic gradient descent. Owing to the efficiency of this optimization, SSCNet scales easily to large datasets. Experimental evaluations on several benchmarks confirm that our method outperforms traditional state-of-the-art approaches.
Keywords: subspace clustering, learning-based optimization, linearized alternating direction method of multipliers (ADMM), differentiable low-rank decomposition
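The classical pipeline that SSCNet unrolls (affinity matrix, spectral embedding, cluster assignment) can be shown in miniature. In this sketch the learned sparse codes are replaced by a hand-crafted cosine affinity, which suffices when the two subspaces are distinct lines; the data, affinity, and sharpening exponent are my own toy choices, with no network or learning involved.

```python
import numpy as np

# Two 1-D subspaces in R^3: span{(1,0,0)} and span{(1,1,0)}.
rng = np.random.default_rng(0)
line1 = np.outer(rng.uniform(1, 2, 10), [1.0, 0.0, 0.0])
line2 = np.outer(rng.uniform(1, 2, 10), [1.0, 1.0, 0.0])
P = np.vstack([line1, line2])          # 20 points; first 10 from line1

# Stand-in for the learned affinity: sharpened absolute cosine similarity.
Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
W = np.abs(Pn @ Pn.T) ** 4
np.fill_diagonal(W, 0.0)

# Spectral embedding: normalized Laplacian I - D^{-1/2} W D^{-1/2};
# the sign pattern of the second-smallest eigenvector (Fiedler vector)
# separates the two clusters.
d = W.sum(axis=1)
L = np.eye(len(P)) - W / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

The non-differentiable pieces here (eigendecomposition, the final hard assignment) are precisely the ones SSCNet replaces with a spectral embedding network and a self-supervised module so the whole pipeline can be trained end to end.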