Abstract: With the advent of the big data age, the scale of real-world optimization problems with many design variables is becoming much larger. To date, how to develop new optimization algorithms for these large-scale problems, and how to expand the scalability of existing optimization algorithms, have posed further challenges in the domain of bio-inspired computation. Addressing these complex large-scale problems so as to produce truly useful results is therefore one of the hottest current topics. As a branch of swarm-intelligence-based algorithms, particle swarm optimization (PSO) for coping with large-scale problems, together with its widely diverse applications, has developed rapidly over the last decade. This review paper presents recent achievements and trends, highlights the remaining unsolved challenges, and identifies key issues of major impact, in order to encourage further research on both large-scale PSO theory and its applications in the coming years.
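For readers unfamiliar with the baseline algorithm surveyed above, the following is a minimal sketch of the canonical global-best PSO update (inertia weight plus cognitive and social terms). It is not any particular large-scale variant discussed in the review; the coefficient values, the sphere test function, and all names below are illustrative assumptions.

```python
import numpy as np

def pso(f, dim=30, n_particles=40, iters=500, lb=-5.0, ub=5.0,
        w=0.729, c1=1.494, c2=1.494, seed=0):
    """Canonical global-best PSO; w, c1, c2 are commonly used default coefficients."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.array([f(p) for p in x])          # personal best values
    gbest = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: minimize the 30-dimensional sphere function.
best_x, best_val = pso(lambda z: float(np.sum(z * z)))
print(best_val)
```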
Funding: The research was supported by the State Education Grant for Returned Scholars.
Abstract: In this paper we report a sparse truncated Newton algorithm for handling large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
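The abstract above combines a projected-gradient step on the active (bound) variables with a truncated Newton step on the free ones; the full algorithm is not reproduced here. The sketch below only illustrates the simpler ingredient, a plain projected-gradient iteration for min f(x) subject to l <= x <= u, and how the active set can be read off at the final iterate. The fixed step size, the quadratic test problem, and all names are illustrative assumptions.

```python
import numpy as np

def projected_gradient(grad, l, u, x0, step=0.1, iters=500, tol=1e-10):
    """Plain projected-gradient method for min f(x) s.t. l <= x <= u.
    The fixed step assumes it is small enough (at most 1/L for an
    L-smooth objective); practical codes use a line search instead."""
    x = np.clip(x0, l, u)
    for _ in range(iters):
        x_new = np.clip(x - step * grad(x), l, u)   # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    active = (x <= l) | (x >= u)                    # variables sitting on a bound
    return x, active

# Illustrative problem: f(x) = 0.5 * ||x - c||^2 over the box [0, 1]^n,
# whose exact solution is simply c clipped to the box.
c = np.array([-0.5, 0.2, 1.5, 0.8, 2.0])
x_star, active = projected_gradient(lambda x: x - c,
                                    np.zeros(5), np.ones(5), np.full(5, 0.5))
print(x_star)   # approximately [0.0, 0.2, 1.0, 0.8, 1.0]
print(active)   # True where a bound is active
```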
Funding: This research is supported by the National Natural Science Foundation of China, LSEC of CAS in Beijing, and Natural Science Foun
Abstract: In this paper, a new limited-memory quasi-Newton method is proposed and developed for solving large-scale nonlinear programming problems with linear equality constraints. In every iteration, a linear-equation subproblem is solved by the scaled conjugate gradient method. A truncated solution of the subproblem is determined so that computation is decreased. The limited-memory technique is used to update the approximate inverse Hessian of the Lagrangian function; hence, the new method is able to handle large dense problems. The convergence of the method is analyzed and numerical results are reported.
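The limited-memory update mentioned above avoids storing a dense inverse-Hessian approximation by keeping only the most recent step and gradient-difference pairs. The sketch below shows the standard L-BFGS two-loop recursion in that spirit, applied to an unconstrained quadratic rather than the paper's equality-constrained Lagrangian setting; the memory size, the test matrix, and the exact line search used in the driver are illustrative assumptions.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """L-BFGS two-loop recursion: apply the implicit inverse-Hessian
    approximation built from pairs s_k = x_{k+1} - x_k and
    y_k = g_{k+1} - g_k to the current gradient g; returns -H*g."""
    q = g.copy()
    stack = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):   # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    if s_hist:                                             # scale initial H0 by s'y / y'y
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for alpha, rho, s, y in reversed(stack):               # oldest pair first
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return -q

# Minimal driver on a convex quadratic f(x) = 0.5 x'Ax - b'x (illustrative data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x, s_hist, y_hist, m = np.zeros(2), [], [], 5
for _ in range(20):
    if np.linalg.norm(grad(x)) < 1e-10:     # converged
        break
    d = lbfgs_direction(grad(x), s_hist, y_hist)
    t = -(grad(x) @ d) / (d @ A @ d)        # exact line search for a quadratic
    x_new = x + t * d
    s_hist.append(x_new - x); y_hist.append(grad(x_new) - grad(x))
    s_hist, y_hist = s_hist[-m:], y_hist[-m:]
    x = x_new
print(x, np.linalg.solve(A, b))             # the two should nearly coincide
```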
Abstract: A high-efficiency, high-accuracy GPU parallel computing method is proposed for the large-scale acoustic boundary element method (BEM). Based on the Burton-Miller boundary integral equation, a parallel computation scheme suited to the GPU is derived and a GPU-accelerated version of the conventional BEM is implemented. To improve the efficiency of the prototype algorithm, GPU data-caching optimizations are studied. Because the double-precision floating-point throughput of the GPU is comparatively low, a double-single precision algorithm implemented with single-precision arithmetic is investigated to reduce numerical error. Numerical examples show that the improved algorithm achieves a GPU utilization of up to 89.8%, with numerical accuracy comparable to direct double-precision computation, while requiring only 1/28 of the computation time and half of the GPU memory. On an ordinary PC (8 GB RAM, NVIDIA GeForce 660 Ti graphics card), the method can rapidly complete large-scale acoustic BEM analyses with more than three million degrees of freedom, outperforming the fast BEM in both computation speed and memory consumption.
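The double-single technique referred to above represents one value as an unevaluated sum of two single-precision numbers and manipulates the pair with error-free transformations, so that most of the work stays in fast single-precision arithmetic. The paper's GPU kernels are not reproduced here; the NumPy float32 sketch below (function names and the summation demo are illustrative assumptions) only shows the basic idea for addition.

```python
import numpy as np

f32 = np.float32

def two_sum(a, b):
    """Knuth's error-free transformation: a + b = s + e exactly, in float32."""
    s = f32(a) + f32(b)
    bb = s - f32(a)
    e = (f32(a) - (s - bb)) + (f32(b) - bb)
    return s, e

def quick_two_sum(a, b):
    """Error-free transformation assuming |a| >= |b|."""
    s = a + b
    e = b - (s - a)
    return s, e

def ds_from_double(x):
    """Split a Python double into a double-single pair (hi, lo) of float32."""
    hi = f32(x)
    lo = f32(x - float(hi))
    return hi, lo

def ds_add(a, b):
    """Add two double-single numbers (hi, lo tuples) using only float32 ops."""
    s, e = two_sum(a[0], b[0])
    e = e + a[1] + b[1]
    return quick_two_sum(s, e)

# Accumulate pi ten thousand times: the double-single sum tracks the float64
# reference far more closely than a plain float32 accumulation does.
ref, naive, ds = 0.0, f32(0.0), (f32(0.0), f32(0.0))
term = ds_from_double(np.pi)
for _ in range(10000):
    ref += np.pi
    naive += f32(np.pi)
    ds = ds_add(ds, term)
print(abs(float(naive) - ref), abs(float(ds[0]) + float(ds[1]) - ref))
```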