Abstract
Starting from properties that optimization algorithms should share, this paper proposes a new algorithm, the learning algorithm (LA). LA records two key pieces of historical information, the historical best solution and the current best solution, and makes the current solutions converge toward these two bests (the learning process); at the same time, to avoid abandoning the search in other regions, a portion of the current solutions is reset completely at random. The algorithm is simple in principle, has few adjustable parameters, and the effect of each parameter on performance is easy to control. In minimization tests on multi-optimum and complex functions, comparisons with GA and PSO show that LA is indeed an effective optimization algorithm whose efficiency is no lower than that of existing algorithms. Numerical experiments also show that LA has a very distinct advantage over GA and PSO on multi-optimum problems.
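The abstract's three-step loop (record the historical and current best solutions, move the population toward them, and randomly reset a fraction of solutions) can be sketched as follows. This is an illustrative reconstruction only, not the paper's implementation; the parameter names (`pop_size`, `learn_rate`, `reset_frac`) and the 50/50 choice between the two bests are assumptions.

```python
import random

def learning_algorithm(f, dim, bounds, pop_size=30, iters=200,
                       learn_rate=0.5, reset_frac=0.2, seed=0):
    """Minimize f over a box; a hedged sketch of the LA loop from the abstract."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    hist_best = min(pop, key=f)[:]              # historical best solution
    for _ in range(iters):
        cur_best = min(pop, key=f)[:]           # best of the current population
        if f(cur_best) < f(hist_best):
            hist_best = cur_best[:]             # update the historical record
        for sol in pop:
            # "learning": pull each solution toward one of the two recorded bests
            for j in range(dim):
                target = hist_best[j] if rng.random() < 0.5 else cur_best[j]
                sol[j] += learn_rate * rng.random() * (target - sol[j])
        # randomly reset a fraction of the population so other regions stay searched
        for _ in range(int(reset_frac * pop_size)):
            k = rng.randrange(pop_size)
            pop[k] = [rng.uniform(lo, hi) for _ in range(dim)]
    return hist_best, f(hist_best)

# Usage: minimize the sphere function, a standard smooth test problem
sphere = lambda x: sum(v * v for v in x)
best, val = learning_algorithm(sphere, dim=5, bounds=(-5.0, 5.0))
print(val)
```

Note how the random-reset step trades a little convergence speed for the continued exploration that the abstract credits for LA's advantage on multi-optimum problems.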
Source
Application Research of Computers (《计算机应用研究》), CSCD, Peking University Core Journal, 2010, No. 7, pp. 2465-2467, 2516 (4 pages)
Funding
Supported by the National Natural Science Foundation of China (60801035)
Keywords
learning algorithm (LA)
genetic algorithm (GA)
particle swarm optimization (PSO)