A newly proposed, competent population-based optimization algorithm called RUN, which uses the principle of slope variations calculated by applying the Runge-Kutta method as its key search mechanism, has gained wide interest for solving optimization problems. However, in high-dimensional problems, the search capability, convergence speed, and runtime of RUN deteriorate. This work fills that gap by proposing an improved variant of the RUN algorithm called Adaptive-RUN. Population size plays a vital role in both the runtime efficiency and the optimization effectiveness of metaheuristic algorithms. Unlike the original RUN, where the population size is fixed throughout the search process, Adaptive-RUN automatically adjusts the population size during the search using one of two population-size adaptation techniques, linear staircase reduction and iterative halving, to achieve a good balance between exploration and exploitation. In addition, the proposed methodology employs an adaptive search step size technique to find better solutions in the early stages of evolution, improving the solution quality, fitness, and convergence speed of the original RUN. Adaptive-RUN performance is analyzed over 23 IEEE CEC-2017 benchmark functions for two cases: the first applies linear staircase reduction with the adaptive search step size (LSRUN), and the second applies iterative halving with the adaptive search step size (HRUN); both are compared against the original RUN. To promote green computing, the carbon footprint metric is included in the performance evaluation in addition to runtime and fitness. Simulation results based on the Friedman and Wilcoxon tests reveal that Adaptive-RUN produces high-quality solutions with lower runtime and carbon footprint values than the original RUN and three recent metaheuristics. Therefore, with its higher computational efficiency, Adaptive-RUN is a much more favorable choice than RUN in time-stringent applications.
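The two population-size adaptation techniques named in the abstract can be illustrated as simple iteration-dependent schedules. This is a minimal sketch under assumed parameters (number of stairs, halving milestones); the function names and exact formulas are illustrative, not the paper's definitions.

```python
def linear_staircase_size(iteration, max_iter, n_init, n_min, n_steps=5):
    """Linear staircase reduction: shrink the population from n_init to
    n_min in equally spaced, equal-height stairs over the run."""
    step = min(iteration * n_steps // max_iter, n_steps - 1)
    sizes = [round(n_init - (n_init - n_min) * s / (n_steps - 1))
             for s in range(n_steps)]
    return sizes[step]


def iterative_halving_size(iteration, max_iter, n_init, n_min, n_halvings=3):
    """Iterative halving: halve the population at evenly spaced milestones,
    never dropping below n_min."""
    phase = min(iteration * (n_halvings + 1) // max_iter, n_halvings)
    return max(n_init // (2 ** phase), n_min)


if __name__ == "__main__":
    # Example: 100 -> 20 particles over 1000 iterations under each schedule.
    for it in (0, 250, 500, 750, 999):
        print(it,
              linear_staircase_size(it, 1000, 100, 20),
              iterative_halving_size(it, 1000, 100, 10))
```

Both schedules spend more evaluations per iteration early (exploration with a large population) and fewer later (exploitation with a small one), which is the stated source of the runtime and carbon-footprint savings.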
Information interaction among particles constitutes a fundamental mechanism in particle swarm optimization (PSO). To address limitations in the efficiency of information interaction and to enhance the performance of PSO in complex optimization landscapes, this paper proposes an elite-sharing and rank-based learning (ESRBL) particle swarm optimization (ESRBL-PSO) framework. Departing from the classical PSO framework, where particles primarily interact with the global best information, ESRBL-PSO employs a hierarchical population architecture. Specifically, the original swarm is divided into multiple subpopulations of equal size, each yielding a locally optimal particle (designated a local elite). These local elites are then aggregated into a shared elite pool, enabling information transfer across the entire population. During the update phase, each particle not only interacts with information within its own subpopulation but also competitively selects a local elite from the shared elite pool for additional information interaction. This dual mechanism amplifies the diversity of information interaction during swarm evolution, facilitating the rapid identification of high-potential regions in expansive search spaces. Furthermore, to mitigate parameter sensitivity, ESRBL-PSO eliminates all parameters from the particle velocity update and adopts an adaptive population-division strategy. Together, these features enable ESRBL-PSO to balance exploration diversity against convergence precision, thereby achieving effective optimization across complex problem domains. Finally, extensive experiments on the CEC2017 benchmark suite verify that ESRBL-PSO exhibits competitive or even superior performance compared to 10 state-of-the-art approaches and maintains robust capability and scalability in handling complex numerical optimization problems.
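The elite-sharing mechanism described above can be sketched in a few lines: split the swarm into subpopulations, pool each subpopulation's best particle, and let every particle learn from both its own subpopulation's best and a competitively selected pool elite. This is a hypothetical toy illustration on a sphere objective; the update rule, selection scheme, and function names are placeholders, not the paper's actual formulation.

```python
import random


def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return sum(v * v for v in x)


def build_elite_pool(swarm, n_sub):
    """Split the swarm into n_sub equal subpopulations and collect each
    one's best particle (its local elite) into a shared pool."""
    size = len(swarm) // n_sub
    subs = [swarm[i * size:(i + 1) * size] for i in range(n_sub)]
    elites = [min(sub, key=sphere) for sub in subs]
    return subs, elites


def tournament_elite(elites, k=2):
    """Competitive selection: the best of k randomly sampled pool elites."""
    return min(random.sample(elites, k), key=sphere)


def update_particle(x, sub_best, pool_elite):
    """Placeholder parameter-free move: a randomly weighted pull toward the
    subpopulation best and toward a tournament-selected pool elite."""
    return [xi
            + random.random() * (sb - xi)
            + random.random() * (pe - xi)
            for xi, sb, pe in zip(x, sub_best, pool_elite)]


if __name__ == "__main__":
    random.seed(0)
    swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(12)]
    for _ in range(50):  # a few generations of the sketched update
        subs, elites = build_elite_pool(swarm, n_sub=3)
        swarm = [update_particle(x, min(sub, key=sphere),
                                 tournament_elite(elites))
                 for sub in subs for x in sub]
    print("best fitness:", min(map(sphere, swarm)))
```

The pool gives every particle a second, population-wide information channel beyond its own subpopulation, which is the source of the added interaction diversity the abstract emphasizes.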