A switch from avian-type α-2,3 to human-type α-2,6 receptors is an essential element for the initiation of a pandemic from an avian influenza virus. Some H9N2 viruses exhibit a preference for binding to human-type α-2,6 receptors, which identifies their potential threat to public health. However, our understanding of the molecular basis for the switch of receptor preference is still limited. In this study, we employed the random forest algorithm to identify potentially key amino acid sites within hemagglutinin (HA) that are associated with the receptor binding ability of H9N2 avian influenza virus (AIV). These sites were then verified by receptor binding assays. A total of 12 substitutions in the HA protein (N158D, N158S, A160N, A160D, A160T, T163I, T163V, V190T, V190A, D193N, D193G, and N231D) were predicted to prefer binding to α-2,6 receptors. Except for V190T, all of these substitutions were confirmed by receptor binding assays to bind preferentially to α-2,6 receptors. Notably, the A160T substitution caused a significant upregulation of immune-response genes and an increased mortality rate in mice. Our findings provide novel insights into the genetic basis of receptor preference of the H9N2 AIV.
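The site-screening step can be illustrated with a small, fully synthetic sketch (the data, site count, and labels below are invented for illustration and are not the paper's HA dataset): encode each candidate site as a feature, fit a random forest against receptor-preference labels, and rank sites by impurity-based importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for HA sequence data: 200 strains, 20 candidate sites,
# each site encoded as an integer amino-acid class. Site 3 is made informative.
X = rng.integers(0, 4, size=(200, 20))
y = (X[:, 3] >= 2).astype(int)  # receptor-preference label driven by site 3

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank candidate sites by impurity-based importance; the informative site
# should come out on top.
ranked = np.argsort(clf.feature_importances_)[::-1]
print(ranked[0])
```

In a real analysis, the features would be encodings of observed HA residues and the labels would come from receptor binding assays; the ranking would then nominate sites for experimental verification.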
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. Consequently, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimizing makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the number of prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that it improves the performance of IGA to a large extent. Furthermore, comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks show that MLIGA has significant potential for solving large-scale DPFSPs, indicating that it is well suited for real-world distributed flow shop scheduling.
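The destroy-and-rebuild core of an iterated greedy search can be sketched for a single-factory permutation flow shop (this is only the generic IG skeleton, not the MLIGA memory or learning machinery, whose details the abstract names but does not specify; job data below are invented):

```python
import random

def makespan(seq, p):
    # p[j][m]: processing time of job j on machine m (permutation flow shop).
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=2, iters=200, seed=1):
    random.seed(seed)
    n = len(p)
    seq = list(range(n))
    best = seq[:]
    for _ in range(iters):
        # Destruction: remove d random jobs from the current sequence.
        partial = seq[:]
        removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
        # Construction: greedily reinsert each job at its best position.
        for j in removed:
            cands = [partial[:i] + [j] + partial[i:] for i in range(len(partial) + 1)]
            partial = min(cands, key=lambda s: makespan(s, p))
        # Accept non-worsening candidates; track the best-so-far sequence.
        if makespan(partial, p) <= makespan(seq, p):
            seq = partial
            if makespan(seq, p) < makespan(best, p):
                best = seq[:]
    return best, makespan(best, p)

jobs = [[3, 2, 4], [1, 4, 2], [2, 1, 3], [4, 3, 1]]
best_seq, best_val = iterated_greedy(jobs)
print(best_seq, best_val)
```

MLIGA's contribution, per the abstract, is to replace the blind restart/acceptance choices in such a loop with memory-guided initial solutions and learned parameter control.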
Parameter extraction of photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although methods using meta-heuristic algorithms have been proposed to determine these parameters, the robustness of the solutions they obtain faces great challenges as the complexity of the PV model increases, and unstable results will affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification for various PV models. Specifically, the differential evolution mutation operator is integrated into the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to balance exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single-diode, double-diode, and triple-diode models combined with the Lambert W function for three PV cell and module types: RTC France, Photowatt-PWP 201, and S75. According to statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimates than other techniques for various PV models under varying environmental conditions. All of our source code is publicly available at https://github.com/dirge1/TERIME.
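The differential evolution mutation operator folded into the exploration phase has a standard form, DE/rand/1; a generic sketch (the population values are arbitrary placeholders, not PV parameters, and this is the textbook operator rather than TERIME's full update):

```python
import random

def de_rand_1(pop, f=0.5, seed=0):
    # DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3),
    # with r1, r2, r3 distinct indices all different from i.
    rng = random.Random(seed)
    n = len(pop)
    mutants = []
    for i in range(n):
        r1, r2, r3 = rng.sample([k for k in range(n) if k != i], 3)
        mutants.append([a + f * (b - c)
                        for a, b, c in zip(pop[r1], pop[r2], pop[r3])])
    return mutants

# Toy population of 5 two-dimensional candidate solutions.
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
mutants = de_rand_1(pop)
print(len(mutants), len(mutants[0]))
```

The scaled difference vector F·(x_r2 − x_r3) injects population-dependent perturbations, which is what boosts diversity during exploration.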
This research presents a novel nature-inspired metaheuristic optimization algorithm called the Narwhale Optimization Algorithm (NWOA). The algorithm draws inspiration from the foraging and prey-hunting strategies of narwhals, the “unicorns of the sea”, particularly the use of their distinctive spiral tusks, which play significant roles in hunting, searching for prey, navigation, echolocation, and complex social interaction. In particular, NWOA imitates the foraging strategies and techniques of narwhals when hunting for prey, focusing mainly on the cooperative and exploratory behavior shown during group hunting and on the use of their tusks to sense and locate prey under the Arctic ice. These behaviors provide a strong basis for assessing the algorithm's ability to balance exploration and exploitation, its convergence speed, and its solution accuracy. The performance of NWOA is evaluated on 30 benchmark test functions. A comparison study using the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Perfumer Optimization Algorithm (POA), Candle Flame Optimization (CFO) Algorithm, Particle Swarm Optimization (PSO) Algorithm, and Genetic Algorithm (GA) validates the results. As evidenced by the experimental results, NWOA yields competitive outcomes among these well-known optimizers and outperforms them in several instances. These results suggest that NWOA is an effective and robust optimization tool suitable for solving many different complex real-world optimization problems.
Data clustering is an essential technique for analyzing complex datasets and continues to be a central research topic in data analysis. Traditional clustering algorithms, such as K-means, are widely used due to their simplicity and efficiency. This paper proposes a novel Spiral Mechanism-Optimized Phasmatodea Population Evolution Algorithm (SPPE) to improve clustering performance. The SPPE algorithm introduces several enhancements to the standard Phasmatodea Population Evolution (PPE) algorithm. First, a Variable Neighborhood Search (VNS) factor is incorporated to strengthen the local search capability and foster population diversity. Second, a position update model incorporating a spiral mechanism is designed to improve the algorithm's global exploration and convergence speed. Finally, a dynamic balancing factor, guided by fitness values, adjusts the search process to balance exploration and exploitation effectively. The performance of SPPE is first validated on the CEC2013 benchmark functions, where it demonstrates excellent convergence speed and superior optimization results compared with several state-of-the-art metaheuristic algorithms. To further verify its practical applicability, SPPE is combined with the K-means algorithm for data clustering and tested on seven datasets. Experimental results show that SPPE-K-means improves clustering accuracy, reduces dependency on initialization, and outperforms other clustering approaches. This study highlights SPPE's robustness and efficiency in solving both optimization and clustering challenges, making it a promising tool for complex data analysis tasks.
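The division of labor between a metaheuristic and K-means can be sketched generically: a population-style search proposes initial centroids against the within-cluster SSE objective, and Lloyd iterations refine the best seed. The sketch below uses synthetic 2-D data and plain random sampling as the stand-in "search"; it is not SPPE itself, whose spiral and VNS updates the abstract only names.

```python
import random

def sse(points, centroids):
    # Within-cluster sum of squared distances: the clustering objective
    # a metaheuristic would minimize when choosing initial centroids.
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
               for p in points)

def lloyd_step(points, centroids):
    # One K-means assignment + mean-update step.
    k = len(centroids)
    groups = [[] for _ in range(k)]
    for p in points:
        i = min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
        groups[i].append(p)
    return [[sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)]

rng = random.Random(0)
pts = [(rng.gauss(0, .3), rng.gauss(0, .3)) for _ in range(30)] + \
      [(rng.gauss(5, .3), rng.gauss(5, .3)) for _ in range(30)]

# Search-style seeding: sample many candidate centroid pairs, keep the best,
# then let K-means refine it.
seeds = [rng.sample(pts, 2) for _ in range(40)]
best = min(seeds, key=lambda c: sse(pts, c))
for _ in range(10):
    best = lloyd_step(pts, best)
print(round(sse(pts, best), 2))
```

Seeding by objective value, rather than a single random draw, is what reduces K-means' dependency on initialization.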
The traditional first-order reliability method (FORM) often encounters challenges with non-convergence or excessive computation when analyzing complex engineering problems. To improve the global convergence speed of structural reliability analysis, an improved coati optimization algorithm (COA) is proposed in this paper. A social learning strategy is used to improve the coati optimization algorithm (SL-COA), which improves the convergence speed and robustness of the new heuristic optimization algorithm. The SL-COA is then compared with the original COA and recent heuristic optimization algorithms such as the whale optimization algorithm (WOA) and the osprey optimization algorithm (OOA) on the CEC2005 and CEC2017 test function sets and two engineering optimization design examples. The optimization results show that the proposed SL-COA is highly competitive. Next, this study introduces the SL-COA into the most probable point (MPP) search process based on FORM and constructs a new reliability analysis method. Finally, the proposed reliability analysis method is verified on four mathematical examples and two engineering examples. The results show that the proposed SL-COA-assisted FORM converges quickly and avoids premature convergence to local optima, as demonstrated by its successful application to problems such as composite cylinder design and support bracket analysis.
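The classical MPP search that an optimizer like SL-COA assists or replaces is the HL-RF iteration in standard normal space; a minimal sketch on a linear limit state with a known analytic reliability index (the limit state below is a textbook example, not one from the paper):

```python
import math

def hlrf(g, grad, u0, iters=20):
    # Hasofer-Lind / Rackwitz-Fiessler iteration toward the MPP:
    # u+ = ((grad . u - g(u)) / |grad|^2) * grad
    u = u0[:]
    for _ in range(iters):
        gu, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        s = (sum(d * x for d, x in zip(dg, u)) - gu) / norm2
        u = [s * d for d in dg]
    return u

# Limit state g(u) = 3 - u1 - u2; the exact reliability index is 3/sqrt(2).
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
mpp = hlrf(g, grad, [0.0, 0.0])
beta = math.sqrt(sum(x * x for x in mpp))
print(round(beta, 4))
```

On nonlinear limit states, this fixed-point iteration can oscillate or diverge, which is exactly the failure mode that motivates swapping a global optimizer into the MPP search.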
Metaheuristic algorithms are pivotal in cloud task scheduling. However, the complexity and uncertainty of the scheduling problem severely limit their performance, and numerous algorithms have been proposed in response. The Hiking Optimization Algorithm (HOA) has been used in multiple fields, but it suffers from premature convergence to local optima, slow convergence, and inefficient search in late iterations when solving cloud task scheduling problems. This paper therefore proposes an improved HOA, called CMOHOA, that combines multiple strategies. Specifically, Chebyshev chaos is introduced to increase population diversity, a hybrid speed update strategy is designed to accelerate convergence, and an adversarial learning strategy is introduced to enhance search capability in late iterations. Scheduling problems from different scenarios are used to test CMOHOA's performance. First, CMOHOA was applied to basic cloud computing task scheduling problems, where it reduced the average total cost by 10% or more. Second, it was applied to edge-fog-cloud scheduling problems, where it reduced the average total scheduling cost by 2% or more. Finally, CMOHOA reduced the average total cost by 7% or more on scheduling problems for information transmission.
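Chaotic initialization with the Chebyshev map is a common diversity booster in such hybrids; a sketch of seeding a population from it (the bounds, sizes, map order, and starting value below are arbitrary choices for illustration; the abstract does not give CMOHOA's exact mapping):

```python
import math

def chebyshev_population(n_agents, dim, lb, ub, order=4, x0=0.7):
    # Chebyshev chaotic map: x_{k+1} = cos(order * arccos(x_k)), x in [-1, 1].
    # Each chaotic value is rescaled to [lb, ub] to spread the initial
    # population more evenly than plain uniform sampling.
    pop, x = [], x0
    for _ in range(n_agents):
        agent = []
        for _ in range(dim):
            x = math.cos(order * math.acos(x))
            agent.append(lb + (x + 1) / 2 * (ub - lb))
        pop.append(agent)
    return pop

pop = chebyshev_population(5, 3, 0.0, 10.0)
print(len(pop), all(0.0 <= v <= 10.0 for row in pop for v in row))
```

For scheduling, each agent's coordinates would then be decoded into a task-to-VM assignment before fitness evaluation.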
Reliable cluster head (CH) selection-based routing protocols are necessary for increasing packet transmission efficiency through optimal path discovery that does not degrade transmission reliability. In this paper, the Hybrid Golden Jackal and Improved Whale Optimization Algorithm (HGJIWOA) is proposed as an effective and optimal routing protocol that guarantees efficient routing of data packets along the routes established between the CHs and the movable sink. HGJIWOA includes a dynamic lens-imaging learning strategy and novel update rules for determining the reliable routes essential for broadcasting data packets, attained through fitness-based CH selection. CH selection is achieved using the Golden Jackal Optimization Algorithm (GJOA) and depends on the factors of maintainability, consistency, trust, delay, and energy. The GJOA plays a dominant role in determining the optimal routing path based on reduced delay and minimal distance. The Improved Whale Optimization Algorithm (IWOA) is then utilized for forwarding data from the chosen CHs to the base station (BS) via an optimized route determined by energy and distance. HGJIWOA also includes a reliable route maintenance process that helps decide the route through which data need to be transmitted or re-routed. Simulations of the proposed HGJIWOA mechanism with different numbers of sensor nodes confirmed an 18.21% improvement in mean throughput, 19.64% higher sustained residual energy, and a 21.82% reduction in end-to-end delay, outperforming competing CH selection approaches.
The uncertain nature of mapping user tasks to virtual machines (VMs) causes system failures or execution delays in cloud computing. To maximize cloud resource throughput and decrease user response time, effective load balancing is needed to overcome task execution delays and system failures. Most swarm-intelligence-based dynamic load balancing solutions that use hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in distributing jobs to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the solution from reaching a local optimum. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared with baseline load balancing strategies such as the Fractional Improved Whale Social Optimization-based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
Nonlinear wavefront shaping is crucial for advancing optical technologies, enabling applications in optical computation, information processing, and imaging. However, a significant challenge is that once a metasurface is fabricated, the nonlinear wavefront it generates is fixed, offering little flexibility. This limitation often necessitates fabricating different metasurfaces for different wavefronts, which is both time-consuming and inefficient. To address this, we combine evolutionary algorithms with spatial light modulators (SLMs) to dynamically control wavefronts using a single metasurface, reducing the need for multiple fabrications and enabling the generation of arbitrary nonlinear wavefront patterns without complicated optical alignment. We demonstrate this approach by introducing a genetic algorithm (GA) to manipulate visible wavefronts converted from near-infrared light via third-harmonic generation (THG) in a silicon metasurface. The Si metasurface supports multipolar Mie resonances that strongly enhance light-matter interactions, thereby significantly boosting THG emission at resonant positions. Additionally, the cubic relationship between THG emission and the infrared input reduces noise in the diffractive patterns produced by the SLM. This allows precise experimental engineering of the nonlinear emission patterns with fewer alignment constraints. Our approach paves the way for self-optimized nonlinear wavefront shaping, advancing optical computation and information processing techniques.
Industrial linear accelerators often contain many bunches when their pulse widths are extended to microseconds. Because they typically operate at low electron energies and high currents, the interactions among bunches cannot be neglected. In this study, an algorithm is introduced for calculating the space charge force of a train of infinite bunches. By utilizing the ring charge model and the particle-in-cell (PIC) method, and by combining analytical and numerical methods, the proposed algorithm efficiently calculates the space charge force of infinite bunches, enabling the accurate design of accelerator parameters and a comprehensive understanding of the space charge force. This is a significant improvement over existing simulation software such as ASTRA and PARMELA, which can only handle a single bunch or a small number of bunches. The PIC algorithm is validated in long drift-space transport by comparison with existing models, including the infinite-bunch, ASTRA single-bunch, and PARMELA several-bunch algorithms. The space charge force calculation results for an external acceleration field are also verified. The reliability of the proposed algorithm provides a foundation for the design and optimization of industrial accelerators.
Thinning of antenna arrays has been a popular topic for the last several decades, and with increasing computational power this optimization task has taken on a new dimension. This paper proposes a genetic algorithm as an instrument for antenna array thinning. The algorithm, with a deliberately chosen fitness function, synthesizes thinned linear antenna arrays with a low peak sidelobe level (SLL) while maintaining the half-power beamwidth (HPBW) of a full linear antenna array. Based on results from existing papers in the field and known approaches to antenna array thinning, a classification of thinning types is introduced. The optimal thinning type for a linear thinned antenna array is determined on the basis of the maximum attainable SLL. The effect of the thinning coefficient on the main directional pattern characteristics, such as peak SLL and HPBW, is discussed for a number of amplitude distributions.
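The quantity such a fitness function targets, the peak sidelobe level of a thinned linear array, can be computed directly from the array factor; a sketch for a half-wavelength-spaced array (the thinning mask and the crude main-lobe exclusion width below are arbitrary illustration choices, not a synthesized result from the paper):

```python
import math

def peak_sll_db(mask, d=0.5, n_samples=2000):
    # Array factor of a uniformly spaced linear array with an on/off thinning
    # mask; returns the peak sidelobe level in dB relative to the broadside
    # main beam. d is the element spacing in wavelengths.
    def af(u):  # u = sin(theta)
        re = sum(math.cos(2 * math.pi * d * n * u) for n, on in enumerate(mask) if on)
        im = sum(math.sin(2 * math.pi * d * n * u) for n, on in enumerate(mask) if on)
        return math.hypot(re, im)
    main = af(0.0)
    peak = 0.0
    for i in range(n_samples + 1):
        u = -1.0 + 2.0 * i / n_samples
        if abs(u) > 0.12:  # crude main-lobe exclusion for this 20-element aperture
            peak = max(peak, af(u))
    return 20 * math.log10(peak / main)

full = [1] * 20
thinned = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
sll_full, sll_thin = peak_sll_db(full), peak_sll_db(thinned)
print(round(sll_full, 2), round(sll_thin, 2))
```

A GA for thinning evaluates many such masks, rewarding low peak SLL while penalizing masks whose HPBW departs from that of the full array.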
This study proposes a novel time-synchronization protocol inspired by stochastic gradient algorithms. The clock model of each network node in this synchronizer is configured as a generic adaptive filter in which different stochastic gradient algorithms can be adopted for adaptive clock frequency adjustment. The study analyzes the pairwise synchronization behavior of the protocol and proves the generalized convergence of the synchronization error and clock frequency. A novel closed-form expression is also derived for the generalized asymptotic steady-state error variance. Steady-state and convergence analyses are then presented for synchronization with frequency adaptation using least mean squares (LMS), Newton search, gradient descent (GraDes), normalized LMS (N-LMS), and Sign-Data LMS algorithms. Results from real-time experiments showed better performance of our protocols compared with the Average Proportional-Integral Synchronization Protocol (AvgPISync) regarding the impact of quantization error on synchronization accuracy, precision, and convergence time. This generalized approach to time synchronization allows flexibility in selecting a suitable protocol for different wireless sensor network applications.
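One of the listed adaptations, normalized LMS, can be sketched as a two-tap adaptive clock model (offset and rate taps) trained on timestamp exchanges; the offset and skew values below are invented for illustration and this is the generic N-LMS recursion, not the paper's full protocol:

```python
def nlms_sync(pairs, mu=0.5, eps=1e-6):
    # Normalized-LMS clock model: weights [offset, rate] predict the reference
    # time from the local timestamp; each sync exchange applies a normalized
    # gradient update on the prediction error.
    w0, w1 = 0.0, 1.0
    for local, ref in pairs:
        err = ref - (w0 + w1 * local)
        norm = eps + 1.0 + local * local   # ||x||^2 for input x = (1, local)
        w0 += mu * err / norm
        w1 += mu * err * local / norm
    return w0, w1

# True clock relation: ref = 2.5 + 1.001 * local (2.5 s offset, 1000 ppm skew).
pairs = [(0.01 * k, 2.5 + 1.001 * 0.01 * k) for k in range(5000)]
w0, w1 = nlms_sync(pairs)
print(round(w0, 3), round(w1, 6))
```

Swapping the update line for a plain LMS, sign-data, or Newton-style step yields the other family members the study analyzes, which is the appeal of the generic adaptive-filter formulation.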
Image enhancement utilizes intensity transformation functions to maximize the information content of enhanced images. This paper approaches the topic as an optimization problem and uses the bald eagle search (BES) algorithm to achieve optimal results. In our proposed model, gamma correction and Retinex address color cast issues and enhance image edges and details, and the final enhanced image is obtained through color balancing. The BES algorithm seeks the optimal solution through selection, search, and swooping stages, but it is prone to getting stuck in local optima and converges slowly. To overcome these limitations, we propose an improved BES algorithm (ABES) with enhanced population learning, position updates, and control parameters. ABES is employed to optimize the core parameters of gamma correction and Retinex to improve image quality, with maximization of information entropy as the objective function. Real benchmark images are collected to validate its performance. Experimental results demonstrate that ABES outperforms existing image enhancement methods, including the flower pollination algorithm, the chimp optimization algorithm, particle swarm optimization, and BES, in terms of information entropy, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and patch-based contrast quality index (PCQI). ABES demonstrates superior performance both qualitatively and quantitatively, enhancing prominent features and contrast while maintaining the natural appearance of the original images.
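Two ingredients of this pipeline, gamma correction and the entropy objective, are simple to state; a sketch on a synthetic strip of dark pixels (not one of the paper's benchmark images, and without the Retinex and color-balancing stages):

```python
import math
from collections import Counter

def gamma_correct(img, gamma):
    # Map 8-bit intensities through the gamma curve; gamma < 1 brightens
    # shadows. img is a flat list of 0..255 values.
    return [round(255 * (v / 255) ** gamma) for v in img]

def entropy(img):
    # Shannon entropy of the intensity histogram: the information measure
    # the search maximizes when tuning the transformation parameters.
    n = len(img)
    return -sum((c / n) * math.log2(c / n) for c in Counter(img).values())

# A dark, low-contrast strip of pixels (all intensities below 40).
img = [v % 40 for v in range(256)]
bright = gamma_correct(img, 0.5)
print(round(entropy(img), 3), round(entropy(bright), 3))
```

In the full method, an optimizer such as ABES proposes gamma (and Retinex) parameters and scores each proposal by the resulting entropy, keeping the parameters that make the histogram most informative.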
With the rapid development of blockchain technology, the Chinese government has proposed that commercial blockchain services in China should support the national encryption standard, also known as the state secret (GuoMi) algorithms. The original Hyperledger Fabric supports only internationally common encryption algorithms, so it is particularly necessary to add support for the national encryption standard. Traditional identity authentication, access control, and security audit technologies have single points of failure, and data can easily be tampered with, leading to trust issues. To address these problems, this paper proposes an optimization and application research plan for Hyperledger Fabric. We study an optimization model for the cryptographic components of Hyperledger Fabric and, based on Fabric's pluggable mechanism, enhance the Fabric architecture with the national encryption standard. In addition, we investigate the key technologies involved in a blockchain-based secure application protocol. We propose a blockchain-based identity authentication protocol, detailing the design of an identity authentication scheme based on blockchain certificates and Fabric CA, and use a dual-signature method to further improve its security and reliability. We then propose a flexible, dynamically configurable real-time access control and security audit mechanism based on the blockchain, further enhancing the security of the system.
With the rapid advancement of medical artificial intelligence (AI) technology, particularly the widespread adoption of AI diagnostic systems, ethical challenges in medical decision-making have garnered increasing attention. This paper analyzes the limitations of algorithmic ethics in medical decision-making and explores accountability mechanisms, aiming to provide theoretical support for ethically informed medical practice. The study highlights how the opacity of AI algorithms complicates the attribution of decision-making responsibility, undermines doctor-patient trust, and affects informed consent. By thoroughly investigating issues such as the algorithmic “black box” problem and data privacy protection, we develop accountability assessment models to address ethical concerns related to medical resource allocation. Furthermore, this research examines the effective implementation of AI diagnostic systems through case studies of both successful and unsuccessful applications, extracting lessons on accountability mechanisms and response strategies. Finally, we emphasize that establishing a transparent accountability framework is crucial for enhancing the ethical standards of medical AI systems and protecting patients' rights and interests.
The exponential growth in the scale of power systems has led to a significant increase in the complexity of dispatch problems, particularly within multi-area interconnected power grids. This complexity makes distributed solution methodologies not only essential but also highly desirable. In computational modelling, the multi-area economic dispatch problem (MAED) can be formulated as a linearly constrained separable convex optimization problem, which the proximal point algorithm (PPA) is particularly adept at addressing. This study introduces parallel (PPPA) and serial (SPPA) variants of the PPA as distributed algorithms specifically designed for the computational modelling of the MAED. The PPA introduces a quadratic term into the objective function which, while potentially complicating the iterative updates, damps oscillations near the optimal solution and thereby enhances the convergence characteristics. Furthermore, the convergence efficiency of the PPA is significantly influenced by the parameter c. To address this parameter sensitivity, this research draws on trend theory from stock market analysis to propose trend theory-driven distributed PPPA and SPPA, thereby enhancing the robustness of the computational models. The proposed models are anticipated to exhibit superior convergence behaviour, stability, and robustness with respect to parameter selection, potentially outperforming existing methods such as the alternating direction method of multipliers (ADMM) and the auxiliary problem principle (APP) in the computational simulation of power system dispatch problems. The simulation results demonstrate that the trend theory-based PPPA, SPPA, ADMM, and APP exhibit significant robustness to the initial value of parameter c and show superior convergence characteristics compared with residual-balancing ADMM.
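The damping effect of the PPA's quadratic term is visible even in one dimension; a minimal sketch on a scalar quadratic (illustrative only, far simpler than the multi-area dispatch model, with arbitrary coefficients):

```python
def ppa_quadratic(a, b, c=1.0, x0=0.0, iters=50):
    # Proximal point iteration for f(x) = (a/2) x^2 - b x with a > 0:
    #   x+ = argmin_x f(x) + (c/2)(x - xk)^2  =>  x+ = (b + c*xk) / (a + c)
    # The quadratic proximal term shrinks each step toward xk, damping
    # oscillations; larger c means smaller, more conservative steps.
    x = x0
    for _ in range(iters):
        x = (b + c * x) / (a + c)
    return x

# The minimizer of f is b/a = 2; the iteration converges there geometrically
# with contraction factor c / (a + c).
x_star = ppa_quadratic(a=2.0, b=4.0)
print(round(x_star, 6))
```

The parameter c trades step size against damping, which is exactly why the paper's trend-theory mechanism for choosing c matters for convergence speed.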
The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics have different implementation strategies; however, comparisons between them are lacking in the literature, and previous works have not highlighted which implementation methods for the different components are beneficial or detrimental. The question is how to employ them effectively to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work reviews the top twenty competitors from that competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and a classification of the algorithms are presented. The analysis highlights efficient and inefficient methods in eight key components: search points, search phases, heuristic selection, move acceptance, feedback, the Tabu mechanism, the restart mechanism, and low-level heuristic parameter control. The review analyzes these components with reference to the competition's final leaderboard and discusses future research directions for them. The effective approaches, identified as having the highest quality index, are mixed search points, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. Findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
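Two of the components the review scores, learned heuristic selection and threshold move acceptance, can be combined in a minimal selection hyper-heuristic sketch (the objective and the two low-level heuristics are toy stand-ins; real CHeSC entries manage far richer component sets):

```python
import random

def hyper_heuristic(obj, x0, heuristics, iters=500, threshold=1.0, seed=0):
    # Minimal selection hyper-heuristic: score each low-level heuristic by its
    # past improvements, pick one proportionally to its score (the "selection"
    # component), and accept moves within a fixed threshold of the incumbent
    # (the "move acceptance" component). Best-so-far is tracked separately.
    rng = random.Random(seed)
    scores = [1.0] * len(heuristics)
    x, best = x0[:], x0[:]
    for _ in range(iters):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[i](x, rng)
        delta = obj(cand) - obj(x)
        if delta < 0:
            scores[i] += 1.0        # reward heuristics that improve
        if delta < threshold:       # threshold acceptance
            x = cand
        if obj(x) < obj(best):
            best = x[:]
    return best

sphere = lambda v: sum(t * t for t in v)
h_small = lambda v, r: [t + r.uniform(-0.1, 0.1) for t in v]  # fine move
h_big = lambda v, r: [t + r.uniform(-1.0, 1.0) for t in v]    # coarse move
best = hyper_heuristic(sphere, [5.0, -5.0], [h_small, h_big])
print(round(sphere(best), 3))
```

Swapping the selection rule (e.g. for relay hybridization) or the acceptance rule is exactly the design space whose beneficial and detrimental choices the review catalogues.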
As vehicular networks grow increasingly complex due to high node mobility and dynamic traffic conditions, efficient clustering mechanisms are vital to ensure stable and scalable communication. Recent studies have emphasized the need for adaptive clustering strategies to improve performance in Intelligent Transportation Systems (ITS). This paper presents the Grasshopper Optimization Algorithm for Vehicular Network Clustering (GOA-VNET), an innovative approach to optimal vehicular clustering in Vehicular Ad-Hoc Networks (VANETs) that leverages the Grasshopper Optimization Algorithm (GOA) to address the critical challenges of traffic congestion and communication inefficiency in ITS. The proposed GOA-VNET employs an iterative and interactive optimization mechanism to dynamically adjust node positions and cluster configurations, ensuring robust adaptability to varying vehicular densities and transmission ranges. Key features of GOA-VNET include attraction-zone, repulsion-zone, and comfort-zone parameters, which collectively enhance clustering efficiency and minimize congestion within regions of interest (ROI). By managing cluster configurations and node densities effectively, GOA-VNET ensures balanced load distribution and seamless data transmission, even in scenarios with high vehicular densities and varying transmission ranges. Comparative evaluations against the Whale Optimization Algorithm (WOA) and Grey Wolf Optimization (GWO) demonstrate that GOA-VNET consistently outperforms these methods, achieving superior clustering efficiency, reducing the number of clusters by up to 10% in high-density scenarios, and improving data transmission reliability. Simulation results reveal that over a 100-600 m transmission range, GOA-VNET achieves an average reduction of 8%-15% in the number of clusters and maintains a 5%-10% improvement in packet delivery ratio (PDR) compared with baseline algorithms. Additionally, the algorithm incorporates a heat transfer-inspired load-balancing mechanism, ensuring equitable distribution of nodes among cluster leaders (CLs) and maintaining a stable network environment. These results validate GOA-VNET as a reliable and scalable solution for VANETs, with significant potential to support next-generation ITS. Future research could further enhance the algorithm by integrating multi-objective optimization techniques and exploring broader applications in complex traffic scenarios.
This study examines the bidirectional shaping mechanism between short-video algorithms and film narratives within the attention economy. It investigates how algorithmic logic influences cinematic storytelling and how films, in turn, contribute to the aesthetic enhancement of short-video content. Drawing on Communication Accommodation Theory and Berry's Acculturation Theory, along with case analyses and industry data, this research demonstrates that algorithms push films toward high-stimulus, fast-paced narrative patterns, characterized by increased shot density and structural fragmentation, to capture and retain viewer attention. Conversely, films counter this influence by supplying narratively deep and artistically refined content that elevates short-video aesthetics and encourages critical audience engagement. This dynamic reflects a process of mutual adaptation rather than one-sided dominance. The study concludes that such interaction signifies a broader restructuring of cultural production logic, facilitating cross-media convergence while simultaneously posing risks to cultural diversity due to the prioritization of high-traffic content. Balancing this relationship will require policy support, algorithmic transparency, and strengthened industry self-regulation to preserve artistic integrity and cultural ecosystem diversity.
Funding: Supported by the National Natural Science Foundation of China (32273037 and 32102636); the Guangdong Major Project of Basic and Applied Basic Research (2020B0301030007); the Laboratory of Lingnan Modern Agriculture Project (NT2021007); the Guangdong Science and Technology Innovation Leading Talent Program (2019TX05N098); the 111 Center (D20008); the Double First-Class Discipline Promotion Project (2023B10564003); and the Department of Education of Guangdong Province (2019KZDXM004 and 2019KCXTD001).
Abstract: A switch from avian-type α-2,3 to human-type α-2,6 receptors is an essential element for the initiation of a pandemic from an avian influenza virus. Some H9N2 viruses exhibit a preference for binding to human-type α-2,6 receptors, which highlights their potential threat to public health. However, our understanding of the molecular basis for the switch of receptor preference is still limited. In this study, we employed the random forest algorithm to identify potentially key amino acid sites within hemagglutinin (HA) that are associated with the receptor binding ability of H9N2 avian influenza virus (AIV). These sites were subsequently verified by receptor binding assays. A total of 12 substitutions in the HA protein (N158D, N158S, A160N, A160D, A160T, T163I, T163V, V190T, V190A, D193N, D193G, and N231D) were predicted to favor binding to α-2,6 receptors. Except for V190T, all of these substitutions were demonstrated by receptor binding assays to bind preferentially to α-2,6 receptors. Notably, the A160T substitution caused a significant upregulation of immune-response genes and an increased mortality rate in mice. Our findings provide novel insights into the genetic basis of receptor preference of the H9N2 AIV.
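The feature-selection step described above can be illustrated with a small sketch: fit a random forest on per-site residue features and rank sites by impurity-based feature importance. The strains, residue encoding, and labelling rule below are invented for illustration; only the overall recipe (fit a forest, rank sites) mirrors the abstract.

```python
# Hypothetical sketch of ranking HA sites by random-forest feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
sites = [158, 160, 163, 190, 193, 231]            # HA positions of interest
X = rng.integers(0, 20, size=(300, len(sites)))   # toy residue index per site

# Toy rule: receptor preference driven by the residue at position 190 (col 3).
y = (X[:, 3] >= 10).astype(int)                   # 1 = prefers alpha-2,6

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = sorted(zip(sites, rf.feature_importances_), key=lambda t: -t[1])
top_site = ranking[0][0]
print(top_site)  # the site the forest finds most informative
```

On this synthetic data the forest recovers the planted site; on real sequence data the same ranking step would feed the candidate substitutions into the binding assays.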
Funding: Supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300, and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048.
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for this problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. Consequently, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimizing makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the number of prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that it considerably improves the performance of IGA. Furthermore, comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks show that MLIGA demonstrates significant potential for solving large-scale DPFSPs, indicating that MLIGA is well suited for real-world distributed flow shop scheduling.
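The probability-curve acceptance idea can be sketched in miniature. The curve and constants below are illustrative assumptions, not MLIGA's actual rules; they only show how a cube-root term yields a gently decaying acceptance probability for worsening moves.

```python
import math

def accept_prob(delta, scale=10.0):
    """Toy cube-root acceptance curve: always accept improving moves
    (delta <= 0); accept a move that worsens makespan by delta with a
    probability that decays like the cube root of the relative loss."""
    if delta <= 0:
        return 1.0
    return max(0.0, 1.0 - (delta / scale) ** (1.0 / 3.0))

print(accept_prob(-2.0))   # improving move: probability 1.0
print(accept_prob(1.25))   # mildly worse: still accepted about half the time
```

Because the cube root rises steeply near zero, small makespan losses are penalized quickly while the curve flattens for larger losses, which is one plausible reading of combining a cube root function with custom rules.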
Funding: Supported by the National Natural Science Foundation of China [grant number 51775020]; the Science Challenge Project [grant number TZ2018007]; the National Natural Science Foundation of China [grant number 62073009]; the Postdoctoral Fellowship Program of CPSF [grant number GZC20233365]; and the Fundamental Research Funds for the Central Universities [grant number JKF-20240559].
Abstract: Parameter extraction for photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although methods based on meta-heuristic algorithms have been proposed to determine these parameters, the robustness of their solutions faces great challenges as the complexity of the PV model increases, and unstable results can affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification of various PV models. Specifically, the differential evolution mutation operator is integrated into the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating both randomization and neighborhood strategies is developed to balance exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single-diode, double-diode, and triple-diode models combined with the Lambert W function for three PV cell and module types: RTC France, Photowatt-PWP 201, and S75. According to a statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimates than other techniques for various PV models under varying environmental conditions. All of our source code is publicly available at https://github.com/dirge1/TERIME.
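The differential-evolution mutation operator folded into TERIME's exploration phase is the standard DE/rand/1 rule, easy to sketch; the scale factor F = 0.5 is a common default assumed here, not taken from the paper.

```python
import numpy as np

def de_rand_1(pop, F=0.5, seed=0):
    """DE/rand/1 mutation: for each individual, build a donor vector from
    three distinct other members, x_r1 + F * (x_r2 - x_r3)."""
    rng = np.random.default_rng(seed)
    n = len(pop)
    donors = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        donors[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return donors

pop = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0], [1.5, 1.5]])
print(de_rand_1(pop).shape)  # (5, 2)
```

A useful property visible in this sketch: once the population collapses to a single point, the difference term vanishes and the donors coincide with the population, so the operator injects diversity only while the population is spread out.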
Abstract: This research presents a novel nature-inspired metaheuristic optimization algorithm called the Narwhale Optimization Algorithm (NWOA). The algorithm draws inspiration from the foraging and prey-hunting strategies of narwhals, the “unicorns of the sea”, particularly their use of their distinctive spiral tusks, which play significant roles in hunting, searching for prey, navigation, echolocation, and complex social interaction. In particular, NWOA imitates the foraging strategies and techniques of narwhals when hunting for prey, focusing mainly on the cooperative and exploratory behavior shown during group hunting and on the use of their tusks to sense and locate prey under the Arctic ice. These functions provide a strong basis for assessing the algorithm's ability to balance exploration and exploitation, its convergence speed, and its solution accuracy. The performance of NWOA is evaluated on 30 benchmark test functions. A comparison study using the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Perfumer Optimization Algorithm (POA), Candle Flame Optimization (CFO) Algorithm, Particle Swarm Optimization (PSO) Algorithm, and Genetic Algorithm (GA) validates the results. As the experimental results show, NWOA yields competitive outcomes among these well-known optimizers and outperforms them in several instances. These results suggest that NWOA is an effective and robust optimization tool suitable for solving many different complex real-world optimization problems.
Abstract: Data clustering is an essential technique for analyzing complex datasets and continues to be a central research topic in data analysis. Traditional clustering algorithms, such as K-means, are widely used due to their simplicity and efficiency. This paper proposes a novel Spiral Mechanism-Optimized Phasmatodea Population Evolution algorithm (SPPE) to improve clustering performance. The SPPE algorithm introduces several enhancements to the standard Phasmatodea Population Evolution (PPE) algorithm. First, a Variable Neighborhood Search (VNS) factor is incorporated to strengthen the local search capability and foster population diversity. Second, a position update model incorporating a spiral mechanism is designed to improve the algorithm's global exploration and convergence speed. Finally, a dynamic balancing factor, guided by fitness values, adjusts the search process to balance exploration and exploitation effectively. The performance of SPPE is first validated on the CEC2013 benchmark functions, where it demonstrates excellent convergence speed and superior optimization results compared with several state-of-the-art metaheuristic algorithms. To further verify its practical applicability, SPPE is combined with the K-means algorithm for data clustering and tested on seven datasets. Experimental results show that SPPE-K-means improves clustering accuracy, reduces dependency on initialization, and outperforms other clustering approaches. This study highlights SPPE's robustness and efficiency in solving both optimization and clustering challenges, making it a promising tool for complex data analysis tasks.
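The spiral position-update model is described only at a high level above. A common way such updates are written, borrowed here from whale-style logarithmic spirals as an assumption rather than SPPE's exact formula, moves each candidate around the best solution:

```python
import math
import random

def spiral_update(x, best, b=1.0, rng=random.Random(0)):
    """Illustrative logarithmic-spiral step toward `best`:
    x' = |best - x| * e^(b*t) * cos(2*pi*t) + best, with t drawn from [-1, 1].
    The shape constant b and the range of t are assumptions."""
    t = rng.uniform(-1.0, 1.0)
    return [abs(bi - xi) * math.exp(b * t) * math.cos(2 * math.pi * t) + bi
            for xi, bi in zip(x, best)]

print(spiral_update([2.0, -1.0], [0.0, 0.0]))
```

The spiral shrinks the distance to the best solution on average while the oscillating cosine term lets candidates overshoot and circle it, which is what such updates contribute to global exploration.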
Funding: Funded by the National Key Research and Development Program (Grant No. 2022YFB3706904).
Abstract: The traditional first-order reliability method (FORM) often encounters non-convergence or excessive computation when analyzing complex engineering problems. To improve the global convergence speed of structural reliability analysis, an improved coati optimization algorithm (COA) is proposed in this paper. In this study, a social learning strategy is used to improve the coati optimization algorithm (SL-COA), which improves the convergence speed and robustness of the new heuristic optimization algorithm. The SL-COA is then compared with the original COA and with recent heuristic optimization algorithms such as the whale optimization algorithm (WOA) and osprey optimization algorithm (OOA) on the CEC2005 and CEC2017 test function sets and on two engineering optimization design examples. The optimization results show that the proposed SL-COA is highly competitive. Next, this study introduces the SL-COA into the most probable point (MPP) search process of FORM and constructs a new reliability analysis method. Finally, the proposed reliability analysis method is verified on four mathematical examples and two engineering examples. The results show that the proposed SL-COA-assisted FORM converges quickly and avoids premature convergence to local optima, as demonstrated by its successful application to problems such as composite cylinder design and support bracket analysis.
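The social learning strategy is not spelled out in the abstract. One plausible reading, sketched here purely as an assumption, is that each individual except the current best moves partway toward an exemplar drawn from the individuals ranked above it:

```python
import numpy as np

def social_learning_step(pop, fitness, r=0.5, seed=0):
    """Hypothetical social-learning update: each individual (except the best)
    moves a fraction r toward an exemplar chosen among strictly better-ranked
    individuals. The rule and r are illustrative, not SL-COA's own."""
    rng = np.random.default_rng(seed)
    order = np.argsort(fitness)              # ascending fitness: best first
    new_pop = pop.copy()
    for rank in range(1, len(order)):
        i = order[rank]
        exemplar = pop[order[rng.integers(rank)]]   # someone strictly better
        new_pop[i] = pop[i] + r * (exemplar - pop[i])
    return new_pop

pop = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [3.0, 3.0]])
fitness = (pop ** 2).sum(axis=1)             # sphere function, minimised
print(social_learning_step(pop, fitness))
```

The best individual is left untouched, so the incumbent solution is never lost while the rest of the population is pulled toward higher-quality regions.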
Funding: Supported by the National Natural Science Foundation of China (52275480); the Guizhou Provincial Science and Technology Program of Qiankehe Zhongdi Guiding ([2023]02); the Guizhou Provincial Science and Technology Program of Qiankehe Platform Talent Project (GCC[2023]001); and the Guizhou Provincial Science and Technology Project of Qiankehe Platform Project (KXJZ[2024]002).
Abstract: Metaheuristic algorithms are pivotal in cloud task scheduling. However, the complexity and uncertainty of the scheduling problem severely limit such algorithms, and numerous improved algorithms have been proposed in response. The Hiking Optimization Algorithm (HOA) has been used in multiple fields, but it suffers from local optima, slow convergence, and inefficient late-iteration search when solving cloud task scheduling problems. This paper therefore proposes an improved HOA called CMOHOA, which enhances HOA with multiple cooperating strategies. Specifically, Chebyshev chaos is introduced to increase population diversity; a hybrid speed update strategy is designed to accelerate convergence; and an adversarial learning strategy is introduced to enhance search capability in late iterations. Scheduling problems from different scenarios are used to test CMOHOA's performance. First, CMOHOA was used to solve basic cloud computing task scheduling problems, reducing the average total cost by 10% or more. Second, CMOHOA was applied to edge-fog-cloud scheduling problems, reducing the average total scheduling cost by 2% or more. Finally, CMOHOA reduced the average total cost by 7% or more on information transmission scheduling problems.
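The Chebyshev chaotic map used above for population diversity has a standard form, x_{t+1} = cos(k·arccos(x_t)). A minimal sketch (the order k, the seed value, and how the orbit is mapped onto decision variables are assumptions):

```python
import math

def chebyshev_map(x0, k=4, n=200):
    """Iterate the Chebyshev chaotic map x_{t+1} = cos(k * arccos(x_t)),
    producing a bounded but highly irregular sequence in [-1, 1] that can
    seed a diverse initial population."""
    xs, x = [], x0
    for _ in range(n):
        x = math.cos(k * math.acos(x))
        xs.append(x)
    return xs

seq = chebyshev_map(0.7)
print(min(seq), max(seq))  # the whole orbit stays inside [-1, 1]
```

To initialise a population one would typically rescale each orbit value from [-1, 1] into the decision-variable bounds; the chaotic orbit covers the interval more evenly than many pseudo-random draws of the same length.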
Abstract: Reliable cluster head (CH) selection-based routing protocols are necessary for increasing packet transmission efficiency through optimal path discovery without degrading transmission reliability. In this paper, the Hybrid Golden Jackal and Improved Whale Optimization Algorithm (HGJIWOA) is proposed as an effective and optimal routing protocol that guarantees efficient routing of data packets along the paths established between the CHs and the movable sink. HGJIWOA includes a dynamic lens-imaging learning strategy and novel update rules for determining the reliable routes essential for broadcasting data packets, attained through fitness-based CH selection. The CH selection process, achieved using the Golden Jackal Optimization Algorithm (GJOA), depends entirely on the factors of maintainability, consistency, trust, delay, and energy. The adopted GJOA algorithm plays a dominant role in determining the optimal routing path based on reduced delay and minimal distance. The protocol further utilizes the Improved Whale Optimization Algorithm (IWOA) to forward data from the chosen CHs to the base station via an optimized route determined by energy and distance. It also includes a reliable route maintenance process that aids in deciding whether data should be transmitted over the selected route or re-routed. Simulation of the proposed HGJIWOA mechanism with different numbers of sensor nodes confirmed an improved mean throughput of 18.21%, sustained residual energy of 19.64%, and a minimized end-to-end delay of 21.82%, better than competing CH selection approaches.
Abstract: The uncertain nature of mapping user tasks to virtual machines (VMs) causes system failure or execution delay in cloud computing. To maximize cloud resource throughput and decrease user response time, load balancing is needed to overcome user task execution delays and system failures. Most swarm-intelligent dynamic load balancing solutions that use hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in job distribution to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the solution from reaching a local optimum. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared with baseline load balancing strategies such as the Fractional Improved Whale Social Optimization-based VM migration strategy (FIWSOA), HDWOA, and Binary Bird Swap.
基金support from the Biotechnology and Biological Council Doctoral Training Programme(BBSRC DTP)the support from the Royal Society and Wolfson Foundation(RSWF\FT\191022).
Abstract: Nonlinear wavefront shaping is crucial for advancing optical technologies, enabling applications in optical computation, information processing, and imaging. However, a significant challenge is that once a metasurface is fabricated, the nonlinear wavefront it generates is fixed, offering little flexibility. This limitation often necessitates fabricating different metasurfaces for different wavefronts, which is both time-consuming and inefficient. To address this, we combine evolutionary algorithms with spatial light modulators (SLMs) to dynamically control wavefronts using a single metasurface, reducing the need for multiple fabrications and enabling the generation of arbitrary nonlinear wavefront patterns without requiring complicated optical alignment. We demonstrate this approach by introducing a genetic algorithm (GA) to manipulate visible wavefronts converted from near-infrared light via third-harmonic generation (THG) in a silicon metasurface. The Si metasurface supports multipolar Mie resonances that strongly enhance light-matter interactions, thereby significantly boosting THG emission at resonant positions. Additionally, the cubic relationship between THG emission and the infrared input reduces noise in the diffractive patterns produced by the SLM. This allows for precise experimental engineering of the nonlinear emission patterns with fewer alignment constraints. Our approach paves the way for self-optimized nonlinear wavefront shaping, advancing optical computation and information processing techniques.
Funding: Supported by the National Key Research and Development Program (No. 2022YFC2402300).
Abstract: Industrial linear accelerators often contain many bunches when their pulse widths are extended to microseconds. As they typically operate at low electron energies and high currents, the interactions among bunches cannot be neglected. In this study, an algorithm is introduced for calculating the space charge force of a train of infinite bunches. By utilizing the ring charge model and the particle-in-cell (PIC) method, and by combining analytical and numerical methods, the proposed algorithm efficiently calculates the space charge force of infinite bunches, enabling the accurate design of accelerator parameters and a comprehensive understanding of the space charge force. This is a significant improvement over existing simulation software such as ASTRA and PARMELA, which can only handle a single bunch or a small number of bunches. The PIC algorithm is validated in long-drift-space transport by comparison with existing models, including the infinite-bunch, ASTRA single-bunch, and PARMELA several-bunch algorithms. The space charge force calculation results for the external acceleration field are also verified. The reliability of the proposed algorithm provides a foundation for the design and optimization of industrial accelerators.
Abstract: Thinning of antenna arrays has been a popular topic for the last several decades, and with increasing computational power this optimization task has acquired a new hue. This paper suggests a genetic algorithm as an instrument for antenna array thinning. The algorithm, with a deliberately chosen fitness function, allows synthesizing thinned linear antenna arrays with a low peak sidelobe level (SLL) while maintaining the half-power beamwidth (HPBW) of a full linear antenna array. Based on results from existing papers in the field and known approaches to antenna array thinning, a classification of thinning types is introduced. The optimal thinning type for a linear thinned antenna array is determined on the basis of the maximum attainable SLL. The effect of the thinning coefficient on the main directional pattern characteristics, such as peak SLL and HPBW, is discussed for a number of amplitude distributions.
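To make the thinning objective concrete, here is a sketch of how a candidate chromosome (a 0/1 on/off mask over elements) might be evaluated: compute the array factor and read off the normalized pattern, which a GA fitness would then scan for the peak sidelobe level. Half-wavelength spacing and the example mask are assumptions for illustration.

```python
import numpy as np

def array_factor_db(mask, d=0.5, n_angles=1801):
    """Normalized array factor (dB) of a uniformly excited linear array with
    element spacing d (in wavelengths); mask[i] = 1 keeps element i active."""
    theta = np.linspace(0.0, np.pi, n_angles)          # angle from array axis
    n = np.arange(len(mask))
    phase = 2.0 * np.pi * d * np.outer(np.cos(theta), n)
    af = np.abs(np.exp(1j * phase) @ mask)
    return theta, 20.0 * np.log10(np.maximum(af / af.max(), 1e-12))

mask = np.ones(16)
mask[[2, 5, 11]] = 0.0                     # a hypothetical thinning pattern
theta, af_db = array_factor_db(mask)
print(af_db.max())  # 0 dB at the main beam (broadside)
```

A full fitness function would additionally locate the main-lobe nulls, take the maximum of the pattern outside them as the peak SLL, and penalize masks whose HPBW exceeds that of the full array.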
Funding: Funded by Universiti Putra Malaysia under a Geran Putra Inisiatif (GPI) research grant with reference GP-GPI/2023/9762100.
Abstract: This study proposes a novel time-synchronization protocol inspired by stochastic gradient algorithms. The clock model of each network node in this synchronizer is configured as a generic adaptive filter in which different stochastic gradient algorithms can be adopted for adaptive clock frequency adjustment. The study analyzes the pairwise synchronization behavior of the protocol and proves the generalized convergence of the synchronization error and clock frequency. A novel closed-form expression is also derived for the generalized asymptotic steady-state error variance. Steady-state and convergence analyses are then presented for synchronization with frequency adaptation performed using the least mean squares (LMS), Newton search, gradient descent (GraDes), normalized LMS (N-LMS), and Sign-Data LMS algorithms. Results obtained from real-time experiments showed that our protocols outperform the Average Proportional-Integral Synchronization Protocol (AvgPISync) with respect to the impact of quantization error on synchronization accuracy, precision, and convergence time. This generalized approach to time synchronization allows flexibility in selecting a suitable protocol for different wireless sensor network applications.
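The LMS variant of the clock-frequency adaptation can be sketched with a scalar toy model: treat the relative frequency (skew) between a node and its reference as the single filter weight, and update it from pairwise timestamp differences. The step size and the noiseless setup are assumptions, not the paper's protocol parameters.

```python
def lms_skew_estimate(true_skew=1.0003, mu=0.5, steps=50):
    """Scalar LMS: each step the node observes that one local tick (u = 1)
    corresponds to d = true_skew reference ticks, and nudges its skew
    estimate w by mu * error * u until the pairwise error vanishes."""
    w = 1.0                     # start by assuming a perfect clock
    for _ in range(steps):
        u, d = 1.0, true_skew   # local vs. reference elapsed time
        e = d - w * u           # synchronization error for this exchange
        w += mu * e * u         # stochastic-gradient (LMS) update
    return w

print(lms_skew_estimate())  # converges to the true skew 1.0003
```

Swapping the update line is exactly where the protocol's other variants would differ: N-LMS divides the step by the regressor energy, and Sign-Data LMS replaces u in the update with its sign.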
Funding: Supported by Research on the Key Technology of Damage Identification Methods for Dam Concrete Structures Based on Transformer Image Processing (242102521031); the project Research on Situational Awareness and Behavior Anomaly Prediction of Social Media Based on Multimodal Time Series Graphs (232102520004); and the Key Scientific Research Project of Higher Education Institutions in Henan Province (25B520019).
Abstract: Image enhancement utilizes intensity transformation functions to maximize the information content of enhanced images. This paper approaches the topic as an optimization problem and uses the bald eagle search (BES) algorithm to achieve optimal results. In our proposed model, gamma correction and Retinex address color cast issues and enhance image edges and details, and the final enhanced image is obtained through color balancing. The BES algorithm seeks the optimal solution through selection, search, and swooping stages, but it is prone to getting stuck in local optima and converges slowly. To overcome these limitations, we propose an improved BES algorithm (ABES) with enhanced population learning, position updates, and control parameters. ABES is employed to optimize the core parameters of gamma correction and Retinex to improve image quality, with maximization of information entropy as the objective function. Real benchmark images are collected to validate its performance. Experimental results demonstrate that ABES outperforms existing image enhancement methods, including the flower pollination algorithm, the chimp optimization algorithm, particle swarm optimization, and BES, in terms of information entropy, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and patch-based contrast quality index (PCQI). ABES demonstrates superior performance both qualitatively and quantitatively, enhancing prominent features and contrast while maintaining the natural appearance of the original images.
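The role of information entropy as the objective can be shown in miniature: apply gamma correction to a synthetic low-light image and pick the gamma with the highest histogram entropy. The brute-force grid here merely stands in for ABES, and the synthetic image is an assumption.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the intensity histogram (image values in [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gamma_correct(img, g):
    return np.clip(img, 0.0, 1.0) ** g

rng = np.random.default_rng(0)
img = rng.beta(2.0, 8.0, size=(64, 64))    # synthetic under-exposed image
grid = np.linspace(0.2, 2.0, 19)           # coarse stand-in for ABES
best_g = max(grid, key=lambda g: entropy(gamma_correct(img, g)))
print(best_g)
```

Since g = 1 (the identity transform) sits on the grid, the selected gamma can never score below the original image's entropy; the optimizer's job is to climb from there toward the transform that spreads the histogram most.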
Funding: Supported by the Fujian Provincial Social Science Foundation Public Security Theory Research Project (FJ2023TWGA004) and the Education and Scientific Research Special Project of the Fujian Provincial Department of Finance (Research on the Application of Blockchain Technology in Prison Law Enforcement Management), under the National Key R&D Program of China (2020YFB1005500).
Abstract: With the rapid development of blockchain technology, the Chinese government has proposed that commercial blockchain services in China should support the national cryptographic standard, also known as the GuoMi (state secret) algorithms. The original Hyperledger Fabric only supports internationally common encryption algorithms, so it is particularly necessary to enhance support for the national standard. Traditional identity authentication, access control, and security audit technologies have single points of failure, and data can easily be tampered with, leading to trust issues. To address these problems, this paper proposes an optimization and application research plan for Hyperledger Fabric. We study an optimization model for the cryptographic components in Hyperledger Fabric and, based on Fabric's pluggable mechanism, enhance the Fabric architecture with the national cryptographic standard. In addition, we research the key technologies involved in a blockchain-based secure application protocol. We propose a blockchain-based identity authentication protocol, detailing the design of an identity authentication scheme based on blockchain certificates and Fabric CA, and use a dual-signature method to further improve its security and reliability. We then propose a flexible, dynamically configurable real-time access control and security audit mechanism based on blockchain, further enhancing the security of the system.
Abstract: With the rapid advancement of medical artificial intelligence (AI) technology, particularly the widespread adoption of AI diagnostic systems, ethical challenges in medical decision-making have garnered increasing attention. This paper analyzes the limitations of algorithmic ethics in medical decision-making and explores accountability mechanisms, aiming to provide theoretical support for ethically informed medical practice. The study highlights how the opacity of AI algorithms complicates the assignment of decision-making responsibility, undermines doctor-patient trust, and affects informed consent. By thoroughly investigating issues such as the algorithmic “black box” problem and data privacy protection, we develop accountability assessment models to address ethical concerns related to medical resource allocation. Furthermore, this research examines the effective implementation of AI diagnostic systems through case studies of both successful and unsuccessful applications, extracting lessons on accountability mechanisms and response strategies. Finally, we emphasize that establishing a transparent accountability framework is crucial for enhancing the ethical standards of medical AI systems and protecting patients' rights and interests.
Funding: Funded by the Guangxi Science and Technology Base and Talent Special Project, grant number GuiKeAD20159077, and the Foundation of Guilin University of Technology, grant number GLUTQD2018001.
Abstract: The exponential growth in the scale of power systems has led to a significant increase in the complexity of dispatch problems, particularly within multi-area interconnected power grids. This complexity makes distributed solution methodologies not only essential but also highly desirable. In computational terms, the multi-area economic dispatch problem (MAED) can be formulated as a linearly constrained separable convex optimization problem, which the proximal point algorithm (PPA) is particularly adept at addressing. This study introduces parallel (PPPA) and serial (SPPA) variants of the PPA as distributed algorithms specifically designed for the computational modelling of the MAED. The PPA introduces a quadratic term into the objective function which, while potentially complicating the iterative updates of the algorithm, serves to damp oscillations near the optimal solution, thereby enhancing the convergence characteristics. Furthermore, the convergence efficiency of the PPA is significantly influenced by the parameter c. To address this parameter sensitivity, this research draws on trend theory from stock market analysis to propose trend-theory-driven distributed PPPA and SPPA, thereby enhancing the robustness of the computational models. The proposed models are anticipated to exhibit superior convergence behaviour, stability, and robustness with respect to parameter selection, potentially outperforming existing methods such as the alternating direction method of multipliers (ADMM) and the auxiliary problem principle (APP) in the computational simulation of power system dispatch problems. The simulation results demonstrate that the trend-theory-based PPPA, SPPA, ADMM, and APP exhibit significant robustness to the initial value of the parameter c, and show superior convergence characteristics compared with residual-balancing ADMM.
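The quadratic damping term referred to above is the heart of the proximal point iteration x_{k+1} = argmin_x f(x) + (c/2)||x - x_k||^2. For a one-dimensional quadratic the argmin is closed-form, which makes the damping easy to see; the objective f and the value of c below are illustrative, not a dispatch model.

```python
def prox_step(x, a, c):
    """One proximal-point step for f(x) = (x - a)^2:
    argmin_y (y - a)^2 + (c/2)(y - x)^2  =>  y = (2a + c*x) / (2 + c).
    Larger c keeps the iterate closer to x, damping each step."""
    return (2.0 * a + c * x) / (2.0 + c)

x, a, c = 10.0, 3.0, 1.0
for _ in range(60):
    x = prox_step(x, a, c)
print(round(x, 9))  # converges to the minimiser a = 3.0
```

Note the trade-off the abstract discusses: the error contracts by a factor c/(2 + c) per step, so a larger c gives smoother but slower progress, which is why the choice of c matters and why a trend-based rule for steering it is attractive.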
Funding: Funded by the Ministry of Higher Education (MoHE) Malaysia under the Transdisciplinary Research Grant Scheme (TRGS/1/2019/UKM/01/4/2).
Abstract: The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics use different implementation strategies, but comparisons between them are lacking in the literature, and previous works have not highlighted the beneficial and detrimental implementation methods of different components. The question is how to employ them effectively to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work conducts a review of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and a classification of the algorithms are presented. The analysis highlights efficient and inefficient methods in eight key components: search points, search phases, heuristic selection, move acceptance, feedback, the Tabu mechanism, the restart mechanism, and low-level heuristic parameter control. The review analyzes these components with reference to the competition's final leaderboard and discusses future research directions for them. The effective approaches, identified as having the highest quality index, are mixed search points, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. The findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2024-00337489, Development of Data Drift Management Technology to Overcome Performance Degradation of AI Analysis Models).
Abstract: As vehicular networks grow increasingly complex due to high node mobility and dynamic traffic conditions, efficient clustering mechanisms are vital to ensure stable and scalable communication. Recent studies have emphasized the need for adaptive clustering strategies to improve performance in Intelligent Transportation Systems (ITS). This paper presents the Grasshopper Optimization Algorithm for Vehicular Network Clustering (GOA-VNET), an innovative approach to optimal vehicular clustering in Vehicular Ad-Hoc Networks (VANETs) that leverages the Grasshopper Optimization Algorithm (GOA) to address the critical challenges of traffic congestion and communication inefficiency in ITS. The proposed GOA-VNET employs an iterative and interactive optimization mechanism to dynamically adjust node positions and cluster configurations, ensuring robust adaptability to varying vehicular densities and transmission ranges. Key features of GOA-VNET include the use of attraction-zone, repulsion-zone, and comfort-zone parameters, which collectively enhance clustering efficiency and minimize congestion within Regions of Interest (ROI). By managing cluster configurations and node densities effectively, GOA-VNET ensures balanced load distribution and seamless data transmission, even in scenarios with high vehicular densities and varying transmission ranges. Comparative evaluations against the Whale Optimization Algorithm (WOA) and Grey Wolf Optimization (GWO) demonstrate that GOA-VNET consistently outperforms these methods, achieving superior clustering efficiency, reducing the number of clusters by up to 10% in high-density scenarios, and improving data transmission reliability. Simulation results reveal that under a 100-600 m transmission range, GOA-VNET achieves an average reduction of 8%-15% in the number of clusters and maintains a 5%-10% improvement in packet delivery ratio (PDR) compared to baseline algorithms. Additionally, the algorithm incorporates a heat transfer-inspired load-balancing mechanism, ensuring equitable distribution of nodes among cluster leaders (CLs) and maintaining a stable network environment. These results validate GOA-VNET as a reliable and scalable solution for VANETs, with significant potential to support next-generation ITS. Future research could further enhance the algorithm by integrating multi-objective optimization techniques and exploring broader applications in complex traffic scenarios.
Abstract: This study examines the bidirectional shaping mechanism between short-video algorithms and film narratives within the attention economy. It investigates how algorithmic logic influences cinematic storytelling and how films, in turn, contribute to the aesthetic enhancement of short-video content. Drawing on Communication Accommodation Theory and Berry's Acculturation Theory, along with case analyses and industry data, this research demonstrates that algorithms push films toward high-stimulus, fast-paced narrative patterns, characterized by increased shot density and structural fragmentation, to capture and retain viewer attention. Conversely, films counter this influence by supplying narratively deep and artistically refined content that elevates short-video aesthetics and encourages critical audience engagement. This dynamic reflects a process of mutual adaptation rather than one-sided dominance. The study concludes that such interaction signifies a broader restructuring of cultural production logic, facilitating cross-media convergence while simultaneously posing risks to cultural diversity due to the prioritization of high-traffic content. Balancing this relationship will require policy support, algorithmic transparency, and strengthened industry self-regulation to preserve artistic integrity and cultural-ecosystem diversity.