Funding: Supported by the National Natural Science Foundation (12371381) and the Natural Science Foundation of Shanxi (202403021222270).
Abstract: In this paper, we establish a class of parallel algorithms for solving the low-rank tensor completion problem. The main idea is to perform the N singular value decompositions of the slice matrices produced by the unfold operator on N different processors, after which the fold operator assembles the next iterate tensor, thereby reducing computing time. We analyze the global convergence of the algorithm in theory. Numerical experiments on simulated data and real image inpainting show that the parallel algorithm outperforms the original algorithm in CPU time at the same precision.
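To make the slice-level parallelism concrete, the following is a minimal sketch of one parallel iteration of this style of tensor completion, assuming a standard mode-n unfold/fold pair, a singular-value soft-thresholding step, and a process pool; the actual update rule, thresholds, and processor mapping used in the paper are not specified in the abstract.

```python
# Illustrative sketch (not the authors' exact algorithm): one SVD soft-thresholding
# step per mode-n unfolding, with the unfoldings processed by parallel workers.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given shape."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def svd_shrink(matrix, tau):
    """Singular-value soft-thresholding, the usual low-rank proximal step."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def parallel_step(tensor, tau=1.0):
    """One iteration: shrink every mode-n unfolding in parallel, then average the folds."""
    modes = range(tensor.ndim)
    unfoldings = [unfold(tensor, m) for m in modes]
    with ProcessPoolExecutor() as pool:
        shrunk = list(pool.map(svd_shrink, unfoldings, [tau] * tensor.ndim))
    return sum(fold(mat, m, tensor.shape) for m, mat in zip(modes, shrunk)) / tensor.ndim

if __name__ == "__main__":      # guard required for process pools on spawn-based platforms
    rng = np.random.default_rng(0)
    t = rng.random((20, 20, 20))
    print(parallel_step(t, tau=0.5).shape)
```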
Abstract: This research presents a novel nature-inspired metaheuristic optimization algorithm, called the Narwhale Optimization Algorithm (NWOA). The algorithm draws inspiration from the foraging and prey-hunting strategies of narwhals, the “unicorns of the sea”, particularly the use of their distinctive spiral tusks, which play significant roles in hunting, searching for prey, navigation, echolocation, and complex social interaction. Specifically, the NWOA imitates the foraging strategies and techniques of narwhals when hunting for prey, focusing mainly on the cooperative and exploratory behavior shown during group hunting and on the use of the tusk to sense and locate prey under the Arctic ice. These behaviors provide a strong basis for assessing the algorithm's ability to balance exploration and exploitation, its convergence speed, and its solution accuracy. The performance of the NWOA is evaluated on 30 benchmark test functions. A comparison study against the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Perfumer Optimization Algorithm (POA), Candle Flame Optimization (CFO) Algorithm, Particle Swarm Optimization (PSO) Algorithm, and Genetic Algorithm (GA) validates the results. The experimental results show that NWOA yields competitive outcomes among these well-known optimizers and, in several instances, superior ones. These results suggest that NWOA is an effective and robust optimization tool suitable for solving many different complex real-world optimization problems.
Abstract: The advent of microgrids in modern energy systems heralds a promising era of resilience, sustainability, and efficiency. Within the realm of grid-tied microgrids, the selection of an optimal optimization algorithm is critical for effective energy management, particularly in economic dispatching. This study compares the performance of Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) in microgrid energy management systems, implemented using MATLAB tools. Through a comprehensive review of the literature and simulations conducted in MATLAB, the study analyzes performance metrics, convergence speed, and the overall efficacy of GA and PSO, with a focus on economic dispatching tasks. Notably, a significant distinction emerges between the cost curves generated by the two algorithms for microgrid operation, with the PSO algorithm consistently resulting in lower costs due to its effective economic dispatching capabilities. Specifically, the utilization of the PSO approach could potentially lead to substantial savings on the power bill, amounting to approximately $15.30 in this evaluation. The findings provide insights into the strengths and limitations of each algorithm within the complex dynamics of grid-tied microgrids, thereby assisting stakeholders and researchers in arriving at informed decisions. This study contributes to the discourse on sustainable energy management by offering actionable guidance for the advancement of grid-tied microgrid technologies through MATLAB-implemented optimization algorithms.
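As a rough illustration of how PSO is applied to an economic-dispatch objective of this kind, the sketch below minimizes a quadratic fuel-cost function with a power-balance penalty; the generator cost coefficients, unit limits, demand, and PSO parameters are illustrative assumptions, not the study's MATLAB model.

```python
# Minimal PSO sketch for an economic-dispatch-style cost minimization.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = np.array([0.01, 0.02]), np.array([2.0, 1.5]), np.array([10.0, 12.0])  # assumed cost coefficients
p_min, p_max, demand = np.array([10.0, 10.0]), np.array([80.0, 60.0]), 100.0    # assumed limits and demand

def cost(p):
    fuel = np.sum(a * p**2 + b * p + c, axis=-1)
    imbalance = np.abs(np.sum(p, axis=-1) - demand)
    return fuel + 1e3 * imbalance          # penalty term enforces the power balance

n, dim, iters = 30, 2, 200
x = rng.uniform(p_min, p_max, (n, dim))    # particle positions = dispatch levels
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # inertia + cognitive + social terms
    x = np.clip(x + v, p_min, p_max)
    f = cost(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("dispatch:", gbest, "cost:", cost(gbest))
```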
Funding: This work was funded by the Deanship of Graduate Studies and Scientific Research at Najran University under the Easy Funding Program, grant code NU/EFP/SERC/13/166.
Abstract: The Internet of Things (IoT) has emerged as an important future technology. IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data. In IoT-Fog computing, resource allocation and independent task scheduling aim to deliver the short-response-time services demanded by IoT devices and performed by fog servers. The heterogeneity of IoT-Fog resources and the huge amount of data that must be processed by IoT-Fog tasks make scheduling fog computing tasks a challenging problem. This study proposes an Adaptive Firefly Algorithm (AFA) for dependent task scheduling in IoT-Fog computing. The proposed AFA is a modified version of the standard Firefly Algorithm (FA) that considers the execution times of the submitted tasks, the impact of synchronization requirements, and the communication time between dependent tasks. Because IoT-Fog computing depends mainly on distributed fog node servers that receive tasks dynamically, handling the communication and synchronization issues between dependent tasks is becoming a challenging problem. The proposed AFA aims to address the dynamic nature of IoT-Fog computing environments and uses a dynamic light absorption coefficient to control the decrease in attractiveness over iterations. Its performance was benchmarked against the standard Firefly Algorithm (FA), Puma Optimizer (PO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) through simulations under light, typical, and heavy workload scenarios. Under heavy workloads, the proposed AFA obtained the shortest average execution time, 968.98 ms, compared to 970.96, 1352.87, 1247.28, and 1773.62 ms for FA, PO, GA, and ACO, respectively. The simulation results demonstrate the proposed AFA's ability to rapidly converge to optimal solutions, emphasizing its adaptability and efficiency in typical and heavy workloads.
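The core firefly movement rule with an iteration-dependent light absorption coefficient, which the dynamic-coefficient idea above refers to, can be sketched as follows; the decay schedule and parameter values are assumptions, and the actual AFA additionally encodes task-to-server assignments and communication costs.

```python
# Sketch of the firefly attractiveness/movement rule with a light-absorption
# coefficient gamma that decays over iterations (assumed linear decay).
import numpy as np

rng = np.random.default_rng(1)

def firefly_step(x, fitness, t, t_max, beta0=1.0, alpha=0.2, g0=1.0, g_min=0.1):
    """One FA sweep over positions x (n, dim); lower objective value = brighter firefly."""
    gamma = g_min + (g0 - g_min) * (1.0 - t / t_max)   # dynamic light absorption coefficient
    x = x.copy()
    f = np.array([fitness(xi) for xi in x])
    n, dim = x.shape
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                            # firefly j is brighter, so i moves toward j
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
                x[i] = x[i] + beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
    return x

# toy usage: minimize the sphere function
x = rng.uniform(-5, 5, (20, 3))
for t in range(100):
    x = firefly_step(x, lambda v: np.sum(v**2), t, 100)
```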
Abstract: The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failure or execution delay in Cloud Computing. Effective load balancing is therefore needed to maximize cloud resource throughput, decrease user response time, and avoid task execution delays and system failures. Most swarm-intelligent dynamic load balancing solutions that use hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in distributing jobs to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the solution from reaching a local optimum. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared to baseline load balancing strategies such as the Fractional Improved Whale Social Optimization Based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
Abstract: Multi-firmware comparison techniques can improve efficiency when auditing firmware in bulk. However, the problem of matching functions across multiple firmware images has not been studied before. This paper proposes a multi-firmware comparison method based on evolutionary algorithms and trusted base points. We first model the multi-firmware comparison as a multi-sequence matching problem. Then, we propose a fitness (adaptation) function and a population generation method based on trusted base points. Finally, we apply an evolutionary algorithm to find the optimal result. We also design the similarity of matching results as an evaluation metric to measure the effect of multi-firmware comparison. The experiments show that the proposed method outperforms Bindiff and the string-based method. Specifically, the similarity between the matching results of the proposed method and the Bindiff matching results is 61%, and the similarity between the matching results of the proposed method and the string-based method is 62.8%. By sampling and manual verification, the accuracy of the matching results of the proposed method is about 66.4%.
Abstract: Based on the Google Earth Engine cloud computing data platform, this study employed three algorithms, Support Vector Machine (SVM), Random Forest, and Classification and Regression Tree, to classify the current status of land cover in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results show that all three algorithms classified the five basic land cover types (Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas) well, with overall accuracies greater than 80% and Kappa coefficients greater than 0.8. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas, with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
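A minimal Google Earth Engine Python API sketch of the SVM branch of this workflow is shown below; the study-area geometry, the labelled training asset name, the band list, and the SVM hyperparameters are placeholders rather than the study's actual inputs.

```python
# Sketch of an Earth Engine SVM land-cover classification; placeholders marked below.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([105.8, 20.6, 106.3, 21.0])        # placeholder extent, not the study area
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']    # assumed Landsat 8 surface-reflectance bands

# Annual median Landsat 8 composite over the region
composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
             .filterBounds(region)
             .filterDate('2023-01-01', '2023-12-31')
             .median()
             .select(bands))

training_points = ee.FeatureCollection('users/example/hungyen_training')  # hypothetical labelled asset
samples = composite.sampleRegions(collection=training_points,
                                  properties=['landcover'], scale=30)

svm = ee.Classifier.libsvm(kernelType='RBF', gamma=0.5, cost=10).train(   # assumed hyperparameters
    features=samples, classProperty='landcover', inputProperties=bands)

classified = composite.classify(svm)   # per-pixel land-cover map
```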
Funding: Supported by the National Natural Science Foundation of China (Grant Number 61573264).
Abstract: As a complicated optimization problem, the parallel batch processing machines scheduling problem (PBPMSP) exists in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. PBPMSP and scheduling problems with machine eligibility are frequently considered separately; however, PBPMSP with machine eligibility is seldom explored. This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced using a combination of heuristic and random generation, and the competitive search of memeplexes comprises two phases. In the first phase, competition is carried out between any two memeplexes; in the second phase, the number of iterations is adjusted based on the competition and the search strategies are adapted according to the evolution quality of the memeplexes. An adaptive population shuffling procedure is also given. Computational experiments are conducted on 100 instances. The results show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
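For orientation, the basic shuffled frog-leaping mechanics that CSFLA builds on can be sketched as below on a continuous toy objective; the paper's competition phase, adaptive shuffling, and the permutation encoding needed for batch scheduling are not reproduced here.

```python
# Sketch of one shuffled frog-leaping pass: sort, deal frogs into memeplexes,
# let the worst frog in each memeplex leap toward the local best.
import numpy as np

rng = np.random.default_rng(2)

def sfla_iteration(pop, fitness, n_memeplexes=4):
    """One pass over population `pop` (n, dim); lower fitness is better."""
    f = np.array([fitness(p) for p in pop])
    order = np.argsort(f)                            # best frogs first
    pop, f = pop[order], f[order]
    for m in range(n_memeplexes):
        idx = np.arange(m, len(pop), n_memeplexes)   # cyclic deal into memeplex m
        best, worst = idx[np.argmin(f[idx])], idx[np.argmax(f[idx])]
        leap = rng.random(pop.shape[1]) * (pop[best] - pop[worst])
        candidate = pop[worst] + leap
        if fitness(candidate) < f[worst]:            # accept only improving leaps
            pop[worst], f[worst] = candidate, fitness(candidate)
    return pop

# toy usage on the sphere function (the real PBPMSP would use a batch/sequence encoding)
pop = rng.uniform(-5, 5, (20, 3))
for _ in range(100):
    pop = sfla_iteration(pop, lambda v: np.sum(v**2))
```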
Abstract: The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with an apparent need for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances performance by augmenting the global search capability through a quasi-opposition-based learning strategy and accelerating convergence via sinusoidal mapping. A comprehensive evaluation on the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling and offers a robust foundation for enhancing the utilization of cloud computing resources.
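The two ingredients attributed to AWCO above, quasi-opposition-based learning and a sinusoidal map, are standard components that can be sketched in isolation as follows; how AWCO wires them into the WCO position update is not described in the abstract, so the snippet only illustrates the building blocks.

```python
# Building-block sketch: quasi-opposition-based learning for diversification
# and a sinusoidal chaotic map for generating well-spread control values.
import numpy as np

rng = np.random.default_rng(3)

def quasi_opposite(x, lb, ub):
    """Quasi-opposite point: uniform between the interval centre and the opposite point."""
    centre = (lb + ub) / 2.0
    opposite = lb + ub - x
    lo, hi = np.minimum(centre, opposite), np.maximum(centre, opposite)
    return rng.uniform(lo, hi)

def sinusoidal_map(x0=0.7, a=2.3, n=10):
    """Sinusoidal chaotic sequence x_{k+1} = a * x_k^2 * sin(pi * x_k)."""
    seq = [x0]
    for _ in range(n - 1):
        seq.append(a * seq[-1] ** 2 * np.sin(np.pi * seq[-1]))
    return np.array(seq)

# toy usage: build a quasi-opposite population to compete with the original one
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
population = rng.uniform(lb, ub, (8, 5))
qo_population = np.array([quasi_opposite(p, lb, ub) for p in population])
print(sinusoidal_map())
```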
Funding: Supported by the Key R&D Plan of Shandong Province (Major Science and Technology Innovation Project) No. 2023CXGC010701 and the 2024 City-University Integrated Development Strategic Engineering Project No. JNSX2024066.
Abstract: With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle to adapt to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization and incorporates edge computing to enable real-time decision-making and to offload computational tasks efficiently. By constructing a utility function from attribute and weight matrices, the algorithm ensures that UAV-H clusters dynamically select the network access with the highest utility value. Simulation results demonstrate that the proposed method reduces the number of network handoffs by 26.13% compared to the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhances total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving the Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications that require reliable and efficient wireless communication.
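A bare-bones version of the MADM utility computation described above, using simple additive weighting over a normalized attribute matrix, is sketched below; the attribute set, the normalization, and the weights (which the paper tunes with a GA) are illustrative assumptions.

```python
# Weighted-utility network selection: normalize attributes, weight them, pick the best network.
import numpy as np

# rows = candidate networks (e.g., 5G cell, mesh link), columns = attributes
# [bandwidth (Mbps), delay (ms), packet loss (%), cost]  -- illustrative values
attributes = np.array([
    [120.0, 25.0, 0.5, 0.8],
    [ 60.0, 10.0, 1.0, 0.2],
])
benefit = np.array([True, False, False, False])   # higher-is-better flags per attribute
weights = np.array([0.4, 0.3, 0.2, 0.1])          # would be GA-optimized in the paper

def utility(attr, benefit, w):
    col_min, col_max = attr.min(axis=0), attr.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    # min-max normalization, inverted for cost-type attributes
    norm = np.where(benefit, (attr - col_min) / span, (col_max - attr) / span)
    return norm @ w

u = utility(attributes, benefit, weights)
print("utilities:", u, "-> select network", int(np.argmax(u)))
```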
Funding: Supported by the Changzhou Science and Technology Support Project (CE20235045), the Open Subject of the Jiangsu Province Key Laboratory of Power Transmission and Distribution (2021JSSPD12), the Talent Projects of Jiangsu University of Technology (KYY20018), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX23_1633).
Abstract: Energy storage power plants are critical in balancing power supply and demand. However, the scheduling of these plants faces significant challenges, including high network transmission costs and inefficient inter-device energy utilization. To tackle these challenges, this study proposes an optimal scheduling model for energy storage power plants based on edge computing and an improved whale optimization algorithm (IWOA). The proposed model designs an edge computing framework that transfers a large share of data processing and storage tasks to the network edge; this architecture effectively reduces transmission costs by minimizing data travel time. In addition, the model considers demand response strategies and builds an objective function that minimizes the sum of electricity purchase cost and operation cost. The IWOA enhances the optimization process by utilizing adaptive weight adjustments and an optimal neighborhood perturbation strategy, preventing the algorithm from converging to suboptimal solutions. Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant, facilitating efficient charging and discharging. It successfully achieves peak shaving and valley filling for both electrical and heat loads, promoting the effective utilization of renewable energy sources. The edge computing framework significantly reduces transmission delays between energy devices, and IWOA outperforms traditional algorithms in optimizing the objective function.
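The following sketch shows a whale-optimization-style position update with an adaptive weight applied to the current best solution, the general kind of modification the IWOA description suggests; the weight schedule, the two-branch simplification, and the omission of the neighborhood perturbation step are assumptions of this illustration.

```python
# WOA-style step with an assumed adaptive weight on the leader (best) solution.
import numpy as np

rng = np.random.default_rng(4)

def woa_step(x, best, t, t_max, b=1.0):
    """One WOA-style update of positions x (n, dim) toward the current best solution."""
    a = 2.0 * (1.0 - t / t_max)                      # standard linear decrease from 2 to 0
    w = 0.9 - 0.5 * (t / t_max) ** 2                 # assumed adaptive weight schedule
    n, dim = x.shape
    new_x = np.empty_like(x)
    for i in range(n):
        r = rng.random(dim)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:                       # shrinking-encircling branch
            new_x[i] = w * best - A * np.abs(C * best - x[i])
        else:                                        # logarithmic-spiral (bubble-net) branch
            l = rng.uniform(-1, 1)
            new_x[i] = np.abs(best - x[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + w * best
    return new_x

# toy usage on the sphere function
objective = lambda v: np.sum(v**2, axis=-1)
x = rng.uniform(-10, 10, (20, 4))
best = x[np.argmin(objective(x))]
for t in range(200):
    x = woa_step(x, best, t, 200)
    candidates = np.vstack([x, best[None]])
    best = candidates[np.argmin(objective(candidates))]
```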
Abstract: Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial postures; in such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, providing a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and restrictions in the application context. Additionally, we examine the datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in the field of face detection and recognition and outlining potential directions for future research.
Funding: Supported by the Optimisation Theory and Algorithm Research Team (Grant No. 23kytdzd004), the University Science Research Project of Anhui Province (Grant No. 2024AH050631), and the General Programs for Young Teacher Cultivation of the Educational Commission of Anhui Province (Grant No. YQYB2023090).
Abstract: In this paper, we propose a new full-Newton step feasible interior-point algorithm for the special weighted linear complementarity problem. The proposed algorithm employs the technique of algebraic equivalent transformation to derive the search direction. It is shown that the proximity measure reduces quadratically at each iteration. Moreover, the iteration bound of the algorithm matches the best-known polynomial complexity for this type of problem. Furthermore, numerical results are presented to show the efficiency of the proposed algorithm.
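To fix notation for readers unfamiliar with the setting, a standard way of writing the weighted linear complementarity problem and the algebraic-equivalent-transformation idea is given below in LaTeX; the abstract does not state the paper's central-path parametrization or its choice of transformation function, so the target vector $w_\mu$ and the function $\psi$ are placeholders.

```latex
% Weighted LCP data: M in R^{n x n}, q in R^n, weight vector w >= 0; find (x, s) with
\[
  s = Mx + q, \qquad x \ge 0,\; s \ge 0, \qquad x \circ s = w ,
\]
% where \circ denotes the componentwise (Hadamard) product.  The AET idea: for a
% suitable invertible function \psi on (0,\infty), the centering condition
% x \circ s = w_\mu (with w_\mu the central-path target) is rewritten equivalently as
\[
  \psi(x \circ s) = \psi(w_\mu) ,
\]
% and one feasible full-Newton step then solves the linearized system
\[
  M\,\Delta x - \Delta s = 0, \qquad
  s \circ \Delta x + x \circ \Delta s
    = \frac{\psi(w_\mu) - \psi(x \circ s)}{\psi'(x \circ s)} ,
\]
% with the quotient taken componentwise; different choices of \psi yield different
% search directions.
```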
Funding: This work was funded by the Deanship of Scientific Research at King Khalid University through a large group project under grant number GRP.2/663/46.
Abstract: Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
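As an illustration of the LLM-based detectors this survey reviews, the sketch below scores candidate domains with a sequence-classification transformer via the Hugging Face transformers API; the checkpoint name is a hypothetical placeholder for any model fine-tuned on benign-versus-DGA domains, and the label convention is an assumption.

```python
# Scoring domains with a fine-tuned transformer classifier (placeholder checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "example/dga-bert"   # hypothetical fine-tuned checkpoint, not a real model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

domains = ["google.com", "xjkd3qpz7vue.biz", "loginupdatesecure.net"]
batch = tokenizer(domains, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)

for domain, p in zip(domains, probs):
    print(f"{domain}: P(DGA) = {p[1]:.3f}")    # assumes label index 1 = DGA class
```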
Funding: Supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant No. 124E002 (1001 Project).
Abstract: This study addresses the critical challenge of reconfiguration in unbalanced power distribution networks (UPDNs), focusing on the complex 123-Bus test system. Three scenarios are investigated: (1) simultaneous power loss reduction and voltage profile improvement, (2) minimization of voltage and current unbalance indices under various operational cases, and (3) multi-objective optimization using Pareto front analysis to concurrently optimize the voltage unbalance index, active power loss, and current unbalance index. Unlike previous research, which often simplified system components, this work retains all equipment, including capacitor banks, transformers, and voltage regulators, to ensure realistic results. The study evaluates twelve metaheuristic algorithms for solving the reconfiguration problem (RecPrb) in UPDNs. A comprehensive statistical analysis is conducted to identify the most efficient algorithm for solving the RecPrb in the 123-Bus UPDN, employing multiple performance metrics and comparative techniques. The Artificial Hummingbird Algorithm emerges as the top-performing algorithm and is subsequently applied to a multi-objective optimization challenge in the 123-Bus UPDN. This research contributes valuable insights for network operators and researchers in selecting suitable algorithms for specific reconfiguration scenarios, advancing the field of UPDN optimization and management.
Abstract: Heuristic optimization algorithms have been widely used to solve complex optimization problems in fields such as engineering, economics, and computer science. These algorithms are designed to find high-quality solutions efficiently by balancing exploration of the search space with exploitation of promising solutions. While heuristic optimization algorithms vary in their specific details, they often exhibit common patterns that are essential to their effectiveness. This paper analyzes and explores these common patterns. Through a comprehensive review of the literature, we identify the patterns commonly observed in such algorithms, including initialization, local search, diversity maintenance, adaptation, and stochasticity. For each pattern, we describe the motivation behind it, its implementation, and its impact on the search process. To demonstrate the utility of our analysis, we identify these patterns in multiple heuristic optimization algorithms. For each case study, we analyze how the patterns are implemented in the algorithm and how they contribute to its performance. Through these case studies, we show how our analysis can be used to understand the behavior of heuristic optimization algorithms and to guide the design of new ones. Our analysis reveals that these patterns are essential to the effectiveness of heuristic optimization algorithms; by understanding and incorporating them into the design of new algorithms, researchers can develop more efficient and effective optimizers.
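The five patterns listed above can be made concrete with a deliberately generic skeleton such as the one below; the specific operators (Gaussian perturbation, geometric step decay, partial restart on stagnation) are illustrative choices, not prescriptions from the paper.

```python
# Generic metaheuristic skeleton annotated with the five common patterns.
import numpy as np

rng = np.random.default_rng(5)

def generic_metaheuristic(objective, lb, ub, pop_size=30, iters=300):
    dim = len(lb)
    pop = rng.uniform(lb, ub, (pop_size, dim))             # pattern 1: initialization
    fit = np.apply_along_axis(objective, 1, pop)
    step = 0.3 * (ub - lb)                                  # pattern 4: adaptable parameter
    for t in range(iters):
        for i in range(pop_size):
            trial = pop[i] + step * rng.standard_normal(dim)    # pattern 5: stochasticity
            trial = np.clip(trial, lb, ub)                      # pattern 2: local search move
            f = objective(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
        step *= 0.995                                       # pattern 4: adaptation over time
        if np.std(fit) < 1e-9:                              # pattern 3: diversity maintenance
            worst = np.argsort(fit)[-pop_size // 4:]        #   restart the worst quarter
            pop[worst] = rng.uniform(lb, ub, (len(worst), dim))
            fit[worst] = np.apply_along_axis(objective, 1, pop[worst])
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = generic_metaheuristic(lambda v: np.sum(v**2), np.full(4, -5.0), np.full(4, 5.0))
print(x_best, f_best)
```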
Abstract: The word “spatial” fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description covers its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of the SGL interpreter, which converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and shortest path between network nodes, collecting relevant information throughout the whole world, eliminating multiple targets with intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, arguing that the former is universal for any spatial systems while the latter are merely tools for concrete applications.
Funding: Supported by the Innovation Research Group Project of the National Natural Science Foundation of China (No. 32260555).
Abstract: This study investigated hypoxia-inducible factor (HIF)-1α-mediated proteomic changes in post-slaughter Tan sheep skeletal muscle and identified energy metabolism biomarkers using the competitive adaptive reweighted sampling (CARS) algorithm. HIF-1α inhibition during early storage attenuated the pH decline and significantly increased the total colour change (ΔE) (P < 0.05) while reducing myofibril fragmentation compared with controls. Proteomic profiling identified 257 differentially expressed proteins enriched in the adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK), glycolysis, and HIF-1 signalling pathways. CARS analysis highlighted lactate dehydrogenase A (LDHA), phosphoglycerate kinase 1 (PGK1; a glycolytic enzyme), heat shock protein beta-6 (HSPB6), and heat shock protein 90 kDa beta 1 (HSP90B1) as key energy metabolism biomarkers. The results suggested that HIF-1 stabilised ATP production under hypoxic conditions by suppressing glycogen synthesis, enhancing glycolysis, modulating HSP activity to preserve cellular homeostasis, and influencing cytoskeletal proteins, thereby affecting meat quality. These results provide novel insights into the regulation of post-mortem muscle energy metabolism and potential targets for meat quality optimisation.