In recent years, fog computing has become an important environment for handling the Internet of Things (IoT). Fog computing was developed to handle large-scale big data by scheduling tasks in cooperation with cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. With the large volume of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to reduce energy consumption across nodes in fog computing while users execute tasks through the least-cost paths. Task scheduling is built on a modified artificial ecosystem optimization (AEO) combined with swarm operators of the Salp Swarm Algorithm (SSA), so that the two competitively sharpen the exploitation phase of the search for the optimum. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), seeks the most suitable solution to the multi-objective task scheduling problem that combines cost and energy. A knapsack formulation is also added to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Experimental results showed that it improved energy consumption, cost, and time over the other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
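To make the combined objective concrete, here is a minimal Python sketch of how a cost-plus-energy fitness could be evaluated for a candidate task-to-node assignment; the node rates, weights, and exhaustive search are illustrative assumptions of ours, not details from the paper, which searches this space heuristically with EAEOSSA instead.

```python
import itertools
import numpy as np

# Toy instance: 4 tasks, 3 fog nodes; all rates are illustrative placeholders.
task_mi   = np.array([4000, 2500, 6000, 1200])   # task lengths (million instructions)
node_mips = np.array([1000, 2000, 1500])         # node speed (MIPS)
node_cost = np.array([0.03, 0.07, 0.05])         # $ per second of compute
node_watt = np.array([40.0, 90.0, 60.0])         # power draw while busy (W)

def fitness(assignment, w_cost=0.5, w_energy=0.5):
    """Weighted cost+energy objective for one schedule; assignment[i] is the
    node chosen for task i. Lower is better."""
    t = task_mi / node_mips[assignment]                # seconds per task
    cost   = float(np.sum(t * node_cost[assignment]))
    energy = float(np.sum(t * node_watt[assignment]))  # joules
    return w_cost * cost + w_energy * energy

# Exhaustive search gives the true optimum on this toy instance; a metaheuristic
# takes over when the assignment space is too large to enumerate.
best = min(itertools.product(range(len(node_mips)), repeat=len(task_mi)),
           key=lambda a: fitness(np.array(a)))
print(best, fitness(np.array(best)))
```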
Energy storage power plants are critical in balancing power supply and demand. However, the scheduling of these plants faces significant challenges, including high network transmission costs and inefficient inter-device energy utilization. To tackle these challenges, this study proposes an optimal scheduling model for energy storage power plants based on edge computing and the improved whale optimization algorithm (IWOA). The proposed model designs an edge computing framework, transferring a large share of data processing and storage tasks to the network edge. This architecture effectively reduces transmission costs by minimizing data travel time. In addition, the model considers demand response strategies and builds an objective function based on the minimization of the sum of electricity purchase cost and operation cost. The IWOA enhances the optimization process by utilizing adaptive weight adjustments and an optimal neighborhood perturbation strategy, preventing the algorithm from converging to suboptimal solutions. Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant, facilitating efficient charging and discharging. It successfully achieves peak shaving and valley filling for both electrical and heat loads, promoting the effective utilization of renewable energy sources. The edge computing framework significantly reduces transmission delays between energy devices. Furthermore, IWOA outperforms traditional algorithms in optimizing the objective function.
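The following sketch illustrates the flavor of an IWOA-style loop on a stand-in objective; the adaptive-weight schedule, the perturbation scale, and the omission of WOA's random-search phase are our assumptions, since the abstract does not give the exact update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):  # stand-in for the paper's purchase-plus-operation cost
    return float(np.sum(x ** 2))

dim, pop, iters = 10, 20, 200
lo, hi = -5.0, 5.0
X = rng.uniform(lo, hi, (pop, dim))
best = min(X, key=objective).copy()
best_f = objective(best)

for t in range(iters):
    a = 2.0 * (1 - t / iters)            # standard WOA control parameter
    w = 0.9 - 0.5 * (t / iters) ** 2     # assumed adaptive inertia weight
    for i in range(pop):
        A = a * (2 * rng.random(dim) - 1)
        C = 2 * rng.random(dim)
        if rng.random() < 0.5:           # encircling-prey update
            X[i] = w * best - A * np.abs(C * best - X[i])
        else:                            # spiral update
            l = rng.uniform(-1, 1)
            X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + w * best
        X[i] = np.clip(X[i], lo, hi)
        f = objective(X[i])
        if f < best_f:
            best, best_f = X[i].copy(), f
    # optimal-neighborhood perturbation: jitter the incumbent, keep improvements
    trial = np.clip(best + rng.normal(0, 0.1 * (1 - t / iters), dim), lo, hi)
    if objective(trial) < best_f:
        best, best_f = trial, objective(trial)

print(best_f)
```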
The Internet of Things (IoT) has emerged as an important future technology. IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data. In IoT-Fog computing, resource allocation and independent task scheduling aim to deliver the short response times demanded by IoT devices and provided by fog servers. The heterogeneity of IoT-Fog resources and the huge amount of data that IoT-Fog tasks must process make scheduling fog computing tasks a challenging problem. This study proposes an Adaptive Firefly Algorithm (AFA) for dependent task scheduling in IoT-Fog computing. The proposed AFA is a modified version of the standard Firefly Algorithm (FA) that considers the execution times of the submitted tasks, the impact of synchronization requirements, and the communication time between dependent tasks. Because IoT-Fog computing depends mainly on distributed fog node servers that receive tasks dynamically, handling the communication and synchronization between dependent tasks is difficult, and the proposed AFA is designed to address this dynamic nature of IoT-Fog environments. The AFA mechanism uses a dynamic light absorption coefficient to control the decrease in attractiveness over iterations. Its performance was benchmarked against the standard Firefly Algorithm (FA), Puma Optimizer (PO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) through simulations under light, typical, and heavy workload scenarios. Under heavy workloads, the proposed AFA obtained the shortest average execution time, 968.98 ms, compared with 970.96, 1352.87, 1247.28, and 1773.62 ms for FA, PO, GA, and ACO, respectively. The simulation results demonstrate the proposed AFA's ability to rapidly converge to optimal solutions, emphasizing its adaptability and efficiency in typical and heavy workloads.
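The core firefly update can be illustrated directly from the standard FA equations; only the schedule below for the dynamic light absorption coefficient is our assumption about how it might vary over iterations.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_t(t, T, g_min=0.1, g_max=10.0):
    """Assumed schedule: absorption grows over iterations, so attractiveness
    decays faster late in the run (exploration early, exploitation late)."""
    return g_min + (g_max - g_min) * t / T

def firefly_move(xi, xj, t, T, beta0=1.0, alpha=0.2):
    """Standard FA step: move firefly i toward brighter firefly j with
    attractiveness beta = beta0 * exp(-gamma * r^2), plus a random walk."""
    r2 = float(np.sum((xi - xj) ** 2))
    beta = beta0 * np.exp(-gamma_t(t, T) * r2)
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.size) - 0.5)

xi, xj = rng.random(4), rng.random(4)
print(firefly_move(xi, xj, t=10, T=100))
```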
The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failure or execution delay in cloud computing. To maximize cloud resource throughput and decrease user response time, effective load balancing is needed to overcome task execution delays and system failures. Most swarm-intelligent dynamic load balancing solutions that used hybrid metaheuristic algorithms failed to balance exploitation and exploration, and most load balancing methods were insufficient to handle the growing uncertainty in job distribution to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the search from stagnating in local optima. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared to baseline load balancing strategies such as the Fractional Improved Whale Social Optimization-based VM migration strategy (FIWSOA), HDWOA, and Binary Bird Swap.
With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle to adapt to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization, incorporating edge computing to enable real-time decision-making and offload computational tasks efficiently. By constructing a utility function through attribute and weight matrices, the algorithm ensures UAV-H clusters dynamically select the optimal network access with the highest utility value. Simulation results demonstrate that the proposed method reduces network handoff times by 26.13% compared to the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhances total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving the Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications requiring reliable and efficient wireless communication.
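A minimal sketch of the MADM utility computation described above, assuming min-max normalization; the attribute values and weights are illustrative (the paper tunes the weights with a GA).

```python
import numpy as np

# Candidate networks (rows) x attributes (cols); values are illustrative:
# [bandwidth Mb/s, delay ms, monetary cost, received signal strength dBm]
A = np.array([[100.0, 30.0, 0.8, -70.0],    # 5G cellular link
              [ 54.0, 12.0, 0.2, -60.0]])   # mesh link
benefit = np.array([True, False, False, True])  # higher-is-better flags
w = np.array([0.4, 0.3, 0.2, 0.1])              # attribute weights

def utilities(A, w, benefit):
    """Min-max normalize each attribute column (inverting cost attributes),
    then score each network as the weighted sum of its normalized row."""
    N = np.empty_like(A)
    for j in range(A.shape[1]):
        col = A[:, j]
        span = col.max() - col.min() or 1.0
        N[:, j] = (col - col.min()) / span if benefit[j] else (col.max() - col) / span
    return N @ w

u = utilities(A, w, benefit)
print("select network", int(u.argmax()), "utility values:", u)
```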
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with an apparent need for improved global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing resource utilization in cloud computing task scheduling.
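Both AWCO ingredients have standard textbook forms, sketched below; the map constant and sampling details are conventional choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def quasi_opposite(X, lo, hi):
    """Quasi-opposition-based learning: for x in [lo, hi], sample uniformly
    between the interval centre and the opposite point lo + hi - x."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - X
    return rng.uniform(np.minimum(centre, opposite), np.maximum(centre, opposite))

def sinusoidal_map(x0=0.7, n=10, a=2.3):
    """Sinusoidal chaotic map x_{k+1} = a * x_k^2 * sin(pi * x_k); such maps
    are used to seed or perturb search steps to speed convergence."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] ** 2 * np.sin(np.pi * xs[-1]))
    return np.array(xs)

X = rng.uniform(-5, 5, (4, 3))
print(quasi_opposite(X, -5, 5))
print(sinusoidal_map())
```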
Fog Computing (FC) provides processing and storage resources at the edge of the Internet of Things (IoT). By doing so, FC can help reduce latency and improve the reliability of IoT networks. The energy consumption of servers and computing resources is one of the factors that directly affect operating costs in fog environments. Energy consumption can be reduced by effective scheduling methods that offload tasks onto the best possible resources. To deal with this problem, a binary model based on the combination of the Krill Herd Algorithm (KHA) and the Artificial Hummingbird Algorithm (AHA) is introduced as Binary KHA-AHA (BAHA-KHA). KHA is used to improve AHA. The local-optima problem of BAHA-KHA for task scheduling in FC environments is also addressed using the dynamic voltage and frequency scaling (DVFS) method. The Heterogeneous Earliest Finish Time (HEFT) method is used to determine the order of task flow execution. The goal of the BAHA-KHA model is to minimize the number of resources used, the communication between dependent tasks, and energy consumption. In this paper, the FC environment is considered to address the workflow scheduling issue, reducing energy consumption and minimizing makespan on fog resources. The results were tested on five different workflows (Montage, CyberShake, LIGO, SIPHT, and Epigenomics). The evaluations show that the BAHA-KHA model has the best performance in comparison with the AHA, KHA, PSO, and GA algorithms, reducing the makespan by about 18% and energy consumption by about 24% compared with GA.
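The HEFT ordering step mentioned above has a well-known recursive definition, sketched here on a toy task graph; the graph and costs are illustrative, not the paper's workflows.

```python
# HEFT upward rank: rank_u(n) = w(n) + max over successors s of
# (c(n, s) + rank_u(s)); tasks are then scheduled in decreasing rank order.
w = {"A": 10, "B": 6, "C": 8, "D": 4}            # mean computation cost per task
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
c = {("A", "B"): 3, ("A", "C"): 5, ("B", "D"): 2, ("C", "D"): 1}  # mean comm. cost

def rank_u(n, memo={}):
    if n not in memo:
        memo[n] = w[n] + max((c[(n, s)] + rank_u(s) for s in succ[n]), default=0)
    return memo[n]

order = sorted(w, key=rank_u, reverse=True)
print(order)   # ['A', 'C', 'B', 'D'] -- the execution order fed to the scheduler
```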
To address the poor optimization performance and convergence of current cloud computing resource security distribution models, this paper puts forward a cloud computing resource security distribution model based on an improved artificial firefly algorithm. First, drawing on the characteristics of the artificial firefly swarm algorithm and the complex method, it incorporates the ideas of the complex method into the artificial firefly algorithm, using the complex method to guide the search of the artificial fireflies in the population, and then introduces a local search operator into the firefly movement mechanism, in order to improve the search efficiency and convergence precision of the algorithm. Simulation results show that the proposed model has good convergence behavior and optimization efficiency.
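A sketch of the complex (Box) method step that, per the abstract, guides the firefly search; the reflection coefficient and contraction fallback follow the textbook method rather than the paper's exact variant.

```python
import numpy as np

def complex_step(points, f, alpha=1.3):
    """One reflection step of the complex (Box) method: reflect the worst
    vertex through the centroid of the others; contract if that fails."""
    vals = np.array([f(p) for p in points])
    worst = vals.argmax()                      # minimization: largest f is worst
    centroid = (points.sum(axis=0) - points[worst]) / (len(points) - 1)
    trial = centroid + alpha * (centroid - points[worst])
    if f(trial) < vals[worst]:
        points[worst] = trial                  # accept the reflected point
    else:
        points[worst] = (points[worst] + centroid) / 2.0  # contract toward centroid
    return points

rng = np.random.default_rng(3)
pts = rng.uniform(-2, 2, (6, 2))
for _ in range(50):
    pts = complex_step(pts, lambda x: float(np.sum(x ** 2)))
print(pts.mean(axis=0))   # the complex drifts toward the optimum at the origin
```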
Aimed at the problems of small gradients, low learning rates, and slow error convergence when a deep belief network (DBN) uses the back-propagation process to adjust the network's connection weights and biases, this paper proposes a new algorithm that combines multi-innovation theory with the standard DBN algorithm: the multi-innovation DBN (MI-DBN). It sets up a new model of the back-propagation process in the DBN in which the single innovation used by the previous algorithm is extended to the innovations of multiple preceding periods, thus greatly increasing the error convergence rate. To study the application of the algorithm in social computing, it is used to recognize meaningful information about handwritten numbers in social networking images. This paper compares the MI-DBN algorithm with other representative classifiers through experiments. The results show that, compared with other representative classifiers, MI-DBN has a faster convergence rate and a smaller error on MNIST dataset recognition, and handwritten numbers in images are also recognized with precision.
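The multi-innovation idea can be sketched as follows: the weight update applies the last p error-driven corrections jointly instead of only the newest one (p = 1 recovers ordinary back-propagation); the decay factor and toy objective below are our assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import deque

p, eta, decay = 5, 0.1, 0.8        # innovation length, learning rate, assumed decay
history = deque(maxlen=p)          # the p most recent gradients (innovations)

def mi_update(W, grad):
    """Multi-innovation step: combine the last p gradients into one update."""
    history.appendleft(grad)
    step = sum((decay ** j) * g for j, g in enumerate(history))
    return W - eta * step

# Toy quadratic: gradients are noisy, but pooling p innovations smooths them.
W = np.zeros(3)
target = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(4)
for _ in range(100):
    grad = 2 * (W - target) + rng.normal(0, 0.05, 3)
    W = mi_update(W, grad)
print(W)   # converges near the target despite noisy single-step gradients
```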
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results demonstrate that the random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
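A classical numpy emulation of the quantity the Hadamard test estimates, C(t) = <psi|exp(-iHt)|psi> for a random |psi>, whose Fourier transform gives the density of states; a toy tight-binding chain stands in for the paper's materials, and exact diagonalization replaces Trotterized evolution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nearest-neighbour tight-binding chain (stand-in for graphene etc.).
n = 64
H = np.zeros((n, n))
idx = np.arange(n - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0

# Random normalized state |psi>.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# C(t) = sum_k |<k|psi>|^2 exp(-i E_k t); on hardware this correlation function
# is sampled via Hadamard tests on the Trotterized evolution circuit.
E, V = np.linalg.eigh(H)
amps = np.abs(V.conj().T @ psi) ** 2
ts = np.linspace(0, 200, 4096)
C = amps @ np.exp(-1j * np.outer(E, ts))

# DOS(E) ~ (1/pi) Re int_0^T exp(iEt) C(t) w(t) dt, with a window to damp
# truncation ringing (normalization approximate).
window = np.exp(-((ts / ts[-1]) * 3) ** 2)
energies = np.linspace(-2.5, 2.5, 400)
dos = np.real(np.exp(1j * np.outer(energies, ts)) @ (C * window)) * (ts[1] - ts[0]) / np.pi
print(energies[dos.argmax()])   # peaks track the spectrum of H
```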
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Quantum computing offers unprecedented computational power, enabling simultaneous computations beyond traditional computers. Quantum computers differ significantly from classical computers, necessitating a distinct approach to algorithm design, which involves taming quantum mechanical phenomena. This paper extends the numbering of computable programs to the quantum computing context. Numbering computable programs is a theoretical computer science concept that assigns unique numbers to individual programs or algorithms; common methods include Gödel numbering, which encodes programs as strings of symbols or characters and is often used in formal systems and mathematical logic. Based on the proposed numbering approach, this paper presents a mechanism to explore the set of possible quantum algorithms. The proposed approach is able to construct useful circuits such as the BB84 Quantum Key Distribution protocol, which enables a sender and receiver to establish a secure cryptographic key via a quantum channel. The proposed approach facilitates the process of exploring and constructing quantum algorithms.
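A minimal sketch of the numbering idea: interpret an integer's base-k digits as a gate sequence over a fixed instruction set, so enumerating integers walks the space of circuits. The gate set and encoding here are our own illustration; searching such an enumeration is the kind of process that could surface BB84-style prepare-and-measure sequences.

```python
# Every non-negative integer names exactly one circuit: its little-endian
# base-k digits index into a fixed gate alphabet.
GATES = ["H q0", "X q0", "CNOT q0 q1", "MEASURE q0"]   # k = 4 symbols (ours)

def number_to_circuit(n):
    """Decode integer n into a gate list via its base-k digits."""
    k, gates = len(GATES), []
    while n:
        n, d = divmod(n, k)
        gates.append(GATES[d])
    return gates or [GATES[0]]           # map 0 to the single-gate circuit

for n in (1, 7, 57):
    print(n, "->", number_to_circuit(n))
```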
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
Due to their exceptional programmability, DNA molecules are widely employed in the design of molecular circuits for applications such as DNA computing, DNA storage, and cancer diagnosis and treatment. The quality of DNA sequences directly determines the reliability of these molecular circuits. However, existing DNA encoding algorithms suffer from limitations such as reliance on Hamming distance and conflicts among multiple objectives, resulting in insufficient stability of the generated sequences. To address these issues, this paper proposes a thermodynamics-based multi-objective evolutionary optimisation algorithm (TEMOA). The core innovations of the proposed algorithm are as follows. First, a thermodynamics-based DNA encoding modelling strategy (TDEMS) is introduced, which simplifies the encoding process and significantly improves sequence quality by incorporating thermodynamic stability constraints. Second, two diversity optimisation strategies, the diversity assessment strategy (DAS) and the front equalisation nondominated sorting (FENS) strategy, are designed to enhance the algorithm's global search capability. Finally, a flexible fitness function design is incorporated to accommodate diverse user requirements. Experimental results demonstrate that TEMOA is more effective than state-of-the-art methods on challenging multi-objective optimisation problems, and the DNA sequences generated by TEMOA exhibit greater reliability than those produced by traditional DNA encoding algorithms.
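To illustrate the kind of screening a thermodynamics-aware encoder performs, here is a sketch using GC-content bounds and the simple Wallace-rule melting-temperature estimate; the paper uses full nearest-neighbour thermodynamics, and the thresholds below are illustrative.

```python
def gc_content(seq):
    """Fraction of G/C bases; extreme values destabilize hybridization."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def tm_wallace(seq):
    """Rough melting temperature (C) for short oligos: 2*(A+T) + 4*(G+C)."""
    at = seq.count("A") + seq.count("T")
    return 2 * at + 4 * (len(seq) - at)

def feasible(seq, gc=(0.4, 0.6), tm=(50, 70)):
    """Accept a candidate sequence only inside the GC and Tm windows."""
    return gc[0] <= gc_content(seq) <= gc[1] and tm[0] <= tm_wallace(seq) <= tm[1]

for s in ("ATCGATCGATCGATCGATCG", "GGGGGGGGGGCCCCCCCCCC"):
    print(s, gc_content(s), tm_wallace(s), feasible(s))
```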
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Hardware resource allocation in cloud computing still suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, permitting dynamic cloud scalability while maintaining superior service quality. For host prediction, we therefore present a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model in this work. First, the input of the suggested hybrid model is subjected to the vector autoregression (VAR) technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. The remaining data are then processed and fed into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves the long short-term memory layer, which is suitable for representing the temporal information of irregular trends in time series components. A key element of the entire process is that we used the activation function most appropriate for this type of model: a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing unpredictability of data centers, so two real load traces were used to assess performance in this study; one of the load traces comes from a typical distributed system. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
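A minimal PyTorch sketch of the described hybrid architecture; layer sizes are placeholders, the VAR prefiltering step is omitted, and ReLU stands in for the paper's scaled polynomial constant unit.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Conv1d extracts per-window usage features; an LSTM models the temporal
    trend; a linear head emits the next-step host load."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),  # placeholder for the scaled polynomial constant unit
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                 # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # convolve over time axis
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                      # load at the next step

model = CNNLSTM()
x = torch.randn(8, 30, 4)   # 8 traces, 30 time steps, 4 usage signals
print(model(x).shape)       # torch.Size([8, 1])
```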
With the rapid advancements in technology and science, optimization theory and algorithms have become increasingly important. A wide range of real-world problems is classified as optimization challenges, and meta-heuristic algorithms have shown remarkable effectiveness in solving these challenges across diverse domains, such as machine learning, process control, and engineering design, showcasing their capability to address complex optimization problems. The Stochastic Fractal Search (SFS) algorithm is one of the most popular meta-heuristic optimization methods inspired by the fractal growth patterns of natural materials. Since its introduction by Hamid Salimi in 2015, SFS has garnered significant attention from researchers and has been applied to diverse optimization problems across multiple disciplines. Its popularity can be attributed to several factors, including its simplicity, practical computational efficiency, ease of implementation, rapid convergence, high effectiveness, and ability to address single- and multi-objective optimization problems, often outperforming other established algorithms. This review paper offers a comprehensive and detailed analysis of the SFS algorithm, covering its standard version, modifications, hybridizations, and multi-objective implementations. The paper also examines several SFS applications across diverse domains, including power and energy systems, image processing, machine learning, wireless sensor networks, environmental modeling, economics and finance, and numerous engineering challenges. Furthermore, the paper critically evaluates the SFS algorithm's performance, benchmarking its effectiveness against recently published meta-heuristic algorithms. In conclusion, the review highlights key findings and suggests potential directions for future developments and modifications of the SFS algorithm.
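The SFS diffusion step that underlies the algorithm's exploration is commonly written as a Gaussian walk around the best point, sketched below; treat the details (one walk per point, greedy acceptance) as a simplification of the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def sfs_diffusion(P, BP, g):
    """One SFS Gaussian walk at generation g around the best point BP:
    sigma = |log(g)/g * (P - BP)|, new point ~ N(BP, sigma) + (e*BP - e2*P)."""
    sigma = np.abs(np.log(g) / g * (P - BP))
    eps, eps2 = rng.random(), rng.random()
    return rng.normal(BP, sigma) + (eps * BP - eps2 * P)

def sphere(x):
    return float(np.sum(x ** 2))

pts = rng.uniform(-4, 4, (15, 5))
best = min(pts, key=sphere)
for g in range(2, 200):
    walkers = np.array([sfs_diffusion(p, best, g) for p in pts])
    pts = np.array([w if sphere(w) < sphere(p) else p for w, p in zip(walkers, pts)])
    best = min(pts, key=sphere)
print(sphere(best))   # shrinks toward 0 as the walks contract around the best
```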
Artificial intelligence (AI) has evolved at an unprecedented pace in recent years. This rapid advancement includes algorithmic breakthroughs, cross-disciplinary integration, and diverse applications, driven by growing computational power, massive datasets, and collaborative global research. This special issue, Emerging Artificial Intelligence Technologies and Applications, was conceived to provide a platform for communicating cutting-edge AI research, developing novel methodologies, cross-domain applications, and critical advancements in addressing real-world challenges. Over the past months, we have witnessed a remarkable diversity of submissions, reflecting the global trend of AI innovation. Below, we synthesize the key insights from these works, highlighting their collective contribution to advancing AI's theoretical frontiers and practical applications.