Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for its various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of the proposed framework.
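The abstract above notes that the per-device offloading problem can be decoupled into a typical knapsack problem. As a minimal illustration of that reduction (not the authors' actual formulation), the sketch below solves a 0/1 knapsack by dynamic programming: each task's weight stands for a hypothetical offloaded workload, its value for the local energy saved by offloading, and the capacity for the edge server's computing budget; all concrete numbers are invented for the example.

```python
# Minimal 0/1 knapsack sketch for an offloading decision.
# Task workloads, energy savings, and the capacity are illustrative, not from the paper.

def knapsack_offloading(workloads, energy_savings, capacity):
    """Return the max total energy saved and the set of tasks to offload."""
    n = len(workloads)
    # dp[c] = best energy saving achievable with computing budget c
    dp = [0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, workloads[i] - 1, -1):  # backwards for 0/1 semantics
            if dp[c - workloads[i]] + energy_savings[i] > dp[c]:
                dp[c] = dp[c - workloads[i]] + energy_savings[i]
                choice[i][c] = True
    # Backtrack to recover which tasks are offloaded
    offloaded, c = [], capacity
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            offloaded.append(i)
            c -= workloads[i]
    return dp[capacity], sorted(offloaded)

if __name__ == "__main__":
    saved, tasks = knapsack_offloading([3, 4, 2, 5], [10, 14, 6, 13], capacity=9)
    print(saved, tasks)  # -> 30 [0, 1, 2]
```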
Research on task scheduling algorithms is one of the key techniques in grid computing. This paper first describes a DAG task scheduling model used in the grid computing environment, then discusses the generational scheduling (GS) and communication inclusion generational scheduling (CIGS) algorithms. Finally, an improved CIGS algorithm is proposed for the grid computing environment, and it is shown to be effective.
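Generational scheduling, as referenced above, dispatches a DAG level by level: all tasks whose predecessors have finished form one generation. The sketch below shows only that core grouping step (it ignores communication costs, which CIGS additionally accounts for); the task names and edges are illustrative.

```python
# Sketch of generational (level-by-level) grouping of a DAG, assuming GS collects
# all currently ready tasks into one "generation" before dispatching the next.
from collections import defaultdict

def generations(tasks, edges):
    """tasks: iterable of task ids; edges: list of (u, v) meaning u must finish before v."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    levels = []
    while ready:
        levels.append(ready)                 # one generation of mutually independent tasks
        nxt = []
        for u in ready:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        ready = nxt
    return levels

print(generations(["a", "b", "c", "d"], [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
# -> [['a'], ['b', 'c'], ['d']]
```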
Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which is able to improve the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
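The abstract reports an analysis of the secrecy rate but does not restate it. As general background only (the paper's exact expression may differ), the classical secrecy rate of a wiretap channel with multiple eavesdroppers is limited by the strongest eavesdropper:

```latex
R_s \;=\; \Big[\log_2\!\left(1+\gamma_D\right) \;-\; \max_{k}\,\log_2\!\left(1+\gamma_{E,k}\right)\Big]^{+}
```

where \gamma_D is the received SNR at the legitimate fog node, \gamma_{E,k} is the SNR at eavesdropper k, and [x]^+ = max(x, 0).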
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the domain of task scheduling within a cloud computing environment.
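AWCO's first enhancement is quasi-opposition-based learning (QOBL). The sketch below illustrates the usual form of QOBL for population initialization: for each candidate coordinate, a quasi-opposite point is drawn uniformly between the interval centre and the opposite point, and the best individuals from both sets are kept. The bounds, dimension, and sphere-function fitness are illustrative assumptions, not the paper's setup.

```python
# Sketch of quasi-opposition-based learning (QOBL) for population initialization.
import random

def quasi_opposite(x, lo, hi):
    """Quasi-opposite point: uniform between the interval centre and the opposite point."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    a, b = sorted((centre, opposite))
    return random.uniform(a, b)

def qobl_init(pop_size, dim, lo, hi, fitness):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    quasi = [[quasi_opposite(x, lo, hi) for x in ind] for ind in pop]
    # Keep the best pop_size individuals out of the union of both sets (lower fitness = better).
    return sorted(pop + quasi, key=fitness)[:pop_size]

# Example with a sphere function as a stand-in fitness.
best = qobl_init(10, 5, -10.0, 10.0, fitness=lambda ind: sum(v * v for v in ind))
print(best[0])
```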
Due to the intense data flow in expanding Internet of Things (IoT) applications, a heavy processing cost and workload on the fog-cloud side become inevitable. One of the most critical challenges is optimal task scheduling. Since this is an NP-hard problem, a metaheuristic approach can be a good option. This study introduces a novel enhancement to the Artificial Rabbits Optimization (ARO) algorithm by integrating chaotic maps and Levy flight strategies (CLARO). This dual approach addresses the limitations of standard ARO in terms of population diversity and convergence speed. It is designed for task scheduling in fog-cloud environments and simultaneously optimizes energy consumption, makespan, and execution time, three critical parameters that are often treated individually in prior works. Unlike conventional single-objective methods, the proposed approach incorporates a multi-objective fitness function that dynamically adjusts the weight of each parameter, resulting in better resource allocation and load balancing. For the analysis, a real-world dataset, the Open-source Google Cloud Jobs Dataset (GoCJ_Dataset), is used for performance measurement, and analyses are performed on the three considered parameters. Comparisons are made with well-known algorithms (GWO, SCSO, PSO, WOA, and ARO) to indicate the reliability of the proposed method. Performance evaluation is carried out by assigning these tasks to Virtual Machines (VMs) in the resource pool. Simulations are performed on 90 base cases and 30 scenarios for each evaluation parameter. The results indicate that the proposed algorithm achieved the best makespan performance in 80% of cases, ranked first in execution time in 61% of cases, and performed best on the remaining parameter, energy consumption, in 69% of cases. In addition, according to the results obtained with the defined fitness function, the proposed method (CLARO) is 2.52% better than ARO, 3.95% better than SCSO, 5.06% better than GWO, 8.15% better than PSO, and 9.41% better than WOA.
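CLARO augments ARO with chaotic maps and Levy flights to improve diversity and convergence. The sketch below illustrates one common realization of each ingredient, a Mantegna-style Levy step and the logistic chaotic map; the parameter values and the way the step is applied to a candidate solution are illustrative assumptions, not the paper's exact operators.

```python
# Sketch of a Levy-flight step (Mantegna's algorithm) and a logistic chaotic map.
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_move(position, best, scale=0.01):
    """Perturb a candidate solution towards the best-known one with heavy-tailed steps."""
    return [x + scale * levy_step() * (x - b) for x, b in zip(position, best)]

def logistic_map(c):
    """One iteration of the logistic chaotic map (r = 4), often used to seed diverse populations."""
    return 4.0 * c * (1.0 - c)

print(levy_move([0.2, 0.7, 0.4], best=[0.1, 0.9, 0.3]))
print(logistic_map(0.37))
```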
The growing number of devices in the Intelligent Internet of Things (AIoT) results in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, characterized by limitations and heterogeneity in computational resources, remains a formidable challenge. To tackle this challenge, in this study we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results unequivocally demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO significantly diminishes both the makespan and energy consumption, by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.
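The EHEO objective above combines makespan and energy consumption subject to task deadlines. A common way to express such a goal for a candidate task-to-node assignment is a weighted sum with a deadline penalty; the sketch below is a generic illustration of that idea, not the paper's exact model, and all runtimes, power values, and weights are made up.

```python
# Generic weighted makespan/energy fitness with a deadline penalty (illustrative only).

def fitness(assignment, exec_time, power, deadlines, w_time=0.5, w_energy=0.5, penalty=1e3):
    """assignment[i] = node chosen for task i; exec_time[i][n] = runtime of task i on node n;
    power[n] = power draw of node n; deadlines[i] = latest allowed finish time of task i."""
    finish = {}                                        # accumulated busy time per node
    energy = 0.0
    violations = 0
    for i, node in enumerate(assignment):
        t = exec_time[i][node]
        finish[node] = finish.get(node, 0.0) + t       # tasks on a node run back to back
        energy += power[node] * t
        if finish[node] > deadlines[i]:
            violations += 1
    makespan = max(finish.values())
    return w_time * makespan + w_energy * energy + penalty * violations

print(fitness([0, 1, 0], exec_time=[[4, 6], [3, 2], [5, 7]],
              power=[2.0, 1.0], deadlines=[10, 5, 12]))
# -> 14.5 with these made-up numbers
```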
A dynamic multi-beam resource allocation algorithm for a large low Earth orbit (LEO) constellation based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the match between resources and requirements. A complex algorithm is not feasible because the LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, algorithm mathematical model, trigger strategy, and distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate of more than 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
Cloud computing has rapidly evolved into a critical technology, seamlessly integrating into various aspects of daily life. As user demand for cloud services continues to surge, the need for efficient virtualization and resource management becomes paramount. At the core of this efficiency lies task scheduling, a complex process that determines how tasks are allocated and executed across cloud resources. While extensive research has been conducted in the area of task scheduling, optimizing multiple objectives simultaneously remains a significant challenge due to the NP-Complete (Non-deterministic Polynomial) nature of the problem. This study aims to address these challenges by providing a comprehensive review and experimental analysis of task scheduling approaches, with a particular focus on hybrid techniques that offer promising solutions. Utilizing the CloudSim simulation toolkit, we evaluated the performance of three hybrid algorithms: Estimation of Distribution Algorithm-Genetic Algorithm (EDA-GA), Hybrid Genetic Algorithm-Ant Colony Optimization (HGA-ACO), and Improved Discrete Particle Swarm Optimization (IDPSO). Our experimental results demonstrate that these hybrid methods significantly outperform traditional standalone algorithms in reducing Makespan, which is a critical measure of task completion time. Notably, the IDPSO algorithm exhibited superior performance, achieving a Makespan of just 0.64 milliseconds for a set of 150 tasks. These findings underscore the potential of hybrid algorithms to enhance task scheduling efficiency in cloud computing environments. This paper concludes with a discussion of the implications of our findings and offers recommendations for future research aimed at further improving task scheduling strategies, particularly in the context of increasingly complex and dynamic cloud environments.
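Makespan, the criterion used throughout the comparison above, is simply the completion time of the busiest resource. The short sketch below computes it for a given task-to-VM mapping in the way CloudSim-style experiments typically report it; the runtimes and mapping are invented for illustration.

```python
# Makespan of a schedule: the latest finish time over all VMs, assuming each VM
# runs its assigned tasks sequentially. Numbers below are illustrative.

def makespan(assignment, runtimes, num_vms):
    """assignment[i] = VM index for task i; runtimes[i] = execution time of task i on that VM."""
    busy = [0.0] * num_vms
    for task, vm in enumerate(assignment):
        busy[vm] += runtimes[task]
    return max(busy)

print(makespan(assignment=[0, 1, 1, 2, 0], runtimes=[4.0, 2.0, 3.0, 6.0, 1.0], num_vms=3))
# -> 6.0  (VM 2 finishes last)
```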
To solve the deadlock problem that arises when the interdependence between tasks is not considered during heuristic-based resource assignment and task scheduling, an improved ant colony system (ACS) based algorithm is proposed. First, we explain how to map the resource assignment and task scheduling (RATS) problem into the optimization selection problem of a task resource assignment graph (TRAG), and how to add a semaphore mechanism to the optimal TRAG to resolve deadlocks. Secondly, we explain how to utilize the grid pheromone system model to realize the ACS-based algorithm: the user agent constructs a TRAG by randomly selecting appropriate resources for each task, and the TRAG is then optimized through the positive feedback and distributed parallel computing mechanisms of the ACS. Simulation results show that the proposed algorithm is effective and efficient in solving the deadlock problem.
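The ACS-based construction described above has each user agent pick a resource for every task with probability driven by pheromone and heuristic desirability. The sketch below shows that standard probabilistic selection plus a simple global pheromone update; it is a generic ant-colony step, not the paper's exact TRAG/semaphore machinery, and alpha, beta, rho, and the matrices are illustrative.

```python
# Generic ant-colony resource selection and pheromone update (illustrative parameters).
import random

def pick_resource(task, pheromone, desirability, alpha=1.0, beta=2.0):
    """Choose a resource for `task` with probability proportional to tau^alpha * eta^beta."""
    weights = [(pheromone[task][r] ** alpha) * (desirability[task][r] ** beta)
               for r in range(len(pheromone[task]))]
    total = sum(weights)
    x, acc = random.uniform(0, total), 0.0
    for r, w in enumerate(weights):
        acc += w
        if x <= acc:
            return r
    return len(weights) - 1

def update_pheromone(pheromone, best_path, best_cost, rho=0.1):
    """Evaporate everywhere, then reinforce the resource choices of the best solution."""
    for task in range(len(pheromone)):
        for r in range(len(pheromone[task])):
            pheromone[task][r] *= (1.0 - rho)
    for task, r in enumerate(best_path):
        pheromone[task][r] += rho * (1.0 / best_cost)

tau = [[1.0, 1.0], [1.0, 1.0]]       # pheromone per (task, resource)
eta = [[0.5, 0.8], [0.9, 0.4]]       # heuristic desirability per (task, resource)
path = [pick_resource(t, tau, eta) for t in range(2)]
update_pheromone(tau, path, best_cost=3.0)
print(path, tau)
```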
Task scheduling in cloud computing environments is a multi-objective optimization problem, which is NP-hard. It is also challenging to find an appropriate trade-off among resource utilization, energy consumption, and Quality of Service (QoS) requirements under a changing environment and diverse tasks. Considering both processing time and transmission time, a PSO-based Adaptive Multi-objective Task Scheduling (AMTS) strategy is proposed in this paper. First, the task scheduling problem is formulated. Then, a task scheduling policy is advanced to obtain the optimal resource utilization, task completion time, average cost, and average energy consumption. In order to maintain particle diversity, an adaptive acceleration coefficient is adopted. Experimental results show that the improved PSO algorithm can obtain quasi-optimal solutions for the cloud task scheduling problem.
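AMTS keeps particle diversity by adapting the acceleration coefficients during the run. One widely used form (time-varying acceleration coefficients, not necessarily the paper's exact rule) shrinks the cognitive coefficient c1 and grows the social coefficient c2 as iterations progress; the velocity update below illustrates it with made-up bounds.

```python
# PSO velocity/position update with time-varying acceleration coefficients (illustrative).
import random

def adaptive_coefficients(it, max_it, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    frac = it / max_it
    c1 = c1_start + (c1_end - c1_start) * frac   # cognitive term decays
    c2 = c2_start + (c2_end - c2_start) * frac   # social term grows
    return c1, c2

def pso_step(pos, vel, pbest, gbest, it, max_it, w=0.7):
    c1, c2 = adaptive_coefficients(it, max_it)
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

print(pso_step([0.3, 0.6], [0.0, 0.0], pbest=[0.2, 0.7], gbest=[0.1, 0.9], it=10, max_it=100))
```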
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of scheduling tasks is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Assignments and schedules of tasks substantially influence system operation in a heterogeneous multiprocessor system. The diverse processes inside a heuristic-based task scheduling method will result in varying makespan in the heterogeneous computing system. As a result, an intelligent scheduling algorithm should efficiently determine the priority of every subtask based on the resources necessary to lower the makespan. This research introduces a novel efficient task scheduling method for cloud computing systems based on the cooperation search algorithm to tackle an essential task scheduling problem in heterogeneous cloud computing. The basic idea of this method is to use the advantages of meta-heuristic algorithms to get the optimal solution. We assess our algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, in terms of makespan.
With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are time-delay sensitive, and some deficiencies were found when data are processed through the cloud-centric architecture. How to handle the data generated by terminals at the edge of the network is an urgent problem to be solved at present. In 5G environments, edge computing can better meet the needs of low-delay and wide-connection applications and support fast requests from terminal users. However, edge computing only has the computing advantage of the edge layer, and it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby affecting users' quality of service. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method can reduce the task processing delay compared with the local task processing method and the task average allocation method.
With the continuous evolution of the smart grid and global energy interconnection technology, a large number of intelligent terminals have been connected to the power grid, which can be used to provide resource services as edge nodes. Traditional cloud computing can be used to provide storage services and task computing services in the power grid, but it faces challenges such as resource bottlenecks, time delays, and limited network bandwidth resources. Edge computing is an effective supplement to cloud computing, because it can provide users with local computing services with lower latency. However, because the resources in a single edge node are limited, resource-intensive tasks need to be divided into many subtasks and then assigned to different edge nodes through resource cooperation. Making task scheduling more efficient is therefore an important issue. In this paper, a two-layer resource management scheme is proposed based on the concept of edge computing. In addition, a new task scheduling algorithm named GA-EC (Genetic Algorithm for Edge Computing) is put forth, based on a genetic algorithm, that can dynamically schedule tasks according to different scheduling goals. The simulation shows that the proposed algorithm has a beneficial effect on energy consumption and load balancing, and reduces time delay.
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become one of the most discussed problems among researchers. Task scheduling algorithms are responsible for allocating tasks among the computing resources for their execution, and an inefficient task scheduling algorithm results in under- or over-utilization of the resources, which in turn leads to degradation of the services. Therefore, in the proposed work, load balancing is considered an important criterion for task scheduling in a cloud computing environment, as it can help in reducing the overhead of the critical decision-oriented process. In this paper, we propose an adaptive genetic algorithm-based load balancing (GALB)-aware task scheduling technique that not only results in better utilization of resources but also helps in optimizing the values of key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used in this work, which results in better adaptation for the fittest individuals of the current generation and prevents them from being eliminated. The CloudSim simulator has been used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better for all the key indicators and outperforms the peers taken into consideration.
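GALB's adaptive crossover and mutation adjust the operator probabilities according to how fit an individual is relative to the population, so the best individuals of a generation are disturbed less and therefore survive. A common scheme of this kind (in the spirit of Srinivas and Patnaik's adaptive GA; the paper's exact formula may differ) is sketched below with illustrative constants.

```python
# Adaptive crossover/mutation probabilities: fitter-than-average individuals get lower
# operator rates, protecting the current best from disruption (illustrative constants).

def adaptive_pc(f_better, f_max, f_avg, k1=1.0, k3=1.0):
    """Crossover probability for a pair whose better fitness is f_better (higher = fitter)."""
    if f_better >= f_avg and f_max > f_avg:
        return k1 * (f_max - f_better) / (f_max - f_avg)
    return k3

def adaptive_pm(f, f_max, f_avg, k2=0.5, k4=0.5):
    """Mutation probability for an individual with fitness f."""
    if f >= f_avg and f_max > f_avg:
        return k2 * (f_max - f) / (f_max - f_avg)
    return k4

fits = [0.9, 0.7, 0.5, 0.3]
f_max, f_avg = max(fits), sum(fits) / len(fits)
print([round(adaptive_pm(f, f_max, f_avg), 2) for f in fits])
# fittest individual (0.9) gets pm = 0.0, below-average individuals get the full k4
```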
Many Task Computing (MTC) is a new class of computing paradigm in which the aggregate number of tasks, quantity of computing, and volumes of data may be extremely large. With the advent of cloud computing and the big data era, scheduling and executing large-scale computing tasks efficiently and allocating resources to tasks reasonably are becoming quite challenging problems. To improve both task execution and resource utilization efficiency, we present a task scheduling algorithm with resource attribute selection, which can select the optimal node to execute a task according to its resource requirements and the fitness between the resource node and the task. Experiment results show significant improvement in execution throughput and resource utilization compared with three other algorithms and four scheduling frameworks. In the scheduling algorithm comparison, the throughput is 77% higher than that of the Min-Min algorithm and the resource utilization can reach 91%. In the scheduling framework comparison, the throughput (with work-stealing) is at least 30% higher than that of the other frameworks and the resource utilization reaches 94%. The scheduling algorithm can serve as a good model for practical MTC applications.
The solution strategy of a heuristic algorithm is pre-set and performs well in the conventional cloud resource scheduling process. However, for complex and dynamic cloud service scheduling tasks, the solution efficiency of a single strategy is low because of differences in service attributes. In this paper, we present a hyper-heuristic algorithm based on reinforcement learning (HHRL) to optimize the completion time of the task sequence. Firstly, in the reward table setting stage of HHRL, we introduce population diversity and integrate maximum time to comprehensively determine the task scheduling and the selection of low-level heuristic strategies. Secondly, a task computational complexity estimation method integrated with linear regression is proposed to influence task scheduling priorities. Besides, we propose a high-quality candidate solution migration method to ensure the continuity and diversity of the solving process. Compared with HHSA, ACO, GA, F-PSO, etc., HHRL can quickly obtain task complexity, select appropriate heuristic strategies for task scheduling, search for the best makespan, and has stronger disturbance detection ability for population diversity.
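HHRL's high-level controller learns which low-level heuristic to apply next from a reward table. The sketch below shows the generic tabular Q-learning loop such a hyper-heuristic can use: the states, rewards, and heuristic indices here are stand-ins (the paper builds its reward from population diversity and maximum time), not the paper's definitions.

```python
# Generic epsilon-greedy Q-table controller for choosing among low-level heuristics.
import random

class HeuristicSelector:
    def __init__(self, n_states, n_heuristics, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = [[0.0] * n_heuristics for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, state):
        if random.random() < self.eps:                 # explore
            return random.randrange(len(self.q[state]))
        row = self.q[state]                            # exploit best-known heuristic
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

selector = HeuristicSelector(n_states=3, n_heuristics=4)
s = 0
for _ in range(100):
    a = selector.select(s)
    reward = random.random()       # stand-in for makespan improvement + diversity gain
    s_next = random.randrange(3)   # stand-in for the next population state
    selector.update(s, a, reward, s_next)
    s = s_next
print(selector.q[0])
```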
In cloud computing systems, finding the optimal task scheduling method that minimizes the processing cost and the running time is a hot and hard issue. In order to deal with the task assignment, a task interaction graph was used to analyze task scheduling, a model for task assignment was formulated, and a particle swarm optimization (PSO) algorithm embedded in variable neighborhood search (VNS) was proposed to optimize the task scheduling. The experimental results show that the method is more effective than PSO in processing cost, transferring cost, and running time. When the task is more complex, the effect is much better. Therefore, the algorithm can resolve task scheduling in cloud computing, and it is feasible, valid, and efficient.
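Embedding VNS in PSO, as described above, means periodically applying a variable neighborhood search to a promising particle (typically the global best) so the swarm can escape local optima. The sketch below shows only the VNS wrapper: shake within neighborhood k, improve locally, and restart from the first neighborhood on success. The neighborhood move and objective are placeholders, not the paper's task-assignment encoding.

```python
# Basic variable neighborhood search (VNS) wrapper of the kind that can refine the
# global-best particle in PSO. Objective and shake operator are illustrative.
import random

def vns(solution, cost, shake, local_search, k_max=3, iters=50):
    best, best_cost = solution, cost(solution)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k), cost)
            cand_cost = cost(candidate)
            if cand_cost < best_cost:
                best, best_cost = candidate, cand_cost
                k = 1                  # success: restart from the smallest neighborhood
            else:
                k += 1                 # failure: try a larger neighborhood
    return best, best_cost

# Toy usage: assign 6 tasks to 3 machines, minimize the maximum machine load.
times = [4, 2, 7, 3, 5, 1]
cost = lambda a: max(sum(times[i] for i, m in enumerate(a) if m == mach) for mach in range(3))

def shake(a, k):                       # reassign k random tasks
    a = list(a)
    for i in random.sample(range(len(a)), k):
        a[i] = random.randrange(3)
    return a

def local_search(a, cost):             # first-improvement single-task moves
    a = list(a)
    for i in range(len(a)):
        for m in range(3):
            trial = a[:i] + [m] + a[i + 1:]
            if cost(trial) < cost(a):
                a = trial
    return a

print(vns([0, 0, 1, 1, 2, 2], cost, shake, local_search))
```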
In this paper, combining the advantages of genetic algorithms and simulated annealing, we bring forward a parallel genetic simulated annealing hybrid algorithm (PGSAHA) and apply it to solve the task scheduling problem in grid computing. It first generates a new group of individuals through genetic operations such as reproduction, crossover, and mutation, and then applies simulated annealing independently to each of the generated individuals. When the temperature in the cooling process no longer falls, the result is the overall optimal solution. From the analysis and experimental results, it is concluded that this algorithm is superior to both the genetic algorithm and simulated annealing.
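PGSAHA's flow, generating offspring with genetic operators and then simulated-annealing each of them, can be compressed into the acceptance step below: a worse neighbor of an offspring is still accepted with probability exp(-delta/T), and T decays each round. The task-to-machine encoding, neighbor move, and cooling schedule are illustrative, and the annealing here is sequential rather than parallel.

```python
# Sketch of simulated annealing applied to each GA offspring (sequential, illustrative).
import math
import random

TIMES = [5, 3, 8, 2, 6]          # task runtimes (made up)
MACHINES = 2

def makespan(assign):
    loads = [0] * MACHINES
    for task, m in enumerate(assign):
        loads[m] += TIMES[task]
    return max(loads)

def neighbor(assign):            # move one random task to a random machine
    a = list(assign)
    i = random.randrange(len(a))
    a[i] = random.randrange(MACHINES)
    return a

def anneal(assign, t0=10.0, cooling=0.9, steps=200):
    current, best = list(assign), list(assign)
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = makespan(cand) - makespan(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand                     # accept improvements, sometimes accept worse
        if makespan(current) < makespan(best):
            best = list(current)
        t *= cooling                           # geometric cooling
    return best

offspring = [[random.randrange(MACHINES) for _ in TIMES] for _ in range(4)]  # from GA operators
annealed = [anneal(child) for child in offspring]                            # SA refines each child
best_child = min(annealed, key=makespan)
print(best_child, makespan(best_child))
```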
Deploying task caching at edge servers has become an effective way to handle compute-intensive and latency-sensitive tasks on the industrial internet. However, how to select the task scheduling location to reduce task delay and cost while ensuring the data security and reliable communication of edge computing remains a challenge. To solve this problem, this paper establishes a task scheduling model with joint blockchain and task caching in the industrial internet and designs a novel blockchain-assisted caching mechanism to enhance system security. In this paper, the task scheduling problem, which couples the task scheduling decision, task caching decision, and blockchain reward, is formulated as a minimum weighted cost problem under delay constraints. This is a mixed integer nonlinear problem, which is proved to be nonconvex and NP-hard. To find the optimal solution, this paper proposes a task scheduling strategy algorithm based on an improved genetic algorithm (IGA-TSPA), improving the genetic algorithm initialization and mutation operations to reduce the size of the initial solution space and enhance the convergence speed to the optimal solution. In addition, an Improved Least Frequently Used algorithm is proposed to improve the content hit rate. Simulation results show that IGA-TSPA has a faster optimal solution-solving ability and shorter running time compared with existing edge computing scheduling algorithms. The established task scheduling model not only saves 62.19% of system overhead consumption in comparison with local computing but also has great significance in protecting data security, reducing task processing delay, and reducing system cost.
Heterogeneous computing (HC) environments utilize diverse resources with different computational capabilities to solve computing-intensive applications that have diverse computational requirements and constraints. The task assignment problem in an HC environment can be formally defined as follows: for a given set of tasks and machines, assign tasks to machines to achieve the minimum makespan. In this paper we propose a new task scheduling heuristic, high standard deviation first (HSTDF), which considers the standard deviation of the expected execution time of a task as a selection criterion. The standard deviation of the expected execution time of a task represents the amount of variation in task execution time on different machines. Our conclusion is that tasks having high standard deviation must be assigned first for scheduling. A large number of experiments were carried out to check the effectiveness of the proposed heuristic in different scenarios, and the comparison with existing heuristics (Max-min, Sufferage, Segmented Min-average, Segmented Min-min, and Segmented Max-min) clearly reveals that the proposed heuristic outperforms all existing heuristics in terms of average makespan.
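HSTDF orders tasks by the standard deviation of their expected execution times across machines and schedules the high-variation tasks first. The sketch below implements that ordering; the rule used to place each selected task (earliest completion time, as in the min-min family) and the ETC matrix are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# High standard deviation first (HSTDF) sketch: sort tasks by the std. deviation of their
# expected execution times, then greedily place each on the machine that finishes it earliest.
import statistics

def hstdf(etc):
    """etc[i][j] = expected execution time of task i on machine j. Returns (mapping, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines                          # machine available times
    order = sorted(range(n_tasks),
                   key=lambda i: statistics.pstdev(etc[i]), reverse=True)
    mapping = {}
    for i in order:                                     # high-variation tasks first
        j = min(range(n_machines), key=lambda m: ready[m] + etc[i][m])
        mapping[i] = j
        ready[j] += etc[i][j]
    return mapping, max(ready)

etc = [[14, 30, 6],    # task 0: large spread across machines -> scheduled early
       [10, 11, 12],   # task 1: small spread -> scheduled late
       [25, 5, 40]]    # task 2: largest spread -> scheduled first
print(hstdf(etc))
```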