Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm for efficient multi-objective task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
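The two mechanics this abstract leans on, a scalarized makespan/cost/utilization fitness and SA's acceptance rule, can be sketched compactly. The Python sketch below is illustrative only: the weight vector, the fitness form, and all task/VM inputs are assumptions, not the paper's exact model.

```python
import math
import random

def fitness(schedule, task_len, vm_speed, vm_price, w=(0.4, 0.4, 0.2)):
    """Hypothetical weighted scalarization over makespan, cost, and (1 - utilization).
    schedule[i] = v assigns task i to VM v; lower fitness is better."""
    loads = [0.0] * len(vm_speed)
    cost = 0.0
    for task, vm in enumerate(schedule):
        t = task_len[task] / vm_speed[vm]   # execution time of this task on this VM
        loads[vm] += t
        cost += t * vm_price[vm]
    makespan = max(loads)
    utilization = sum(loads) / (len(loads) * makespan)  # in (0, 1]
    return w[0] * makespan + w[1] * cost + w[2] * (1.0 - utilization)

def sa_accept(candidate, incumbent, temperature):
    """Metropolis rule SA contributes: always accept improvements,
    accept worse candidates with probability exp(-delta / T)."""
    if candidate <= incumbent:
        return True
    return random.random() < math.exp((incumbent - candidate) / temperature)
```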
The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., the local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
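The Pareto-optimal front mentioned at the end rests on a standard dominance test between (energy, delay) pairs. A minimal sketch follows (Python; the candidate tuples are invented for illustration):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one.
    Objectives here are (energy, delay), both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only candidates that no other candidate dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: trade-off candidates (energy_J, delay_ms) from offloading decisions.
candidates = [(5.0, 40.0), (3.0, 60.0), (6.0, 35.0), (4.0, 70.0)]
print(pareto_front(candidates))  # (4.0, 70.0) drops out: dominated by (3.0, 60.0)
```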
In response to the challenges faced by unmanned swarms in mountain obstacle-breaching missions within complex terrains, such as poor task-resource coupling, lengthy solution generation times, and poor inter-platform collaboration, an unmanned swarm scheduling strategy tailored to mountain obstacle-breaching missions is proposed. Initially, by formalizing the descriptions of obstacle-breaching operations, the swarm, and obstacle targets, an optimization model is constructed with the objectives of expected global benefit, timeliness, and task completion degree. A meta-task decomposition and reassembly strategy is then introduced to more precisely match the capabilities of unmanned platforms with task requirements. Additionally, a meta-task decomposition optimization model and a meta-task allocation operator are incorporated to achieve efficient allocation of swarm resources and collaborative scheduling. Simulation results demonstrate that the model can generate reasonable and feasible obstacle-breaching execution plans for unmanned swarms based on specific task requirements and environmental conditions. Moreover, compared to conventional strategies, the proposed strategy enhances task completion degree and expected returns while reducing the execution time of the plans.
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
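For readers unfamiliar with the reduction the abstract mentions, the classic 0/1 knapsack dynamic program is shown below (Python). The values and weights stand in for per-task gains and resource demands and are purely illustrative:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack DP: dp[c] = best achievable value with capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # iterate in reverse so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Example: energy saved by offloading each task vs. its resource demand.
print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # 22
```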
Due to the intense data flow of expanding Internet of Things (IoT) applications, a heavy processing cost and workload on the fog-cloud side become inevitable. One of the most critical challenges is optimal task scheduling. Since this is an NP-hard problem, a metaheuristic approach is a good option. This study introduces a novel enhancement to the Artificial Rabbits Optimization (ARO) algorithm by integrating chaotic maps and Levy flight strategies (CLARO). This dual approach addresses the limitations of standard ARO in terms of population diversity and convergence speed. It is designed for task scheduling in fog-cloud environments, optimizing energy consumption, makespan, and execution time simultaneously, three critical parameters that are often treated individually in prior works. Unlike conventional single-objective methods, the proposed approach incorporates a multi-objective fitness function that dynamically adjusts the weight of each parameter, resulting in better resource allocation and load balancing. In the analysis, a real-world dataset, the open-source Google Cloud Jobs dataset (GoCJ_Dataset), is used for performance measurement, and analyses are performed on the three considered parameters. Comparisons are made with well-known algorithms (GWO, SCSO, PSO, WOA, and ARO) to indicate the reliability of the proposed method. Performance evaluation is carried out by assigning tasks to Virtual Machines (VMs) in the resource pool. Simulations are performed on 90 base cases and 30 scenarios for each evaluation parameter. The results indicate that the proposed algorithm achieved the best makespan performance in 80% of cases, ranked first in execution time in 61% of cases, and performed best in the final parameter in 69% of cases. In addition, according to the results based on the defined fitness function, the proposed method (CLARO) is 2.52% better than ARO, 3.95% better than SCSO, 5.06% better than GWO, 8.15% better than PSO, and 9.41% better than WOA.
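Both ingredients CLARO adds to ARO have compact textbook forms. The hedged sketch below (Python) uses the logistic map as one representative chaotic map (the abstract does not say which map the paper uses) and Mantegna's algorithm for Levy-distributed steps:

```python
import math
import random

def logistic_map(x, r=4.0):
    """Chaotic logistic map on (0, 1), often used to diversify initial populations."""
    return r * x * (1.0 - x)

def levy_step(beta=1.5):
    """Mantegna's algorithm: draws a heavy-tailed step for a Levy flight."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# Chaotic initialisation of one candidate position in [lo, hi].
lo, hi, x = 0.0, 10.0, 0.7
for _ in range(5):
    x = logistic_map(x)
position = lo + x * (hi - lo)
```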
With the rapid advancement of satellite communication technologies, space information networks (SINs) have become essential infrastructure for complex service delivery and cross-domain task coordination, facilitating the transition toward an intent-driven, task-oriented coordination paradigm across the space, ground, and user segments. This study presents a novel intent-driven task-oriented network (IDTN) framework to address task scheduling and resource allocation challenges in SINs. The scheduling problem is formulated as a three-sided matching game that incorporates the preference attributes of entities across all network segments. To manage the variability of random task arrivals and dynamic resources, a context-aware linear upper-confidence-bound online learning mechanism is integrated to reduce decision-making uncertainty. Simulation results demonstrate the effectiveness of the proposed IDTN framework. Compared with conventional baseline methods, the framework achieves significant performance improvements, including a 4.4%-28.9% increase in average system reward, a 6.2%-34.5% improvement in resource utilization, and a 5.6%-35.7% enhancement in user satisfaction. The proposed framework is expected to facilitate the integration and orchestration of space-based platforms.
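The "context-aware linear upper-confidence-bound online learning mechanism" suggests a LinUCB-style learner. A generic single-arm sketch under that assumption follows (Python/NumPy; the context features and reward are invented):

```python
import numpy as np

class LinUCB:
    """Linear UCB for one arm: score = theta^T x + alpha * sqrt(x^T A^-1 x)."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)       # ridge-regularised Gram matrix
        self.b = np.zeros(dim)     # reward-weighted context sum
        self.alpha = alpha

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# Pick the pairing whose context yields the highest optimistic score, then learn.
arms = [LinUCB(dim=3) for _ in range(4)]
context = np.array([0.2, 0.9, 0.5])  # e.g. normalised load, link quality, urgency
best = max(range(4), key=lambda i: arms[i].ucb(context))
arms[best].update(context, reward=1.0)
```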
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with an apparent need for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing resource utilization in cloud computing task scheduling.
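Quasi-opposition-based learning, the strategy AWCO uses to widen its global search, samples a point between the interval centre and the opposite point o = lo + hi - x. A minimal sketch (Python; the bounds and sample value are illustrative):

```python
import random

def quasi_opposite(x, lo, hi):
    """Quasi-opposition-based learning: sample uniformly between the interval
    centre and the opposite point o = lo + hi - x."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    a, b = min(centre, opposite), max(centre, opposite)
    return random.uniform(a, b)

# Evaluate both the individual and its quasi-opposite; keep whichever is fitter.
x = 7.3
candidate = quasi_opposite(x, lo=0.0, hi=10.0)
```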
Metaheuristic algorithms are pivotal in cloud task scheduling. However, the complexity and uncertainty of the scheduling problem severely limit these algorithms, and numerous improved algorithms have been proposed in response. The Hiking Optimization Algorithm (HOA) has been used in multiple fields, but it suffers from local optima, slow convergence, and low search efficiency in late iterations when solving cloud task scheduling problems. Thus, this paper proposes an improved HOA called CMOHOA, which combines multiple strategies to improve HOA. Specifically, Chebyshev chaos is introduced to increase population diversity. Then, a hybrid speed update strategy is designed to enhance convergence speed. Meanwhile, an adversarial learning strategy is introduced to enhance the search capability in late iterations. Different scheduling scenarios are used to test CMOHOA's performance. First, CMOHOA was used to solve basic cloud computing task scheduling problems, and the results showed that it reduced the average total cost by 10% or more. Second, CMOHOA was applied to edge-fog-cloud scheduling problems, and the results show that it reduces the average total scheduling cost by 2% or more. Finally, CMOHOA reduced the average total cost by 7% or more in scheduling problems for information transmission.
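The Chebyshev chaotic map used for population diversity has a simple closed form, x_{k+1} = cos(n * arccos(x_k)). A minimal sketch (Python; the map order and seed are assumptions):

```python
import math

def chebyshev_map(x, order=4):
    """Chebyshev chaotic map on [-1, 1]: x_{k+1} = cos(order * arccos(x_k))."""
    return math.cos(order * math.acos(x))

# Generate a chaotic sequence, rescaled to [0, 1] for population initialisation.
x, seq = 0.7, []
for _ in range(10):
    x = chebyshev_map(x)
    seq.append((x + 1.0) / 2.0)
```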
With the widespread adoption of unmanned aerial vehicle (UAV) technology, task scheduling for UAV swarms has become a crucial approach to improving operational efficiency. Most existing studies oversimplify the operational process rules of UAVs, making it difficult to accurately characterize the adaptability differences of UAVs to various tasks under practical operational constraints. To address this limitation, this paper proposes a UAV swarm task scheduling problem with limited communication range (UAVS-LCR) and establishes an integer programming model for its formal description. For solving this problem, a multi-neighborhood iterative local search (MNILS) algorithm is designed, which adopts a doubly linked list solution representation to reduce the computational complexity of basic neighborhood operations. The algorithm generates high-quality initial solutions via a greedy construction strategy, combines insertion search, multi-swap search, and the two-opt operator to enable alternating exploration across multiple neighborhoods, and incorporates a simulated annealing mechanism to balance search efficiency and solution diversity. This method can provide an effective solution for various application scenarios, including wide-area UAV inspection and heterogeneous UAV collaborative operations. Experimental results on 12 power grid maintenance test instances demonstrate that the MNILS algorithm significantly outperforms the genetic algorithm, the artificial bee colony algorithm, the ant colony optimization algorithm, and the variable neighborhood search algorithm in terms of both solution quality and scalability for large-scale problems.
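Two components named above, the two-opt operator and the simulated annealing acceptance mechanism, compose naturally into a single local-search step. The sketch below (Python) is a hedged illustration: the list-based route encoding and the externally supplied cost_fn are placeholders, not the paper's doubly linked list representation.

```python
import math
import random

def two_opt(route, i, k):
    """Reverse the segment route[i:k+1]; a basic neighbourhood move for tours."""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def ils_step(route, cost_fn, temperature):
    """One MNILS-style move: random two-opt neighbour, SA-style acceptance."""
    i, k = sorted(random.sample(range(len(route)), 2))
    neighbour = two_opt(route, i, k)
    delta = cost_fn(neighbour) - cost_fn(route)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        return neighbour
    return route
```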
Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which is able to improve the physical layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
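The secrecy rate being analyzed is conventionally the non-negative gap between the legitimate channel's capacity and that of the strongest eavesdropper, Cs = [log2(1 + SNR_d) - max_e log2(1 + SNR_e)]^+. A sketch under that standard wiretap-channel definition (Python; the SNR values are illustrative, and the paper's exact channel model may differ):

```python
import math

def secrecy_rate(snr_legit, snr_eves):
    """Secrecy rate in bits/s/Hz against the strongest of several eavesdroppers."""
    c_main = math.log2(1 + snr_legit)
    c_eve = max(math.log2(1 + s) for s in snr_eves)
    return max(0.0, c_main - c_eve)

print(secrecy_rate(snr_legit=15.0, snr_eves=[2.0, 3.5]))  # about 1.83 bits/s/Hz
```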
Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in Ad-hoc networks with effective algorithms is still open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm, which is empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low due to the regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flow under severe signal interference and drastically changing channel states, and demonstrate its adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.
IEEE 802.16e based WiMAX networks promise a desirable quality of service for mobile users, and scheduling algorithms enable the most effective use of network resources in them. In this paper, we propose a novel cross-layer scheduling algorithm for OFDMA-based WiMAX networks. Our scheme employs a priority function at the MAC layer and a slot allocation policy at the physical layer and, through interaction between these two layers, specifies the best allocation for each connection. Simulation results show the performance of the proposed scheme in comparison with two other well-known scheduling algorithms, MAX-SNR scheduling and Proportional Fairness (PF) scheduling. Our proposed cross-layer algorithm outperforms the other algorithms in delay and packet loss rate for real-time services.
In the case of video streaming over wireless channels, burst errors may lead to serious video quality degradation. By jointly exploiting the scheduling mechanisms of different communication layers, this paper proposes a quality-aware cross-layer scheduling scheme to achieve unequal error control for each Latency-constraint Frame Set (LFS) of a video stream. After a network-layer agent at the base station first utilizes network-layer packet scheduling to provide packet-granularity importance classification for the current LFS, a link-layer agent at the base station further utilizes Radio-Link-Unit (RLU) scheduling to implement finer selective retransmission for the current LFS. Under scheduling delay and bandwidth constraints, the proposed scheme is aware of application-layer quality and time-varying channel conditions, and hence burst errors can simply be shifted to lower-priority transmission units in the current LFS. Simulation results demonstrate that the proposed scheme has strong robustness against burst errors and thus improves the overall received quality of the video stream over wireless channels.
Traditional resource allocation algorithms use a strictly layered design, which does not suit the poor channel environment of broadband power line communication systems. Introducing cross-layer design can improve resource utilization and ensure the QoS of services. This paper proposes a cross-layer resource allocation scheme for broadband power line communication based on a QoS priority scheduling function at the MAC layer. First, the algorithm considers both real-time users' delay requirements and non-real-time users' queue-length requirements, and a user priority function is proposed. Each user's number of scheduled packets is then calculated according to its priority function, with the scheduling sequence determined by a utility function. At the physical layer, the algorithm allocates physical resources to the scheduled packets. Simulation results show that the proposed algorithm balances both latency and throughput while improving users' QoS.
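One plausible shape for the user priority function described above, scoring real-time users by delay urgency and non-real-time users by queue backlog, is sketched below (Python). The normalizations and example numbers are assumptions, not the paper's exact function:

```python
def user_priority(is_real_time, delay_budget, waiting_time, queue_len, max_queue):
    """Hypothetical priority: real-time users scored by how close a packet is to
    its delay budget, non-real-time users by normalised queue occupancy."""
    if is_real_time:
        return min(1.0, waiting_time / delay_budget)  # closer to deadline -> higher
    return queue_len / max_queue                      # fuller queue -> higher

users = [
    dict(is_real_time=True, delay_budget=20.0, waiting_time=15.0, queue_len=0, max_queue=1),
    dict(is_real_time=False, delay_budget=0.0, waiting_time=0.0, queue_len=40, max_queue=100),
]
order = sorted(users, key=lambda u: user_priority(**u), reverse=True)
```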
To solve the deadlock problem that arises when the interdependence between tasks is not considered during resource assignment and task scheduling based on heuristic algorithms, an improved ant colony system (ACS) based algorithm is proposed. First, we explain how to map the resource assignment and task scheduling (RATS) problem onto the optimization selection problem of the task resource assignment graph (TRAG), and how to add a semaphore mechanism to the optimal TRAG to resolve deadlocks. Second, we explain how to utilize the grid pheromone system model to realize the ACS-based algorithm: the user agent constructs the TRAG by randomly selecting appropriate resources for each task, and the TRAG is optimized through the positive feedback and distributed parallel computing mechanisms of the ACS. Simulation results show that the proposed algorithm is effective and efficient in solving the deadlock problem.
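For reference, the two pheromone rules that distinguish ACS, and that drive the TRAG optimization described above, are the local update applied to each traversed edge and the global update reserved for the best-so-far solution. A minimal sketch (Python; rho, alpha, and tau0 are standard but illustrative settings):

```python
def local_update(pheromone, edge, rho=0.1, tau0=0.01):
    """ACS local rule: evaporate the traversed edge toward the initial level tau0."""
    pheromone[edge] = (1 - rho) * pheromone[edge] + rho * tau0

def global_update(pheromone, best_edges, best_cost, alpha=0.1):
    """ACS global rule: only the best-so-far solution deposits pheromone."""
    for edge in best_edges:
        pheromone[edge] = (1 - alpha) * pheromone[edge] + alpha * (1.0 / best_cost)
```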
Satellite observation scheduling plays a significant role in improving the efficiency of satellite observation systems. Although many scheduling algorithms have been proposed, emergency tasks, characterized by their importance and urgency (e.g., observation tasks oriented to earthquake areas or military conflict areas), have not yet been taken into account. Therefore, it is crucial to investigate satellite integrated scheduling methods, which focus on meeting the requirements of emergency tasks while maximizing the profit of common tasks. First, a pretreatment approach is proposed, which eliminates conflicts among emergency tasks and allocates all tasks with a potential time window to the related orbits of satellites. Second, a mathematical model and an acyclic directed graph model are constructed. Third, a hybrid ant colony optimization method mixed with iterated local search (ACO-ILS) is established to solve the problem. Moreover, to guarantee that all solutions satisfy the emergency task requirement constraints, a constraint repair method is presented. Extensive experimental simulations show that the proposed integrated scheduling method is superior to two-phased scheduling methods, that the performance of ACO-ILS is greatly improved in both evolution speed and solution quality by iterated local search, and that ACO-ILS outperforms both the genetic algorithm and the simulated annealing algorithm.
Combining the advantages of genetic algorithms and simulated annealing, this paper puts forward a parallel genetic simulated annealing hybrid algorithm (PGSAHA) and applies it to the task scheduling problem in grid computing. The algorithm first generates a new group of individuals through genetic operations such as reproduction, crossover, and mutation, and then independently applies simulated annealing to each generated individual. When the temperature in the cooling process no longer falls, the result is taken as the overall optimal solution. Analysis and experimental results show that this algorithm is superior to both the genetic algorithm and simulated annealing.
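The generate-then-anneal loop described above can be sketched directly. This is a hedged skeleton (Python) in which fitness, crossover, mutate, and neighbour are user-supplied problem-specific functions, not the paper's operators:

```python
import math
import random

def anneal(individual, fitness, neighbour, t0=1.0, cooling=0.9, steps=50):
    """Independently anneal one GA offspring (minimisation)."""
    best, t = individual, t0
    for _ in range(steps):
        cand = neighbour(best)
        delta = fitness(cand) - fitness(best)
        if delta < 0 or random.random() < math.exp(-delta / t):
            best = cand
        t *= cooling  # geometric cooling schedule
    return best

def pgsaha_generation(population, fitness, crossover, mutate, neighbour):
    """One hybrid generation: GA variation, then SA refinement of every child."""
    random.shuffle(population)
    children = []
    for p1, p2 in zip(population[::2], population[1::2]):
        for child in (mutate(crossover(p1, p2)), mutate(crossover(p2, p1))):
            children.append(anneal(child, fitness, neighbour))
    return children
```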
Heterogeneous computing is one effective method of high-performance computing, with many advantages. Task scheduling is a critical issue in heterogeneous environments as well as in homogeneous environments. A number of task scheduling algorithms for homogeneous environments have been proposed, whereas only a few for heterogeneous environments can be found in the literature. A novel task scheduling algorithm for heterogeneous environments, called the heterogeneous critical task (HCT) scheduling algorithm, is presented. By means of the directed acyclic graph and the Gantt chart, the HCT algorithm defines the critical task and the idle time slot. After determining the critical tasks of a given task, the HCT algorithm tentatively duplicates the critical tasks into the idle time slots of the processor that hosts the given task, to reduce the start time of the given task. To compare the performance of the HCT algorithm with several recently proposed algorithms, a large set of randomly generated applications and the Gaussian elimination application are used. The experimental results show that the HCT algorithm outperforms the other algorithms.
A scheduling algorithm is presented aiming at the task scheduling problem in the phased array radar. Rather than assuming the scheduling interval (SI) time, which is the update interval at which the radar invokes the scheduling algorithm, to be a fixed value, it is modeled as a fuzzy set to improve scheduling flexibility. The scheduling algorithm exploits the fuzzy set model in order to intelligently adjust the SI time: idle time in other SIs is provided to SIs that would otherwise be overloaded, so more requested tasks can be accommodated. The simulation results show that the proposed algorithm improves the successful scheduling ratio by 16%, the threat ratio of execution by 16%, and the time utilization ratio by 15% compared with the highest task mode priority first (HPF) algorithm.
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. The winner tree is introduced, with the data nodes as the leaf nodes of the tree, and the final winner is selected with the purpose of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
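A winner tree selects a champion leaf through pairwise matches up the tree. A minimal tournament-style sketch (Python; the (power_cost, node_id) leaves are invented stand-ins for data nodes):

```python
def winner_tree_select(leaves, better=lambda a, b: a if a[0] <= b[0] else b):
    """Play pairwise 'matches' level by level until a single winner remains."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(better(level[i], level[i + 1]))
        if len(level) % 2:          # an odd leaf advances without playing
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Leaves are (power_cost, node_id); the winner is the lowest-power data node.
print(winner_tree_select([(120.5, "n3"), (80.2, "n7"), (95.0, "n1")]))  # (80.2, 'n7')
```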