As a novel application technology, wireless video sensor networks have become a current research focus, especially in target tracking and surveillance scenarios. Based on multi-agent techniques, this article introduces a series of intelligent algorithms, such as the simulated annealing algorithm (SA), genetic algorithm (GA), and ant colony optimization algorithm (ACO), as well as their hybrids, to solve the optimization of task scheduling and data transmission. The article analyzes the performance of the above-mentioned algorithms and verifies their feasibility when combined with agents. The simulations demonstrate that the hybrid algorithms based on SA and GA obtain the optimal solution for task scheduling, and those combining SA and ACO show advantages in multimedia sensor network routing optimization.
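The abstract gives no implementation detail, so the following is only a minimal sketch of the simulated-annealing stage of such a hybrid: SA searches task-to-node assignments with Metropolis acceptance, and its best result could seed a GA population. The cost model, cooling schedule, and neighborhood move are all assumptions.

```python
import math
import random

def simulated_annealing(num_tasks, num_nodes, cost, t0=10.0, alpha=0.95, steps=2000):
    """Anneal a task-to-node assignment; `cost` maps an assignment list to a scalar."""
    current = [random.randrange(num_nodes) for _ in range(num_tasks)]
    best, best_cost, t = list(current), cost(current), t0
    for _ in range(steps):
        neighbor = list(current)
        neighbor[random.randrange(num_tasks)] = random.randrange(num_nodes)  # move one task
        delta = cost(neighbor) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):  # Metropolis rule
            current = neighbor
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
        t *= alpha  # geometric cooling
    return best, best_cost

# Toy makespan cost under a unit-speed model (illustrative only).
lengths = [4, 2, 7, 1, 3, 5]
def makespan(assign, nodes=3):
    loads = [0.0] * nodes
    for task, node in enumerate(assign):
        loads[node] += lengths[task]
    return max(loads)

best, best_cost = simulated_annealing(len(lengths), 3, makespan)
```

A GA stage could then inject `best` into its initial population, which is the usual way SA-GA hybrids couple the two searches.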
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
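As a hedged sketch of the prioritized experience replay mechanism the paper builds on (the standard proportional variant, P(i) ∝ p_i^α with importance-sampling weights; the abstract does not specify which variant is used):

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) ∝ p_i^alpha, with IS weights."""
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.buffer, self.priorities = [], []

    def add(self, transition):
        p = max(self.priorities, default=1.0)  # new samples get max priority
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition); self.priorities.append(p)

    def sample(self, batch_size):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        weights = (len(self.buffer) * probs[idx]) ** (-self.beta)  # IS correction
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + self.eps  # priority from TD error
```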
Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
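The abstract states that the offloading sub-problem can be transformed into a typical knapsack problem; the textbook 0/1 knapsack dynamic program below is a reference implementation, not the paper's reduction. The mapping in the comments (tasks as items, resource budget as capacity, energy saved as value) is an assumption.

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP: maximize total value subject to a weight budget.
    In an offloading reduction, items could be tasks, weight a resource cost,
    and value the energy saved by offloading (an assumed mapping)."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]  # skip item i-1
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([6, 10, 12], [1, 2, 3], 5))  # -> 22
```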
Due to the intense data flow of expanding Internet of Things (IoT) applications, a heavy processing cost and workload on the fog-cloud side become inevitable. One of the most critical challenges is optimal task scheduling. Since this is an NP-hard problem, a metaheuristic approach can be a good option. This study introduces a novel enhancement to the Artificial Rabbits Optimization (ARO) algorithm by integrating chaotic maps and Levy flight strategies (CLARO). This dual approach addresses the limitations of standard ARO in terms of population diversity and convergence speed. It is designed for task scheduling in fog-cloud environments, optimizing energy consumption, makespan, and execution time simultaneously, three critical parameters often treated individually in prior works. Unlike conventional single-objective methods, the proposed approach incorporates a multi-objective fitness function that dynamically adjusts the weight of each parameter, resulting in better resource allocation and load balancing. For the analysis, a real-world dataset, the open-source Google Cloud Jobs Dataset (GoCJ_Dataset), is used for performance measurement, and analyses are performed on the three considered parameters. Comparisons are made with the well-known algorithms GWO, SCSO, PSO, WOA, and ARO to indicate the reliability of the proposed method. In this regard, performance evaluation is performed by assigning tasks to Virtual Machines (VMs) in the resource pool. Simulations are performed on 90 base cases and 30 scenarios for each evaluation parameter. The results indicate that the proposed algorithm achieved the best makespan performance in 80% of cases, ranked first in execution time in 61% of cases, and performed best in the final parameter in 69% of cases. In addition, according to the results based on the defined fitness function, the proposed method (CLARO) is 2.52% better than ARO, 3.95% better than SCSO, 5.06% better than GWO, 8.15% better than PSO, and 9.41% better than WOA.
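Chaotic maps and Levy flights are named but not specified; a common pairing, a logistic map for chaotic initialization and Mantegna's algorithm for Levy-distributed steps, is sketched below. Both choices are assumptions about CLARO's internals.

```python
import numpy as np
from math import gamma, pi, sin

def logistic_chaotic_init(pop_size, dim, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k): a chaotic initial population in [0, 1]."""
    x = np.random.rand(dim)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = mu * x * (1.0 - x)
        pop[i] = x
    return pop

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step of index beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)  # occasional long jumps aid exploration
```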
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence via sinusoidal mapping. A comprehensive evaluation on the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load-balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the domain of task scheduling within a cloud computing environment.
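A minimal sketch of quasi-opposition-based learning as it is usually defined in the metaheuristics literature (the paper's exact formulation may differ): for a candidate x in [a, b], the opposite point is a + b − x, and the quasi-opposite point is drawn uniformly between the interval center and that opposite point.

```python
import numpy as np

def quasi_opposite(pop, lower, upper):
    """Quasi-opposition-based learning: sample uniformly (per dimension) between
    the interval center (a+b)/2 and the opposite point a+b-x."""
    center = (lower + upper) / 2.0
    opposite = lower + upper - pop
    lo = np.minimum(center, opposite)
    hi = np.maximum(center, opposite)
    return lo + np.random.rand(*pop.shape) * (hi - lo)

# Typical use: keep whichever of x and its quasi-opposite scores better, e.g.
# pop = np.where(f(qpop) < f(pop), qpop, pop)  # hypothetical fitness f, minimization
```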
Metaheuristic algorithms are pivotal in cloud task scheduling. However, the complexity and uncertainty of the scheduling problem severely limit such algorithms. To circumvent this, numerous algorithms have been proposed. The Hiking Optimization Algorithm (HOA) has been used in multiple fields; however, HOA suffers from local optima, slow convergence, and low search efficiency in late iterations when solving cloud task scheduling problems. Thus, this paper proposes an improved HOA called CMOHOA, which combines multiple strategies to improve HOA. Specifically, Chebyshev chaos is introduced to increase population diversity. Then, a hybrid speed update strategy is designed to enhance convergence speed. Meanwhile, an adversarial learning strategy is introduced to enhance search capability in late iterations. Different scheduling scenarios are used to test CMOHOA's performance. First, CMOHOA was used to solve basic cloud computing task scheduling problems, and the results showed that it reduced the average total cost by 10% or more. Second, CMOHOA was applied to edge-fog-cloud scheduling problems, and the results show that it reduces the average total scheduling cost by 2% or more. Finally, CMOHOA reduced the average total cost by 7% or more in scheduling problems for information transmission.
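A minimal sketch of Chebyshev-map population initialization, assuming the common form x' = cos(k·arccos(x)) with fixed order k ≥ 2, which is chaotic on [−1, 1]; whether CMOHOA uses this exact form is not stated in the abstract.

```python
import numpy as np

def chebyshev_init(pop_size, dim, lower, upper, order=4):
    """Initialize a population with the Chebyshev map x' = cos(order * arccos(x)),
    then scale each chaotic point from [-1, 1] into [lower, upper]."""
    x = np.random.uniform(-1, 1, dim)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.cos(order * np.arccos(np.clip(x, -1.0, 1.0)))
        pop[i] = lower + (x + 1.0) / 2.0 * (upper - lower)
    return pop
```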
Fog computing has emerged as an important technology that can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which improves the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
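The secrecy rate being analyzed is, in the standard physical-layer-security definition, R_s = [C_d − C_e]^+: the legitimate channel capacity minus that of the strongest eavesdropper, floored at zero. A minimal computation under that textbook definition (not necessarily the paper's exact channel model):

```python
from math import log2

def secrecy_rate(snr_legitimate, snr_eavesdroppers):
    """R_s = max(0, log2(1 + SNR_d) - log2(1 + max_e SNR_e)): capacity of the
    legitimate link minus that of the strongest eavesdropper."""
    c_d = log2(1 + snr_legitimate)
    c_e = log2(1 + max(snr_eavesdroppers))  # worst case: strongest eavesdropper
    return max(0.0, c_d - c_e)

print(secrecy_rate(15.0, [2.0, 4.0, 1.5]))  # ~1.68 bit/s/Hz
```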
A Genetic Algorithm-Ant Colony Algorithm (GA-ACA), which can be used to optimize multi-Unit-Under-Test (UUT) parallel test task sequences and resource configurations quickly and accurately, is proposed in this paper. With the establishment of a mathematical model of multi-UUT parallel test tasks and resources, the condition for multi-UUT resource merging is analyzed to obtain the minimum resource requirement under minimum test time. The definition of cost efficiency is put forward, followed by the design of a gene coding and path selection scheme that satisfies multi-UUT parallel test task scheduling. At the start of the algorithm, GA is adopted to provide the initial pheromone for ACA, and then a dual-convergence pheromone feedback mode is applied in ACA to avoid local optima and parameter dependence. Practical application proves that the algorithm has a remarkable effect on solving the problems of multi-UUT parallel test task scheduling and resource configuration.
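A hedged sketch of the GA-to-ACA hand-off: seeding the pheromone matrix from elite GA tours is the usual coupling, with edges on better tours receiving more initial pheromone. The encoding and weighting below are assumptions, not the paper's exact scheme.

```python
import numpy as np

def seed_pheromone(ga_solutions, ga_costs, n, tau0=0.1, boost=1.0):
    """Build an n x n pheromone matrix from elite GA tours: edges used by
    lower-cost solutions receive proportionally more initial pheromone."""
    tau = np.full((n, n), tau0)
    best, worst = min(ga_costs), max(ga_costs)
    for tour, cost in zip(ga_solutions, ga_costs):
        quality = 1.0 if worst == best else (worst - cost) / (worst - best)
        for a, b in zip(tour, tour[1:]):
            tau[a, b] += boost * quality  # reward edges on good tours
    return tau
```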
Well-organized data centres with interconnected servers constitute the cloud computing infrastructure. User requests are submitted through an interface to these servers, which provide service on an on-demand basis. The scientific applications that execute in the cloud, using heterogeneous resources allocated to them dynamically, fall under the NP-hard problem category. Task scheduling in the cloud poses numerous challenges impacting cloud performance; if not handled properly, user satisfaction becomes questionable. More recently, researchers have come up with meta-heuristic solutions for enriching task scheduling in the cloud environment. The prime aim of task scheduling is to utilize the available resources optimally and reduce the time span of task execution. An improved seagull optimization algorithm that combines features of Cuckoo Search (CS) and the seagull optimization algorithm (SOA) is proposed in this work to enhance the performance of scheduling inside the cloud computing environment. The proposed algorithm aims to minimize the cost and time spent during task scheduling in the heterogeneous cloud environment. Performance evaluation of the proposed algorithm was carried out using the CloudSim 3.0 toolkit by comparing it with the Multi-objective Ant Colony Optimization (MO-ACO), ACO, and Min-Min algorithms. The proposed SOA-CS technique produced improvements of 1.06%, 4.2%, and 2.4% in makespan and reduced the overall cost by 1.74%, 3.93%, and 2.77% when compared with the PSO, ACO, and IDEA algorithms, respectively, when 300 VMs are considered. The comparative simulation results show that the proposed improved seagull optimization algorithm fares better than its contemporaries.
A checkpointing scheme for dependent distributed real-time tasks that can be scheduled as a DAG is proposed. A typical algorithm, OSA, is selected for DAG scheduling. A new method based on a new structure, the Scheduled Cluster Tree, is presented to calculate the slack time of each task in the task cluster. In the checkpointing scheme, the optimal checkpoint intervals that minimize the approximated failure probability are derived formally and validated experimentally. The complexity of the approximated failure probability is quite small compared with that of the exact probability. Meanwhile, the consistency of the checkpointing is also discussed.
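For reference, the classical first-order result for checkpoint-interval optimization under independent exponential failures is Young's formula, τ = sqrt(2·C·MTBF); it is shown here only as a baseline, not as the paper's DAG-aware derivation.

```python
from math import sqrt

def young_interval(checkpoint_cost, mtbf):
    """Young's first-order optimal checkpoint interval tau = sqrt(2 * C * MTBF),
    with C the checkpoint overhead and MTBF the mean time between failures."""
    return sqrt(2.0 * checkpoint_cost * mtbf)

# e.g. 30 s checkpoints, one failure per 24 h on average:
print(young_interval(30.0, 24 * 3600))  # ~2277 s between checkpoints
```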
AI (Artificial Intelligence) workloads are proliferating in modern real-time systems. As the tasks of AI workloads fluctuate over time, the resource-planning policies used for traditional fixed real-time tasks should be re-examined. In particular, it is difficult to immediately handle changes in real-time tasks without violating deadline constraints. To cope with this situation, this paper analyzes the task situations of AI workloads and makes the following two observations. First, resource planning for AI workloads is a complicated search problem that requires much time to optimize. Second, although the task set of an AI workload may change over time, the possible combinations of task sets are known in advance. Based on these observations, this paper proposes a new resource-planning scheme for AI workloads that supports the re-planning of resources. Instead of generating resource plans on the fly, the proposed scheme pre-determines resource plans for various combinations of tasks. Thus, in any case, the workload is immediately executed according to a maintained resource plan. Specifically, the proposed scheme maintains an optimized CPU (Central Processing Unit) and memory resource plan using genetic algorithms and applies it as soon as the workload changes. The proposed scheme is implemented in the open-source simulator SimRTS to validate its effectiveness. Simulation experiments show that the proposed scheme reduces the energy consumption of CPU and memory by 45.5% on average without deadline misses.
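A hedged sketch of the pre-computation idea: optimize a plan offline for every admissible task-set combination, then make re-planning an O(1) lookup when the active set changes. The names and the stand-in optimizer below are illustrative, not the paper's API.

```python
from itertools import combinations

def precompute_plans(all_tasks, optimize_plan):
    """Offline: optimize one plan per task-set combination.
    optimize_plan stands in for the paper's GA-based search (an assumption)."""
    plans = {}
    for r in range(1, len(all_tasks) + 1):
        for combo in combinations(sorted(all_tasks), r):
            plans[frozenset(combo)] = optimize_plan(combo)
    return plans

def on_workload_change(plans, active_tasks):
    """Online: re-planning is a dictionary lookup, so no deadline is at risk."""
    return plans[frozenset(active_tasks)]

plans = precompute_plans({"detect", "track"}, lambda c: {"cpu_mhz": 400 * len(c)})
print(on_workload_change(plans, {"detect"}))  # -> {'cpu_mhz': 400}
```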
Cloud computing plays a significant role in the Information Technology (IT) industry by delivering scalable resources as a service. One of the most important factors in increasing the performance of a cloud server is maximizing resource utilization in task scheduling, the main advantage of which is to maximize performance and minimize time loss. Various researchers have examined numerous scheduling methods to achieve Quality of Service (QoS) and reduce execution time; however, these suffered from low throughput and high response time. Hence, this study aimed to schedule tasks efficiently and to eliminate faults in scheduling tasks to Virtual Machines (VMs). For this purpose, the research proposes a novel Particle Swarm Optimization-Bandwidth Aware divisible Task (PSO-BATS) scheduling scheme with Multi-Layered Regression Host Employment (MLRHE) to sort out the issues of task scheduling and ease the scheduling operation through load balancing. The proposed efficient scheduling benefits both cloud users and servers. The performance evaluation is undertaken with respect to cost, Performance Improvement Rate (PIR), and makespan, which reveals the efficiency of the proposed method. Additionally, a comparative analysis confirms the performance of the introduced system over conventional systems for scheduling tasks with high flexibility.
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, fog computing takes more time to run workflow applications. Therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chances of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation by an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables a wide range of solutions, leading to minimal total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
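A minimal sketch of the fitness-sharing component (the standard sharing function with niche radius σ_share, assuming a maximization fitness; FSCPSO's exact parameters are not given in the abstract): each particle's fitness is divided by its niche count, penalizing crowded regions and preserving diversity.

```python
import numpy as np

def shared_fitness(fitness, positions, sigma_share=0.5, alpha=1.0):
    """Standard fitness sharing: f_i' = f_i / sum_j sh(d_ij), where
    sh(d) = 1 - (d/sigma)^alpha for d < sigma, else 0."""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    sh = np.where(dists < sigma_share, 1.0 - (dists / sigma_share) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)  # includes self (d=0 gives sh=1)
    return fitness / niche_counts

pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(shared_fitness(np.array([10.0, 10.0, 10.0]), pos))  # crowded pair is penalized
```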
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the match between resources and requirements. A complex algorithm is not feasible because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal-inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate of more than 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates distributing the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In quantifying makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, Q-DRL exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
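The tabular backbone of a Q-learning scheduler is the Bellman update Q(s,a) ← Q(s,a) + α(r + γ·max_a' Q(s',a') − Q(s,a)); the sketch below abstracts away the paper's state and action encodings, and the example strings are purely illustrative.

```python
from collections import defaultdict

def q_update(Q, s, a, reward, s_next, actions_next, alpha=0.1, gamma=0.9):
    """One Q-learning step. In a workflow scheduler, s could encode ready
    subtasks and node loads, a a (subtask, node) placement, and reward the
    negative latency increment (all assumed encodings)."""
    best_next = max((Q[(s_next, a2)] for a2 in actions_next), default=0.0)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
q_update(Q, s="ready:t3|load:2,0", a=("t3", "node1"), reward=-0.8,
         s_next="ready:t4|load:2,1", actions_next=[("t4", "node0"), ("t4", "node1")])
```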
More devices in the Intelligent Internet of Things (AIoT) result in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, characterized by limitations and heterogeneity in computational resources, remains a formidable challenge. To tackle this challenge, in this study we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO diminishes both the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.
This paper reviews the task scheduling frameworks, methods, and evaluation metrics of central processing unit-graphics processing unit (CPU-GPU) heterogeneous clusters. Task scheduling for CPU-GPU heterogeneous clusters can be carried out at the system level, node level, and device level. Most task-scheduling technologies are heuristics based on experts' experience, while some are based on statistical methods using machine learning, deep learning, or reinforcement learning. Many metrics have been adopted to evaluate and compare different task scheduling technologies that try to optimize different scheduling goals. Although statistical task scheduling has produced fewer research achievements than heuristic task scheduling, it still has significant research potential.
Cloud computing has rapidly evolved into a critical technology, seamlessly integrating into various aspects of daily life. As user demand for cloud services continues to surge, the need for efficient virtualization and resource management becomes paramount. At the core of this efficiency lies task scheduling, a complex process that determines how tasks are allocated and executed across cloud resources. While extensive research has been conducted in the area of task scheduling, optimizing multiple objectives simultaneously remains a significant challenge due to the NP-Complete (Non-deterministic Polynomial) nature of the problem. This study aims to address these challenges by providing a comprehensive review and experimental analysis of task scheduling approaches, with a particular focus on hybrid techniques that offer promising solutions. Utilizing the CloudSim simulation toolkit, we evaluated the performance of three hybrid algorithms: Estimation of Distribution Algorithm-Genetic Algorithm (EDA-GA), Hybrid Genetic Algorithm-Ant Colony Optimization (HGA-ACO), and Improved Discrete Particle Swarm Optimization (IDPSO). Our experimental results demonstrate that these hybrid methods significantly outperform traditional standalone algorithms in reducing makespan, a critical measure of task completion time. Notably, the IDPSO algorithm exhibited superior performance, achieving a makespan of just 0.64 milliseconds for a set of 150 tasks. These findings underscore the potential of hybrid algorithms to enhance task scheduling efficiency in cloud computing environments. This paper concludes with a discussion of the implications of our findings and offers recommendations for future research aimed at further improving task scheduling strategies, particularly in the context of increasingly complex and dynamic cloud environments.
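Makespan, the metric all three hybrids are compared on, is the finishing time of the busiest resource. A minimal computation for a fixed task-to-VM assignment, assuming a CloudSim-style length/MIPS execution model:

```python
def makespan(assignment, task_lengths, vm_mips):
    """Makespan = max over VMs of its total execution time, where a task's
    time on a VM is length / MIPS (an assumed execution model)."""
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    return max(finish)

# 4 tasks on 2 VMs:
print(makespan([0, 1, 0, 1], [4000, 2000, 1000, 3000], [1000.0, 500.0]))  # -> 10.0
```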
In the fiercely competitive landscape of product-oriented operating systems, including the Internet of Things (IoT), efficiently managing a substantial stream of real-time tasks that coexist with resource-intensive user applications embedded in constrained hardware presents a significant challenge. Bridging the gap between embedded and general-purpose operating systems, we introduce XIRAC, an optimized operating system shaped by information-theoretic principles. XIRAC leverages Shannon's information theory to regulate processor workloads, minimize context switches, and preempt processes by maximizing system entropy tolerance. Unlike prior approaches that apply information theory to task priority alignment, the proposed method integrates maximum entropy into the core of the real-time operating system (RTOS) and its scheduling algorithms. Subsequently, we optimize numerous system parameters by shifting and integrating commonly used unlimited tasks from the application layer into the kernel. We describe the advantages of this architectural shift, including improved system performance, scalability, and adaptability. A new application-programming paradigm, termed "object-emulated programming," has emerged from this integration. Practical implementations of XIRAC in diverse products have revealed additional benefits, including reduced learning curves, elimination of library-function and threading dependencies, optimized chip capabilities, and increased competitiveness in product development. We provide a comprehensive explanation of these benefits and explore their impact through real-world use cases and practical applications.
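As a hedged illustration of the underlying quantity only: the Shannon entropy H = −Σ p_i·log2 p_i of a normalized per-task CPU-share distribution is maximal when load is evenly spread. How XIRAC maps entropy tolerance to concrete scheduling decisions is not detailed in the abstract.

```python
from math import log2

def load_entropy(cpu_shares):
    """Shannon entropy H = -sum p_i * log2(p_i) of a workload distribution;
    maximal when load is spread evenly, near zero when one task dominates."""
    total = sum(cpu_shares)
    ps = [s / total for s in cpu_shares if s > 0]
    return -sum(p * log2(p) for p in ps)

print(load_entropy([25, 25, 25, 25]))  # 2.0 bits: perfectly balanced
print(load_entropy([97, 1, 1, 1]))     # ~0.24 bits: one task dominates
```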
MapReduce is a popular data-parallel processing framework in data centers. MapReduce splits a job into multiple map tasks and reduce tasks so that the tasks can be executed in parallel. Before running the map and reduce tasks, the task nodes communicate with the data nodes to fetch the data required to execute the tasks. The network traffic between the nodes accounts for a large part of the running time of a MapReduce job; therefore, careful map and reduce task scheduling is critical for MapReduce performance. Most current task scheduling algorithms only perform scheduling for either map tasks or reduce tasks, without jointly considering the impact of both on network traffic. In this paper, we deal with the joint scheduling of map and reduce tasks with the aim of reducing network traffic, and we propose a data replica Location-Aware Joint Scheduling of map and reduce tasks algorithm (LAJS). The algorithm determines the scheduling locations of map and reduce tasks according to the node processing capabilities and the data replica locations of the map tasks' input data. We finally conduct experiments through simulations. The results show that the proposed LAJS algorithm can effectively reduce data traffic during job processing and improve job makespan.
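A hedged sketch of the locality heuristic at the core of replica-aware placement: prefer nodes that already hold a replica of a map task's input split, falling back to lightly loaded nodes when the local ones are busy. The cost weighting is an assumption, not LAJS's exact rule.

```python
def place_map_task(replica_nodes, node_load, node_capacity, transfer_penalty=1.0):
    """Pick the node with the lowest estimated finish-plus-transfer cost:
    nodes holding a replica of the input split avoid the network transfer."""
    best_node, best_cost = None, float("inf")
    for node, capacity in node_capacity.items():
        cost = node_load[node] / capacity          # queueing/processing estimate
        if node not in replica_nodes:
            cost += transfer_penalty               # remote read costs network time
        if cost < best_cost:
            best_node, best_cost = node, cost
    return best_node

# Split replicated on n1 and n3; n2 is idle but would need a remote fetch:
print(place_map_task({"n1", "n3"}, {"n1": 4, "n2": 0, "n3": 1},
                     {"n1": 2.0, "n2": 2.0, "n3": 2.0}))  # -> "n3"
```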