The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failures or execution delays in Cloud Computing. To maximize cloud resource throughput and decrease user response time, load balancing is needed to overcome task execution delay and system failure. Most swarm-intelligent dynamic load balancing solutions that used hybrid metaheuristic algorithms failed to balance exploitation and exploration, and most load balancing methods were insufficient to handle the growing uncertainty in job distribution to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improved performance by considering average network latency, reliability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the solution from converging to a local optimum. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared to baseline load balancing strategies such as the Fractional Improved Whale Social Optimization Based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
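The core loop such a hybrid metaheuristic runs can be sketched in miniature: candidate solutions are task-to-VM assignment vectors, the fitness is the makespan, and each iteration mixes an exploitation move (copy genes from the current best, loosely WOA-style) with an exploration move (random reassignment, loosely SHO-style prey search). This is a toy illustration only; the operator probabilities, the population model, and the update rules are assumptions, not HSHWOA's actual equations.

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Completion time of the slowest VM under a task-to-VM assignment."""
    loads = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        loads[vm] += task_len[task] / vm_speed[vm]
    return max(loads)

def hybrid_step(population, task_len, vm_speed, p_explore=0.3):
    """One illustrative iteration: per gene, either randomize (exploration)
    or pull toward the best candidate found so far (exploitation)."""
    best = min(population, key=lambda a: makespan(a, task_len, vm_speed))
    new_pop = []
    for cand in population:
        child = list(cand)
        for i in range(len(child)):
            if random.random() < p_explore:
                child[i] = random.randrange(len(vm_speed))  # explore
            elif random.random() < 0.5:
                child[i] = best[i]                          # exploit
        new_pop.append(child)
    return new_pop

random.seed(1)
tasks = [4.0, 2.0, 6.0, 3.0]   # task lengths (arbitrary units)
vms = [1.0, 2.0]               # VM speeds
pop = [[random.randrange(len(vms)) for _ in tasks] for _ in range(8)]
for _ in range(30):
    pop = hybrid_step(pop, tasks, vms)
best = min(pop, key=lambda a: makespan(a, tasks, vms))
print(makespan(best, tasks, vms))
```

The balance the abstract describes corresponds to tuning `p_explore`: too high and the search never converges, too low and it stalls in a local optimum.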
In deep drilling applications, such as those for geothermal energy, there are many challenges, including efficient operation of the drilling fluid (mud) pumping system. Legacy drilling rigs often use paired, parallel-connected independent-excitation direct-current (DC) motors for mud pumps that are supplied by a single power converter. This configuration results in electrical power imbalance, thus reducing efficiency. This paper investigates this power imbalance issue in such legacy DC mud pump drive systems and offers an innovative solution in the form of a closed-loop control system for electrical load balancing. The paper first analyzes the drilling fluid circulation and electrical drive layout to develop an analytical model that can be used for electrical load balancing and related energy efficiency improvements. Based on this analysis, a feedback control system (a so-called "current mirror" control system) is designed to balance the electrical load (i.e., armature currents) of parallel-connected DC machines by adjusting the excitation current of one of the DC machines, thus mitigating the power imbalance of the electrical drive. The proposed control system's effectiveness has been validated, first through simulations, followed by experimental testing on a deep drilling rig during commissioning and field tests. The results demonstrate the practical viability of the proposed "current mirror" control system, which can effectively and rather quickly equalize the armature currents of both DC machines in a parallel-connected electrical drive, and thus balance both the electrical and mechanical load of individual DC machines under realistic operating conditions of the mud pump electrical drive.
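The physical intuition behind such a "current mirror" can be sketched with a linearized steady-state motor model: for a separately excited DC machine at fixed supply voltage and speed, weakening the excitation lowers the back-EMF and therefore raises the armature current. A proportional controller that trims one machine's excitation toward the other machine's measured armature current will then equalize the two. The model, gains, and numbers below are illustrative assumptions, not the paper's plant or controller design.

```python
def armature_current(v_supply, i_exc, speed, r_a=0.1, c=1.0):
    """Steady-state armature current of a separately excited DC machine,
    with flux linear in excitation current (toy model):
    I_a = (V - c * I_exc * w) / R_a."""
    return (v_supply - c * i_exc * speed) / r_a

def mirror_control(i_target, i_exc, v_supply, speed, kp=1e-4):
    """P-only 'current mirror' step (gain is an assumption): if machine 2
    draws less current than machine 1, weaken its excitation so its
    back-EMF drops and its armature current rises."""
    err = i_target - armature_current(v_supply, i_exc, speed)
    return i_exc - kp * err

v, w = 600.0, 100.0                        # supply voltage, shaft speed
i1 = armature_current(v, 5.0, w)           # machine 1 acts as reference
i_exc2 = 5.2                               # machine 2 starts over-excited
for _ in range(200):                       # discrete control iterations
    i_exc2 = mirror_control(i1, i_exc2, v, w)
print(round(armature_current(v, i_exc2, w), 2), round(i1, 2))
```

With these toy parameters the loop is a contraction (each step multiplies the excitation error by 0.9), so machine 2's armature current converges to machine 1's, which is the balancing behavior the abstract reports.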
In recent years, load balancing routing algorithms have been extensively studied in satellite networks. Most existing studies focus on path selection and hop-count optimization for end-to-end transmission, while overlooking congestion issues on feeder links caused by the limited number and centralized distribution of ground stations. Hence, a multi-service routing algorithm called the Multi-service Load Balancing Routing Algorithm for Traffic Return (MLB-TR) is proposed. Unlike traditional approaches, MLB-TR aims to achieve a broader and more comprehensive load balancing objective. Specifically, based on the service type, an appropriate landing satellite is first selected by considering factors such as shortest-path hop count and satellite load. Then, a set of candidate paths from the source satellite to the selected landing satellite is computed. Finally, using the regional load balancing index as the optimization objective, the final transmission path is selected from the candidate path set. Simulation results show that the proposed algorithm outperforms existing works.
Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from the overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum-log-rate of all users in two adjacent cells. To tackle the complicated joint optimization problem, we first design a genetic-based algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm according to the branch-and-bound method, so as to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on sequential convex approximation theory. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and the proposed algorithms can substantially improve network performance in comparison with the other schemes.
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies, which necessitates distributing the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring the minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and improves the average response time within distributed graph database environments. In terms of makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, the Q-DRL approach exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% when compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
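The Q-learning core behind a scheduler like this is the standard tabular update: the value of placing a task on a node is nudged toward the immediate reward plus the discounted value of the best follow-up choice. The miniature episode below (a three-task chain on two nodes, with reward equal to the negative run time of the chosen placement; task lengths and node speeds are invented) is a sketch of that mechanism only, not the paper's Q-DRL architecture, which replaces the table with a neural approximator.

```python
import random
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update: move Q(s,a) toward
    r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

random.seed(0)
Q = defaultdict(float)
lengths, speeds = [4.0, 2.0, 6.0], [1.0, 2.0]  # assumed workload/nodes
for _ in range(500):                            # random-exploration episodes
    for t in range(3):                          # state = index of next task
        node = random.randrange(2)              # action = node choice
        r = -lengths[t] / speeds[node]          # reward = negative run time
        q_update(Q, t, node, r, t + 1, [0, 1])
# Greedy policy after training: pick the higher-valued node per task.
policy = [max((0, 1), key=lambda n: Q[(t, n)]) for t in range(3)]
print(policy)
```

Because the faster node yields a strictly less negative reward for every task, the learned greedy policy routes all three tasks to it, which is the behavior the update rule is designed to discover.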
Cloud Computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review presents a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time using the CloudSim simulation tool is also performed. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
With the continuous expansion of the data center network scale, changing network requirements, and increasing pressure on network bandwidth, the traditional network architecture can no longer meet people's needs. The development of Software-Defined Networking (SDN) has brought new opportunities and challenges to future networks. The separation of the data and control planes in SDN improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks. Then it discusses SDN-based load balancing mechanisms for data centers from different perspectives. Finally, it summarizes the study of SDN-based load balancing mechanisms and looks forward to its development trend.
In this paper, a sender-initiated protocol that uses a fuzzy logic control method is applied to improve computer network performance by balancing loads among computers. This new model devises a sender-initiated protocol for load transfer to achieve load balancing. Groups are formed, and every group has a node called a designated representative (DR). During load transfer, loads are moved via the DR of each group to achieve load balancing. The simulation results show that the performance of the proposed protocol is better than that of the compared conventional method, and that the protocol is more stable than the method without fuzzy logic control.
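A sender-initiated fuzzy decision of this kind typically evaluates membership functions over local load indicators and fires a transfer rule when the node is sufficiently "heavy". The sketch below uses triangular memberships and a single Mamdani-style AND rule; the rule base, the two inputs, and all breakpoints are illustrative assumptions, not the paper's fuzzy controller.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def transfer_decision(cpu_load, queue_len):
    """Firing strength of an assumed sender-initiated rule:
    IF load is HEAVY AND queue is LONG THEN transfer load via the DR.
    Mamdani AND is taken as the minimum of the two memberships."""
    heavy = tri(cpu_load, 0.5, 1.0, 1.5)   # utilization around 100%
    long_q = tri(queue_len, 5, 15, 25)     # jobs waiting locally
    return min(heavy, long_q)

print(round(transfer_decision(1.2, 20), 2))
```

A sender would compare this firing strength against a threshold (e.g. 0.5) before asking its DR to move work, which gives the smooth, stable behavior the abstract attributes to fuzzy control versus a hard cutoff.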
To solve the load balancing problem in a triplet-based hierarchical interconnection network (THIN) system, a dynamic load balancing (DLB) algorithm, THINDLBA, which adopts multicast tree (MT) technology to improve the efficiency of interchanging load information, is presented. To support the algorithm, a complete set of DLB messages and a schema for maintaining DLB information in each processing node are designed. The load migration request messages from the heavily loaded node (HLN) are spread along an MT whose root is the HLN, and the lightly loaded nodes (LLNs) covered by the MT are the candidate destinations of load migration; the load information interchanged between the LLNs and the HLN can be transmitted along the MT. The HLN can therefore migrate out as much excess load as possible during a single execution of THINDLBA, and its load state can be improved as quickly as possible. To avoid wrongly transmitted or redundant DLB messages due to MT overlapping, MT construction is restricted in the design of THINDLBA. Through experiments, the effectiveness of four DLB algorithms is compared, and the results show that THINDLBA decreases the time cost of THIN systems in dealing with large-scale compute-intensive tasks more effectively than the others.
To improve data distribution efficiency, a load-balancing data distribution (LBDD) method is proposed for publish/subscribe mode. In the LBDD method, subscribers are involved in distribution tasks and transfer data while receiving data themselves. A dissemination tree is constructed among the subscribers based on MD5, where the publisher acts as the root. The proposed method provides bucket construction, target selection, and path updates; furthermore, the property of one-way dissemination is proven, and it is guaranteed that the average out-going degree of a node is 2. Experiments on data distribution delay, data distribution rate, and load distribution are conducted. Experimental results show that the LBDD method helps share the task load between the publisher and subscribers and outperforms the point-to-point approach.
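One way to realize an MD5-based dissemination tree with bounded out-degree is to order subscribers by the MD5 digest of their identifiers and lay them out as a complete binary tree under the publisher, so every node forwards to at most two others. This layout rule is an assumption for illustration; the paper's bucket construction and path-update machinery are not reproduced here.

```python
import hashlib

def build_tree(publisher, subscribers):
    """Sketch of an MD5-ordered dissemination tree: subscribers are sorted
    by the hex MD5 of their id, then placed as a complete binary tree with
    the publisher at the root (children of slot i are slots 2i+1, 2i+2)."""
    ordered = sorted(
        subscribers, key=lambda s: hashlib.md5(s.encode()).hexdigest()
    )
    nodes = [publisher] + ordered
    children = {n: [] for n in nodes}
    for i, n in enumerate(nodes):
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(nodes):
                children[n].append(nodes[c])
    return children

tree = build_tree("pub", ["s1", "s2", "s3", "s4"])
print(len(tree["pub"]))  # the publisher forwards to at most 2 subscribers
```

Every interior node relays to at most two children, which matches the guaranteed average out-going degree of 2 stated in the abstract and spreads forwarding work off the publisher onto the subscribers.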
This paper focuses on improving system robustness and the efficiency of a distributed system at the same time. Fault tolerance with active replication and load balancing techniques are used. The pros and cons of both techniques are analyzed, and a novel load balancing framework for fault-tolerant systems with active replication is presented. Its hierarchical architecture is described in detail. The framework can dynamically adjust fault-tolerant groups and their memberships with respect to system loads. Three potential task scheduler group selection methods are proposed, and simulation tests are made. Further analysis of the test data yields helpful observations for system design, including the effects of task arrival intensity and task set size, and the relationship between total task execution time and single task execution time.
Load balancing plays a critical role in a cellular network. As one kind of cellular network, a Radio-over-Fibre (RoF) system can provide ubiquitous high data-rate transmissions, which has attracted much attention, but it also suffers from a load unbalancing problem. In order to improve system performance, in this paper we propose a novel load balancing scheme for RoF systems based on differential game theory. The scheme formulates the load allocated to each RAP (Radio Access Point) as a Nash Equilibrium, using a non-cooperative differential game to obtain the optimal load allocation of each RAP. The simulations performed show that the non-cooperative differential game algorithm is applicable and that the optimal load solution can be achieved.
The rapid evolution and expanding scale of AI (artificial intelligence) technologies exert unprecedented energy demands on global electrical grids. Powering computationally intensive tasks such as large-scale AI model training and widespread real-time inference necessitates substantial electricity consumption, presenting a significant challenge to conventional power infrastructure. This paper examines the critical need for a fundamental shift towards smart energy grids in response to AI's growing energy footprint. It delves into the symbiotic relationship wherein AI acts as a significant energy consumer while offering the intelligence required for dynamic load management, efficient integration of renewable energy sources, and optimized grid operations. We posit that advanced smart grids are indispensable for facilitating AI's sustainable growth, underscoring this synergy as a pivotal advancement toward a resilient energy future.
Underwater Wireless Sensor Networks (UWSNs) are gaining popularity because of their potential uses in oceanography, seismic activity monitoring, environmental preservation, and underwater mapping. Yet these networks face challenges such as self-interference, long propagation delays, limited bandwidth, and changing network topologies, which are addressed by designing advanced routing protocols. In this work, we present the Underwater Fuzzy Routing Protocol for Low-power and Lossy networks (UWF-RPL), an enhanced fuzzy-based protocol that improves decision-making during path selection and traffic distribution over different network nodes. Our method extends RPL with the aid of fuzzy logic to optimize depth, energy, the Received Signal Strength Indicator (RSSI) to Expected Transmission Count (ETX) ratio, and latency. The proposed protocol outperforms other techniques in that it offers more energy efficiency, better packet delivery, low delay, and no queue overflow. It also exhibits better scalability and reliability in dynamic underwater networks, which is very important for maintaining efficient network operations and an optimized UWSN lifetime. Compared to other recent methods, it offers improved network convergence time (10%–23%), energy efficiency (15%), packet delivery (17%), and delay (24%).
The increase in user mobility and density in modern cellular networks increases the risk of overloading certain base stations in popular locations such as shopping malls or stadiums, which can result in connection loss for some users. To combat this, the traffic load of base stations should be kept as balanced as possible. In this paper, we propose an efficient load balancing-aware handover algorithm for highly dynamic beyond-5G heterogeneous networks that assigns mobile users to base stations with lighter loads when a handover is performed. The proposed algorithm is evaluated in a scenario with users having different levels of mobility, such as pedestrians and vehicles, and is shown to outperform the conventional handover mechanism, as well as another algorithm from the literature. As a secondary benefit, the overall energy consumption in the network is shown to be reduced with the proposed algorithm.
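A load balancing-aware handover rule of this kind can be reduced to a scoring function: instead of handing over purely on received signal strength, each candidate base station's signal is discounted by a load penalty, so lightly loaded cells win over marginally stronger but congested ones. The scoring form and the weight below are assumptions for illustration, not the paper's algorithm.

```python
def choose_station(user_rsrp, station_load, w=10.0):
    """Pick the handover target by ranking candidate base stations on
    received power (dBm) minus a load penalty; w (dB per unit load) is
    an illustrative trade-off weight."""
    scores = {bs: user_rsrp[bs] - w * station_load[bs] for bs in user_rsrp}
    return max(scores, key=scores.get)

rsrp = {"bs_a": -80.0, "bs_b": -83.0}  # measured received power, dBm
load = {"bs_a": 0.9, "bs_b": 0.2}      # fraction of resources in use
print(choose_station(rsrp, load))      # the lighter-loaded cell wins
```

Setting `w = 0` recovers the conventional strongest-signal handover, which makes the comparison in the abstract's evaluation easy to reproduce in simulation.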
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction utilising long short-term memory (LSTM) networks with efficient load-balancing techniques led by deep Q-learning and Actor-Critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritizing speed and energy preservation. The experimental results demonstrate that our load balancing system, which utilises DRL, significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance, consistently delivering reliable and durable performance across a range of dynamic workloads.
Spark performs excellently in large-scale data-parallel computing and iterative processing. However, with the increase in data size and program complexity, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization. Scheduling strategy optimization, as a key direction for improving Spark's execution efficiency, has attracted widespread attention. This paper first introduces the basic theories of Spark, compares several default scheduling strategies, and discusses common scheduling performance evaluation indicators and factors affecting scheduling efficiency. Subsequently, existing scheduling optimization schemes are summarized based on three scheduling modes: load characteristics, cluster characteristics, and the matching of both; representative algorithms are analyzed in terms of performance indicators and applicable scenarios, comparing the advantages and disadvantages of different scheduling modes. The article also explores in detail the integration of Spark scheduling strategies with specific application scenarios and the challenges in production environments. Finally, the limitations of the existing schemes are analyzed, and prospects are envisioned.
The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities, handling time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors that facilitate the operation of crucial healthcare services. Recently, many interactive applications have been proposed, including the integration of intelligent systems to handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining device connectivity. Furthermore, data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model's performance was analyzed in a simulation-based environment, illustrating exceptional improvements in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
As an important part of the satellite communication network, the LEO satellite constellation network is one of the hot research directions. Since the nonuniform distribution of terrestrial services may cause inter-satellite link congestion, improving network load balancing performance has become one of the key issues that routing algorithms in LEO networks need to solve. Therefore, by expanding the range of available paths and incorporating a congestion avoidance mechanism, a load balancing routing algorithm based on extended link states in LEO constellation networks is proposed. Simulation results show that the algorithm achieves a balanced distribution of traffic load, reduces link congestion and packet loss rate, and improves the throughput of the LEO satellite network.
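Load balancing routing over extended link states usually amounts to running a shortest-path search where each link's cost blends hop cost with its advertised load, steering traffic away from congested inter-satellite links. The sketch below uses Dijkstra with an assumed cost of `1 + alpha * load` per link; the cost form, the weight, and the toy topology are illustrative, not the paper's algorithm.

```python
import heapq

def least_loaded_path(adj, src, dst, alpha=1.0):
    """Dijkstra over a load-aware link cost (cost = 1 + alpha * load),
    an assumed way to bias paths away from congested links.
    adj maps node -> list of (neighbor, link load in [0, 1])."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, load in adj[u]:
            nd = d + 1.0 + alpha * load
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

adj = {
    "A": [("B", 0.9), ("C", 0.1)],  # A-B is heavily loaded
    "B": [("D", 0.1)],
    "C": [("D", 0.1)],
    "D": [],
}
print(least_loaded_path(adj, "A", "D"))
```

Both routes are two hops, but the lightly loaded detour via C costs 2.2 against 3.0 via B, so traffic spreads onto the less congested links, which is the balancing effect the abstract targets.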
To deal with the dynamic and imbalanced traffic requirements in Low Earth Orbit satellite networks, several distributed load balancing routing schemes have been proposed. However, because they lack a global view, these schemes may lead to cascading congestion in regions with a high volume of traffic. To solve this problem, a Hybrid-Traffic-Detour based Load Balancing Routing (HLBR) scheme is proposed, in which a Long-Distance Traffic Detour (LTD) method is devised and coordinates with a distributed traffic detour method to perform self-adaptive load balancing. The forwarding path of the LTD is acquired by Circuitous Multipath Calculation (CMC) based on prior geographical information, and activated by the LTD Shift-Trigger (LST) through real-time congestion perception. Simulation results show that HLBR can mitigate cascading congestion and achieve efficient traffic distribution.
文摘The uncertain nature of mapping user tasks to Virtual Machines(VMs) causes system failure or execution delay in Cloud Computing.To maximize cloud resource throughput and decrease user response time,load balancing is needed.Possible load balancing is needed to overcome user task execution delay and system failure.Most swarm intelligent dynamic load balancing solutions that used hybrid metaheuristic algorithms failed to balance exploitation and exploration.Most load balancing methods were insufficient to handle the growing uncertainty in job distribution to VMs.Thus,the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism(HSHWOA) partitions traffic among numerous VMs or servers to guarantee user chores are completed quickly.This load balancing approach improved performance by considering average network latency,dependability,and throughput.This hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation,assign jobs to VMs with more solution diversity,and prevent the solution from reaching a local optimality.Pysim-based experimental verification and testing for the proposed HSHWOA showed a 12.38% improvement in minimized makespan,16.21% increase in mean throughput,and 14.84% increase in network stability compared to baseline load balancing strategies like Fractional Improved Whale Social Optimization Based VM Migration Strategy FIWSOA,HDWOA,and Binary Bird Swap.
文摘In deep drilling applications,such as those for geothermal energy,there are many challenges,such as those related to efficient operation of the drilling fluid(mud)pumping system.Legacy drilling rigs often use paired,parallel-connected independent-excitation direct-current(DC)motors for mud pumps,that are supplied by a single power converter.This configuration results in electrical power imbalance,thus reducing its efficiency.This paper investigates this power imbalance issue in such legacy DC mud pump drive systems and offers an innovative solution in the form of a closed-loop control system for electrical load balancing.The paper first analyzes the drilling fluid circulation and electrical drive layout to develop an analytical model that can be used for electrical load balancing and related energy efficiency improvements.Based on this analysis,a feedback control system(so-called“current mirror”control system)is designed to balance the electrical load(i.e.,armature currents)of parallel-connected DC machines by adjusting the excitation current of one of the DC machines,thus mitigating the power imbalance of the electrical drive.Theproposed control systemeffectiveness has been validated,first through simulations,followed by experimental testing on a deep drilling rig during commissioning and field tests.The results demonstrate the practical viability of the proposed“current mirror”control system that can effectively and rather quickly equalize the armature currents of both DC machines in a parallel-connected electrical drive,and thus balance both the electrical and mechanical load of individual DC machines under realistic operating conditions of the mud pump electrical drive.
基金supported by the National Key Research and Development Program of China under Grant No.2022YFB2902501the Fundamental Research Funds for the Central Universities under Grant No.2023ZCJH09the Haidian District Golden Bridge Seed Fund of Beijing Municipality under Grant No.S2024161.
文摘In recent years,load balancing routing al-gorithms have been extensively studied in satellite net-works.Most existing studies focus on path selection and hop-count optimization for end-to-end transmis-sion,while overlooking congestion issues on feeder links caused by the limited number and centralized distribution of ground stations.Hence,a multi-service routing algorithm called the Multi-service Load Bal-ancing Routing Algorithm for Traffic Return(MLB-TR)is proposed.Unlike traditional approaches,MLB-TR aims to achieve a broader and more comprehensive load balancing objective.Specifically,based on the service type,an appropriate landing satellite is first selected by considering factors such as shortest path hop count and satellite load.Then,a set of candidate paths from the source satellite to the selected landing satellite is computed.Finally,using the regional load balancing index as the optimization objective,the final transmission path is selected from the candidate path set.Simulation results show that the proposed algo-rithm outperforms the existing works.
基金supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807003in part by the National Natural Science Foundation of China under Grants 61901381,62171385,and 61901378+3 种基金in part by the Aeronautical Science Foundation of China under Grant 2020z073053004in part by the Foundation of the State Key Laboratory of Integrated Services Networks of Xidian University under Grant ISN21-06in part by the Key Research Program and Industrial Innovation Chain Project of Shaanxi Province under Grant 2019ZDLGY07-10in part by the Natural Science Fundamental Research Program of Shaanxi Province under Grant 2021JM-069.
Abstract: Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from the overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum-log-rate of all users in two adjacent cells. To tackle this complicated joint optimization problem, we first design a genetic algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm based on the branch-and-bound method to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on sequential convex approximation theory. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and that the proposed algorithms substantially improve network performance in comparison with the other schemes.
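The genetic-based positioning step can be illustrated with a toy example. This sketch only echoes the idea of evolving candidate UAV positions toward higher sum-log-rate; the distance-based rate proxy, mutation-only evolution, and all parameters are assumptions, not the paper's algorithm.

```python
import math
import random

# Toy GA sketch for placing an aerial relay. Fitness is an assumed
# sum-log-rate proxy in which closer users get higher rates.

def fitness(pos, users):
    return sum(math.log(1.0 + 1.0 / (0.01 + (pos[0] - u[0]) ** 2
                                     + (pos[1] - u[1]) ** 2))
               for u in users)

def ga_place_uav(users, pop=30, gens=60, sigma=5.0, seed=1):
    rng = random.Random(seed)
    population = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, users), reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = [(p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
                    for p in parents]             # mutation-only, for brevity
        population = parents + children
    return max(population, key=lambda p: fitness(p, users))

users = [(10, 10), (90, 90)]
x, y = ga_place_uav(users)
print(x, y)
```

A full crossover operator and the joint optimization with user association are omitted; the sketch only shows the evolutionary search over positions.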
Funding: Funded by the Science and Technology Foundation of State Grid Corporation of China (Grant No. 5108-202218280A-2-397-XG).
Abstract: This paper focuses on the scheduling of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies, which necessitates distributing the computational tasks to appropriate computing nodes in accordance with task dependencies so that the entire workflow completes smoothly. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the workflow task scheduling problem in distributed graph databases and proposes a scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance responsiveness while minimizing the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly reduces the makespan and improves the average response time in distributed graph database environments. In terms of makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With respect to average response time, Q-DRL decreases the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also increases the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
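A minimal tabular Q-learning scheduler conveys the core idea behind such Q-DRL approaches: subtasks (assumed already in dependency order) are assigned to nodes, and the reward penalizes growth of the makespan. The state encoding, reward shaping, and hyperparameters here are illustrative assumptions, not the paper's method.

```python
import random

# Sketch: Q-learning assignment of sequential subtasks to compute nodes,
# rewarding actions that avoid increasing the overall makespan.

def schedule(durations, n_nodes, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    rng = random.Random(0)
    q = {i: [0.0] * n_nodes for i in range(len(durations))}
    best_makespan, best_plan = float("inf"), None
    for _ in range(episodes):
        node_time = [0.0] * n_nodes   # finish time of each node's queue
        plan = []
        for i, dur in enumerate(durations):
            qv = q[i]
            a = rng.randrange(n_nodes) if rng.random() < eps else qv.index(max(qv))
            finish = node_time[a] + dur
            # reward: negative increase of the makespan caused by this choice
            reward = -(max(finish, max(node_time)) - max(node_time))
            nxt = q[i + 1] if i + 1 < len(durations) else [0.0]
            qv[a] += alpha * (reward + gamma * max(nxt) - qv[a])
            node_time[a] = finish
            plan.append(a)
        if max(node_time) < best_makespan:
            best_makespan, best_plan = max(node_time), plan
    return best_makespan, best_plan

makespan, plan = schedule([4.0, 3.0, 2.0, 1.0], n_nodes=2)
print(makespan, plan)  # best assignment found; the optimum here is 5.0
```

A production scheduler would replace the tabular Q with a neural approximator and encode real dependency graphs and node states; the update rule is the same shape.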
Abstract: Cloud computing provides on-demand access to a shared resource pool and has fundamentally changed how businesses are managed, how applications are deployed, and how services are provided. Its rising popularity has led to a significant increase in user demand for services. In cloud environments, however, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review gives a detailed description of load balancing techniques, covering both static and dynamic load balancing algorithms. In particular, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution under increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks of the selected load balancing techniques. It also summarizes the algorithms in terms of implementation, time complexity, adaptability, associated issues, and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis of static, traditional dynamic, and metaheuristic algorithms based on response time is performed using the CloudSim simulation tool. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
Abstract: With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet current needs. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks. SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks. It then discusses SDN-based load balancing mechanisms for data centers from different perspectives. Finally, it summarizes the research on SDN-based load balancing mechanisms and looks ahead to its development trends.
Abstract: In this paper, a sender-initiated protocol that uses a fuzzy logic control method is applied to improve computer network performance by balancing loads among computers. The model devises a sender-initiated load transfer protocol for load balancing: nodes are organized into groups, and every group has a node called a designated representative (DR). During load transfers, loads are moved through the DR of each group to achieve load balancing. The simulation results show that the proposed protocol performs better than the compared conventional method and is more stable than the same method without fuzzy logic control.
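The sender-initiated decision can be sketched with two fuzzy memberships: a node fuzzifies its load into "light" and "heavy" degrees and asks its group's DR to move work when "heavy" clearly dominates. The triangular membership shapes and the 0.5 margin are assumptions, not the paper's actual rule base.

```python
# Hedged sketch of a fuzzy sender-initiated transfer decision.
# Membership functions and threshold are illustrative assumptions.

def mu_light(load):
    """Triangular membership: fully 'light' at load 0, zero at 0.6."""
    return max(0.0, min(1.0, (0.6 - load) / 0.6))

def mu_heavy(load):
    """Zero below load 0.4, fully 'heavy' at 1.0."""
    return max(0.0, min(1.0, (load - 0.4) / 0.6))

def should_transfer(load):
    """Sender-initiated rule: transfer when 'heavy' wins by a clear margin."""
    return mu_heavy(load) - mu_light(load) > 0.5

print(should_transfer(0.9), should_transfer(0.3))  # True False
```

The fuzzy margin avoids the oscillation a crisp threshold can cause when load hovers near the cut-off, which matches the stability claim in the abstract.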
Funding: The National Natural Science Foundation of China (No. 69973007).
Abstract: To solve the load balancing problem in a triplet-based hierarchical interconnection network (THIN) system, a dynamic load balancing (DLB) algorithm, THINDLBA, which adopts multicast tree (MT) technology to improve the efficiency of exchanging load information, is presented. To support the algorithm, a complete set of DLB messages and a schema for maintaining DLB information in each processing node are designed. Load migration request messages from the heavily loaded node (HLN) are spread along an MT whose root is the HLN, and the lightly loaded nodes (LLNs) covered by the MT are the candidate destinations for load migration; the load information exchanged between the LLNs and the HLN can be transmitted along the MT. The HLN can therefore migrate out as much excess load as possible during a single execution of THINDLBA, so its load state improves as quickly as possible. To avoid wrongly transmitted or redundant DLB messages due to MT overlapping, MT construction is restricted in the design of THINDLBA. Experiments comparing the effectiveness of four DLB algorithms show that THINDLBA decreases the time cost of THIN systems in dealing with large-scale compute-intensive tasks more effectively than the others.
Funding: The National Key Basic Research Program of China (973 Program).
Abstract: To improve data distribution efficiency, a load-balancing data distribution (LBDD) method is proposed for the publish/subscribe mode. In the LBDD method, subscribers are involved in distribution tasks and transfer data onward while receiving data themselves. A dissemination tree is constructed among the subscribers based on MD5, with the publisher acting as the root. The proposed method provides bucket construction, target selection, and path updates; furthermore, the property of one-way dissemination is proven, and the average out-going degree of a node is guaranteed to be 2. Experiments on data distribution delay, data distribution rate, and load distribution are conducted. The results show that the LBDD method helps share the task load between the publisher and subscribers and outperforms the point-to-point approach.
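One way to realize an MD5-ordered dissemination tree with bounded fan-out is sketched below: subscribers are sorted by the MD5 digest of their identifier and linked in a binary layout rooted at the publisher, keeping every node's out-going degree at most 2. The bucket construction and path-update mechanics of LBDD are omitted; this layout is an assumption for illustration.

```python
import hashlib

# Sketch: binary dissemination tree over MD5-ordered subscribers,
# publisher at the root, out-degree capped at 2 per node.

def build_tree(publisher, subscribers):
    ordered = sorted(subscribers,
                     key=lambda s: hashlib.md5(s.encode()).hexdigest())
    nodes = [publisher] + ordered
    children = {n: [] for n in nodes}
    for i, n in enumerate(nodes):
        for c in (2 * i + 1, 2 * i + 2):   # binary-heap indexing
            if c < len(nodes):
                children[n].append(nodes[c])
    return children

tree = build_tree("pub", ["s1", "s2", "s3", "s4", "s5"])
print(tree["pub"])  # the two subscribers the publisher sends to directly
```

Because each node forwards to at most two others, the publisher's upload burden is shared across subscribers instead of scaling with group size as in point-to-point delivery.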
Abstract: This paper focuses on improving the robustness and efficiency of a distributed system at the same time, using fault tolerance with active replication together with load balancing techniques. The pros and cons of both techniques are analyzed, and a novel load balancing framework for fault-tolerant systems with active replication is presented. Its hierarchical architecture is described in detail. The framework can dynamically adjust fault-tolerant groups and their memberships with respect to system load. Three potential task scheduler group selection methods are proposed and evaluated in simulation tests. Further analysis of the test data yields helpful observations for system design, including the effects of task arrival intensity and task set size, and the relationship between total task execution time and single-task execution time.
Funding: This research was supported by the Fundamental Research Funds for the Central Universities and by the National Natural Science Foundation of China.
Abstract: Load balancing plays a critical role in a cellular network. As one kind of cellular network, the Radio-over-Fibre (RoF) system can provide ubiquitous high data-rate transmission and has attracted much attention, but it also suffers from the load unbalancing problem. To improve system performance, this paper proposes a novel load balancing scheme for RoF systems based on differential game theory. The scheme formulates the load allocated to each Radio Access Point (RAP) as a Nash equilibrium, using a non-cooperative differential game to obtain the optimal load allocation for each RAP. The simulations performed show that the non-cooperative differential game algorithm is applicable and that the optimal load solution can be achieved.
Abstract: The rapid evolution and expanding scale of artificial intelligence (AI) technologies exert unprecedented energy demands on global electrical grids. Powering computationally intensive tasks such as large-scale AI model training and widespread real-time inference requires substantial electricity consumption, presenting a significant challenge to conventional power infrastructure. This paper examines the critical need for a fundamental shift towards smart energy grids in response to AI's growing energy footprint. It delves into the symbiotic relationship wherein AI acts as a significant energy consumer while offering the intelligence required for dynamic load management, efficient integration of renewable energy sources, and optimized grid operations. We posit that advanced smart grids are indispensable for facilitating AI's sustainable growth, underscoring this synergy as a pivotal advancement toward a resilient energy future.
Abstract: Underwater Wireless Sensor Networks (UWSNs) are gaining popularity because of their potential uses in oceanography, seismic activity monitoring, environmental preservation, and underwater mapping. Yet these networks face challenges such as self-interference, long propagation delays, limited bandwidth, and changing network topologies, which are addressed by designing advanced routing protocols. In this work, we present the Underwater Fuzzy Routing Protocol for Low-power and Lossy networks (UWF-RPL), an enhanced fuzzy-based protocol that improves decision-making during path selection and distributes traffic over different network nodes. Our method extends RPL with fuzzy logic to optimize depth, energy, the Received Signal Strength Indicator (RSSI) to Expected Transmission Count (ETX) ratio, and latency. The proposed protocol outperforms other techniques in that it offers higher energy efficiency, better packet delivery, low delay, and no queue overflow. It also exhibits better scalability and reliability in dynamic underwater networks, which is vital for keeping network operations efficient and the lifetime of UWSNs optimized. Compared to other recent methods, it improves network convergence time (10%–23%), energy efficiency (15%), packet delivery (17%), and delay (24%).
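A parent-selection step combining the four metrics named above can be sketched as a weighted score over normalized inputs. The weights and the simple weighted sum stand in for UWF-RPL's fuzzy inference, which is not reproduced here; all field names and values are illustrative.

```python
# Hypothetical sketch: scoring candidate parents on normalized depth,
# residual energy, RSSI/ETX ratio, and latency (all in [0, 1]).

def score(parent, w=(0.25, 0.3, 0.25, 0.2)):
    depth, energy, rssi_etx, latency = (
        parent["depth"], parent["energy"],
        parent["rssi_etx"], parent["latency"])
    # smaller depth and latency are better; larger energy and RSSI/ETX are better
    return (w[0] * (1 - depth) + w[1] * energy
            + w[2] * rssi_etx + w[3] * (1 - latency))

def best_parent(candidates):
    return max(candidates, key=score)

a = {"id": "A", "depth": 0.2, "energy": 0.9, "rssi_etx": 0.8, "latency": 0.1}
b = {"id": "B", "depth": 0.6, "energy": 0.4, "rssi_etx": 0.5, "latency": 0.5}
print(best_parent([a, b])["id"])  # A wins on every normalized term
```

In an actual fuzzy controller, each metric would pass through membership functions and a rule base before defuzzification; the weighted sum above only conveys the multi-metric trade-off.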
Funding: Supported in part by the Istanbul Technical University Scientific Research Projects Coordination Unit under Grant FHD-2024-45764, and in part by the TUBITAK 1515 Frontier R&D Laboratories Support Program for Turkcell 6GEN LAB under Grant 5229902. Turkcell Technology R&D Center (Law No. 5746) has partially supported this study.
Abstract: The increase in user mobility and density in modern cellular networks increases the risk of overloading certain base stations in popular locations such as shopping malls or stadiums, which can result in connection loss for some users. To combat this, the traffic load of base stations should be kept as balanced as possible. In this paper, we propose an efficient load balancing-aware handover algorithm for highly dynamic beyond-5G heterogeneous networks that assigns mobile users to base stations with lighter loads when a handover is performed. The proposed algorithm is evaluated in a scenario with users having different levels of mobility, such as pedestrians and vehicles, and is shown to outperform the conventional handover mechanism as well as another algorithm from the literature. As a secondary benefit, the overall energy consumption in the network is reduced with the proposed algorithm.
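The core rule, preferring the lighter-loaded station among those with acceptable signal, can be sketched in a few lines. The RSRP threshold and the load metric are assumptions for illustration, not the paper's parameters.

```python
# Sketch of a load balancing-aware handover target choice: among base
# stations whose signal exceeds a handover threshold, pick the least loaded
# one rather than simply the strongest.

def pick_station(stations, rsrp_threshold=-110.0):
    """stations: list of (name, rsrp_dbm, load_fraction) tuples."""
    eligible = [s for s in stations if s[1] >= rsrp_threshold]
    if not eligible:
        return None  # no acceptable target; stay on the serving cell
    return min(eligible, key=lambda s: s[2])[0]

stations = [("macro", -80.0, 0.95), ("small-cell", -95.0, 0.30)]
print(pick_station(stations))  # "small-cell": weaker signal, far lighter load
```

A conventional strongest-signal handover would pick "macro" here and worsen its overload; the load-aware rule trades a few dB of signal for a much lighter queue.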
Abstract: Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. This study introduces a novel approach that decreases a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict incoming workloads and allocate them flexibly. The proposed methodology integrates workload prediction using long short-term memory (LSTM) networks with efficient load balancing techniques driven by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can allocate resources efficiently, prioritizing speed and energy preservation. The experimental results demonstrate that the DRL-based load balancing system significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance and consistently delivers reliable performance across a range of dynamic workloads.
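The dispatch step can be illustrated independently of the learning machinery: given a predicted incoming workload (which the paper obtains from an LSTM), assign it to the server minimizing a weighted cost of estimated response time and energy. The cost model, server parameters, and weights below are illustrative assumptions, not the paper's DRL policy.

```python
# Sketch: cost-based dispatch of a predicted workload, trading off
# estimated response time against energy to serve it.

def dispatch(predicted_load, servers, w_delay=0.6, w_energy=0.4):
    """servers: dict name -> (queue_len, service_rate, power_watts)."""
    def cost(name):
        queue, rate, power = servers[name]
        response = (queue + predicted_load) / rate   # rough queueing estimate
        energy = power * predicted_load / rate       # energy to serve the load
        return w_delay * response + w_energy * energy
    return min(servers, key=cost)

servers = {"s1": (8, 2.0, 90.0), "s2": (1, 1.5, 60.0)}
print(dispatch(3.0, servers))  # "s2": shorter queue and lower power dominate
```

A DRL agent effectively learns such a cost trade-off from reward signals instead of having it hand-specified, and adapts the weights as workload patterns shift.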
Funding: Supported in part by the Key Research and Development Program of Shaanxi under Grant 2023-ZDLGY-34.
Abstract: Spark performs excellently in large-scale data-parallel computing and iterative processing. However, with increasing data sizes and program complexity, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization. Scheduling strategy optimization, a key direction for improving Spark's execution efficiency, has therefore attracted widespread attention. This paper first introduces the basic theory of Spark, compares several default scheduling strategies, and discusses common scheduling performance evaluation indicators and the factors affecting scheduling efficiency. Existing scheduling optimization schemes are then summarized under three scheduling modes: load characteristics, cluster characteristics, and the matching of both; representative algorithms are analyzed in terms of performance indicators and applicable scenarios, comparing the advantages and disadvantages of the different scheduling modes. The article also explores in detail the integration of Spark scheduling strategies with specific application scenarios and the challenges faced in production environments. Finally, the limitations of the existing schemes are analyzed and future prospects are envisioned.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-02090.
Abstract: The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities, handling time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors that facilitate crucial healthcare services. Recently, many interactive applications have been proposed, including intelligent systems that handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overhead to maintain device connectivity. Data integrity and trust are further vital considerations for next-generation networks. This research proposes a load-balanced, trusted surveillance routing model with collaborative decisions at the network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of the decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed exceptional improvements in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
Funding: Supported by the National Natural Science Foundation of China (No. 6217011238 and No. 61931011).
Abstract: As an important part of satellite communication networks, the LEO satellite constellation network is a hot research direction. Since the non-uniform distribution of terrestrial services may cause inter-satellite link congestion, improving network load balancing performance has become one of the key issues that routing algorithms in LEO networks need to solve. Therefore, by expanding the range of available paths and combining a congestion avoidance mechanism, a load balancing routing algorithm based on extended link states in LEO constellation networks is proposed. Simulation results show that the algorithm achieves a balanced distribution of traffic load, reduces link congestion and packet loss rate, and improves the throughput of the LEO satellite network.
Funding: Supported by the National Science Foundation of China (No. 61472189), the Zhejiang Provincial Natural Science Foundation of China (No. LY18F030015), and the Wenzhou Public Welfare Science and Technology Project of China (No. G20150015).
Abstract: To deal with the dynamic and imbalanced traffic requirements in Low Earth Orbit satellite networks, several distributed load balancing routing schemes have been proposed. However, because they lack a global view, these schemes may lead to cascading congestion in regions with a high volume of traffic. To solve this problem, a Hybrid-Traffic-Detour based Load Balancing Routing (HLBR) scheme is proposed, in which a Long-Distance Traffic Detour (LTD) method is devised and coordinates with a distributed traffic detour method to perform self-adaptive load balancing. The forwarding path of the LTD is acquired by Circuitous Multipath Calculation (CMC) based on prior geographical information and activated by the LTD Shift-Trigger (LST) through real-time congestion perception. Simulation results show that HLBR can mitigate cascading congestion and achieve efficient traffic distribution.