Funding: supported by the National High-Tech R&D Program (863 Program) under grant No. 2015AA01A705, the Beijing Municipal Science and Technology Commission research fund project under grant No. D151100000115002, the China Scholarship Council under grant No. 201406470038, and the BUPT youth scientific research innovation program under grant No. 500401238.
Abstract: Although small cell offloading can alleviate congestion in the macrocell, aggressively offloading data traffic from the macrocell to small cells can also degrade small cell performance due to heavy load. Because of collisions and backoff, the degradation is especially significant in networks with contention-based channel access, and it ultimately decreases the throughput of the whole network. To find the optimal fraction of traffic to offload in a heterogeneous network, we combine a Markov chain model with the Poisson point process model to analyze contention-based throughput in irregularly deployed networks. We then derive a closed-form solution for the throughput and find that it is a function of the transmit power and the density of base stations. Based on this, we propose load-aware offloading strategies via power control and base station density adjustment. The numerical results verify our analysis and show a significant performance gain compared with non-load-aware offloading.
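For intuition only, the sketch below solves the classic Bianchi-style Markov-chain fixed point for contention-based channel access and computes the resulting saturation throughput. It is a generic illustration, not the paper's combined Markov chain / Poisson point process analysis; the contention window W, number of backoff stages m, payload size, and slot/success/collision durations are assumed values.

```python
# Illustrative Bianchi-style saturation throughput for contention-based access.
# Generic sketch only; not the paper's Markov chain + PPP derivation.

def transmission_probability(n, W=32, m=5, iters=500):
    """Solve the fixed point tau = f(p), with p = 1 - (1 - tau)^(n-1)."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # conditional collision probability
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new           # damped update for stable convergence
    return tau

def saturation_throughput(n, payload_bits=8184, slot=9e-6, t_s=1e-3, t_c=1e-3, W=32, m=5):
    """Throughput (payload bits/s) for n contending stations; timings are assumed."""
    tau = transmission_probability(n, W, m)
    p_tr = 1.0 - (1.0 - tau) ** n                 # prob. at least one station transmits
    p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr # prob. a transmission succeeds
    e_slot = (1.0 - p_tr) * slot + p_tr * p_s * t_s + p_tr * (1.0 - p_s) * t_c
    return p_s * p_tr * payload_bits / e_slot

if __name__ == "__main__":
    for n in (5, 10, 20, 40):
        print(n, round(saturation_throughput(n)))  # throughput drops as contention grows
```

As the printout suggests, throughput falls as the number of contending stations grows, which is the load effect the abstract's offloading strategy is meant to balance.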
Abstract: As the penetration rate of renewable energy sources (RES) gradually increases, demand-side resources (DSR) should be fully utilized to provide flexibility and respond rapidly to real-time power supply-demand imbalance. However, scheduling a large number of DSR clusters inevitably introduces transmission and computation delays, which in turn lower the response speed. This paper examines flexibility scheduling of DSR clusters within a smart distribution network (SDN), taking both kinds of delay into account. Building upon an SDN model, the maximum schedulable flexibility of DSR clusters is first quantified. Then, a flexibility response curve is analyzed to reflect the effect of delay on flexibility scheduling. Aiming to reduce the flexibility shortage caused by delay, we propose a modified flexibility scheduling strategy based on cloud-edge collaboration. Compared with the traditional strategy, centralized optimization is replaced by distributed optimization to account for both economic efficiency and the effect of delay. In addition, an offloading strategy is formulated to select the optimal edge nodes and corresponding wired paths for edge computations. In a case study, we evaluate the scheduled flexibility, operational cost, average delay, and the chosen edge nodes under the traditional strategy and the proposed strategy. The evaluation results show that the proposed strategy can significantly reduce the effect of delay on flexibility scheduling and, to some extent, guarantee the optimality of the operational cost.
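As an illustration of the offloading idea (selecting an edge node and a wired path for an edge computation), the sketch below runs Dijkstra over an assumed wired topology and picks the edge node that minimizes transmission-plus-computation delay. The node names, link delays, and computation delays are hypothetical and not taken from the paper's cloud-edge collaboration model.

```python
# Illustrative edge-node and wired-path selection by total delay.
# Generic sketch; topology, link delays and compute delays are assumed.
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: link_delay_s}}; returns shortest delays from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def choose_edge_node(graph, source, edge_nodes, compute_delay):
    """Pick the edge node minimizing wired-path delay plus computation delay."""
    dist = dijkstra(graph, source)
    best = min(edge_nodes,
               key=lambda n: dist.get(n, float("inf")) + compute_delay[n])
    return best, dist.get(best, float("inf")) + compute_delay[best]

if __name__ == "__main__":
    # Hypothetical wired topology with link delays in seconds.
    graph = {
        "cluster1": {"e1": 0.004, "e2": 0.010},
        "e1": {"cluster1": 0.004, "e2": 0.003},
        "e2": {"e1": 0.003, "cluster1": 0.010},
    }
    compute = {"e1": 0.020, "e2": 0.008}  # assumed per-node computation delays
    print(choose_edge_node(graph, "cluster1", ["e1", "e2"], compute))
```

In this toy example the lightly loaded node "e2" wins even though its direct link is slower, which mirrors the abstract's point that both transmission and computation delay must be weighed together.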
Funding: This work was supported by the Research Fund for the Doctoral Program of Higher Education of China (No. 20110031110026 and No. 20120031110035), the National Natural Science Foundation of China (No. 61103214), and the Key Project in the Tianjin Science & Technology Pillar Program (No. 13ZCZDGX01098).
Abstract: With networks developing and virtualization rising, more and more indoor environments (POIs) such as cafes, libraries, offices, and even buses and subways can provide plenty of bandwidth and computing resources. Meanwhile, many people who spend much of their day in these places still suffer from the limited resources of their mobile devices. This situation suggests a novel local cloud computing paradigm in which a mobile device can leverage nearby resources to facilitate task execution. In this paper, we implement a mobile local computing system based on an indoor virtual cloud. The system contains three key components: 1) on the application side, we create a parser that generates a "method call and cost tree" and analyzes it to identify resource-intensive methods; 2) on the mobile device, we design a self-learning execution controller that makes offloading decisions at runtime; 3) on the cloud side, we construct a social-scheduling-based, application-isolation virtual cloud model. The evaluation results demonstrate that our system is effective and efficient when evaluated with a CPU-intensive calculation application, a memory-intensive image translation application, and an I/O-intensive image downloading application.
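A minimal sketch of a runtime offloading decision, assuming a simple cost model: offload a method when the estimated remote cost (data transfer plus remote execution) is below the measured local execution time. This is not the paper's self-learning execution controller; the profiling fields, speedup factor, bandwidth, and RTT are assumed for demonstration.

```python
# Illustrative runtime offloading decision: offload a method when the
# estimated remote cost (transfer + remote execution) beats local execution.
# Generic sketch; the profile values below are assumed, not from the paper.
from dataclasses import dataclass

@dataclass
class MethodProfile:
    local_time_s: float   # measured local execution time
    input_bytes: int      # data shipped to the nearby cloud
    output_bytes: int     # data shipped back
    speedup: float        # assumed cloud-vs-device speedup factor

def should_offload(p: MethodProfile, bandwidth_bps: float, rtt_s: float) -> bool:
    transfer = (p.input_bytes + p.output_bytes) * 8.0 / bandwidth_bps + rtt_s
    remote = transfer + p.local_time_s / p.speedup
    return remote < p.local_time_s

if __name__ == "__main__":
    heavy = MethodProfile(local_time_s=2.0, input_bytes=200_000,
                          output_bytes=50_000, speedup=8.0)
    print(should_offload(heavy, bandwidth_bps=20e6, rtt_s=0.03))  # True in this setting
```

A learning-based controller like the one described in the abstract would refine such estimates from past executions rather than rely on fixed profile values.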
Funding: supported by the Shaanxi Key R&D Program Project (2021GY-100).
Abstract: With the rapid development of the Industrial Internet of Things (IIoT), the traditional centralized cloud processing model faces the challenges of high communication latency and high energy consumption when handling industrial big data tasks. This paper proposes a low-latency and low-energy path computing scheme to address these problems. The scheme is based on a cloud-fog network architecture: the computing resources of fog devices in the fog computing layer are used to complete task processing step by step as data travels from industrial field devices to the cloud center. A collaborative scheduling strategy based on the particle diversity discrete binary particle swarm optimization (PDBPSO) algorithm is proposed to deploy manufacturing tasks to the fog computing layer appropriately. A task in the form of a directed acyclic graph (DAG) is mapped onto the factory fog network in the form of an undirected graph (UG) to find an appropriate computing path, significantly reducing task processing latency under energy consumption constraints. Simulation experiments show that the scheme's latency performance outperforms both the strategy of offloading tasks entirely to the cloud and the strategy of offloading them entirely to edge equipment.
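The sketch below is a generic discrete binary PSO for a simplified per-task fog-versus-cloud offloading vector under a soft energy constraint. It illustrates the family of algorithms the abstract refers to, not the paper's particle-diversity PDBPSO or its DAG-to-undirected-graph path mapping; all delays, energies, and the energy budget are assumed placeholders.

```python
# Generic discrete binary PSO sketch for an offloading decision vector:
# bit i = 1 -> process task i in the fog layer, 0 -> send it to the cloud.
# Illustration only; fitness model, delays and energy budget are assumed,
# and this is not the paper's PDBPSO path-computing formulation.
import math
import random

random.seed(0)

N_TASKS = 10
FOG_DELAY = [random.uniform(0.5, 1.5) for _ in range(N_TASKS)]    # assumed fog latency per task
CLOUD_DELAY = [random.uniform(1.5, 3.0) for _ in range(N_TASKS)]  # assumed cloud latency (incl. WAN)
FOG_ENERGY = [random.uniform(0.5, 1.0) for _ in range(N_TASKS)]   # assumed fog energy per task
ENERGY_BUDGET = 4.0                                               # assumed fog energy constraint

def fitness(bits):
    latency = sum(FOG_DELAY[i] if b else CLOUD_DELAY[i] for i, b in enumerate(bits))
    energy = sum(FOG_ENERGY[i] for i, b in enumerate(bits) if b)
    penalty = 100.0 * max(0.0, energy - ENERGY_BUDGET)            # soft energy constraint
    return latency + penalty

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(n_particles)]
    vel = [[0.0] * N_TASKS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_TASKS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

if __name__ == "__main__":
    best, best_fit = binary_pso()
    print("assignment:", best, "objective:", round(best_fit, 3))
```

A path-computing variant like the one in the abstract would additionally encode which fog node and which route in the undirected factory graph serve each DAG subtask, rather than a single fog/cloud bit per task.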