Funding: National Natural Science Foundation of China (No. 61902060); Shanghai Sailing Program, China (No. 19YF1402100); Fundamental Research Funds for the Central Universities, China (No. 2232019D3-51); Open Foundation of State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications, China) (No. SKLNST-2021-1-06).
Abstract: Mobile edge computing (MEC) plays a vital role in various delay-sensitive applications. With the growing adoption of low-computing-capability Internet of Things (IoT) devices in Industry 4.0, MEC also facilitates wireless power transfer, improving the efficiency and sustainability of these devices. Most related studies on the computation rate in MEC rely on the coordinate descent method, the alternating direction method of multipliers (ADMM) or Lyapunov optimization; however, they do not consider the buffer queue size. This work addresses computation rate maximization for wireless-powered, multi-user MEC systems, focusing on the computation rate of end devices and the management of the task buffer queue before computation at the terminal devices. A deep reinforcement learning (RL)-based task offloading algorithm is proposed to maximize the computation rate of end devices while minimizing the buffer queue size at the terminal devices. The task offloading problem is formulated in terms of the channel gain, the buffer queue size and wireless power transfer, and the offloading mode for each device is selected in each time slot according to its individual channel gain, buffer queue size and harvested wireless power. The central idea is to find the optimal mode selection for IoT devices connected to the MEC system; the proposed algorithm reduces computation delay by maximizing the computation rate of end devices and minimizing the buffer queue size before computation. A deep RL-based algorithm is then presented to solve the resulting mixed-integer, non-convex optimization problem and to achieve a better trade-off between buffer queue size and computation rate. Extensive simulation results show that the proposed algorithm is considerably more efficient than existing approaches at maintaining a small buffer queue at the terminal devices while simultaneously achieving a high computation rate.
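As a rough illustration of the per-slot decision this abstract describes, the sketch below trains a linear Q-learning agent (a simplified stand-in for the paper's deep RL network) to choose between local computation and offloading from the channel gain, the task-buffer length and the harvested energy. The reward shaping, queue dynamics and every constant are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Per-slot binary mode selection: action 0 = compute locally,
# action 1 = offload to the MEC server. Linear Q-learning stands in
# for the paper's deep network; all constants are illustrative.

rng = np.random.default_rng(0)
N_SLOTS, EPS, ALPHA, GAMMA, Q_MAX = 5000, 0.1, 0.01, 0.9, 50.0
W = np.zeros((2, 3))                   # Q-weights: action x feature

def service(action, gain, energy):
    """Tasks served this slot: offload rate grows with channel and energy."""
    return np.log2(1.0 + gain * energy) if action == 1 else 0.1 * energy

queue = 0.0
for _ in range(N_SLOTS):
    gain = rng.exponential(1.0)        # per-slot channel power gain
    energy = rng.uniform(0.5, 1.5)     # energy harvested via wireless power
    s = np.array([gain, queue, energy])
    a = int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(W @ s))
    served = service(a, gain, energy)
    r = served - 0.05 * queue          # computation rate minus queue penalty
    queue = min(Q_MAX, max(0.0, queue + rng.poisson(1.0) - served))
    s_next = np.array([rng.exponential(1.0), queue, rng.uniform(0.5, 1.5)])
    W[a] += ALPHA * (r + GAMMA * np.max(W @ s_next) - (W @ s)[a]) * s
```

The penalty term in the reward is what encodes the rate-versus-queue trade-off: raising its weight keeps the buffer shorter at the cost of occasionally choosing a lower-rate mode.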
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 71801066 and 71431003) and the Fundamental Research Funds for the Central Universities of China (Grant Nos. PA2019GDQT0020 and JZ2017HGTB0186).
Abstract: We investigate the similarities and differences among three queue rules, the first-in-first-out (FIFO) rule, the last-in-first-out (LIFO) rule and the random-in-random-out (RIRO) rule, on dynamical networks with limited buffer size. In our network model, nodes move at each time step, and packets are transmitted by an adaptive routing strategy that combines Euclidean distance and node load through a tunable parameter. Because of this routing strategy, at the initial stage of increasing buffer size, the network density increases and the packet loss rate decreases. Packet loss and traffic congestion occur under all three rules, but under the RIRO rule nodes remain unblocked and lose no packets over a larger range of buffer sizes. When packets are lost and traffic congestion occurs, the three queue rules exhibit different dynamic characteristics. Moreover, a phenomenon similar to Braess' paradox is also observed under the LIFO and RIRO rules.
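The differences among the rules are easiest to see in a toy single queue: the service order barely affects occupancy, but the waiting-time statistics diverge sharply (FIFO keeps the maximum wait bounded, while LIFO can starve old packets). Below is a minimal sketch with Bernoulli arrivals and services as an invented stand-in for the paper's mobile-network model; only the queue rules themselves are kept.

```python
import random
from collections import deque

# Toy single-queue comparison of the three service disciplines on a
# buffer of limited capacity; arrivals to a full buffer are dropped.

def serve(buffer, rule):
    """Remove one packet according to the queue rule; return its arrival time."""
    if rule == "FIFO":
        return buffer.popleft()            # oldest packet first
    if rule == "LIFO":
        return buffer.pop()                # newest packet first
    buffer.rotate(-random.randrange(len(buffer)))   # RIRO: random packet
    return buffer.popleft()

def wait_stats(rule, capacity=10, steps=100_000, p_arrive=0.6, p_serve=0.5):
    buffer, waits = deque(), []
    for t in range(steps):
        if random.random() < p_arrive and len(buffer) < capacity:
            buffer.append(t)               # store arrival time; drop if full
        if buffer and random.random() < p_serve:
            waits.append(t - serve(buffer, rule))
    return sum(waits) / len(waits), max(waits)

random.seed(1)
for rule in ("FIFO", "LIFO", "RIRO"):
    mean_w, max_w = wait_stats(rule)
    print(f"{rule}: mean wait {mean_w:.1f}, max wait {max_w}")
```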
Funding: Supported by the National High Technology Research and Development Program of China (2004AA639690), the Ph.D. Programs Foundation of the Ministry of Education of China (20040486049), and the Wuhan Chenguang Project (20055003059-27).
Abstract: The impact of aggregated network traffic on a queueing system is studied in this paper. It is shown that traffic held in the buffer affects queueing performance differently depending on the scale at which it is aggregated, and that this influence depends not only on traffic parameters but also on system parameters such as buffer size. Increasing the buffer size absorbs the effect of short-range dependence (SRD) in the traffic, so that only the effect of long-range dependence (LRD) is expressed. As the buffer size grows, the queue length asymptotically follows a Weibull distribution, independent of the short-range dependence characteristics. Monte Carlo simulations confirm the validity of these results.
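For reference, the standard form of this Weibull-tail asymptotic, for a queue fed by long-range-dependent traffic of the fractional-Brownian type, is

\[
\ln P(Q > b) \;\sim\; -\gamma\, b^{\,2-2H}, \qquad b \to \infty,
\]

where \(H \in (\tfrac{1}{2}, 1)\) is the Hurst parameter of the input and \(\gamma > 0\) depends on the mean input rate, the service capacity and the traffic variance. Since \(2-2H < 1\), this tail decays more slowly than the exponential tail produced by short-range-dependent input, which is why only the LRD component survives once the buffer is large. (The form above is the classical fractional-Brownian result; the paper's exact constants may differ.)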
Abstract: An optimal design problem of local buffer allocation in a flexible manufacturing system (FMS) is discussed, with the objective of maximizing the reward earned from jobs processed at all workstations. Structural properties of the optimal design problem are analyzed for a model with two job routing policies, and based on these properties, approaches to optimal solutions are given.
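As a concrete illustration of the design problem, the brute-force sketch below allocates a fixed total of local buffer slots across workstations to maximize total reward. The diminishing-returns reward model and all constants are invented for illustration; the paper instead derives structural properties that avoid such exhaustive enumeration.

```python
from itertools import product

# Exhaustively search allocations of `total_slots` buffers across
# workstations to maximize a hypothetical total reward.

def station_reward(slots, rate):
    """Diminishing-returns reward for giving `slots` buffers to a station."""
    return rate * (1 - 0.5 ** slots)

def best_allocation(rates, total_slots):
    best, best_val = None, float("-inf")
    for alloc in product(range(total_slots + 1), repeat=len(rates)):
        if sum(alloc) != total_slots:
            continue                      # only allocations using every slot
        val = sum(station_reward(s, r) for s, r in zip(alloc, rates))
        if val > best_val:
            best, best_val = alloc, val
    return best, best_val

print(best_allocation(rates=[3.0, 2.0, 1.0], total_slots=6))
```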
Abstract: In this paper, classical control theory and Smith's principle are applied to design a class of effective and simple congestion control schemes for high-speed computer communication networks. Mathematical analyses and simulations verify the efficiency of the congestion control schemes. The proposed congestion control laws guarantee full utilization of network links and stability of network queues, so that the network suffers no data loss under general network topologies and traffic scenarios. The approach has an advantage over the usual Smith's-principle-based congestion control scheme: it can be applied to networks with smaller bottleneck capacity. Theoretical analyses and simulation results show good network performance when the congestion controllers, designed on the basis of the improved Smith principle, are implemented.
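A discrete-time sketch of a Smith-predictor rate controller for a single bottleneck illustrates the principle: by regulating the measured queue plus the data still in flight, the round-trip delay is removed from the control loop. The gains, delay and capacity below are illustrative assumptions, not the paper's design.

```python
from collections import deque

# Smith-predictor rate control of a single bottleneck link.

DELAY = 8          # round-trip delay in time steps
CAPACITY = 10.0    # bottleneck service rate per step
K = 0.5            # controller gain (0 < K <= 1 keeps the loop stable)
W = 110.0          # reference: bandwidth-delay product plus target queue

queue = 0.0
in_flight = deque([0.0] * DELAY)   # data sent but not yet arrived
for t in range(80):
    # Smith predictor: act on the *predicted* backlog (queue plus data
    # in flight), so the delay drops out of the feedback loop.
    predicted = queue + sum(in_flight)
    rate = max(0.0, K * (W - predicted))
    in_flight.append(rate)
    arriving = in_flight.popleft()             # sent DELAY steps ago
    queue = max(0.0, queue + arriving - CAPACITY)
    if t % 10 == 0:
        print(f"t={t:2d}  rate={rate:6.2f}  queue={queue:6.2f}")
```

In this toy loop the sending rate settles at the link capacity while the queue settles near W − DELAY·CAPACITY − CAPACITY/K (10 packets here), so the link stays fully utilized with a bounded queue, mirroring the full-utilization, no-loss property claimed above.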