In opportunistic networks, most existing buffer management policies, including scheduling and passive dropping policies, are designed mainly for routing protocols. In this paper, we propose a Utility-based Buffer Management strategy (UBM) for data dissemination in opportunistic networks. In UBM, we first design a method of computing the utility values of cached messages according to the interest of nodes and the delivery probability of messages, and then propose an overall buffer management policy based on this utility. UBM is driven by receivers and implements not only caching policies and passive and proactive dropping policies, but also the scheduling policies of senders. Simulation results show that, compared with some classical dropping strategies, UBM obtains a higher delivery ratio and lower delivery latency at a smaller network cost.
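A minimal Python sketch of the kind of utility-driven dropping such a policy implies: each cached message gets a utility combining the receiver's interest with its delivery probability, and the lowest-utility messages are dropped first when an incoming message does not fit. The weighted-sum utility, the field names, and the admission logic are illustrative assumptions, not the exact UBM definitions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    msg_id: str
    size: int                # bytes
    interest: float          # receiving node's interest in the content, in [0, 1]
    delivery_prob: float     # estimated delivery probability, in [0, 1]

    def utility(self, alpha: float = 0.5) -> float:
        # Weighted combination of interest and delivery probability (assumed form).
        return alpha * self.interest + (1.0 - alpha) * self.delivery_prob

@dataclass
class Buffer:
    capacity: int
    messages: List[Message] = field(default_factory=list)

    def used(self) -> int:
        return sum(m.size for m in self.messages)

    def admit(self, incoming: Message) -> None:
        # Drop the lowest-utility messages until the incoming message fits.
        self.messages.sort(key=lambda m: m.utility())
        while self.messages and self.used() + incoming.size > self.capacity:
            self.messages.pop(0)
        if self.used() + incoming.size <= self.capacity:
            self.messages.append(incoming)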
Active queue management (AQM) is essentially a router buffer management strategy supporting TCP congestion control. Since existing AQM schemes exhibit poor performance and even instability in networks with uncertain time delays, a robust buffer management (RBM) mechanism is proposed to guarantee quality of service (QoS). RBM consists of a Smith predictor and two independent controllers. The Smith predictor compensates for the round-trip time (RTT) delay and restrains its negative influence on network performance. The main feedback controller and the disturbance rejection controller are designed as a proportional-integral (PI) controller and a proportional (P) controller using internal model control (IMC) and frequency-domain analysis, respectively. Simulation experiments in Network Simulator 2 (NS2) demonstrate that RBM can effectively keep the buffer occupation around the target value in the presence of time delay and system disturbance. Compared with the delay compensation AQM algorithm (DC-AQM), the proportional-integral-derivative (PID) algorithm, and the random exponential marking (REM) algorithm, the RBM scheme is superior in terms of stability, responsiveness, and robustness.
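As a rough illustration of the control structure described above, the following Python sketch shows a discrete-time PI update of the drop probability around a target queue length, with a Smith-predictor-style term that adds the difference between the model's undelayed and RTT-delayed predictions to the measured queue. The toy internal model, the gains, and all names are assumptions; the paper's controllers are derived from IMC and frequency-domain analysis.

from collections import deque

class PIAQM:
    def __init__(self, target_queue, kp, ki, rtt_steps, sample_time=0.01):
        self.target = target_queue          # desired buffer occupation (packets)
        self.kp, self.ki = kp, ki           # proportional / integral gains
        self.integral = 0.0
        self.dt = sample_time
        # Model predictions buffered over one RTT for delay compensation.
        self.pred_history = deque([0.0] * rtt_steps, maxlen=rtt_steps)

    def predict_queue(self, drop_prob):
        # Toy internal model: a higher drop probability lowers the predicted queue.
        return max(0.0, self.target * (1.0 - drop_prob))

    def update(self, measured_queue, current_drop_prob):
        predicted_now = self.predict_queue(current_drop_prob)
        predicted_delayed = self.pred_history[0]
        self.pred_history.append(predicted_now)
        # Smith-predictor feedback: measured output plus the model's
        # undelayed-minus-delayed prediction compensates the RTT delay.
        feedback = measured_queue + (predicted_now - predicted_delayed)
        error = feedback - self.target
        self.integral += error * self.dt
        new_prob = self.kp * error + self.ki * self.integral
        return min(max(new_prob, 0.0), 1.0)   # keep the probability valid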
Delay Tolerant Networks (DTNs) suffer from long message delays because of the lack of end-to-end connectivity between nodes, especially when the nodes are mobile. Nodes in DTNs have limited buffer storage for holding delayed messages, and this sharing of data creates a buffer shortage problem: the buffer becomes congested and no space remains for incoming messages. To address this problem, a buffer management policy named A Novel and Proficient Buffer Management Technique (NPBMT) for Internet of Vehicle-based DTNs is proposed. NPBMT selects appropriately sized messages with the lowest Time-to-Live (TTL) and drops that combination of messages to accommodate newly arrived ones. To evaluate the proposed technique, it is compared with Drop Oldest (DOL), Size Aware Drop (SAD), and Drop Largest (DLA). The technique is implemented in the Opportunistic Network Environment (ONE) simulator, using the shortest-path map-based movement model for node mobility and the epidemic routing protocol. The simulation results show a significant improvement in delivery probability: the proposed policy delivered 380 messages, whereas DOL delivered 186, SAD delivered 190, and DLA delivered only 95. A significant decrease is also observed in the overhead ratio: SAD is 324.37, DLA is 266.74, DOL is 141.89, and NPBMT is 52.85, a substantial reduction compared with the existing policies. The average network latency of DOL is 7785.5, DLA is 5898.42, and SAD is 5789.43, whereas NPBMT achieves 3909.4, showing that the proposed policy keeps messages in the network for a shorter time and thereby reduces the overhead ratio.
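The core drop step, selecting a combination of the lowest-TTL messages whose total size frees enough room for the newcomer, can be sketched as a greedy pass over the buffer. The tuple layout and the rule of dropping nothing when no combination fits are assumptions for illustration, not the exact NPBMT rules.

def select_victims(buffered, incoming_size, free_space):
    # buffered: list of (msg_id, size, ttl_remaining) tuples.
    needed = incoming_size - free_space
    if needed <= 0:
        return []                                   # the new message already fits
    victims, freed = [], 0
    for msg_id, size, ttl in sorted(buffered, key=lambda m: m[2]):  # lowest TTL first
        victims.append(msg_id)
        freed += size
        if freed >= needed:
            return victims
    return []                                       # cannot free enough: drop nothing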
To improve Transmission Control Protocol (TCP) performance in mobile environments, smooth handover with buffer management has been proposed to realize seamless handovers. However, our simulations show that even when smooth handover is implemented in Mobile IPv6 (MIPv6), TCP cannot always achieve better performance because of bursts of forwarded packets. Based on a study of buffer management for smooth handover, this paper proposes an enhanced buffer management scheme for smooth handover to improve TCP performance. In this scheme, a packet-pair probing technique is adopted to estimate the available bandwidth of the new path from the previous router (Prtr) to the Mobile Node (MN), which the Prtr then uses to control the forwarding of buffered packets. The simulation results demonstrate that smooth handover with this scheme achieves better TCP performance than the original scheme.
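The packet-pair idea is that the dispersion between two back-to-back probe packets at the receiver bounds the available bandwidth of the new path, and the previous router can then pace the buffered packets at or below that rate instead of forwarding them in a burst. A minimal Python sketch, with an assumed safety factor, follows.

def estimate_bandwidth(probe_size_bytes, t_first, t_second):
    dispersion = t_second - t_first                 # gap between the probe pair (s)
    if dispersion <= 0:
        return float("inf")
    return probe_size_bytes * 8 / dispersion        # bits per second

def forwarding_interval(packet_size_bytes, est_bandwidth_bps, safety=0.9):
    # Space out buffered packets so the forwarding rate stays below the estimate.
    return packet_size_bytes * 8 / (est_bandwidth_bps * safety)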
Flash solid-state drives (SSDs) provide much faster access to data than traditional hard disk drives (HDDs). The current price and performance of SSDs suggest they can be adopted as a data buffer between main memory and the HDD, and buffer management policies for such hybrid systems have recently attracted growing interest from the research community. In this paper, we propose a novel approach to managing the buffer in flash-based hybrid storage systems, named hotness aware hit (HAT). HAT exploits a page reference queue to record the access history as well as the status of accessed pages, i.e., hot, warm, and cold. The page reference queue is further split into hot and warm regions, which correspond in general to memory and flash. HAT updates the page status and handles page migration in the memory hierarchy according to the current page status and the hit position in the page reference queue. Compared with existing hybrid storage approaches, HAT manages the memory and flash cache layers more effectively. Our empirical evaluation on benchmark traces demonstrates the superiority of the proposed strategy over state-of-the-art competitors.
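A small Python sketch of the status-update idea: the reference queue keeps pages in recency order, the front portion is the hot region and the next portion the warm region, and the region in which a hit lands decides the page's new status (memory, flash, or neither). Region sizes, the promotion rule, and the absence of eviction handling are simplifications, not the exact HAT algorithm.

from collections import OrderedDict

class HotnessQueue:
    """Reference queue split into a hot region (front) and a warm region (middle)."""
    def __init__(self, hot_len, warm_len):
        self.hot_len, self.warm_len = hot_len, warm_len
        self.queue = OrderedDict()     # page_id -> status; front = most recent

    def access(self, page_id):
        if page_id in self.queue:
            position = list(self.queue).index(page_id)
            if position < self.hot_len:
                status = "hot"         # keep the page in memory
            elif position < self.hot_len + self.warm_len:
                status = "warm"        # candidate for the flash cache
            else:
                status = "cold"        # fell out of both regions
            del self.queue[page_id]
        else:
            status = "cold"            # first reference to this page
        self.queue[page_id] = status
        self.queue.move_to_end(page_id, last=False)   # re-insert at the front
        return status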
Hypertext transfer protocol (HTTP) adaptive streaming (HAS) plays a key role in mobile video transmission. Considering the multi-segment and multi-rate features of HAS, this paper proposes a buffer-driven resource management (BDRM) method to enhance HAS quality of experience (QoE) in mobile networks. Unlike traditional methods that focus only on the base station side and ignore the buffer, the proposed method takes both the base station and client sides into account, with the end user's buffer driving the whole scheduling process. The HAS QoE influencing factors considered are initial delay, rebuffering, and quality level. BDRM decomposes the HAS QoE maximization problem between the client side and the base station side so that it can be solved in multi-cell, multi-user video playback scenarios in mobile networks. On the client side, each user makes decisions independently based on a buffer probe and a rate request algorithm, which reduces rebuffering events and decides which HAS segment rate to fetch. On the base station side, wireless resources are scheduled to maximize the quality level of all connected clients and to decide the final rate pulled from the HAS server. The buffer-driven design and the two-stage rate request scheme let BDRM take full advantage of HAS's multi-segment and multi-rate features. In the simulations, compared with proportional fair (PF), Max C/I, and traditional HAS scheduling (THS), BDRM decreases the rebuffering percentage to 1.96% from 11.1% with PF and 7.01% with THS, and increases the mean MOS of all users to 3.94 from 3.42 with PF and 2.15 with Max C/I. It also achieves a high fairness of 0.98 in terms of both objective and subjective assessment metrics.
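On the client side, a buffer-driven rate request can be sketched as a mapping from the current playout buffer level to one of the available HAS representations: fall back to the lowest rate near the rebuffering threshold, request the highest rate when the buffer is comfortable, and interpolate in between. The thresholds and the linear interpolation in this Python sketch are illustrative assumptions, not the paper's algorithm.

def request_rate(buffer_seconds, available_rates_kbps,
                 low_threshold=5.0, high_threshold=15.0):
    rates = sorted(available_rates_kbps)
    if buffer_seconds < low_threshold:
        return rates[0]                    # close to rebuffering: lowest rate
    if buffer_seconds >= high_threshold:
        return rates[-1]                   # comfortable buffer: highest rate
    # In between, scale the chosen representation with the buffer level.
    span = high_threshold - low_threshold
    index = int((buffer_seconds - low_threshold) / span * (len(rates) - 1))
    return rates[index]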
ECTD (erroneous cell tail drop) is a buffer management optimization strategy that improves the utilization of buffer resources in satellite ATM (asynchronous transfer mode) networks. In this strategy, erroneous cells caused by the satellite channel, together with the following cells that belong to the same PDU (protocol data unit), are discarded; it targets non-real-time data services that rely on a higher-layer protocol for retransmission. Based on the EPD (early packet drop) policy, mathematical models are established with and without ECTD. The numerical results show that ECTD optimizes buffer management and improves effective throughput (goodput), and that the goodput gain grows with the CER (cell error ratio) and the PDU length: the higher their values, the greater the improvement. For example, when the average PDU length is 30 and 90, the goodput improvements are about 4% and 10%, respectively.
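The tail-drop rule itself is simple to express: once a cell of a PDU arrives in error, that cell and every later cell of the same PDU are discarded, since the higher layer must retransmit the whole PDU anyway. The cell fields in the Python sketch below are assumed for illustration.

def filter_cells(cells):
    """cells: iterable of dicts with 'pdu_id', 'in_error', and 'end_of_pdu' keys."""
    kept, dropping_pdus = [], set()
    for cell in cells:
        pdu = cell["pdu_id"]
        if cell["in_error"]:
            dropping_pdus.add(pdu)           # start tail drop for this PDU
        if pdu not in dropping_pdus:
            kept.append(cell)
        if cell["end_of_pdu"]:
            dropping_pdus.discard(pdu)       # the next PDU starts clean
    return kept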
Switch and router architectures employing a shared buffer are known to provide high throughput, low delay, and high memory utilization. The superior performance of a shared-memory switch over switches employing other buffer strategies can be achieved by carefully implementing a buffer-management scheme, and a buffer-sharing policy should give all output interfaces fair and robust access to buffer resources. The sliding-window (SW) packet switch is a novel architecture that uses an array of parallel memory modules, logically shared by all input and output lines, to store and process data packets. The innovative aspects of the SW architecture are its approach to parallel operation and the simplicity of its control functions. The implementation of a buffer-management scheme in an SW packet switch depends on how the buffer space is organized into output queues. This paper presents an efficient SW buffer-management scheme that regulates the sharing of the buffer space. We compare the proposed scheme with previous work under bursty traffic conditions and explain how the proposed buffer-management scheme can provide quality of service (QoS) to different traffic classes.
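The paper's scheme is specific to the sliding-window organization of output queues; as a generic illustration of the kind of rule a buffer-sharing policy enforces, the Python sketch below uses a dynamic-threshold-style admission check in which each output queue may grow only in proportion to the currently unused buffer space. This is a stand-in example under that stated assumption, not the proposed SW scheme.

def admit(packet_len, queue_len, used_buffer, total_buffer, alpha=1.0):
    free = total_buffer - used_buffer
    threshold = alpha * free              # per-queue limit tracks the unused space
    return packet_len <= free and queue_len + packet_len <= threshold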
Virtual Reality (VR) is a key industry for the future development of the digital economy. Mobile VR has advantages in mobility, light weight, and cost-effectiveness, and has gradually become the mainstream implementation of VR. In this paper, a mobile VR video adaptive transmission mechanism based on intelligent caching and a hierarchical buffering strategy in Mobile Edge Computing (MEC)-equipped 5G networks is proposed, targeting the low latency requirements of mobile VR services and flexible buffer management for VR video adaptive transmission. First, to support proactive caching of VR content and intelligent buffer management, users' behavioral similarity and head movement trajectories are jointly used for viewpoint prediction, and tile-based content is proactively cached in the MEC nodes according to the popularity of the VR content. Second, a hierarchical buffer-based adaptive update algorithm is presented, which jointly considers bandwidth, buffer, and predicted viewpoint status to update the tile chunks in the client buffer. The buffer update problem is then decomposed and modeled as an optimization problem, and the corresponding solution algorithms are presented. Finally, the simulation results show that the adaptive caching algorithm based on the 5G intelligent edge and the hierarchical buffer strategy improves the user experience under bandwidth fluctuations, and the proposed viewpoint prediction method improves the accuracy of viewpoint prediction by 15%.
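A simplified Python sketch of a buffer/bandwidth/viewpoint-aware tile update: tiles predicted to fall inside the viewport are fetched at the highest rate when the buffer and bandwidth allow, at a medium rate otherwise, while out-of-view tiles stay at the base rate. The rate ladder, the thresholds, and the field names are assumptions used only to illustrate the update decision, not the paper's optimization model.

def update_tile_rates(all_tiles, predicted_viewport, bandwidth_kbps, buffer_seconds,
                      rates_kbps=(500, 1500, 4000), safe_buffer=5.0):
    plan = {}
    for tile in all_tiles:
        if tile not in predicted_viewport:
            plan[tile] = rates_kbps[0]        # out-of-view tiles: base layer only
        elif buffer_seconds < safe_buffer or bandwidth_kbps < rates_kbps[-1]:
            plan[tile] = rates_kbps[1]        # visible but constrained: medium rate
        else:
            plan[tile] = rates_kbps[-1]       # visible, buffer and bandwidth healthy
    return plan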
The assumption of static and deterministic conditions is common in the practice of construction project planning. At the construction phase, however, projects are subject to uncertainty, which may lead to serious schedule disruptions and, as a consequence, serious revisions of the schedule baseline. The aim of this paper is to develop a method for constructing robust project schedules with a proactive procedure. Robust project scheduling allows stable schedules to be constructed, with time buffers introduced to cope with multiple disruptions during project execution. The method proposed by the authors, based on the Monte Carlo simulation technique and mathematical programming for buffer sizing optimization, was applied to scheduling an example project. The results were compared, in terms of schedule stability, with those of the float factor heuristic procedure.
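The buffer-sizing step can be illustrated with a small Monte Carlo experiment in Python: sample uncertain activity durations, record how much each simulated run overruns the deterministic total, and size the time buffer to cover a chosen fraction of those overruns. The triangular duration model and the 90% coverage level below are assumptions, not the authors' optimization model (which uses mathematical programming).

import random

def size_buffer(planned_durations, optimistic_factor=0.9, pessimistic_factor=1.5,
                runs=10000, coverage=0.9):
    planned_total = sum(planned_durations)
    overruns = []
    for _ in range(runs):
        simulated = sum(random.triangular(d * optimistic_factor,
                                          d * pessimistic_factor, d)
                        for d in planned_durations)
        overruns.append(max(0.0, simulated - planned_total))
    overruns.sort()
    return overruns[int(coverage * (len(overruns) - 1))]   # buffer covering 90% of runs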