Funding: Supported by the National Natural Science Foundation of China (NSFC) [Grant No. 62072469].
Abstract: With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, bringing pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, the limited cache capacity of edge nodes and dynamic user requests make edge caching more complex. Therefore, we propose a recommendation-driven edge Caching network architecture for the Full life cycle of video streaming (FlyCache), designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent cache management across three key stages: before playback, during playback, and after playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user-behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
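To make the three-stage idea concrete, the following is a minimal Python sketch of a lifecycle-aware edge cache that applies a separate policy before, during, and after playback. The class, method names, byte thresholds, and the progressive-shrink rule are illustrative assumptions, not FlyCache's actual interface.

class LifecycleEdgeCache:
    """Toy cache that manages video byte ranges across the playback life cycle."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = {}  # video_id -> (start_byte, end_byte) currently cached

    # Before playback: place the opening chunks of videos the recommender is
    # likely to surface to local users (cache placement).
    def place_before_playback(self, recommended_scores, prefix_bytes=2_000_000):
        for video_id, _score in sorted(recommended_scores.items(), key=lambda x: -x[1]):
            if self.used + prefix_bytes > self.capacity:
                break
            if video_id not in self.store:
                self.store[video_id] = (0, prefix_bytes)
                self.used += prefix_bytes

    # During playback: prefetch ahead of the playhead and admit watched chunks
    # (dynamic prefetching and cache admission).
    def prefetch_during_playback(self, video_id, playhead, chunk=1_000_000):
        start, end = self.store.get(video_id, (playhead, playhead))
        if playhead + chunk > end and self.used + chunk <= self.capacity:
            self.store[video_id] = (start, end + chunk)
            self.used += chunk

    # After playback: progressively shrink the cached range from the tail,
    # keeping a short prefix so future viewers still start quickly.
    def evict_after_playback(self, video_id, keep_prefix=2_000_000):
        if video_id in self.store:
            start, end = self.store[video_id]
            new_end = min(end, max(start + keep_prefix, end // 2))
            self.used -= end - new_end
            self.store[video_id] = (start, new_end)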
Funding: Supported by the National Natural Science Foundation of China (No. 61821001) and the Science and Technology Key Project of Guangdong Province, China (2019B010157001).
Abstract: In this paper, an unmanned aerial vehicle (UAV) is adopted to serve as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of the tasks is offloaded to the UAV, which processes them with its own computing and caching resources. Thus, the burden on IoTDs is relieved while the quality of service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized, while the objective function and the constraints are non-convex. Therefore, we first jointly optimize the communication resources B, computing resources F, and offloading rates X with an alternating iteration and convex optimization method, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full offloading mechanism without caching, and that the cost of the whole system is further reduced with our proposed scheme.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62201419, 62372357), the Natural Science Foundation of Chongqing (CSTB2023NSCQ-LMX0032), and the ISN State Key Laboratory.
Abstract: Existing wireless networks are flooded with video data transmissions, and the demand for high-speed and low-latency video services continues to surge. This has brought challenges to networks in the form of congestion, as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to all of these problems. Designing an effective edge caching scheme, however, is critical to its success in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence of the KG-DDQN, and it also outperforms baselines in terms of cache hit rate and service delay.
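As an illustration of how a knowledge graph can prune the action space of a caching agent, the sketch below selects caching candidates from videos related to recently requested ones. The toy graph, the co-relation counting rule, and the candidate limit are assumptions made for illustration; the paper's graph construction and DDQN architecture are not reproduced here.

from collections import Counter

def kg_candidate_actions(knowledge_graph, recent_requests, max_candidates=8):
    """Select caching candidates (candidate DDQN actions) related to recent requests."""
    scores = Counter()
    for video in recent_requests:
        for related in knowledge_graph.get(video, ()):
            scores[related] += 1  # more shared relations -> stronger candidate
    ranked = [v for v, _ in scores.most_common() if v not in recent_requests]
    return ranked[:max_candidates]

# Toy graph: edges connect videos sharing genre, cast, or series.
kg = {"v1": ["v2", "v3"], "v2": ["v1", "v4"], "v5": ["v3", "v4"]}
print(kg_candidate_actions(kg, ["v1", "v5"]))  # -> ['v3', 'v2', 'v4']

Restricting the agent to KG-related candidates keeps the action set small, which is what makes each caching decision cheaper than scoring the whole catalog.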
Abstract: Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded-quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that our proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared to traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method consistently performs well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
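The reward shaping described above can be pictured with a small sketch: reward cache hits on popular content, add a utility term for the quality actually served, and subtract a penalty when delivered quality falls below the requested quality. The weights and the quality scale are assumptions, not the paper's exact reward function.

def caching_reward(hit, demand, requested_quality, served_quality,
                   w_hit=1.0, w_util=0.5, w_penalty=0.8):
    """Toy reward balancing hit performance and utility for one request."""
    reward = 0.0
    if hit:
        reward += w_hit * demand                    # hit on high-demand content
        reward += w_util * demand * served_quality  # utility of the served quality
    if served_quality < requested_quality:
        reward -= w_penalty * (requested_quality - served_quality)  # degradation penalty
    return reward

# Example: a hit that serves 720p (0.7) when 1080p (1.0) was requested.
print(caching_reward(hit=True, demand=3.2, requested_quality=1.0, served_quality=0.7))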
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2504).
Abstract: Vehicular networks enable seamless connectivity for exchanging emergency and infotainment content. However, retrieving infotainment data from remote servers often introduces high delays, degrading the Quality of Service (QoS). To overcome this, caching frequently requested content at fog-enabled Road Side Units (RSUs) reduces communication latency. Yet, the limited caching capacity of RSUs makes it impractical to store all contents of varying sizes and popularity. This research proposes an efficient content caching algorithm that adapts to dynamic vehicular demands on highways to maximize request satisfaction. The scheme is evaluated against Intelligent Content Caching (ICC) and Random Caching (RC). The results show that our proposed scheme serves more content-requesting vehicles than ICC and RC, delivering 33% and 41% more data in 28% and 35% less time than the ICC and RC schemes, respectively.
Funding: Supported by the National Natural Science Foundation of China under grants No. 92267104 and 62372242.
Abstract: Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in terms of data transfer efficiency and response time, and as a result many service providers have begun to deploy edge servers to cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To deal with this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework to integrate Long Short-Term Memory (LSTM)-predicted mobile device locations into the service caching replacement algorithm and adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant performance improvements in reducing communication energy consumption compared to traditional methods across various scenarios.
Funding: Supported by the Liaoning Provincial Education Department Fund, grant number JYTZD2023083.
Abstract: In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to the effort of improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention mechanisms to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms other baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared to DQN methods without mobility integration, thus validating the effectiveness of the joint optimization of location prediction and caching decisions.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61972424 and 62372479, in part by the High Value Intellectual Property Cultivation Project of Hubei Province, China, under grant D2021002094, in part by JSPS KAKENHI under Grants JP16K00117 and JP19K20250, and in part by the Leading Initiative for Excellent Young Researchers (LEADER), MEXT, Japan, and the KDDI Foundation.
Abstract: Named Data Networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing. However, such a ubiquitous caching approach may cause problems including duplicate caching and low data diversity, thus reducing the caching efficiency of NDN routers. To mitigate these caching problems and improve NDN caching efficiency, a hierarchical-based sequential caching (HSC) scheme is proposed in this paper. In this scheme, the NDN routers on the data transmission path are divided into various levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the routers closest to the content requester, so as to increase the response probability of nearby data, improve the data caching efficiency of named data networks, shorten the response time, and reduce cache redundancy. Simulation results show that this scheme can effectively improve the cache hit rate (CHR) and reduce the average request delay (ARD) and average route hops (ARH).
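A minimal sketch of the level-based placement idea follows: the delivery path is split into router levels, and hotter content is cached closer to the requester. The frequency thresholds and the three-level split are illustrative assumptions rather than the paper's parameters.

def choose_cache_level(request_freq, thresholds=(100, 10)):
    """Map request frequency to a router level (0 = closest to the requester)."""
    for level, threshold in enumerate(thresholds):
        if request_freq >= threshold:
            return level
    return len(thresholds)  # coldest content stays deepest on the path

def place_on_path(path_routers, content_id, request_freq, levels=3):
    """path_routers is ordered from the requester-side router to the producer side."""
    level = choose_cache_level(request_freq)
    seg = max(1, len(path_routers) // levels)
    target = path_routers[min(level * seg, len(path_routers) - 1)]
    target.setdefault("cache", set()).add(content_id)
    return target["name"]

routers = [{"name": f"R{i}"} for i in range(6)]
print(place_on_path(routers, "video/seg1", request_freq=150))  # hot -> R0, near requester
print(place_on_path(routers, "doc/raw", request_freq=3))       # cold -> R4, upstream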
Funding: Supported by the Science and Technology Project of State Grid Corporation of China: Research and Application of Key Technologies in Virtual Operation of Information and Communication Resources.
Abstract: With the development of the internet of vehicles, the traditional centralized content caching mode transmits content through the core network, which causes large delays and cannot meet the demands of delay-sensitive services. To solve these problems, on the basis of the vehicle caching network, we propose an edge collaborative caching scheme. Road side units (RSUs) and mobile edge computing (MEC) are used to collect vehicle information and to predict and cache popular content, thereby providing low-latency content delivery services. However, the storage capacity of a single RSU severely limits edge caching performance and cannot handle intensive content requests at the same time. Through content sharing, collaborative caching can relieve the storage burden on caching servers. Therefore, we integrate RSUs and collaborative caching to build a MEC-assisted vehicle edge collaborative caching (MVECC) scheme, so as to realize collaborative caching among cloud, edge, and vehicles. MVECC uses deep reinforcement learning to predict what needs to be cached on RSUs, which enables RSUs to cache more popular content. In addition, MVECC introduces a mobility-aware cache replacement scheme at the edge network to reduce redundant caching and improve cache efficiency, allowing RSUs to dynamically replace cached content in response to the mobility of vehicles. The simulation results show that the proposed MVECC scheme can improve cache performance in terms of energy cost and content hit rate.
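The mobility-aware replacement idea can be sketched as a scoring rule that keeps content wanted by vehicles expected to remain in the RSU's coverage and evicts the least valuable item first. The weighting between popularity and predicted dwell time below is an assumption made for illustration only.

def eviction_order(cached_items, vehicle_dwell_s, alpha=0.6):
    """cached_items: {content_id: {'popularity': float, 'interested': [vehicle_id, ...]}}
    vehicle_dwell_s: predicted remaining time each vehicle stays in coverage (seconds)."""
    def value(meta):
        dwell = max((vehicle_dwell_s.get(v, 0.0) for v in meta["interested"]), default=0.0)
        return alpha * meta["popularity"] + (1 - alpha) * dwell / 60.0
    # Lowest-value content is replaced first when new content must be admitted.
    return sorted(cached_items, key=lambda cid: value(cached_items[cid]))

cache = {
    "clipA": {"popularity": 0.9, "interested": ["car1"]},          # car1 leaving soon
    "clipB": {"popularity": 0.4, "interested": ["car2", "car3"]},  # cars staying longer
}
dwell = {"car1": 5.0, "car2": 120.0, "car3": 80.0}
print(eviction_order(cache, dwell))  # -> ['clipA', 'clipB']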
Funding: Project (Nos. 90412012 and 60673160) supported by the National Natural Science Foundation of China.
Abstract: Multimedia streaming served through peer-to-peer (P2P) networks is booming nowadays. However, the end-to-end streaming quality is generally unstable due to the variability of the state of serving peers. On the other hand, proxy caching is a bandwidth-efficient scheme for streaming over the Internet, but it is a substantially expensive method requiring dedicated, powerful proxy servers. In this paper, we present a P2P cooperative streaming architecture that combines the advantages of both P2P networks and multimedia proxy caching techniques to improve the streaming quality of participating clients. In this framework, a client simultaneously retrieves content from the server and from other peers that have viewed and cached the same title before. In the meantime, the client also selectively caches the aggregated video content so as to serve future clients. The associated protocol that facilitates multi-path streaming and a distributed utility-based partial caching scheme are discussed in detail. We demonstrate the effectiveness of the proposed architecture through extensive simulation experiments on large, Internet-like topologies.
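A rough sketch of a utility-based partial caching choice in this spirit: a client caches only the stream segments with the highest utility per byte, discounting segments that many neighboring peers already hold. The utility formula and the budget are assumptions, not the scheme's actual definitions.

def segments_to_cache(segments, peer_copies, budget_bytes):
    """segments: list of dicts {'id', 'size', 'popularity'};
    peer_copies: {segment_id: number of peers already caching that segment}."""
    def utility_per_byte(seg):
        scarcity = 1.0 / (1 + peer_copies.get(seg["id"], 0))  # rare segments are worth more
        return seg["popularity"] * scarcity / seg["size"]

    chosen, used = [], 0
    for seg in sorted(segments, key=utility_per_byte, reverse=True):
        if used + seg["size"] <= budget_bytes:
            chosen.append(seg["id"])
            used += seg["size"]
    return chosen

segs = [{"id": i, "size": 2_000_000, "popularity": p} for i, p in enumerate([0.9, 0.5, 0.2, 0.8])]
print(segments_to_cache(segs, {0: 3, 3: 0}, budget_bytes=4_000_000))  # -> [3, 1]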
Funding: Supported by the National Key Research and Development Program of China under Grant 2020YFB1807700, the National Natural Science Foundation of China (NSFC) under Grants No. 62201414 and 62201432, the Qinchuangyuan Project (OCYRCXM-2022-362), the Fundamental Research Funds for the Central Universities and the Innovation Fund of Xidian University under Grant YJSJ24017, and the Guangzhou Science and Technology Program under Grant 202201011732.
Abstract: With the explosive growth of high-definition video streaming data, a substantial increase in network traffic has ensued. The emergence of mobile edge caching (MEC) can not only alleviate the burden on the core network but also significantly improve user experience. By integrating MEC with satellite networks, popular content can be delivered ubiquitously and seamlessly. Addressing the research gap between multilayer satellite networks and MEC, we study the caching placement problem in this paper. Initially, we introduce a three-layer distributed network caching management architecture designed for efficient and flexible handling of large-scale networks. Considering the constraints on satellite capacity and content propagation delay, the cache placement problem is then formulated and transformed into a Markov decision process (MDP), where a coded caching mechanism is utilized to promote the efficiency of content delivery. Furthermore, a new generic metric, content delivery cost, is proposed to characterize the performance of caching decisions in large-scale networks. Then, we introduce a graph convolutional network (GCN)-based multi-agent advantage actor-critic (A2C) algorithm to optimize the caching decision. Finally, extensive simulations are conducted to evaluate the proposed algorithm in terms of content delivery cost and transferability.
Funding: Supported in part by the National Key R&D Program of China under Grant Nos. 2018YFB2100100 and 2018YFF0214700, the National NSFC under Grant Nos. 61902044 and 62072060, the Chongqing Research Program of Basic Research and Frontier Technology under Grant No. CSTC2019-jcyjmsxmX0589, the Key Research Program of Chongqing Science and Technology Commission under Grant Nos. CSTC2017jcyjBX0025 and CSTC2019jscxzdztzxX0031, the Fundamental Research Funds for the Central Universities under Grant No. 2020CDJQY-A022, the Chinese National Engineering Laboratory for Big Data System Computing Technology, and the Canadian NSERC.
Abstract: Edge caching is an emerging technology for supporting massive content access in mobile edge networks to address rapidly growing Internet of Things (IoT) services and content applications. However, edge servers have limited computation/storage capacity, which leads to a low cache hit rate. Cooperative edge caching that joins neighboring edge servers is regarded as a promising technique to improve the cache hit rate and reduce network congestion. Further, recommender systems can provide personalized content services to meet users' requirements in entertainment-oriented mobile networks. Therefore, we investigate the joint design of cooperative edge caching and recommender systems to achieve additional cache gains through a soft caching framework. To measure the cache profits, the optimization problem is formulated as a 0-1 Integer Linear Programming (ILP) problem, which is NP-hard. Specifically, the method of processing content requests is defined as a set of server actions, and we determine the server actions that maximize the quality of experience (QoE). We propose a cache-friendly heuristic algorithm to solve it. Simulation results demonstrate that the proposed framework has superior performance in improving the QoE.
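One way to picture a cache-friendly heuristic for this kind of 0-1 placement problem is a greedy pass that ranks contents by the extra QoE that local caching buys per byte of cache it consumes, and otherwise falls back to fetching from a neighbor or recommending a cached substitute. The action set and gain values below are illustrative assumptions, not the paper's algorithm.

def greedy_placement(contents, capacity_bytes):
    """contents: list of dicts {'id', 'size', 'qoe_local', 'qoe_neighbor', 'qoe_substitute'}.
    Returns {content_id: action}; only 'cache_local' consumes local capacity."""
    plan, used = {}, 0
    ranked = sorted(
        contents,
        key=lambda c: (c["qoe_local"] - max(c["qoe_neighbor"], c["qoe_substitute"])) / c["size"],
        reverse=True,
    )
    for c in ranked:
        if used + c["size"] <= capacity_bytes and c["qoe_local"] >= c["qoe_neighbor"]:
            plan[c["id"]] = "cache_local"
            used += c["size"]
        else:
            plan[c["id"]] = ("fetch_neighbor" if c["qoe_neighbor"] >= c["qoe_substitute"]
                             else "recommend_substitute")
    return plan

demo = [
    {"id": "a", "size": 2, "qoe_local": 5.0, "qoe_neighbor": 3.0, "qoe_substitute": 2.0},
    {"id": "b", "size": 3, "qoe_local": 4.0, "qoe_neighbor": 3.8, "qoe_substitute": 4.1},
]
print(greedy_placement(demo, capacity_bytes=2))  # -> {'a': 'cache_local', 'b': 'recommend_substitute'}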
Funding: Supported by the Beijing Natural Science Foundation (Grant L182039) and the National Natural Science Foundation of China (Grant 61971061).
Abstract: Although content caching and recommendation are two complementary approaches to improving the user experience, it is still challenging to provide an integrated paradigm that fully explores their potential, due to the high complexity and the complicated tradeoff relationship. To provide an efficient management framework, the joint design of content delivery and recommendation in wireless content caching networks is studied in this paper. First, a joint transmission scheme for content objects and recommendation lists is designed with edge caching, and an optimization problem is formulated to balance the utility and cost of content caching and recommendation, which is a mixed-integer nonlinear programming problem. Second, a reinforcement learning based algorithm is proposed to implement real-time management of content caching, recommendation, and delivery, which can approach the optimal solution without iterations during each decision epoch. Finally, simulation results are provided to evaluate the performance of our proposed scheme, which show that it can achieve lower cost than existing content caching and recommendation schemes.
Abstract: It is expected that by 2003 continuous media will account for more than 50% of the data available on origin servers, which will provoke a significant change in Internet workload. Due to the high bandwidth requirements and the long-lived nature of digital video, streaming server loads and network bandwidth have proven to be major limiting factors. Aiming at the characteristics of broadband networks in residential areas, this paper proposes a popularity-based server-proxy caching strategy for streaming media. According to a streaming medium's popularity on the streaming server and the proxy, this strategy caches the content partially or completely. The paper also proposes two formulas that calculate the popularity coefficient of a streaming medium on the server and the proxy, as well as a cache replacement policy. As expected, this strategy decreases the server load, reduces the traffic from the streaming server to the proxy, and improves client start-up latency.
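The abstract refers to popularity-coefficient formulas without stating them, so the sketch below only illustrates the general shape of such a policy: estimate popularity from recent requests and cache the medium completely, only its prefix, or not at all. The recency-weighted estimate and the thresholds are assumptions.

import math, time

def popularity_coefficient(request_times, now=None, half_life_s=3600.0):
    """Recency-weighted request count: recent requests contribute more."""
    now = now if now is not None else time.time()
    return sum(math.exp(-(now - t) * math.log(2) / half_life_s) for t in request_times)

def caching_decision(coefficient, full_threshold=8.0, prefix_threshold=2.0):
    if coefficient >= full_threshold:
        return "cache_complete"
    if coefficient >= prefix_threshold:
        return "cache_prefix"  # keep only the beginning to cut client start-up latency
    return "no_cache"

now = time.time()
recent = [now - 60 * k for k in range(12)]  # 12 requests over the last 12 minutes
print(caching_decision(popularity_coefficient(recent, now)))  # -> cache_complete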
Funding: This research was funded by the National Natural Science Foundation of China (No. U21A20451), the Science and Technology Planning Project of Jilin Province (No. 20200401105GX), and the China University Industry University Research Innovation Fund (No. 2021FNA01003).
Abstract: In Information Centric Networking (ICN), where content is the object of exchange, in-network caching is a unique functional feature with the ability to handle data storage and distribution in remote sensing satellite networks. Setting up cache space at any node enables users to access data nearby, thus relieving the processing pressure on the servers. However, existing caching strategies still suffer from a lack of global planning of cache contents and low utilization of cache resources due to the lack of fine-grained division of cache contents. To address these issues, a cooperative caching strategy (CSTL) for remote sensing satellite networks based on a two-layer caching model is proposed. The two-layer caching model is constructed by setting up separate cache spaces in the satellite network and at the ground station. Popular content in the region is cached probabilistically at the ground station to reduce user access delay. Within the satellite network, a content classification method based on hierarchical division is proposed, and differential probabilistic caching is employed for different levels of content. The cached content is also dynamically adjusted by analyzing subsequent changes in its popularity. In the two-layer caching model, ground stations and satellite networks cache collaboratively to achieve global planning of cache contents, rationalize the utilization of cache resources, and reduce the propagation delay of remote sensing data. Simulation results show that the CSTL strategy not only achieves a high cache hit ratio compared with other caching strategies but also effectively reduces user request delay and server load, satisfying the timeliness requirement of remote sensing data transmission.
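The two-layer, level-dependent probabilistic caching idea can be sketched as follows: content is graded into popularity levels, and the ground-station layer and the satellite layer cache it with different probabilities. The probability tables and level cut-offs are assumptions for illustration, not CSTL's actual values.

import random

GROUND_CACHE_PROB = {0: 0.9, 1: 0.5, 2: 0.1}     # level 0 = most popular in the region
SATELLITE_CACHE_PROB = {0: 0.3, 1: 0.6, 2: 0.2}  # satellites favour mid-level content

def content_level(popularity, cuts=(0.7, 0.3)):
    if popularity >= cuts[0]:
        return 0
    return 1 if popularity >= cuts[1] else 2

def cache_decision(popularity, layer, rng=random):
    """Probabilistically decide whether this layer caches the content."""
    table = GROUND_CACHE_PROB if layer == "ground" else SATELLITE_CACHE_PROB
    return rng.random() < table[content_level(popularity)]

rng = random.Random(42)
print(cache_decision(0.85, "ground", rng), cache_decision(0.50, "satellite", rng))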
Funding: Supported by the National Natural Science Foundation of China (No. 61173024).
Abstract: To improve the efficiency of search engines, the query result cache has drawn much attention recently. Based on the query processing and the locality in users' query logs, a new hybrid result cache strategy, which combines caching heat and worth, is proposed to compute a cache score in accordance with cost-aware strategies. Specifically, query repetition distance and a query length factor are utilized to improve the static result policy, and the dynamic policy is adjusted by the caching worth. The hybrid result cache is implemented in terms of the document content and the document id (docId) sequence. Based on a score format and the new hybrid structure, an initial algorithm and a new routing algorithm are designed for the result cache. Experimental results show that the improved caching policies decrease the average response time effectively and increase the system throughput significantly. By choosing a suitable combination of page cache and docId cache, the new hybrid caching strategy reduces the average query time by more than 20% compared with the basic page-only cache and the docId-only cache.
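A toy version of a hybrid cache score that combines caching heat (how often and how recently a query recurs, discounted by query length) with caching worth (how costly the result is to recompute) is sketched below. The features and weights are assumptions based on the abstract, not the paper's exact scoring formula.

def cache_score(frequency, repeat_distance, query_len, recompute_cost_ms,
                w_heat=0.6, w_worth=0.4):
    """Score a query result for caching; higher scores are kept longer."""
    # Heat: frequent queries that recur at short distances are hotter; longer queries
    # tend to repeat less, so query length slightly discounts the heat term.
    heat = frequency / (1.0 + repeat_distance) / (1.0 + 0.1 * query_len)
    # Worth: results that are expensive to rebuild are worth keeping (cost-aware).
    worth = recompute_cost_ms / 100.0
    return w_heat * heat + w_worth * worth

# A short, hot query versus a rare but expensive one.
print(cache_score(frequency=50, repeat_distance=3, query_len=2, recompute_cost_ms=40))
print(cache_score(frequency=2, repeat_distance=500, query_len=6, recompute_cost_ms=900))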