Funding: Supported by the National Natural Science Foundation of China (NSFC) [Grant No. 62072469].
Abstract: With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, bringing pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, the limited edge node cache capacity and dynamic user requests make edge caching more complex. Therefore, we propose a recommendation-driven edge Caching network architecture for the Full life cycle of video streaming (FlyCache), designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent caching management across three key stages: before-playback, during-playback, and after-playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
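As a rough illustration of the three-stage lifecycle described above, the sketch below wires placement, prefetching/admission, and progressive eviction into one toy edge cache. The LRU-style admission, the recommendation-score threshold, and the prefetch window are assumptions of this sketch, not FlyCache's actual policies.

```python
from collections import OrderedDict

class LifecycleEdgeCache:
    """Toy edge cache that handles a video across three stages:
    before-playback (placement), during-playback (prefetch/admission),
    after-playback (progressive eviction)."""

    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.store = OrderedDict()          # (video_id, chunk_idx) -> True, kept in LRU order

    def _admit(self, chunk_id):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)
            return
        while len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used chunk
        self.store[chunk_id] = True

    def place_before_playback(self, video_id, recommended_score, threshold=0.5):
        # Before playback: pre-place the first chunk of videos the recommender scores highly.
        if recommended_score >= threshold:
            self._admit((video_id, 0))

    def prefetch_during_playback(self, video_id, current_chunk, window=3):
        # During playback: prefetch the next few chunks ahead of the playhead.
        for offset in range(1, window + 1):
            self._admit((video_id, current_chunk + offset))

    def evict_after_playback(self, video_id, keep_first_chunks=1):
        # After playback: progressively drop the video's chunks, keeping only a short prefix.
        for chunk_id in [c for c in self.store if c[0] == video_id and c[1] >= keep_first_chunks]:
            del self.store[chunk_id]
```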
Funding: Supported by the National Natural Science Foundation of China (No. 61821001) and the Science and Technology Key Project of Guangdong Province, China (2019B010157001).
Abstract: In this paper, an unmanned aerial vehicle (UAV) is adopted to serve as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of the tasks is offloaded to the UAV side, and the UAV processes them with its own computing and caching resources. Thus, the burden on IoTDs is relieved while the quality of service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized, while the objective function and the constraints are non-convex. Therefore, we first jointly optimize communication resources B, computing resources F, and offloading rates X with an alternating iteration and convex optimization method, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full offloading mechanism without caching, and that the cost of the whole system is further reduced with our proposed scheme.
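The weighted cost and the binary caching decision described above can be illustrated with the following sketch; the exhaustive search stands in for the branch-and-bound step, and the per-task cost figures and weights are placeholders, not the paper's formulation.

```python
from itertools import product

def system_cost(energy_j, delay_s, w_energy=0.5, w_delay=0.5):
    # Weighted sum of energy consumption and time delay, as in the abstract.
    return w_energy * energy_j + w_delay * delay_s

def best_caching_decision(tasks, cache_slots):
    """Brute-force stand-in for the branch-and-bound step: pick the binary
    caching vector Y that minimizes total cost under a cache-slot budget.
    `tasks` maps task id -> (cost_if_cached, cost_if_not_cached)."""
    ids = list(tasks)
    best_y, best_cost = None, float("inf")
    for y in product((0, 1), repeat=len(ids)):
        if sum(y) > cache_slots:
            continue                      # violates the UAV caching capacity
        cost = sum(tasks[i][0] if yi else tasks[i][1] for i, yi in zip(ids, y))
        if cost < best_cost:
            best_y, best_cost = dict(zip(ids, y)), cost
    return best_y, best_cost
```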
Abstract: Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that our proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared to traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method consistently performs well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
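A minimal sketch of a reward model in the spirit described above, rewarding hits on high-demand content and penalizing degraded-quality delivery; the weights and the utility term are assumptions of this sketch, not the paper's reward.

```python
def caching_reward(hits, misses, demand, quality_served, quality_requested,
                   w_hit=1.0, w_util=1.0, w_penalty=0.5):
    """Illustrative per-step reward: reward cache hits weighted by content demand,
    add a hit-ratio utility term, and penalize deliveries at degraded quality.
    All weights are placeholders for the sketch."""
    hit_term = w_hit * sum(demand[c] for c in hits)
    utility_term = w_util * (len(hits) / max(len(hits) + len(misses), 1))
    degraded = sum(1 for c in hits if quality_served[c] < quality_requested[c])
    return hit_term + utility_term - w_penalty * degraded
```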
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2504).
Abstract: Vehicular networks enable seamless connectivity for exchanging emergency and infotainment content. However, retrieving infotainment data from remote servers often introduces high delays, degrading the Quality of Service (QoS). To overcome this, caching frequently requested content at fog-enabled Road Side Units (RSUs) reduces communication latency. Yet, the limited caching capacity of RSUs makes it impractical to store all contents with varying sizes and popularity. This research proposes an efficient content caching algorithm that adapts to dynamic vehicular demands on highways to maximize request satisfaction. The scheme is evaluated against Intelligent Content Caching (ICC) and Random Caching (RC). The obtained results show that our proposed scheme serves more content-requesting vehicles than ICC and RC, with 33% and 41% more downloaded data in 28% and 35% less time than the ICC and RC schemes, respectively.
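For illustration, a generic popularity-per-size greedy admission baseline for a capacity-limited RSU cache is sketched below; it is not the paper's highway-adaptive algorithm.

```python
def fill_rsu_cache(contents, capacity_mb):
    """Illustrative greedy admission for an RSU cache: rank contents by
    popularity per megabyte (a classic knapsack heuristic) and admit until
    capacity is exhausted. `contents` maps content id -> (popularity, size_mb)."""
    ranked = sorted(contents.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    cached, used = [], 0.0
    for cid, (pop, size) in ranked:
        if used + size <= capacity_mb:
            cached.append(cid)
            used += size
    return cached

# Example: fill_rsu_cache({"map": (50, 20.0), "movie": (80, 700.0), "news": (30, 5.0)}, 100.0)
```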
Funding: Supported by the National Natural Science Foundation of China (Nos. 62201419 and 62372357), the Natural Science Foundation of Chongqing (CSTB2023NSCQ-LMX0032), and the ISN State Key Laboratory.
Abstract: Existing wireless networks are flooded with video data transmissions, and the demand for high-speed and low-latency video services continues to surge. This has brought with it challenges to networks in the form of congestion as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to all of these problems. Designing an effective edge caching scheme is critical to its success, however, in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence effectiveness of the KG-DDQN, and it also outperforms baselines regarding cache hit rate and service delay.
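The idea of letting a knowledge graph narrow the DDQN's action space can be sketched as follows; the dictionary-based graph format and the additive scoring are assumptions of this illustration, not the paper's construction.

```python
def kg_caching_candidates(knowledge_graph, recently_requested, top_k=10):
    """Illustrative candidate pruning: use a knowledge graph (here a plain
    dict video_id -> {related_id: relation_weight}) to restrict the DDQN's
    action space to videos related to what users just requested."""
    scores = {}
    for vid in recently_requested:
        for related, weight in knowledge_graph.get(vid, {}).items():
            scores[related] = scores.get(related, 0.0) + weight
    # The highest-scoring related videos become the caching candidates (DDQN actions).
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```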
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61972424 and 62372479, in part by the High Value Intellectual Property Cultivation Project of Hubei Province, China, under Grant D2021002094, in part by JSPS KAKENHI under Grants JP16K00117 and JP19K20250, and in part by the Leading Initiative for Excellent Young Researchers (LEADER), MEXT, Japan, and the KDDI Foundation.
Abstract: Named data networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing. However, such a ubiquitous caching approach may cause problems including duplicate caching and low data diversity, thus reducing the caching efficiency of NDN routers. To mitigate these caching problems and improve NDN caching efficiency, in this paper a hierarchical-based sequential caching (HSC) scheme is proposed. In this scheme, the NDN routers on the data transmission path are divided into various levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the router closest to the content requester, so as to increase the response probability of nearby data, improve the data caching efficiency of named data networks, shorten the response time, and reduce cache redundancy. Simulation results show that this scheme can effectively improve the cache hit rate (CHR) and reduce the average request delay (ARD) and average route hop (ARH).
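A minimal sketch of the level-assignment idea, mapping request frequency to a router level on the transmission path; the three-level path and the thresholds are arbitrary assumptions of this sketch.

```python
def assign_router_level(request_frequency, thresholds=(100, 10)):
    """Illustrative level assignment for a 3-level transmission path:
    level 0 = the edge router closest to the requester, higher levels sit
    farther upstream. Thresholds are placeholders."""
    for level, threshold in enumerate(thresholds):
        if request_frequency >= threshold:
            return level          # hotter content is cached nearer the requester
    return len(thresholds)        # the coldest content is cached farthest upstream

# Example: a chunk requested 150 times/hour maps to the edge (level 0),
# while one requested 3 times/hour maps to the deepest level (level 2).
```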
Funding: Supported by the Major Key Project of PCL (PCL2021A031) and the Shenzhen Science and Technology Program (GXWD20201230155427003-20200824093323001).
Abstract: In the Satellite-integrated Internet of Things (S-IoT), data freshness in time-sensitive scenarios cannot be guaranteed over the time-varying topology by current distribution strategies, which aim only to reduce transmission delay. To address this problem, in this paper we propose an age-optimal caching distribution mechanism for high-timeliness data collection in the S-IoT by adopting a freshness metric, the age of information (AoI), for caching-based single-source multi-destination (SSMD) transmission, namely Multi-AoI, with a well-designed cross-slot directed graph (CSG). With the proposed CSG, we optimize the locations of cache nodes by solving a nonlinear integer programming problem that minimizes the Multi-AoI. In particular, we put forward three specific algorithms for improving the Multi-AoI: the minimum queuing delay algorithm (MQDA), based on node deviation from the average level; the minimum propagation delay algorithm (MPDA), based on node propagation delay reduction; and a delay-balanced algorithm (DBA), based on both node deviation from the average level and propagation delay reduction. The simulation results show that the proposed mechanism can effectively improve the freshness of information compared with the random selection algorithm.
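For reference, a small sketch of the basic age-of-information metric used above, computed as a time average for a single destination; discrete time steps are assumed, and this is the generic AoI definition rather than the paper's Multi-AoI over the cross-slot graph.

```python
def average_aoi(update_generation_times, delivery_times, horizon):
    """Illustrative time-average age of information (AoI) for one destination:
    the age at time t is t minus the generation time of the freshest update
    delivered by t. All times share one unit; the sampling step is 1."""
    age_sum, freshest = 0.0, None
    deliveries = sorted(zip(delivery_times, update_generation_times))
    idx = 0
    for t in range(horizon):
        while idx < len(deliveries) and deliveries[idx][0] <= t:
            gen = deliveries[idx][1]
            freshest = gen if freshest is None else max(freshest, gen)
            idx += 1
        if freshest is not None:
            age_sum += t - freshest
    return age_sum / horizon

# Example: updates generated at t=0 and t=5, delivered at t=2 and t=7, horizon of 10 slots.
# print(average_aoi([0, 5], [2, 7], 10))
```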
Funding: Supported by the Liaoning Provincial Education Department Fund, grant number JYTZD2023083.
Abstract: In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention mechanisms to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms other baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared to DQN methods without mobility integration, thus validating the effectiveness of the joint optimization of location prediction and caching decisions.
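A minimal sketch of how predicted location can enter the DQN's state, together with an epsilon-greedy choice over caching actions; the state layout, the cache bitmaps, and the omitted Q-network are assumptions of this illustration.

```python
import random

def build_state(predicted_location, sbs_cache_bitmap, mu_cache_bitmap):
    """Illustrative state for a mobility-aware caching DQN: the user's predicted
    (x, y) location concatenated with binary cache-occupancy vectors of the
    small base stations and the D2D-capable mobile users."""
    return list(predicted_location) + list(sbs_cache_bitmap) + list(mu_cache_bitmap)

def choose_caching_action(q_values, epsilon=0.1):
    # Epsilon-greedy over caching actions (which content to place or replace).
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Example: state = build_state((3.2, 7.5), [1, 0, 1], [0, 1]); a Q-network (omitted)
# would map this state to one Q-value per candidate content placement.
```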
Abstract: Scatter-hoarding rodents store seeds throughout their home ranges in superficially buried caches which, unlike seeds larder-hoarded in burrows, are difficult to defend. Cached seeds are often pilfered by other scatter-hoarders and either re-cached, eaten, or larder-hoarded. Such seed movements can influence seedling recruitment, because only seeds remaining in caches are likely to germinate. Although the importance of scatter-hoarding rodents in the dispersal of western juniper seeds has recently been revealed, the level of pilfering that occurs after initial burial is unknown. Seed traits, soil moisture, and substrate can influence pilfering processes, but less is known about how pilfering varies among caches placed in open versus canopy microsites, or how cache discovery and removal varies among different canopy types, tree versus shrub. We compared the removal of artificial caches between open and canopy microsites and between tree and shrub canopies at two sites in northeastern California during late spring and fall. We also used trail cameras at one site to monitor artificial cache removal, identify potential pilferers, and illuminate microsite use by scatter-hoarders. Removal of artificial caches was faster in open microsites at both sites during both seasons, and more caches were removed from shrub than tree canopies. California kangaroo rats were the species observed most on cameras, foraging most often in open microsites, which could explain the observed pilfering patterns. This is the first study to document pilfering of western juniper seeds, providing further evidence of the importance of scatter-hoarding rodent foraging behavior in understanding seedling recruitment processes in juniper woodlands.
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 60621003 and 60873014.
Abstract: The widening gap between processor and memory speeds makes the cache an important issue in computer system design. Compared with the working set of programs, cache resources are often scarce. Therefore, it is very important for a computer system to use the cache efficiently. For DOOC (Data-Object Oriented Cache), a dynamically reconfigurable cache proposed recently, this paper proposes a quantitative framework for analyzing the cache requirements of data-objects, covering cache capacity, block size, associativity, and coherence protocol. A graph coloring algorithm that handles the competition between data-objects in the DOOC is proposed as well. Finally, we apply our approaches to the compiler management of the DOOC. We test our approaches on both a single-core platform and a four-core platform. Compared with traditional caches, the DOOC achieves an average miss-rate reduction of 44.98% and 49.69% on the two platforms, respectively, and its performance is very close to that of the ideal optimal cache.
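A generic greedy graph-coloring heuristic over a data-object conflict graph, in the spirit of the competition handling described above; the graph representation and ordering are assumptions of this sketch, not the paper's exact algorithm.

```python
def color_data_objects(conflict_graph):
    """Illustrative greedy coloring of a data-object conflict graph: objects
    joined by an edge compete for the cache at the same time and must receive
    different colors (cache partitions). `conflict_graph` maps an object to
    the set of objects it conflicts with."""
    # Color the most-conflicting (highest-degree) objects first.
    order = sorted(conflict_graph, key=lambda o: len(conflict_graph[o]), reverse=True)
    colors = {}
    for obj in order:
        used = {colors[n] for n in conflict_graph[obj] if n in colors}
        colors[obj] = next(c for c in range(len(conflict_graph)) if c not in used)
    return colors

# Example: arrays A and B are live together, C is independent.
# print(color_data_objects({"A": {"B"}, "B": {"A"}, "C": set()}))
```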
Abstract: Content-centric network (CCN) is a new Internet architecture in which content is treated as the primitive of communication. In CCN, routers are equipped with content stores at the content level, which act as caches for frequently requested content. Based on this design, the Internet is able to provide content distribution services without any application-layer support. In addition, as caches are integrated into routers, the overall performance of CCN is deeply affected by the caching efficiency. In this paper, our aim is to gain some insights into how caches should be designed to maintain high performance in a cost-efficient way. We model the two-layer cache hierarchy composed of CCN routers using a two-dimensional discrete-time Markov chain, and develop an efficient algorithm to calculate the hit ratios of these caches. Simulations validate the accuracy of our modeling method, and convey some meaningful information which can help us better understand the caching mechanism of CCN.
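As a simulation counterpart to the analytical model described above, the sketch below estimates per-layer hit ratios of a two-layer LRU hierarchy under Zipf-distributed requests by Monte Carlo; it is a baseline illustration under assumed parameters, not the two-dimensional Markov-chain analysis.

```python
import random
from collections import OrderedDict

def two_layer_hit_ratios(num_contents=1000, edge_size=50, core_size=200,
                         requests=100_000, zipf_s=0.8, seed=0):
    """Monte Carlo estimate of hit ratios in a two-layer (edge -> core) LRU
    hierarchy under Zipf-distributed requests."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** zipf_s for i in range(num_contents)]
    edge, core = OrderedDict(), OrderedDict()
    hits = {"edge": 0, "core": 0}

    def touch(cache, size, item):
        # Look up an item in an LRU cache, inserting it on a miss.
        hit = item in cache
        if hit:
            cache.move_to_end(item)
        else:
            if len(cache) >= size:
                cache.popitem(last=False)
            cache[item] = True
        return hit

    for _ in range(requests):
        item = rng.choices(range(num_contents), weights=weights)[0]
        if touch(edge, edge_size, item):
            hits["edge"] += 1
        elif touch(core, core_size, item):
            hits["core"] += 1          # miss at the edge, hit at the upstream core cache
    return {k: v / requests for k, v in hits.items()}
```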
Funding: The work was supported by the Young Scientists Fund of the National Natural Science Foundation of China under Grant No. 61502393 and the Aeronautical Science Foundation of China under Grant No. 2014ZD53049.
Abstract: In-network caching is a fundamental mechanism advocated by information-centric networks (ICNs) for efficient content delivery. However, this new mechanism also brings serious privacy risks due to cache snooping attacks. One effective solution to this problem is random-cache, where the cache in a router randomly mimics a cache hit or a cache miss for each content request/probe. In this paper, we investigate the effectiveness of using multiple random-caches to protect cache privacy in a multi-path ICN. We propose models for characterizing the privacy of multi-path ICNs with random-caches, and analyze two different attack scenarios: 1) prefix-based attacks and 2) suffix-based attacks. Both homogeneous and heterogeneous caches are considered. Our analysis shows that in a multi-path ICN an adversary can potentially gain more privacy information by adopting prefix-based attacks. Furthermore, heterogeneous caches provide much better privacy protection than homogeneous ones under both attacks. The effect of different parameters on the privacy of multi-path random-caches is further investigated, and the comparison with its single-path counterpart is carried out based on numerical evaluations. The analysis and results in this paper provide insights in designing and evaluating multi-path ICNs when we take privacy into consideration.
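One possible reading of the random-cache mechanism, sketched below: genuine hits are served normally, while misses are randomly disguised as hits so that probes leak less information about neighbors' requests. The hit-mimicking probability and the exact hit/miss handling are assumptions of this sketch, not the paper's definition.

```python
import random

def random_cache_response(cache, content_name, mimic_probability=0.5, rng=random):
    """Illustrative random-cache defense against cache snooping: content not
    actually cached is answered as a hit with some probability, so an
    adversary cannot reliably infer neighbors' requests from hit/miss behavior."""
    if content_name in cache:
        return "hit"                       # genuine hit: serve from the local cache
    # Genuine miss: either mimic a hit (e.g., by shaping the response as if served
    # locally after retrieval) or expose the miss behavior.
    return "mimic-hit" if rng.random() < mimic_probability else "miss"
```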
Abstract: To address the performance optimization of multi-core processors, this paper studies management strategies for the shared cache on multi-core processors in depth and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-cache partitioning algorithm based on cache-time fairness and throughput. A mathematical model is built on two evaluation metrics, fairness and throughput, and the partitioning workflow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in terms of system throughput: its average IPC (Instructions Per Cycle) is 1.3% higher than that of the UCP algorithm and 11.6% higher than that of the LRU (Least Recently Used) algorithm. The average system fairness of MT-FTP is 17% higher than that of LRU and 16.5% higher than that of UCP. The algorithm achieves fairness in shared-cache partitioning while also taking system throughput into account.
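For illustration, a sketch of shared-cache way partitioning for two cores that trades off throughput (sum of estimated IPC) against fairness (min/max IPC ratio); the scoring rule and the `ipc_curves` input are assumptions of this sketch, not the MT-FTP metric.

```python
def partition_shared_cache(ipc_curves, total_ways, alpha=0.5):
    """Illustrative shared-cache way partitioning for two cores: enumerate
    splits of the cache ways and pick the one maximizing a weighted mix of
    throughput and fairness. `ipc_curves[i][w]` is core i's estimated IPC
    when it is allocated w ways."""
    best_split, best_score = None, float("-inf")
    for ways0 in range(1, total_ways):
        ways1 = total_ways - ways0
        ipc0, ipc1 = ipc_curves[0][ways0], ipc_curves[1][ways1]
        throughput = ipc0 + ipc1                      # system throughput proxy
        fairness = min(ipc0, ipc1) / max(ipc0, ipc1)  # 1.0 = perfectly balanced
        score = alpha * throughput + (1 - alpha) * fairness
        if score > best_score:
            best_split, best_score = (ways0, ways1), score
    return best_split
```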