In this paper, we present a distributed multi-level cache system based on cloud storage, aimed at the low access efficiency of small spatio-temporal data files in the information service system of a Smart City. Taking the classification attributes of small spatio-temporal data files in the Smart City as the basis for cache content selection, the cache system adopts different cache pool management strategies at different cache levels. Experimental results on a prototype system indicate that the proposed multi-level cache effectively increases the access bandwidth of small spatio-temporal files in the Smart City and greatly improves the quality of service under highly concurrent access.
In this paper, a study of the expected performance behaviour of the present three-level cache system for multi-core systems is presented. For this, a queuing model of the three-level cache system for multi-core processors is developed and its performance is analyzed as the number of cores increases. Important performance parameters, such as the access time and utilization of the individual cache at each level and the overall average access time of the cache system, are determined. Results for up to 1024 cores are reported in this paper.
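As a rough illustration of the kind of overall average access time such a model reports, the sketch below computes the average memory access time of a three-level cache from per-level hit rates and latencies; the numbers are assumed for the example and are not taken from the paper.

```python
# Illustrative sketch (not the paper's queuing model): average memory access
# time (AMAT) of a three-level cache, with assumed hit rates and per-level
# access latencies in processor cycles.
def amat(levels, memory_latency):
    """levels: list of (hit_rate, latency) ordered from L1 to L3."""
    total, reach_prob = 0.0, 1.0
    for hit_rate, latency in levels:
        total += reach_prob * latency            # every request reaching this level pays its latency
        reach_prob *= (1.0 - hit_rate)           # fraction that misses and falls through
    return total + reach_prob * memory_latency   # misses at the last level go to main memory

# Example with assumed values: L1 4 cycles, L2 12 cycles, L3 40 cycles, DRAM 200 cycles
print(amat([(0.90, 4), (0.70, 12), (0.50, 40)], 200))
```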
With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, bringing pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, the limited edge node cache capacity and dynamic user requests make edge caching more complex. Therefore, we propose a recommendation-driven edge Caching network architecture for the Full life cycle of video streaming (FlyCache), designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent caching management across three key stages: before-playback, during-playback, and after-playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
In this paper, an unmanned aerial vehicle (UAV) is adopted to serve as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of the tasks is offloaded to the UAV, which processes them with its own computing and caching resources. Thus, the burden on the IoTDs is relieved while the quality of service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized, while the objective function and the constraints are non-convex. Therefore, we first jointly optimize the communication resources B, computing resources F, and offloading rates X with an alternating-iteration and convex optimization method, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full offloading mechanism without caching, and that the cost of the whole system is further reduced with our proposed scheme.
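The weighted cost mentioned above can be illustrated with a trivial sketch; the weights and the per-device energy/delay values below are assumed, and this shows only the shape of the objective, not the paper's full non-convex model.

```python
# Hedged sketch of a weighted system cost: a weighted sum of total energy
# consumption and total time delay across IoTDs. Weights and values are assumed.
def system_cost(energies, delays, w_energy=0.5, w_delay=0.5):
    """energies/delays: per-IoTD energy (J) and delay (s) under a given offloading decision."""
    return w_energy * sum(energies) + w_delay * sum(delays)

# Example with assumed per-device values
print(system_cost(energies=[0.8, 1.2, 0.5], delays=[0.12, 0.30, 0.08]))
```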
In the Satellite-integrated Internet of Things (S-IoT), data freshness in time-sensitive scenarios cannot be guaranteed over the time-varying topology with current distribution strategies that aim only to reduce the transmission delay. To address this problem, in this paper we propose an age-optimal caching distribution mechanism for high-timeliness data collection in the S-IoT by adopting a freshness metric, called the age of information (AoI), over caching-based single-source multi-destination (SSMD) transmission, namely Multi-AoI, together with a well-designed cross-slot directed graph (CSG). With the proposed CSG, we optimize the locations of the cache nodes by solving a nonlinear integer programming problem that minimizes the Multi-AoI. In particular, we put forward three algorithms for improving the Multi-AoI: the minimum queuing delay algorithm (MQDA) based on node deviation from the average level, the minimum propagation delay algorithm (MPDA) based on reducing the node propagation delay, and a delay balanced algorithm (DBA) based on both node deviation from the average level and propagation delay reduction. Simulation results show that the proposed mechanism effectively improves the freshness of information compared with a random selection algorithm.
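For readers unfamiliar with the AoI metric, the following sketch computes the age of information over time from a list of delivered updates: the age at time t is t minus the generation time of the freshest update delivered so far. The sampling step and example deliveries are assumed.

```python
# Minimal sketch of the age-of-information (AoI) metric, not the Multi-AoI model.
def aoi_over_time(deliveries, horizon, step=1.0):
    """deliveries: list of (delivery_time, generation_time), sorted by delivery_time."""
    ages, latest_gen, i, t = [], None, 0, 0.0
    while t <= horizon:
        while i < len(deliveries) and deliveries[i][0] <= t:
            latest_gen = deliveries[i][1]          # a fresher update has arrived
            i += 1
        ages.append(t - latest_gen if latest_gen is not None else t)
        t += step
    return ages

# Example: two updates, generated at t=1.0 and t=4.5, delivered at t=2.0 and t=5.0
print(aoi_over_time([(2.0, 1.0), (5.0, 4.5)], horizon=8))
```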
Memory-based key-value cache systems, such as Memcached and Redis, have become indispensable components of data center infrastructures and are used to cache performance-critical data to avoid expensive back-end database accesses. As the memory is usually not large enough to hold all the items, cache replacement must be performed to evict some cached items to make room for newly arriving items when there is no free space. Many real-world workloads target small items and have frequent bursts of scans (a scan is a sequence of one-time access requests). The commonly used LRU policy does not work well under such workloads, since LRU needs a large amount of metadata and tends to discard hot items during scans. Small decreases in hit ratio can result in large end-to-end losses in these systems. This paper presents MemSC, a scan-resistant and compact cache replacement framework for Memcached. MemSC assigns a multi-granularity reference flag to each item, which requires only a few bits (two bits are enough for general use) per item to support scan-resistant cache replacement policies. To evaluate MemSC, we implement three representative cache replacement policies (MemSC-HM, MemSC-LH, and MemSC-LF) on MemSC and test them with various workloads. The experimental results show that MemSC outperforms prior techniques. Compared with the optimized LRU policy in Memcached, MemSC-LH reduces the cache miss ratio and the memory usage of the resulting system by up to 23% and 14%, respectively.
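A minimal sketch of a scan-resistant policy in the spirit of a small per-item reference flag is shown below: each item carries a saturating two-bit counter and new items start cold, so one-time scan accesses cannot displace hot items. The class and its details are illustrative assumptions, not MemSC's actual implementation.

```python
from collections import OrderedDict

class TwoBitClockCache:
    """CLOCK-like replacement with a saturating 2-bit counter per item (illustrative)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()                      # key -> 2-bit reference counter

    def access(self, key):
        if key in self.items:
            self.items[key] = min(self.items[key] + 1, 3)   # saturate at 3 on a hit
            return True                                     # hit
        self._insert(key)
        return False                                        # miss

    def _insert(self, key):
        while len(self.items) >= self.capacity:
            victim, count = next(iter(self.items.items()))  # oldest entry
            if count == 0:
                self.items.pop(victim)                      # evict a cold item
            else:
                self.items[victim] = count - 1              # decay and give a second chance
                self.items.move_to_end(victim)
        self.items[key] = 0                                 # new items start cold, so scans stay evictable
```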
Dynamic resource allocation (DRA) is a key technology for improving system performance in GEO multi-beam satellite systems. Moreover, since the cache resource on the satellite is valuable and limited, the DRA problem under restricted cache resources is an important issue to study. This paper mainly investigates the DRA problem of carrier resources under cache constraints. With the aim of satisfying all users' traffic demands as far as possible and maximizing bandwidth utilization, we formulate a multi-objective optimization problem (MOP) in which the satisfaction index and the spectrum efficiency are jointly optimized. A modified strategy, SA-NSGAII, which combines simulated annealing (SA) and the non-dominated sorting genetic algorithm II (NSGA-II), is proposed to approximate the Pareto solution of this MOP. Simulation results show the effectiveness of the proposed algorithm in terms of satisfaction index, spectrum efficiency, occupied cache, and other metrics.
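The Pareto-dominance test that NSGA-II-style sorting builds on can be sketched as follows, with both objectives (satisfaction index and spectrum efficiency) treated as maximized; the example objective vectors are assumed.

```python
# Hedged sketch of Pareto dominance and a non-dominated front, not the SA-NSGAII algorithm itself.
def dominates(a, b):
    """True if solution a Pareto-dominates b: no worse in all objectives, strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Assumed example objective vectors: (satisfaction index, spectrum efficiency)
print(pareto_front([(0.9, 2.1), (0.8, 2.5), (0.7, 1.9)]))
```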
This paper analyzes the cache coherency mechanism from a system point of view. It first discusses the cache-memory hierarchy of a Pentium III SMP system, including memory area distribution, cache attribute control, and bus transactions. It then analyzes the hardware snooping mechanism of the P6 bus and the MESI state transitions adopted by the Pentium III. On this basis, it focuses on how the multiple processors and the P6 bus cooperate to ensure cache coherency of the whole system, and gives the key points of cache coherency design.
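A compact, simplified sketch of MESI next-state logic for a single cache line is given below; bus-transaction details are omitted and the table is illustrative rather than an exact model of the P6 protocol.

```python
# Simplified MESI transition table for one cache line (illustrative assumptions).
MESI = {
    # (current_state, event) -> next_state
    ("I", "local_read"):  "E",   # assumes no other cache holds the line (otherwise S)
    ("I", "local_write"): "M",
    ("E", "local_write"): "M",
    ("E", "snoop_read"):  "S",
    ("S", "local_write"): "M",   # requires an invalidate transaction on the bus
    ("S", "snoop_write"): "I",
    ("M", "snoop_read"):  "S",   # dirty data is written back before sharing
    ("M", "snoop_write"): "I",
}

def next_state(state, event):
    return MESI.get((state, event), state)   # events not listed keep the current state

print(next_state("E", "snoop_read"))  # -> "S"
```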
This paper introduces a novel metadata management architecture based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can handle different scenarios, such as splitting and merging queries into sub-queries over the metadata sets available locally, in order to reduce the access time of remote queries. An application can find partial results in the local cache while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, MICC not only enhances the fault tolerance and load balancing of the system effectively, but also improves access efficiency while ensuring access quality.
To address the performance optimization of multi-core processors, this paper studies management strategies for the shared cache on multi-core processors and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-cache partitioning algorithm based on cache-time fairness and throughput. A mathematical model is built with fairness and throughput as the two evaluation metrics, and the partitioning workflow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in terms of system throughput: its average IPC (Instructions Per Cycle) is 1.3% higher than that of the UCP (Utility-based Cache Partitioning) algorithm and 11.6% higher than that of the LRU (Least Recently Used) algorithm. The average system fairness of MT-FTP is 17% higher than that of LRU and 16.5% higher than that of UCP. The algorithm achieves fairness in shared-cache partitioning while maintaining system throughput.
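A toy way-partitioning search in the same spirit (trading throughput against fairness) might look like the sketch below; the scoring function, weights, and IPC-versus-ways curves are assumed, and this is not the MT-FTP algorithm itself.

```python
# Illustrative two-core way-partitioning search: pick the split of shared-cache
# ways that maximizes a weighted mix of total throughput and fairness (assumptions).
def partition_two_cores(ipc_curve_a, ipc_curve_b, total_ways, alpha=0.5):
    """ipc_curve_x[w] = predicted IPC of core x when given w ways (index 0..total_ways)."""
    best, best_score = None, float("-inf")
    for ways_a in range(1, total_ways):
        ipc_a, ipc_b = ipc_curve_a[ways_a], ipc_curve_b[total_ways - ways_a]
        throughput = ipc_a + ipc_b
        fairness = min(ipc_a, ipc_b) / max(ipc_a, ipc_b)     # 1.0 means perfectly fair
        score = alpha * throughput + (1 - alpha) * fairness
        if score > best_score:
            best, best_score = (ways_a, total_ways - ways_a), score
    return best

# Assumed IPC-versus-ways curves for two cores sharing 8 ways
curve_a = [0.0, 0.6, 0.9, 1.1, 1.2, 1.25, 1.28, 1.30, 1.31]
curve_b = [0.0, 0.3, 0.5, 0.8, 1.0, 1.15, 1.25, 1.32, 1.38]
print(partition_two_cores(curve_a, curve_b, total_ways=8))
```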
In the big data era, data unavailability, either temporary or permanent, has become a normal occurrence on a daily basis. Unlike permanent data failures, which are fixed through a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving the request, under the assumption that data experiencing a temporary failure may come back alive later. Such disposal of failure data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g., caused by recurring unavailability of the same data or by multiple data failures in one stripe), thereby straining system performance. To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs a greedy, opportunistic caching approach that promotes not only the failed data but also sequential failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache also supports fast reads on the normal data access path. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding schemes and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and significantly boosts system performance by reducing unnecessary data recoveries for vulnerable data in the cache.
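A heavily simplified sketch of a failure cache that, like FARC, favors failures that recur is shown below; the two-list structure and its details are assumptions for illustration, not GFCache's actual algorithm.

```python
from collections import OrderedDict

class SimpleFailureCache:
    """Toy failure cache: recurring failures are promoted from a recency list to a frequency list."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.recent = OrderedDict()    # blocks whose failure has been seen once
        self.frequent = OrderedDict()  # blocks whose failure has recurred

    def lookup(self, block):
        if block in self.frequent:
            self.frequent.move_to_end(block)
            return True
        if block in self.recent:                  # recurring failure: promote it
            self.recent.pop(block)
            self.frequent[block] = True
            self._trim()
            return True
        self.recent[block] = True                 # first observed failure of this block
        self._trim()
        return False

    def _trim(self):
        while len(self.recent) + len(self.frequent) > self.capacity:
            victim_list = self.recent if self.recent else self.frequent
            victim_list.popitem(last=False)       # evict the oldest entry
```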
At present, the database cache model of the power information system suffers from slow running speed and a low database hit rate. To this end, this paper proposes a database cache model for power information systems based on deep machine learning. The caching model includes a program caching module, a Structured Query Language (SQL) preprocessing module, and a core caching module. Statement efficiency is improved by adjusting operations such as multi-table joins and keyword replacement in the SQL optimizer. The core caching module builds predictive models using boosted regression trees: a series of regression tree models is generated with machine learning algorithms, the resource occupancy of the power information system is analyzed to dynamically adjust the voting selection among the regression trees, and the voting threshold of the prediction model is adjusted dynamically as well; the cache model is then re-initialized in the same manner. The experimental results show that the model achieves a good cache hit rate and cache efficiency and can improve the data caching performance of the power information system. It has a high hit rate and short delay, and maintains a good hit rate under different amounts of computer memory; at the same time, it occupies little space and little CPU during actual operation, which helps the power information system operate efficiently and quickly.
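A hedged sketch of the boosted-regression-tree idea, using scikit-learn as a stand-in, is shown below; the features, labels, and caching threshold are invented for illustration and do not reflect the paper's actual model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Assumed features per query: [recent hit count, result size (MB), tables joined, hour of day]
X = np.array([[12, 4.0, 2, 9], [1, 80.0, 5, 23], [30, 2.5, 1, 10], [3, 40.0, 4, 2]])
y = np.array([0.9, 0.1, 0.95, 0.2])   # assumed "reuse probability" labels

model = GradientBoostingRegressor(n_estimators=100, max_depth=3)
model.fit(X, y)

def should_cache(features, threshold=0.5):
    """Cache the query result if the predicted reuse probability exceeds the threshold."""
    return model.predict([features])[0] >= threshold

print(should_cache([20, 3.0, 1, 11]))
```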
A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which impacts the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data was proposed. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors, such as spacecraft processors. Furthermore, the framework complements existing modern cache replacement policies and can be seamlessly integrated with minimal modifications, enhancing their overall efficacy.
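The main-cache/auxiliary-cache interplay can be sketched as follows: lines evicted from the main cache while still hot are parked in a small auxiliary cache and migrate back on reuse. The structure and thresholds are assumptions for illustration, not the paper's design.

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy main cache plus auxiliary cache that retains evicted hot lines (illustrative)."""
    def __init__(self, main_cap, aux_cap):
        self.main = OrderedDict()   # line -> access count, LRU order
        self.aux = OrderedDict()    # evicted-but-hot lines
        self.main_cap, self.aux_cap = main_cap, aux_cap

    def access(self, line):
        if line in self.main:
            self.main[line] += 1
            self.main.move_to_end(line)
            return "main hit"
        if line in self.aux:                      # reuse: migrate back into the main cache
            self.aux.pop(line)
            self._insert_main(line, count=2)
            return "aux hit"
        self._insert_main(line, count=1)
        return "miss"

    def _insert_main(self, line, count):
        if len(self.main) >= self.main_cap:
            victim, victim_count = self.main.popitem(last=False)
            if victim_count > 1:                  # hot victims are parked in the auxiliary cache
                if len(self.aux) >= self.aux_cap:
                    self.aux.popitem(last=False)
                self.aux[victim] = True
        self.main[line] = count
```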
The cache-based covert channel is one of the common vulnerabilities exploited in Spectre attacks. Current mitigation strategies focus on blocking the eviction-based channel by using a random/encrypted mapping function to translate a memory address into a cache address, while the update-based channel remains vulnerable. In addition, some mitigation strategies are costly because they require both software and hardware modifications. In this paper, our objective is to devise low-cost, comprehensive protection techniques for mitigating Spectre attacks. We propose a novel cache structure, named EBCache, which targets the RISC-V processor and applies address encryption and a blacklist to resist Spectre attacks. The address encryption mechanism increases the difficulty of pruning a minimal eviction set. The blacklist mechanism makes cache lines loaded by malicious updates invisible. Our experiments demonstrate that EBCache can prevent malicious modifications. EBCache, however, reduces the processor's performance by about 23% but involves only a low-cost modification to the hardware.
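The two mechanisms described in the abstract can be caricatured in software as below: a keyed mapping from a memory address to a cache set, and a blacklist for lines installed by not-yet-committed speculative loads. The key, hash function, and set count are assumptions; the real EBCache hardware certainly differs from this sketch.

```python
import hashlib

SECRET_KEY = b"per-boot-random-key"      # assumed per-boot secret
NUM_SETS = 256

def encrypted_set_index(addr: int) -> int:
    """Keyed address-to-set mapping: hard to invert, so eviction sets are hard to build."""
    digest = hashlib.sha256(SECRET_KEY + addr.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:4], "little") % NUM_SETS

blacklist = set()                        # lines installed by not-yet-committed loads

def speculative_fill(line_addr: int):
    blacklist.add(line_addr)             # becomes visible only after the load commits

def lookup(line_addr: int) -> bool:
    if line_addr in blacklist:
        return False                     # treated as a miss: the malicious update stays invisible
    return True                          # placeholder for a real tag comparison

speculative_fill(0x1000)
print(encrypted_set_index(0x1000), lookup(0x1000))
```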