Journal Articles: 386 articles found
1. Dynamic Metadata Prefetching and Data Placement Algorithms for High-Performance Wide-Area Applications
Authors: Bing Wei, Yubin Li, Yi Wu, Ming Zhong, Ning Luo. Computers, Materials & Continua, 2025, No. 9, pp. 4773-4804 (32 pages).
Metadata prefetching and data placement play a critical role in enhancing access performance for file systems operating over wide-area networks. However, developing effective strategies for metadata prefetching in environments with concurrent workloads and for data placement across distributed networks remains a significant challenge. This study introduces novel and efficient methodologies for metadata prefetching and data placement, leveraging fine-grained control of prefetching strategies and variable-sized data fragment writing to optimize the I/O bandwidth of distributed file systems. The proposed metadata prefetching technique employs dynamic workload analysis to identify dominant workload patterns and adaptively refines prefetching policies, thereby boosting metadata access efficiency under concurrent scenarios. Meanwhile, the data placement strategy improves write performance by storing data fragments locally within the nearest data center and transmitting only the fragment location metadata to the remote data center hosting the original file. Experimental evaluations using real-world system traces demonstrate that the proposed approaches reduce metadata access times by up to 33.5% and application data access times by 17.19% compared to state-of-the-art techniques.
Keywords: metadata prefetching; data placement; wide-area network file system (WANFS); concurrent workload optimization
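To make the placement idea concrete, here is a minimal Python sketch (illustrative only, not the authors' code; DataCenter, place_fragment, and the field names are hypothetical): the fragment payload stays in the nearest data center, and only a small location record crosses the WAN to the center hosting the original file.

    import hashlib

    class DataCenter:
        def __init__(self, name):
            self.name = name
            self.blobs = {}      # fragment_id -> bytes (local fragment store)
            self.metadata = {}   # file_id -> list of (fragment_id, location) records

    def place_fragment(local_dc, remote_dc, file_id, fragment: bytes):
        """Store the fragment locally; register only its location remotely."""
        frag_id = hashlib.sha1(fragment).hexdigest()
        local_dc.blobs[frag_id] = fragment                       # heavy write stays local
        record = (frag_id, local_dc.name)                        # tiny metadata crosses the WAN
        remote_dc.metadata.setdefault(file_id, []).append(record)
        return frag_id

    near, far = DataCenter("dc-near"), DataCenter("dc-far")
    place_fragment(near, far, "fileA", b"variable-sized fragment payload")
    print(far.metadata["fileA"])   # the remote center knows where the data lives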
2. Changes to Prefetch Files in Windows 10 and Their Impact on Forensic Analysis
Authors: Zhang Jun, Zhu Yongyu. 《警察技术》 (Police Technology), 2021, No. 5, pp. 67-70 (4 pages).
Prefetching is an important mechanism that Windows uses to improve the startup performance of the operating system and applications. Windows implements this mechanism through Prefetch files, caching the files required by the system and applications into memory before they start. As a result, Prefetch files record a large number of traces of application execution, and these traces constitute valuable digital evidence. Compared with Prefetch files in other Windows versions, the structure and function of Prefetch files in Windows 10 have changed considerably, yet relatively little research and parsing work has addressed them. This paper analyzes the structure and function of Prefetch files in Windows 10 and further elaborates the important role Prefetch files play in digital forensics.
Keywords: Prefetch; file structure; Windows 10 forensics
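The structural change the article describes can be detected from the first bytes of a .pf file. A minimal sketch, relying on the widely documented signatures (legacy files carry "SCCA" at byte offset 4; Windows 10 stores the body compressed behind a header beginning with "MAM"); this only classifies the file, it is not a parser:

    def classify_pf(path):
        with open(path, "rb") as f:
            head = f.read(8)
        if head[:3] == b"MAM":
            return "Windows 10 compressed Prefetch (decompress before parsing)"
        if head[4:8] == b"SCCA":
            return "uncompressed Prefetch (Windows XP through 8.1 layout)"
        return "not a Prefetch file"

    # classify_pf(r"C:\Windows\Prefetch\NOTEPAD.EXE-D8414F97.pf")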
3. Massive Files Prefetching Model Based on LSTM Neural Network with Cache Transaction Strategy (Cited 3 times)
Authors: Dongjie Zhu, Haiwen Du, Yundong Sun, Xiaofang Li, Rongning Qu, Hao Hu, Shuangshuang Dong, Helen Min Zhou, Ning Cao. Computers, Materials & Continua, SCIE/EI, 2020, No. 5, pp. 979-993 (15 pages).
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy. With massive numbers of small files, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a massive-files prefetching model based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. Firstly, we propose a file clustering algorithm based on temporal locality and spatial locality to reduce computational complexity. Secondly, we define cache transactions according to files' co-occurrence in the cache, instead of time-offset-distance-based methods, to extract file block features accurately. Lastly, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed. Experiments show that, compared with the traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
Keywords: massive files; prefetching model; cache transaction; distributed storage systems; LSTM neural network
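A minimal sketch of the cache-transaction idea, not the paper's implementation: simulate an LRU cache over an access trace and, at each eviction, record the set of files that were co-resident with the victim. These co-occurrence sets (rather than time-offset distances) are the transactions fed to relevance mining:

    from collections import OrderedDict

    def cache_transactions(trace, capacity=3):
        cache, transactions = OrderedDict(), []
        for f in trace:
            if f in cache:
                cache.move_to_end(f)        # LRU hit: refresh recency
            else:
                if len(cache) == capacity:  # miss with full cache: evict LRU victim
                    victim, _ = cache.popitem(last=False)
                    transactions.append({victim, *cache})  # co-resident set = one transaction
                cache[f] = True
        return transactions

    trace = ["a", "b", "c", "a", "d", "e", "b", "f"]
    for t in cache_transactions(trace):
        print(sorted(t))   # candidate correlated file groups for prediction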
4. A Preliminary Study of Forensic Methods for Prefetch Files on Windows Systems (Cited 1 time)
Authors: Chen Junshan, Huang Juncan, Su Zaitian, Wu Shaohua. 《网络空间安全》 (Cyberspace Security), 2019, No. 3, pp. 63-68 (6 pages).
In digital forensics, extracting and analyzing traces is a very important task. By extracting the execution traces of applications, an investigator can infer characteristics of user behavior, which is of great significance to computer forensics. Prefetch (PF) is a file type used by Microsoft Windows to store system prefetch information; a PF file contains the executable file's name, the list of loaded DLL files (in Unicode), paths, run count, last run time, and other information. Because the PF file format has no official documentation, its structure differs across Windows versions, and domestic research on the compressed PF files of Windows 10 remains limited, this paper studies several PF file formats under Windows and proposes a forensic method for Prefetch files that extracts and analyzes application execution traces to provide important leads for case investigation.
Keywords: Prefetch; Windows 10; execution traces; forensics
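A minimal triage sketch under the standard path assumption (C:\Windows\Prefetch): it recovers the executable name from the NAME.EXE-HASH.pf naming scheme and uses the file's modification time as a last-run approximation. Deep fields such as the DLL list and run count require a real format parser:

    import os, datetime

    PREFETCH_DIR = r"C:\Windows\Prefetch"   # default location on Windows

    def triage_prefetch(folder=PREFETCH_DIR):
        for entry in sorted(os.scandir(folder), key=lambda e: e.stat().st_mtime):
            if not entry.name.lower().endswith(".pf"):
                continue
            exe = entry.name.rsplit("-", 1)[0]          # NAME.EXE-HASH.pf naming scheme
            mtime = datetime.datetime.fromtimestamp(entry.stat().st_mtime)
            print(f"{mtime:%Y-%m-%d %H:%M:%S}  {exe}")  # mtime approximates last run

    # triage_prefetch()  # run on a Windows system with sufficient privileges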
5. Extraction and Analysis of Prefetch Information in Windows Systems (Cited 1 time)
Author: Li Yan. 《信息安全与技术》 (Information Security and Technology), 2012, No. 5, pp. 46-48 (3 pages).
The Prefetch folder of a Windows system contains a large number of prefetch files that in fact record information of forensic value. This paper attempts to discover and extract the contents of these files using several analysis tools.
Keywords: Prefetch; Prefetch Parser; forensics
6. A Comparison Study between Informed and Predictive Prefetching Mechanisms for I/O Storage Systems (Cited 1 time)
Authors: Maen M. Al Assaf, Ali Rodan, Mohammad Qatawneh, Mohamed Riduan Abid. International Journal of Communications, Network and System Sciences, 2015, No. 5, pp. 181-186 (6 pages).
In this paper, we present a comparative study between informed and predictive prefetching mechanisms that were proposed to bridge the performance gap between I/O storage systems and the CPU. In particular, we focus on transparent informed prefetching (TIP) and predictive prefetching using the probability graph approach (PG). Our main objective is to show the main features, motivations, and implementation overview of each mechanism. We also present a performance evaluation that compares both mechanisms' performance under different cache sizes.
Keywords: informed prefetching; predictive prefetching; probability graph; parallel storage systems
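A minimal sketch of the probability-graph (PG) side of the comparison: count how often each file follows another in the access stream and prefetch successors whose estimated probability clears a threshold. The class and threshold are illustrative, not the paper's exact formulation:

    from collections import defaultdict

    class ProbabilityGraph:
        def __init__(self):
            self.edges = defaultdict(lambda: defaultdict(int))  # A -> {B: count}
            self.last = None

        def record(self, f):
            if self.last is not None:
                self.edges[self.last][f] += 1   # B followed A once more
            self.last = f

        def prefetch_candidates(self, f, threshold=0.4):
            succ = self.edges[f]
            total = sum(succ.values()) or 1
            return [b for b, c in succ.items() if c / total >= threshold]

    pg = ProbabilityGraph()
    for f in ["a", "b", "a", "b", "a", "c"]:
        pg.record(f)
    print(pg.prefetch_candidates("a"))   # ['b']: b followed a in 2 of 3 cases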
7. Occlusion Culling Algorithm Using Prefetching and Adaptive Level of Detail Technique
Authors: Zheng Furen, Zhan Shouyi, Yang Bing. Journal of Beijing Institute of Technology, EI/CAS, 2006, No. 4, pp. 425-430 (6 pages).
A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull occluded objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will become visible in the near future and prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time according to cell solidity values.
Keywords: occlusion culling; prefetching; adaptive level of detail (LOD); approximate algorithm; conservative algorithm
8. Correlation-Aware Replica Prefetching Strategy to Decrease Access Latency in Edge Cloud
Authors: Yang Liang, Zhigang Hu, Xinyu Zhang, Hui Xiao. China Communications, SCIE/CSCD, 2021, No. 9, pp. 249-264 (16 pages).
With the number of connected devices increasing rapidly, the access latency issue grows drastically in the edge cloud environment. Massive low-time-constrained and data-intensive mobile applications require efficient replication strategies to decrease retrieval time. However, the determination of replicas is not reasonable in many previous works, which incurs high response delay. To this end, a correlation-aware replica prefetching (CRP) strategy based on the file correlation principle is proposed, which can prefetch files with high access probability. The key is to determine and obtain the implicit high-value files effectively, which has a significant impact on the performance of CRP. To accelerate the acquisition of implicit high-value files, an access rule management method based on consistent hashing is proposed, and storage and query mechanisms for access rules based on an adjacency-list storage structure are further presented. The theoretical analysis and simulation results corroborate that CRP shortens average response time by over 4.8%, improves average hit ratio by over 4.2%, reduces the amount of transmitted data by over 8.3%, and maintains replication frequency at a reasonable level when compared to other schemes.
Keywords: edge cloud; access latency; replica prefetching; correlation-aware; access rule
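A minimal sketch of consistent-hashing placement for access-rule management; this is textbook consistent hashing with virtual nodes, not the paper's exact mechanism. Rule keys map onto a ring of edge nodes so any node can locate a rule without a central index:

    import bisect, hashlib

    class HashRing:
        def __init__(self, nodes, vnodes=100):
            self.ring = sorted(
                (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
            )
            self.keys = [k for k, _ in self.ring]

        @staticmethod
        def _h(s):
            return int(hashlib.md5(s.encode()).hexdigest(), 16)

        def node_for(self, key):
            # first virtual node clockwise from the key's hash owns the key
            i = bisect.bisect(self.keys, self._h(key)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["edge-1", "edge-2", "edge-3"])
    print(ring.node_for("rule:/videos/clip42"))   # node that stores this access rule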
9. Predictive Prefetching for Parallel Hybrid Storage Systems
Author: Maen M. Al Assaf. International Journal of Communications, Network and System Sciences, 2015, No. 5, pp. 161-180 (20 pages).
In this paper, we present a predictive prefetching mechanism based on the probability graph approach that performs prefetching between different levels of a parallel hybrid storage system. The fundamental concept of our approach is to exploit the parallel hybrid storage system's parallelism and prefetch data among multiple storage levels (e.g., solid state disks and hard disk drives) in parallel with the application's on-demand I/O read requests. In this study, we show that predictive prefetching across multiple storage levels is an efficient technique for placing data blocks needed in the near future in the uppermost levels, near the application. Our PPHSS approach extends previous ideas of predictive prefetching in two ways: (1) it reduces applications' elapsed execution time by keeping data blocks that are predicted to be accessed in the near future cached in the uppermost level; (2) it proposes a parallel data-fetching scheme in which multiple fetching mechanisms (i.e., predictive prefetching and the application's on-demand data requests) work in parallel, where the first fetches data blocks among the different levels of the hybrid storage system (i.e., from low-level (slow) to high-level (fast) storage devices) and the other fetches data from the storage system to the application. Our PPHSS strategy integrated with the predictive prefetching mechanism significantly reduces overall I/O access time in a hybrid storage system. Finally, we developed a simulator to evaluate the performance of the proposed predictive prefetching scheme in the context of hybrid storage systems. Our results show that PPHSS can improve system performance by 4% across real-world I/O traces without the need for large caches.
Keywords: predictive prefetching; probability graph; parallel storage systems; hybrid storage system
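A minimal sketch of the parallel-fetching idea under assumed names (hdd/ssd stand in for the slow and fast tiers): a background thread promotes predicted blocks up the hierarchy while the application thread issues on-demand reads, so the two fetch paths overlap:

    import threading, time

    hdd = {f"blk{i}": f"data{i}" for i in range(8)}   # slow tier (stand-in)
    ssd = {}                                          # fast tier cache

    def prefetcher(predicted):
        for blk in predicted:                 # runs concurrently with demand reads
            time.sleep(0.01)                  # model slow-tier latency
            ssd[blk] = hdd[blk]               # promote block up the hierarchy

    def read(blk):
        if blk in ssd:
            return ssd[blk]                   # fast-tier hit: latency hidden
        time.sleep(0.01)                      # demand miss pays slow-tier latency
        ssd[blk] = hdd[blk]
        return ssd[blk]

    t = threading.Thread(target=prefetcher, args=(["blk3", "blk4", "blk5"],))
    t.start()
    print([read(b) for b in ["blk0", "blk3", "blk4"]])
    t.join()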
10. Method for improving MapReduce performance by prefetching before scheduling
Authors: Zhang Xiaohong, Feng Shengzhong, Fan Jianping, Huang Zhexue. High Technology Letters, EI/CAS, 2012, No. 4, pp. 343-349 (7 pages).
In this paper, a prefetching technique is proposed to solve the performance problem caused by remote data access delay. In the technique, the map tasks that will cause the delay are predicted first, and then the input data of these tasks are preloaded before the tasks are scheduled. During execution, the input data can be read from local nodes, so the delay can be hidden. The technique has been implemented in Hadoop 0.20.1. The experimental results show that the technique reduces the number of map tasks causing delay and improves the performance of Hadoop MapReduce by 20%.
Keywords: cloud computing; distributed computing; prefetching; MapReduce; scheduling
11. Web Acceleration by Prefetching in Extremely Large Latency Network
Authors: Fumiaki Nagase, Takefumi Hiraguri, Kentaro Nishimori, Hideo Makino. American Journal of Operations Research, 2012, No. 3, pp. 339-347 (9 pages).
A scheme for high-speed data transfer via the Internet for Web service in an extremely large delay environment is proposed. With the widespread use of Internet services in recent years, WLAN Internet service in high-speed trains has commenced. The system for this is composed of a satellite communication link between the train and the ground station, which is characterized by extremely large latency of several hundred milliseconds due to long propagation delay. High-speed Web access is not available to users in a train in such an extremely-large-latency network. Thus, a prefetch scheme for performance acceleration of Web services in this environment is proposed. A test-bed system that implements the proposed scheme was built and its performance evaluated. The proposed scheme is verified to enable high-speed Web access in the extremely large delay environment compared to conventional schemes.
Keywords: extremely-large-latency network; satellite communication; HTTP; web prefetching; prefetching proxy server; information storage server
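A minimal prefetching-proxy sketch using only the Python standard library (illustrative; a production proxy needs cache expiry, robots handling, and an error policy): after fetching a page over the long-latency link, the proxy extracts embedded links and warms its cache in parallel so follow-up requests avoid the multi-hundred-millisecond RTT:

    from concurrent.futures import ThreadPoolExecutor
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag in ("a", "img", "script", "link"):
                for name, value in attrs:
                    if name in ("href", "src") and value:
                        self.links.append(value)

    cache = {}  # url -> body; a real proxy would add expiry and size limits

    def fetch(url):
        if url not in cache:
            cache[url] = urlopen(url, timeout=10).read()
        return cache[url]

    def fetch_and_prefetch(url, limit=5):
        body = fetch(url)                     # the request the user actually made
        parser = LinkExtractor()
        parser.feed(body.decode("utf-8", errors="replace"))
        targets = [urljoin(url, link) for link in parser.links[:limit]]
        with ThreadPoolExecutor(max_workers=4) as pool:
            for target in targets:            # warm the cache in parallel;
                pool.submit(fetch, target)    # failures just leave the cache cold
        return body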
12. Adaptive Cache Allocation with Prefetching Policy over End-to-End Data Processing
Authors: Hang Qin, Li Zhu. Journal of Signal and Information Processing, 2017, No. 3, pp. 152-160 (9 pages).
With the speed gap between storage system access and processor computing, end-to-end data processing has become a bottleneck that limits the total performance of computer systems over the Internet. Based on an analysis of data processing behavior, an adaptive cache organization scheme with fast address calculation is proposed. The scheme makes full use of the characteristics of stack-space data access, adopts a fast address calculation strategy, and reduces the hit time of stack accesses. The stack cache can also be turned off adaptively when a stack overflow occurs, avoiding the effect of stack switching on processor performance. In addition, a prefetching policy is developed from the instruction cache and the miss behavior of the data cache, combined with data captured from the failover queue state. The proposed method maintains the order of instruction and data accesses, which facilitates prefetch extraction in end-to-end data processing.
Keywords: end-to-end data processing; storage system; cache; prefetching
13. A Popularity-Based Expert Prefetching Strategy for Mixture-of-Experts Models (Cited 1 time)
Authors: Ye Jin, Li Wenliang, Yu Tiantian, Peng Yajun. 《小型微型计算机系统》 (Journal of Chinese Computer Systems), PKU Core, 2025, No. 7, pp. 1760-1766 (7 pages).
In mixture-of-experts (MoE) model training, introducing expert parallelism can effectively relieve the memory pressure on a single node and improve model performance. However, expert-parallel training suffers from high communication overhead caused by frequent cross-node token transfers and load imbalance among nodes. To address this problem, this paper proposes a popularity-based expert prefetching strategy (Prefetch Expert, PE). The strategy predicts and pulls in advance the experts needed by the current training step according to expert popularity, improving training efficiency. In addition, for cases where prefetching fails, PE introduces an asynchronous pulling mechanism that allows other experts to be pulled while expert computation proceeds, overlapping inter-expert communication with computation and effectively reducing the communication latency caused by network contention. Large-scale experiments on the CIFAR-100, WikiText-103, and SQuAD datasets show that, compared with the baseline schemes, the PE strategy reduces the convergence time of mainstream deep learning models by at least 30%.
Keywords: expert parallelism; communication overhead; expert popularity; expert prefetching; deep learning
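A minimal sketch of the popularity-driven prefetch (hypothetical names; no real MoE runtime is used): token-to-expert routing counts feed a popularity table, the top-k experts are staged before the next step, and misses are pulled asynchronously so communication overlaps computation:

    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    popularity = Counter()
    local_experts = {}
    pool = ThreadPoolExecutor(max_workers=2)

    def record_routing(assignments):          # assignments: expert id per token
        popularity.update(assignments)

    def fetch_expert(eid):                    # stand-in for a cross-node parameter pull
        local_experts[eid] = f"weights-of-expert-{eid}"

    def prefetch_popular(k=2):
        for eid, _ in popularity.most_common(k):
            if eid not in local_experts:
                fetch_expert(eid)             # stage hot experts before the step

    def pull_async(eid):
        return pool.submit(fetch_expert, eid) # overlap miss handling with computation

    record_routing([0, 1, 1, 3, 1, 0, 2, 1])
    prefetch_popular()                        # experts 1 and 0 are staged in advance
    miss = pull_async(3)                      # cold expert arrives while we compute
    miss.result()
    print(sorted(local_experts))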
14. Taxonomy of Data Prefetching for Multicore Processors (Cited 1 time)
Authors: Surendra Byna, Yong Chen, Xian-He Sun. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2009, No. 3, pp. 405-417 (13 pages).
Data prefetching is an effective data-access latency-hiding technique to mask the CPU stall caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been developed for single-core processors. Recent developments in processor technology have brought multicore processors into the mainstream. While some of the single-core prefetching techniques are directly applicable to multicore processors, numerous novel strategies have been proposed in the past few years to take advantage of multiple cores. This paper aims to provide a comprehensive review of state-of-the-art prefetching techniques, and proposes a taxonomy that classifies various design concerns in developing a prefetching strategy, especially for multicore processors. We also compare various existing methods through analysis.
Keywords: taxonomy of prefetching strategies; multicore processors; data prefetching; memory hierarchy
15. A Survey of Security Research on Processor Data Prefetchers
Authors: Liu Chang, Huang Qilin, Liu Yuchuan, Lin Shihong, Qin Zhongyuan, Chen Liquan, Lü Yongqiang. 《电子与信息学报》 (Journal of Electronics & Information Technology), PKU Core, 2025, No. 9, pp. 3038-3056 (19 pages).
Data prefetchers are important microarchitectural components that modern processors use to improve performance. However, because systematic security evaluation was lacking at design time, prefetchers in mainstream commercial processors have in recent years been shown to harbor serious security risks and have been exploited in side-channel attacks against browsers, operating systems, and trusted execution environments. Facing this new class of microarchitectural attacks, processor security research urgently needs to answer several key questions: how to systematically analyze attack methods, comprehensively understand the potential risks of prefetchers, and quantitatively evaluate prefetcher security, so as to design more secure data prefetchers. To this end, this paper systematically surveys known prefetcher designs in commercial processors and the related side-channel attacks. By extracting memory access patterns, it builds behavioral models for 7 prefetchers and, on this basis, attack models for 20 side-channel attacks; it systematically catalogs the trigger conditions and leaked information of each attack and analyzes other possible attack methods. The paper then proposes a security evaluation framework comprising 3 dimensions and 24 indicators to provide a comprehensive quantitative assessment of data prefetcher security. Finally, it discusses defense strategies, design ideas for secure prefetchers, and future research directions. As the first survey focused on the security of data prefetchers in commercial processors, this paper helps readers understand the security challenges facing data prefetchers and promotes the construction of a quantitative security evaluation framework, thereby providing guidance for designing more secure data prefetchers.
Keywords: computer architecture; processor; data prefetcher; microarchitecture security; side-channel attacks
16. A Prefetch-Adaptive Intelligent Cache Replacement Policy Based on Machine Learning (Cited 2 times)
Authors: Yang Huijing, Fang Juan, Cai Min, Cai Zhi. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2023, No. 2, pp. 391-404 (14 pages).
Hardware prefetching and replacement policies are two techniques to improve the performance of the memory subsystem. While prefetching hides memory latency and improves performance, it interacts with the cache replacement policy, thereby introducing performance variability in applications. To improve the accuracy of cache-block reuse prediction in the presence of hardware prefetching, we propose the Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC). PAIC is designed with separate predictors for prefetch and demand requests, and uses machine learning to optimize reuse prediction in the presence of prefetching. By distinguishing reuse predictions for prefetch and demand requests, PAIC can better combine the performance benefits of prefetching and replacement policies. We evaluate PAIC on a set of 27 memory-intensive programs from SPEC 2006 and SPEC 2017. Under a single-core configuration, PAIC improves performance over the Least Recently Used (LRU) replacement policy by 37.22%, compared with improvements of 32.93% for Signature-based Hit Predictor (SHiP), 34.56% for Hawkeye, and 34.43% for Glider. Under the four-core configuration, PAIC improves performance over LRU by 20.99%, versus 13.23% for SHiP, 17.89% for Hawkeye, and 15.50% for Glider.
Keywords: hardware prefetching; machine learning; Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC); replacement policy
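A minimal sketch of PAIC's separation idea, with a tiny saturating-counter predictor standing in for the paper's learned model: prefetch fills and demand fills train and query independent tables, so prefetch-induced history cannot distort demand-reuse predictions:

    class ReusePredictor:
        def __init__(self, table_bits=8):
            self.w = [0] * (1 << table_bits)
            self.mask = (1 << table_bits) - 1

        def predict(self, pc):                  # reuse if counter is non-negative
            return self.w[pc & self.mask] >= 0

        def train(self, pc, reused, clamp=31):  # saturating increment/decrement
            i = pc & self.mask
            delta = 1 if reused else -1
            self.w[i] = max(-clamp, min(clamp, self.w[i] + delta))

    demand_pred, prefetch_pred = ReusePredictor(), ReusePredictor()

    def on_fill(pc, is_prefetch):
        pred = prefetch_pred if is_prefetch else demand_pred
        return pred.predict(pc)                 # True -> insert with high retention

    def on_outcome(pc, is_prefetch, reused):
        (prefetch_pred if is_prefetch else demand_pred).train(pc, reused)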
17. An SPN-Based Integrated Model for Web Prefetching and Caching (Cited 15 times)
Authors: Shi Lei, Han Yingjie, Ding Xiaoguang, Wei Lin, Gu Zhimin. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2006, No. 4, pp. 482-489 (8 pages).
The World Wide Web has become the primary means of information dissemination. Due to limited network bandwidth, users always suffer from long waiting times. Web prefetching and web caching are the primary approaches to reducing user-perceived access latency and improving the quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented and its performance evaluation is made. The performance metrics, access latency, throughput, HR (hit ratio), and BHR (byte hit ratio), are analyzed and discussed. Simulations show that, compared with the caching-only model (CM), IWPCM can further improve throughput, HR, and BHR efficiently and reduce access latency. The performance evaluation based on the SPN model can provide a basis for the implementation of web prefetching and caching, and the combination of web prefetching and caching holds the promise of improving the QoS of web systems.
Keywords: stochastic Petri nets; web prefetching; web caching; performance evaluation
18. Prefetching J+-Tree: A Cache-Optimized Main Memory Database Index Structure (Cited 3 times)
Authors: Luan Hua, Du Xiaoyong, Wang Shan. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2009, No. 4, pp. 687-707 (21 pages).
As the speed gap between main memory and modern processors continues to widen, cache behavior becomes more important for main memory database systems (MMDBs). Indexing is a key component of MMDBs. Unfortunately, the predominant indexes, B+-trees and T-trees, have been shown to utilize the cache poorly, which has triggered the development of many cache-conscious indexes, such as CSB+-trees and pB+-trees. Most of these cache-conscious indexes are variants of conventional B+-trees and have better cache performance than B+-trees. In this paper, we develop a novel J+-tree index, inspired by the Judy structure, an associative-array data structure, and propose a more cache-optimized index, the Prefetching J+-tree (pJ+-tree), which applies prefetching to the J+-tree to accelerate range scan operations. The J+-tree stores all keys in its leaf nodes and keeps the reference values of leaf nodes in a Judy structure, which makes the J+-tree not only retain the advantages of Judy (such as fast single-value search) but also outperform it in other aspects. For example, J+-trees achieve better performance on range queries than Judy. The pJ+-tree index exploits prefetching techniques to further improve the cache behavior of J+-trees and yields a speedup of 2.0 on range scans. Compared with B+-trees, CSB+-trees, pB+-trees, and T-trees, our extensive experimental study shows that pJ+-trees provide better performance in terms of both time (search, scan, update) and space.
Keywords: index structure; pJ+-tree; prefetching; cache-conscious; main memory database
19. Optimizing the Copy-on-Write Mechanism of Docker by Dynamic Prefetching (Cited 3 times)
Authors: Yan Jiang, Wei Liu, Xuanhua Shi, Weizhong Qiang. Tsinghua Science and Technology, SCIE/EI/CAS/CSCD, 2021, No. 3, pp. 266-274 (9 pages).
Docker, as a mainstream container solution, adopts the Copy-on-Write (CoW) mechanism in its storage drivers. This mechanism satisfies the need for different containers to share the same image. However, when a single container performs operations such as modifying an image file, a duplicate is created in the upper read-write layer, which contributes to runtime overhead. When the accessed image file is fairly large, this additional overhead becomes non-negligible. Here we present the concept of Dynamic Prefetching Strategy Optimization (DPSO), which optimizes the CoW mechanism for a Docker container on the basis of a dynamic prefetching strategy. At the beginning of the container life cycle, DPSO pre-copies up the image files that are most likely to be copied up later, to eliminate the overhead of performing this operation during application runtime. The experimental results show that DPSO has an average prefetch accuracy of greater than 78% in complex scenarios and can effectively eliminate the overhead caused by the CoW mechanism.
Keywords: Docker; container; Copy-on-Write (CoW); storage driver; prefetch strategy
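A minimal sketch of pre-copy-up under an assumed overlayfs layout (lowerdir = image layer, upperdir = container read-write layer; the example paths are placeholders): files predicted to be modified are copied up once at container start, off the application's write path. The prediction itself is stubbed:

    import os, shutil

    def pre_copy_up(lowerdir, upperdir, predicted_files):
        for rel in predicted_files:            # e.g. ranked by historical copy-up counts
            src = os.path.join(lowerdir, rel)
            dst = os.path.join(upperdir, rel)
            if os.path.exists(src) and not os.path.exists(dst):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)         # the copy-up happens here, off the hot path

    # pre_copy_up("/var/lib/docker/overlay2/<id>/lower",   # placeholder paths
    #             "/var/lib/docker/overlay2/<id>/diff",
    #             ["etc/app.conf", "var/log/app.log"])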
20. I/O Acceleration via Multi-Tiered Data Buffering and Prefetching (Cited 2 times)
Authors: Anthony Kougkas, Hariharan Devarajan, Xian-He Sun. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2020, No. 1, pp. 92-120 (29 pages).
Modern High-Performance Computing (HPC) systems are adding extra layers to the memory and storage hierarchy, named the deep memory and storage hierarchy (DMSH), to increase I/O performance. New hardware technologies, such as NVMe and SSD, have been introduced in burst buffer installations to reduce the pressure on external storage and boost the burstiness of modern I/O systems. The DMSH has demonstrated its strength and potential in practice. However, each layer of the DMSH is an independent heterogeneous system, and data movement among more layers is significantly more complex even without considering heterogeneity. How to efficiently utilize the DMSH is a subject of research facing the HPC community. Further, accessing data with high throughput and low latency is more imperative than ever. Data prefetching is a well-known technique for hiding read latency by requesting data before it is needed, to move it from a high-latency medium (e.g., disk) to a low-latency one (e.g., main memory). However, existing solutions do not consider the new deep memory and storage hierarchy and also suffer from under-utilization of prefetching resources and unnecessary evictions. Additionally, existing approaches implement a client-pull model where understanding the application's I/O behavior drives prefetching decisions. Moving towards exascale, where machines run multiple applications concurrently by accessing files in a workflow, a more data-centric approach resolves challenges such as cache pollution and redundancy. In this paper, we present the design and implementation of Hermes: a new, heterogeneous-aware, multi-tiered, dynamic, and distributed I/O buffering system. Hermes enables, manages, supervises, and, in some sense, extends I/O buffering to fully integrate into the DMSH. We introduce three novel data placement policies to efficiently utilize all layers, and we present three novel techniques to perform memory, metadata, and communication management in hierarchical buffering systems. Additionally, we demonstrate the benefits of a truly hierarchical data prefetcher that adopts a server-push approach to data prefetching. Our evaluation shows that, in addition to automatic data movement through the hierarchy, Hermes can significantly accelerate I/O and outperforms state-of-the-art buffering platforms by more than 2x. Lastly, results show 10%-35% performance gains over existing prefetchers and over 50% when compared to systems with no prefetching.
Keywords: I/O buffering; heterogeneous buffering; layered buffering; deep memory hierarchy; burst buffers; hierarchical data prefetching; data-centric architecture
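A minimal sketch of one hierarchical placement policy (illustrative, not one of Hermes's actual three): an incoming buffer lands in the fastest tier with room and spills downward, the way a DMSH-aware buffering layer fills RAM before NVMe before the burst buffer:

    class Tier:
        def __init__(self, name, capacity_mb):
            self.name, self.free = name, capacity_mb
            self.contents = {}

    def place(buffers, tiers):
        placement = {}
        for buf_id, size_mb in sorted(buffers.items(), key=lambda kv: -kv[1]):
            for tier in tiers:                       # tiers ordered fastest-first
                if tier.free >= size_mb:
                    tier.contents[buf_id] = size_mb
                    tier.free -= size_mb
                    placement[buf_id] = tier.name
                    break
            else:
                placement[buf_id] = "parallel-file-system"   # final spill target
        return placement

    tiers = [Tier("RAM", 4), Tier("NVMe", 16), Tier("burst-buffer", 64)]
    print(place({"b1": 3, "b2": 6, "b3": 2, "b4": 70}, tiers))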