Journal Articles: 387 articles found
1. Dynamic Metadata Prefetching and Data Placement Algorithms for High-Performance Wide-Area Applications
Authors: Bing Wei, Yubin Li, Yi Wu, Ming Zhong, Ning Luo. Computers, Materials & Continua, 2025, Issue 9: 4773-4804 (32 pages).
Metadata prefetching and data placement play a critical role in enhancing access performance for file systems operating over wide-area networks. However, developing effective strategies for metadata prefetching in environments with concurrent workloads and for data placement across distributed networks remains a significant challenge. This study introduces novel and efficient methodologies for metadata prefetching and data placement, leveraging fine-grained control of prefetching strategies and variable-sized data fragment writing to optimize the I/O bandwidth of distributed file systems. The proposed metadata prefetching technique employs dynamic workload analysis to identify dominant workload patterns and adaptively refines prefetching policies, thereby boosting metadata access efficiency under concurrent scenarios. Meanwhile, the data placement strategy improves write performance by storing data fragments locally within the nearest data center and transmitting only the fragment location metadata to the remote data center hosting the original file. Experimental evaluations using real-world system traces demonstrate that the proposed approaches reduce metadata access times by up to 33.5% and application data access times by 17.19% compared to state-of-the-art techniques.
Keywords: metadata prefetching; data placement; wide-area network file system (WANFS); concurrent workload optimization
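To make the placement idea above concrete, here is a minimal sketch (not the authors' implementation) of a write path that stores a variable-sized fragment in the nearest data center and ships only a small location record to the remote center hosting the original file. All class and function names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class FragmentLocation:
    center_id: str      # data center that physically stores the fragment
    local_path: str     # path of the fragment within that center
    offset: int         # logical offset of the fragment in the original file
    length: int

@dataclass
class RemoteFileMetadata:
    """Metadata kept at the data center hosting the original file."""
    path: str
    fragments: list = field(default_factory=list)

def write_fragment(data: bytes, offset: int, nearest_center: dict,
                   remote_meta: RemoteFileMetadata, center_id: str) -> None:
    # 1. Store the fragment locally (cheap, low-latency write).
    local_path = f"/fragments/{remote_meta.path.strip('/')}.{offset}"
    nearest_center[local_path] = data
    # 2. Send only the small location record over the WAN, not the data.
    remote_meta.fragments.append(
        FragmentLocation(center_id, local_path, offset, len(data)))

# Usage: two 4 KiB fragments written near the client, metadata sent remotely.
near_dc: dict = {}
meta = RemoteFileMetadata("/proj/results.bin")
write_fragment(b"A" * 4096, 0, near_dc, meta, "dc-beijing")
write_fragment(b"B" * 4096, 4096, near_dc, meta, "dc-beijing")
print(len(meta.fragments), "fragment records at the remote center")
```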
2. Changes to Prefetch Files in Windows 10 and Their Impact on Forensic Analysis [Windows10中Prefetch文件的变化及对取证分析的影响]
Authors: 张俊, 朱勇宇. 警察技术 (Police Technology), 2021, Issue 5: 67-70 (4 pages).
Prefetching is an important mechanism Windows uses to improve the startup performance of the operating system and applications. Windows implements it by caching the files required by the system and applications into memory before startup, via Prefetch files; Prefetch files therefore record a large number of traces of application execution, and these traces constitute valuable digital evidence. Compared with earlier versions, the structure and function of Prefetch files in Windows 10 have changed considerably, yet relatively little research and parsing work has addressed them. This paper analyzes the structure and function of Prefetch files in Windows 10 and further explains the important role Prefetch files play in digital forensics.
Keywords: Prefetch; file structure; Windows 10 forensics
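As background for the change the article analyzes: Windows 10 stores Prefetch files LZXPRESS-Huffman-compressed (they begin with the bytes "MAM"), whereas earlier versions store them uncompressed with the "SCCA" signature. A small detection sketch follows; actual decompression requires ntdll's RtlDecompressBufferEx (compression format 4) on Windows and is omitted here.

```python
import struct

def classify_prefetch(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    if head[:3] == b"MAM":                      # Windows 10/11 compressed PF
        (uncompressed_size,) = struct.unpack("<I", head[4:8])
        return f"Win10+ compressed PF, {uncompressed_size} bytes when expanded"
    if head[4:8] == b"SCCA":                    # legacy uncompressed PF
        (version,) = struct.unpack("<I", head[:4])
        # version 17 = XP, 23 = Vista/7, 26 = 8/8.1, 30 = 10 (decompressed)
        return f"uncompressed PF, format version {version}"
    return "not a Prefetch file"

# Usage (hypothetical path):
# print(classify_prefetch(r"C:\Windows\Prefetch\NOTEPAD.EXE-D8414F97.pf"))
```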
3. Massive Files Prefetching Model Based on LSTM Neural Network with Cache Transaction Strategy (Cited 3)
Authors: Dongjie Zhu, Haiwen Du, Yundong Sun, Xiaofang Li, Rongning Qu, Hao Hu, Shuangshuang Dong, Helen Min Zhou, Ning Cao. Computers, Materials & Continua (SCIE, EI), 2020, Issue 5: 979-993 (15 pages).
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model can fetch data before it is needed according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure the accuracy of prefetching. With massive numbers of small files, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a massive-files prefetching model based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. Firstly, we propose a file clustering algorithm based on temporal locality and spatial locality to reduce the computational complexity. Secondly, we propose a definition of cache transactions based on file occurrence in the cache, instead of time-offset-distance-based methods, to extract file block features accurately. Lastly, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed. Experiments show that, compared with the traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
Keywords: massive files prefetching model; cache transaction; distributed storage systems; LSTM neural network
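A dependency-free sketch of the prediction loop around the model described above: windows of recent file accesses feed a predictor whose top guess becomes the prefetch candidate. The paper's predictor is an LSTM; a k-th-order frequency table stands in for it here so the example runs without any ML library. The interface, not the model, is the point.

```python
from collections import defaultdict, deque

class NextFilePredictor:
    def __init__(self, k: int = 3):
        self.k = k
        self.window: deque = deque(maxlen=k)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, file_id: str) -> None:
        # Train on (last k accesses -> next access) pairs from the trace.
        if len(self.window) == self.k:
            self.counts[tuple(self.window)][file_id] += 1
        self.window.append(file_id)

    def predict(self):
        seen = self.counts.get(tuple(self.window))
        if not seen:
            return None
        return max(seen, key=seen.get)   # file with highest observed follow-up

trace = ["a", "b", "c", "d", "a", "b", "c", "d", "a", "b", "c"]
p = NextFilePredictor(k=3)
for f in trace:
    p.observe(f)
print("prefetch candidate:", p.predict())   # -> "d"
```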
4. A Preliminary Study of Forensic Methods for Prefetch Files under Windows [Windows系统下Prefetch文件取证方法初探] (Cited 1)
Authors: 陈俊珊, 黄君灿, 苏再添, 吴少华. 网络空间安全 (Cyberspace Security), 2019, Issue 3: 63-68 (6 pages).
In digital forensics, the extraction and analysis of traces is a very important task. By extracting the execution traces of applications, user behavior characteristics can be analyzed, which is of great significance to computer forensics. Prefetch (PF) is a file type the Microsoft Windows operating system uses to store system prefetch information; a PF file contains the executable's name, the list of DLL files it loads (in Unicode), paths, the run count, and the last run time. Because the PF file format has no official documentation, its structure differs across Windows versions, and domestic research on the compressed PF files of Windows 10 is currently scarce. This paper studies several PF file formats under Windows and proposes a forensic method targeting Prefetch files: by extracting and analyzing application execution traces, it can provide important clues for case investigation.
Keywords: Prefetch; Windows 10; execution traces; forensics
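For the uncompressed PF formats the paper examines, the fixed header already yields useful evidence. A minimal header reader follows; run-count and last-run-time offsets vary per format version (exactly the parsing difficulty the article notes), so this sketch stops at the version-independent fields.

```python
import struct

def parse_pf_header(path: str) -> dict:
    with open(path, "rb") as f:
        header = f.read(84)
    version, signature = struct.unpack_from("<I4s", header, 0)
    if signature != b"SCCA":
        raise ValueError("not an uncompressed Prefetch file")
    # Offset 16: executable name, UTF-16LE, up to 60 bytes, NUL-terminated.
    exe_name = header[16:76].decode("utf-16-le").split("\x00")[0]
    (path_hash,) = struct.unpack_from("<I", header, 76)
    return {"version": version, "executable": exe_name,
            "hash": f"{path_hash:08X}"}

# Usage (hypothetical file):
# print(parse_pf_header("CMD.EXE-0BD30981.pf"))
# -> {'version': 30, 'executable': 'CMD.EXE', 'hash': '0BD30981'}
```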
5. Extraction and Analysis of Prefetch Information in Windows Systems [Windows系统中Prefetch信息的提取与分析] (Cited 1)
Author: 李艳. 信息安全与技术 (Information Security and Technology), 2012, Issue 5: 46-48 (3 pages).
The Prefetch folder of a Windows system contains a large number of prefetch files, which in fact record information of forensic value. This article attempts to use several analysis tools to discover and extract the content these files contain.
Keywords: Prefetch; Prefetch Parser; forensics
6. A Comparison Study between Informed and Predictive Prefetching Mechanisms for I/O Storage Systems (Cited 1)
Authors: Maen M. Al Assaf, Ali Rodan, Mohammad Qatawneh, Mohamed Riduan Abid. International Journal of Communications, Network and System Sciences, 2015, Issue 5: 181-186 (6 pages).
In this paper, we present a comparative study between informed and predictive prefetching mechanisms that were proposed to bridge the performance gap between I/O storage systems and the CPU. In particular, we focus on transparent informed prefetching (TIP) and predictive prefetching using the probability graph approach (PG). Our main objective is to show the main features, motivations, and implementation overview of each mechanism. We also conduct a performance evaluation that compares the performance of both mechanisms under different cache sizes.
Keywords: informed prefetching; predictive prefetching; probability graph; parallel storage systems
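A compact sketch of the probability-graph (PG) side of the comparison: count how often file B follows file A within a lookahead window and prefetch B when the estimated P(B | A) clears a threshold. The window size and threshold below are illustrative choices, not values from the paper.

```python
from collections import defaultdict

def build_probability_graph(trace, lookahead=2):
    edges = defaultdict(lambda: defaultdict(int))
    for i, a in enumerate(trace):
        for b in trace[i + 1 : i + 1 + lookahead]:
            if b != a:
                edges[a][b] += 1         # edge weight: B seen soon after A
    return edges

def prefetch_candidates(edges, current, threshold=0.5):
    followers = edges.get(current, {})
    total = sum(followers.values())
    return [b for b, n in followers.items() if total and n / total >= threshold]

trace = ["lib.so", "app.cfg", "data.db", "lib.so", "app.cfg", "icons.bin"]
graph = build_probability_graph(trace)
print(prefetch_candidates(graph, "lib.so"))   # -> ['app.cfg']
```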
7. Occlusion Culling Algorithm Using Prefetching and Adaptive Level of Detail Technique
Authors: 郑福仁, 战守义, 杨兵. Journal of Beijing Institute of Technology (EI, CAS), 2006, Issue 4: 425-430 (6 pages).
A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull obscured objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will become visible in the near future and prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time according to cell solidity values.
Keywords: occlusion culling; prefetching; adaptive level of detail (LOD); approximate algorithm; conservative algorithm
8. Correlation-Aware Replica Prefetching Strategy to Decrease Access Latency in Edge Cloud
Authors: Yang Liang, Zhigang Hu, Xinyu Zhang, Hui Xiao. China Communications (SCIE, CSCD), 2021, Issue 9: 249-264 (16 pages).
With the number of connected devices increasing rapidly, the access latency issue grows drastically in the edge cloud environment. Massive time-constrained and data-intensive mobile applications require efficient replication strategies to decrease retrieval time. However, the determination of replicas is not reasonable in many previous works, which incurs high response delay. To this end, a correlation-aware replica prefetching (CRP) strategy based on the file correlation principle is proposed, which can prefetch files with high access probability. The key is to determine and obtain the implicit high-value files effectively, which has a significant impact on the performance of CRP. To accelerate the acquisition of implicit high-value files, an access rule management method based on consistent hashing is proposed, and storage and query mechanisms for access rules based on an adjacency-list storage structure are further presented. Theoretical analysis and simulation results corroborate that, compared to other schemes, CRP shortens average response time by over 4.8%, improves average hit ratio by over 4.2%, reduces the amount of transmitted data by over 8.3%, and maintains replication frequency at a reasonable level.
Keywords: edge cloud; access latency; replica prefetching; correlation-aware; access rule
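The access-rule manager in CRP is built on consistent hashing; the sketch below shows a minimal consistent-hash ring (with virtual nodes) of the kind such a manager could use to give each rule a deterministic owner node. The virtual-node count and hash function are assumptions, not details from the paper.

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = []                     # sorted (hash, node) points
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._h(f"{node}#{i}"), node))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s: str) -> int:
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def node_for(self, key: str) -> str:
        """Edge node responsible for storing/querying this access rule."""
        i = bisect.bisect(self.keys, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["edge-1", "edge-2", "edge-3"])
# Rules for a given file always hash to the same owner node:
print(ring.node_for("rule:/videos/a.mp4"))
```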
9. Predictive Prefetching for Parallel Hybrid Storage Systems
Author: Maen M. Al Assaf. International Journal of Communications, Network and System Sciences, 2015, Issue 5: 161-180 (20 pages).
In this paper, we present a predictive prefetching mechanism based on the probability graph approach that performs prefetching between different levels in a parallel hybrid storage system. The fundamental concept of our approach is to exploit the parallelism of the hybrid storage system and prefetch data among multiple storage levels (e.g., solid state disks and hard disk drives) in parallel with the application's on-demand I/O read requests. In this study, we show that predictive prefetching across multiple storage levels is an efficient technique for placing data blocks needed in the near future in the uppermost levels, near the application. Our PPHSS approach extends previous ideas on predictive prefetching in two ways: (1) it reduces applications' elapsed execution time by keeping data blocks that are predicted to be accessed in the near future cached in the uppermost level; (2) it proposes a parallel data fetching scheme in which multiple fetching mechanisms (i.e., predictive prefetching and the application's on-demand data requests) work in parallel, where the first fetches data blocks among the different levels of the hybrid storage system (i.e., from low-level (slow) to high-level (fast) storage devices) and the other fetches data from the storage system to the application. Our PPHSS strategy, integrated with the predictive prefetching mechanism, significantly reduces overall I/O access time in a hybrid storage system. Finally, we developed a simulator to evaluate the performance of the proposed predictive prefetching scheme in the context of hybrid storage systems. Our results show that PPHSS can improve system performance by 4% across real-world I/O traces without the need for large caches.
Keywords: predictive prefetching; probability graph; parallel storage systems; hybrid storage system
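The core of PPHSS is that tier-to-tier prefetching proceeds in parallel with the application's on-demand reads. A toy sketch under stated assumptions, with dicts standing in for the HDD and SSD tiers and a background worker doing the promotions:

```python
import threading
import queue
import time

hdd = {f"blk{i}": f"data{i}" for i in range(100)}   # slow tier
ssd = {}                                            # fast tier
promote_q: "queue.Queue" = queue.Queue()

def prefetcher():
    # Runs concurrently with demand reads, as in the paper's parallel scheme.
    while True:
        blk = promote_q.get()
        if blk is None:
            break
        time.sleep(0.01)                 # model slow-tier latency
        ssd[blk] = hdd[blk]              # promotion: HDD -> SSD

def read(blk: str) -> str:
    if blk in ssd:                       # fast-path hit near the application
        return ssd[blk]
    time.sleep(0.01)                     # demand miss pays HDD latency
    return hdd[blk]

t = threading.Thread(target=prefetcher, daemon=True)
t.start()
promote_q.put("blk7")                    # predicted to be needed soon
data = read("blk3")                      # demand read proceeds in parallel
time.sleep(0.05)
print("blk7 promoted:", "blk7" in ssd)   # -> True
promote_q.put(None)
t.join()
```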
10. Method for improving MapReduce performance by prefetching before scheduling
Authors: 张霄宏, Feng Shengzhong, Fan Jianping, Huang Zhexue. High Technology Letters (EI, CAS), 2012, Issue 4: 343-349 (7 pages).
In this paper, a prefetching technique is proposed to solve the performance problem caused by remote data access delay. In this technique, the map tasks that will cause the delay are predicted first, and the input data of these tasks is preloaded before the tasks are scheduled. During execution, the input data can then be read from local nodes, so the delay is hidden. The technique has been implemented in Hadoop 0.20.1. The experimental results show that the technique reduces the number of map tasks causing delay and improves the performance of Hadoop MapReduce by 20%.
Keywords: cloud computing; distributed computing; prefetching; MapReduce; scheduling
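A schematic sketch of the scheduling-time check this paper describes: before a map task lands on a node, test whether its input split is node-local and, if not, start pulling the split so the read is local by launch time. The data structures are illustrative, not Hadoop's internals.

```python
# Which nodes hold a replica of each input split (HDFS-style metadata).
block_locations = {"split-1": {"nodeA"}, "split-2": {"nodeB"},
                   "split-3": {"nodeA", "nodeC"}}
local_cache = {node: set() for node in ("nodeA", "nodeB", "nodeC")}

def prefetch_for_schedule(task_split: str, target_node: str) -> bool:
    """Return True if a preload was triggered (the input was remote)."""
    if target_node in block_locations[task_split]:
        return False                       # already node-local, nothing to do
    # In Hadoop this would be an HDFS read started ahead of task launch.
    local_cache[target_node].add(task_split)
    return True

# "split-2" lives on nodeB, but the scheduler predicts the task will land
# on nodeA, so the delay-causing remote read is hidden by preloading:
print(prefetch_for_schedule("split-2", "nodeA"))   # -> True
print(local_cache["nodeA"])                        # -> {'split-2'}
```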
11. Web Acceleration by Prefetching in Extremely Large Latency Network
Authors: Fumiaki Nagase, Takefumi Hiraguri, Kentaro Nishimori, Hideo Makino. American Journal of Operations Research, 2012, Issue 3: 339-347 (9 pages).
A scheme for high-speed data transfer via the Internet for Web services in an extremely-large-delay environment is proposed. With the widespread use of Internet services in recent years, WLAN Internet service on high-speed trains has commenced. The system for this is composed of a satellite communication link between the train and the ground station, which is characterized by extremely large latency of several hundred milliseconds due to long propagation delay. High-speed web access is not available to users on a train in such an extremely-large-latency network. Thus, a prefetch scheme for accelerating Web services in this environment is proposed. A test-bed system that implements the proposed scheme is built, and its performance is evaluated. The proposed scheme is verified to enable high-speed Web access in the extremely-large-delay environment compared to conventional schemes.
Keywords: extremely-large-latency network; satellite communication; HTTP; web prefetching; prefetching proxy server; information storage server
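Half of such a prefetching proxy is simply discovering the embedded resources of a fetched page before the client requests them over the long-latency hop. A standard-library-only sketch of that link-extraction step (the fetching and caching sides are omitted, and the tag set covered is an illustrative subset):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect URLs of resources the proxy would prefetch for a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("img", "script") and "src" in a:
            self.links.append(a["src"])
        elif tag == "link" and "href" in a:
            self.links.append(a["href"])

page = '<html><img src="/logo.png"><script src="/app.js"></script></html>'
c = LinkCollector()
c.feed(page)
print(c.links)   # -> ['/logo.png', '/app.js']
```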
12. Adaptive Cache Allocation with Prefetching Policy over End-to-End Data Processing
Authors: Hang Qin, Li Zhu. Journal of Signal and Information Processing, 2017, Issue 3: 152-160 (9 pages).
Given the speed gap between storage system access and processor computing, end-to-end data processing has become a bottleneck for the overall performance of computer systems on the Internet. Based on an analysis of data processing behavior, an adaptive cache organization scheme with fast address calculation is proposed. The scheme makes full use of the characteristics of stack-space data access, adopts a fast address calculation strategy, and reduces the hit time of stack accesses. The stack cache can also be turned off adaptively when a stack overflow occurs, avoiding the effect of stack switching on processor performance. In addition, a prefetching policy is developed from the instruction cache and the miss behavior of the data cache, combined with data captured from the miss-queue state. The proposed method preserves the order of instruction and data accesses, which facilitates prefetch extraction in end-to-end data processing.
Keywords: end-to-end data processing; storage system; cache; prefetching
13. A Prefetch-Adaptive Intelligent Cache Replacement Policy Based on Machine Learning (Cited 2)
Authors: 杨会静, 方娟, 蔡旻, 才智. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, Issue 2: 391-404 (14 pages).
Hardware prefetching and replacement policies are two techniques to improve the performance of the memory subsystem. While prefetching hides memory latency and improves performance, interactions take place with the cache replacement policies, thereby introducing performance variability in the application. To improve the accuracy of reuse of cache blocks in the presence of hardware prefetching, we propose the Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC). PAIC is designed with separate predictors for prefetch and demand requests, and uses machine learning to optimize reuse prediction in the presence of prefetching. By distinguishing reuse predictions for prefetch and demand requests, PAIC can better combine the performance benefits from prefetching and replacement policies. We evaluate PAIC on a set of 27 memory-intensive programs from SPEC 2006 and SPEC 2017. Under a single-core configuration, PAIC improves performance over the Least Recently Used (LRU) replacement policy by 37.22%, compared with improvements of 32.93% for Signature-based Hit Predictor (SHiP), 34.56% for Hawkeye, and 34.43% for Glider. Under the four-core configuration, PAIC improves performance over LRU by 20.99%, versus 13.23% for SHiP, 17.89% for Hawkeye and 15.50% for Glider.
Keywords: hardware prefetching; machine learning; Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC); replacement policy
原文传递
14. An SPN-Based Integrated Model for Web Prefetching and Caching (Cited 15)
Authors: 石磊, 韩英杰, 丁晓光, 卫琳, 古志民. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2006, Issue 4: 482-489 (8 pages).
The World Wide Web has become the primary means of information dissemination. Due to the limited resources of network bandwidth, users always suffer from long waiting times. Web prefetching and web caching are the primary approaches to reducing user-perceived access latency and improving the quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented, and a performance evaluation of IWPCM is made. The performance metrics, namely access latency, throughput, hit ratio (HR) and byte hit ratio (BHR), are analyzed and discussed. Simulations show that, compared with the caching-only model (CM), IWPCM can further improve throughput, HR and BHR efficiently and reduce access latency. The performance evaluation based on the SPN model can provide a basis for the implementation of web prefetching and caching, and the combination of the two holds the promise of improving the QoS of web systems.
Keywords: stochastic Petri nets; web prefetching; web caching; performance evaluation
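The paper's metrics HR and BHR often disagree when object sizes vary, which is why both are reported. A quick sketch of how the two are computed from a request trace (the trace and cache contents are made-up values):

```python
def hr_and_bhr(trace, cached):
    """trace: list of (url, size_bytes); cached: set of URLs already cached."""
    hits = sum(1 for url, _ in trace if url in cached)
    hit_bytes = sum(size for url, size in trace if url in cached)
    total_bytes = sum(size for _, size in trace)
    return hits / len(trace), hit_bytes / total_bytes

trace = [("/index.html", 10_000), ("/big.iso", 2_000_000),
         ("/index.html", 10_000)]
hr, bhr = hr_and_bhr(trace, cached={"/index.html"})
# Two of three requests hit, but almost no bytes do:
print(f"HR = {hr:.2f}, BHR = {bhr:.4f}")   # HR = 0.67, BHR = 0.0099
```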
15. Prefetching J+-Tree: A Cache-Optimized Main Memory Database Index Structure (Cited 3)
Authors: 栾华, 杜小勇, 王珊. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2009, Issue 4: 687-707 (21 pages).
As the speed gap between main memory and modern processors continues to widen, cache behavior becomes more important for main memory database systems (MMDBs). Indexing technique is a key component of MMDBs. Unfortunately, the predominant indexes, B+-trees and T-trees, have been shown to utilize the cache poorly, which has triggered the development of many cache-conscious indexes, such as CSB+-trees and pB+-trees. Most of these cache-conscious indexes are variants of conventional B+-trees and have better cache performance than B+-trees. In this paper, we develop a novel J+-tree index, inspired by the Judy structure, which is an associative array data structure, and propose a more cache-optimized index, the Prefetching J+-tree (pJ+-tree), which applies prefetching to the J+-tree to accelerate range scan operations. The J+-tree stores all keys in its leaf nodes and keeps the reference values of leaf nodes in a Judy structure, which makes the J+-tree not only retain the advantages of Judy (such as fast single-value search) but also outperform it in other aspects. For example, J+-trees can achieve better performance on range queries than Judy. The pJ+-tree index exploits prefetching techniques to further improve the cache behavior of J+-trees and yields a speedup of 2.0 on range scans. Compared with B+-trees, CSB+-trees, pB+-trees and T-trees, our extensive experimental study shows that pJ+-trees provide better performance in terms of both time (search, scan, update) and space.
Keywords: index structure; pJ+-tree; prefetching; cache conscious; main memory database
16. Optimizing the Copy-on-Write Mechanism of Docker by Dynamic Prefetching (Cited 4)
Authors: Yan Jiang, Wei Liu, Xuanhua Shi, Weizhong Qiang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, Issue 3: 266-274 (9 pages).
Docker, as a mainstream container solution, adopts the Copy-on-Write (CoW) mechanism in its storage drivers. This mechanism satisfies the need of different containers to share the same image. However, when a single container performs operations such as modification of an image file, a duplicate is created in the upper read-write layer, which contributes to the runtime overhead. When the accessed image file is fairly large, this additional overhead becomes non-negligible. Here we present the concept of Dynamic Prefetching Strategy Optimization (DPSO), which optimizes the CoW mechanism for a Docker container on the basis of a dynamic prefetching strategy. At the beginning of the container life cycle, DPSO pre-copies up the image files that are most likely to be copied up later, to eliminate the overhead caused by performing this operation during application runtime. The experimental results show that DPSO has an average prefetch accuracy of greater than 78% in complex scenarios and can effectively eliminate the overhead caused by the CoW mechanism.
Keywords: Docker; container; Copy-on-Write (CoW); storage driver; prefetch strategy
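A sketch of the DPSO idea under stated assumptions: given copy-up frequencies gathered from earlier runs of an image (a hypothetical history format, not the paper's), pre-copy the most frequently modified files into the writable layer at container start, so the CoW cost is paid before the application runs.

```python
import shutil
from collections import Counter

def pre_copy_up(history: Counter, lower: str, upper: str, top_n: int = 2):
    """Copy the top-N historically copied-up files from the read-only image
    layer (`lower`) into the writable layer (`upper`) ahead of time."""
    for rel_path, _count in history.most_common(top_n):
        shutil.copy2(f"{lower}/{rel_path}", f"{upper}/{rel_path}")

# Usage with a hypothetical history gathered over prior container runs:
# history = Counter({"etc/app.conf": 14, "var/lib/app.db": 9, "usr/bin/x": 1})
# pre_copy_up(history, lower="/var/lib/overlay/lower",
#             upper="/var/lib/overlay/upper")
```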
17. I/O Acceleration via Multi-Tiered Data Buffering and Prefetching (Cited 2)
Authors: Anthony Kougkas, Hariharan Devarajan, Xian-He Sun. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, Issue 1: 92-120 (29 pages).
Modern High-Performance Computing (HPC) systems are adding extra layers to the memory and storage hierarchy, named the deep memory and storage hierarchy (DMSH), to increase I/O performance. New hardware technologies, such as NVMe and SSD, have been introduced in burst buffer installations to reduce the pressure on external storage and boost the burstiness of modern I/O systems. The DMSH has demonstrated its strength and potential in practice. However, each layer of the DMSH is an independent heterogeneous system, and data movement among more layers is significantly more complex even without considering heterogeneity. How to efficiently utilize the DMSH is a subject of research facing the HPC community. Further, accessing data with high throughput and low latency is more imperative than ever. Data prefetching is a well-known technique for hiding read latency by requesting data before it is needed, moving it from a high-latency medium (e.g., disk) to a low-latency one (e.g., main memory). However, existing solutions do not consider the new deep memory and storage hierarchy, and they also suffer from under-utilization of prefetching resources and unnecessary evictions. Additionally, existing approaches implement a client-pull model in which understanding the application's I/O behavior drives prefetching decisions. Moving towards exascale, where machines run multiple applications concurrently by accessing files in a workflow, a more data-centric approach resolves challenges such as cache pollution and redundancy. In this paper, we present the design and implementation of Hermes: a new, heterogeneous-aware, multi-tiered, dynamic, and distributed I/O buffering system. Hermes enables, manages, supervises, and, in some sense, extends I/O buffering to fully integrate into the DMSH. We introduce three novel data placement policies to efficiently utilize all layers, and we present three novel techniques to perform memory, metadata, and communication management in hierarchical buffering systems. Additionally, we demonstrate the benefits of a truly hierarchical data prefetcher that adopts a server-push approach to data prefetching. Our evaluation shows that, in addition to automatic data movement through the hierarchy, Hermes can significantly accelerate I/O and outperforms state-of-the-art buffering platforms by more than 2x. Lastly, results show 10%-35% performance gains over existing prefetchers and over 50% when compared to systems with no prefetching.
Keywords: I/O buffering; heterogeneous buffering; layered buffering; deep memory hierarchy; burst buffers; hierarchical data prefetching; data-centric architecture
18. Modeling and application of moderate prefetching strategy based on video slicing for P2P VoD systems (Cited 1)
Authors: DENG Guang-qing, WEI Ting, CHEN Chang-jia, ZHU Wei, WANG Bin, WU Deng-rong. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2012, Issue 2: 57-66 (10 pages).
In peer-to-peer (P2P) video-on-demand (VoD) streaming systems, each peer contributes a fixed amount of hard disk storage (usually 2 GB) to store viewed videos and then uploads them to other requesting peers. However, the daily hits (i.e., popularity) of different segments of a video are highly diverse, which means that taking the whole video as the basic storage unit may lead to a redundancy of unpopular segment replicas and a scarcity of popular segment replicas in the P2P storage network. To address this issue, we propose a video slicing mechanism (VSM) in which the whole video is sliced into small blocks (20 MB, for instance). Under VSM, peers can moderately remove unpopular blocks from, and accordingly add popular ones to, their contributed hard disk storage, which increases the usage of peers' contributed resources (storage and bandwidth). To reasonably assign bandwidth among peers with different download capacities, we propose a moderate prefetching strategy (MPS) based on VSM. Under MPS, when the amount of prefetched content reaches a predefined threshold, peers immediately stop prefetching video content and release the occupied bandwidth for others. A stochastic model is established to analyze the performance of MPS, and it is found that perfect playback continuity can be achieved under MPS. MPS is then applied to the PPLive VoD system (one of the largest P2P VoD systems in China), and measurement results demonstrate that low server load and good user satisfaction can be achieved. The server bandwidth contribution of the PPLive VoD system under MPS (namely 5%) is also much lower than that of the UUSee VoD system (namely 30%).
Keywords: bandwidth; P2P; VoD; slicing; prefetch
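The MPS rule itself is small: keep fetching blocks ahead of the playhead only until the prefetched backlog reaches a threshold, then stop and release the bandwidth. A sketch with the paper's 20 MB block size and an assumed (illustrative) threshold:

```python
BLOCK_MB = 20   # block size under VSM, as in the paper's example

def blocks_to_prefetch(playhead_block: int, buffered_through: int,
                       threshold_mb: int = 60) -> list:
    """Return the block indices MPS may still fetch ahead of the playhead."""
    ahead_mb = (buffered_through - playhead_block) * BLOCK_MB
    budget = max(0, threshold_mb - ahead_mb) // BLOCK_MB
    return list(range(buffered_through + 1, buffered_through + 1 + budget))

print(blocks_to_prefetch(playhead_block=10, buffered_through=11))  # [12, 13]
print(blocks_to_prefetch(playhead_block=10, buffered_through=13))  # [] (stop)
```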
19. Dynamic Data Prefetching in Home-Based Software DSMs (Cited 1)
Authors: 胡伟武, 张福新, 刘海明. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2001, Issue 3: 231-241 (11 pages).
A major overhead in software DSM (Distributed Shared Memory) is the cost of remote memory accesses necessitated by the protocol as well as induced by false sharing. This paper introduces a dynamic prefetching method implemented in the JIAJIA software DSM to reduce the system overhead caused by remote accesses. The prefetching method records the interleaving string of INV (invalidation) and GETP (getting a remote page) operations for each cached page and analyzes the periodicity of the string when a page is invalidated on a lock or barrier. A prefetching request is issued after the lock or barrier if the periodicity analysis indicates that GETP will be the next operation in the string. Multiple prefetching requests are merged into the same message if they are to the same host. Performance evaluation with eight well-accepted benchmarks on a cluster of sixteen PowerPC workstations shows that the prefetching scheme can significantly reduce page fault overhead and, as a result, achieves a performance increase of 15%-20% in three benchmarks and around 8%-10% in another three. The average extra traffic caused by useless prefetches is only 7%-13% in the evaluation.
Keywords: software DSM; remote access; prefetching; performance evaluation
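The per-page decision in this DSM scheme can be sketched directly: each cached page keeps a string over {I, G} (INV and GETP events); on an invalidation at a lock or barrier, a periodicity check predicts the next event, and a predicted G triggers a prefetch. A minimal period detector (the exact analysis in the paper may differ):

```python
def predict_next(history: str):
    """If `history` has a period p <= n/2, predict the next event."""
    n = len(history)
    for p in range(1, n // 2 + 1):
        # history is p-periodic iff every symbol matches the one p earlier
        if all(history[i] == history[i + p] for i in range(n - p)):
            return history[n - p]        # symbol one period before "next"
    return None

page_history = "IGIG" + "I"              # an INV just arrived at a barrier
if predict_next(page_history) == "G":    # next op predicted to be GETP
    print("issue prefetch for this page")
```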
20. Prefetch-aware fingerprint cache management for data deduplication systems (Cited 1)
Authors: Mei LI, Hongjun ZHANG, Yanjun WU, Chen ZHAO. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, Issue 3: 500-515 (16 pages).
Data deduplication has been widely utilized in large-scale storage systems, particularly backup systems. Data deduplication systems typically divide data streams into chunks and identify redundant chunks by comparing chunk fingerprints. Maintaining all fingerprints in memory is not cost-effective because fingerprint indexes are typically very large. Many data deduplication systems maintain a fingerprint cache in memory and exploit fingerprint prefetching to accelerate the deduplication process. Although fingerprint prefetching can improve the performance of data deduplication systems by leveraging the locality of workloads, inaccurately prefetched fingerprints may pollute the cache by evicting useful fingerprints. We observed that most of the prefetched fingerprints in a wide variety of applications are never used or used only once, which severely limits the performance of data deduplication systems. We introduce a prefetch-aware fingerprint cache management scheme for data deduplication systems (PreCache) to alleviate prefetch-related cache pollution. We propose three prefetch-aware fingerprint cache replacement policies (PreCache-UNU, PreCache-UOO, and PreCache-MIX) to handle different types of cache pollution. Additionally, we propose an adaptive policy selector to select suitable policies for prefetch requests. We implement PreCache on two representative data deduplication systems (Block Locality Caching and SiLo) and evaluate its performance using three real-world workloads (Kernel, MacOS, and Homes). The experimental results reveal that PreCache improves deduplication throughput by up to 32.22%, owing to a reduction of on-disk fingerprint index lookups and an improvement of the deduplication ratio achieved by mitigating prefetch-related fingerprint cache pollution.
Keywords: data deduplication; fingerprint prefetch; fingerprint cache
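In the spirit of the PreCache-UNU policy described above (details simplified and assumed), a fingerprint cache can prefer evicting prefetched-but-never-used entries before falling back to plain LRU:

```python
from collections import OrderedDict

class FingerprintCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()     # fp -> use count, in LRU order
        self.prefetched = set()          # fps that arrived via prefetch

    def insert(self, fp, via_prefetch=False):
        if len(self.entries) >= self.capacity:
            # First choice of victim: an unused prefetched fingerprint.
            victim = next((f for f in self.entries
                           if f in self.prefetched and self.entries[f] == 0),
                          next(iter(self.entries)))   # else plain LRU victim
            del self.entries[victim]
            self.prefetched.discard(victim)
        self.entries[fp] = 0
        if via_prefetch:
            self.prefetched.add(fp)

    def lookup(self, fp) -> bool:
        if fp in self.entries:
            self.entries[fp] += 1
            self.entries.move_to_end(fp)   # refresh LRU position
            return True
        return False

c = FingerprintCache(capacity=2)
c.insert("fp-a", via_prefetch=True)      # prefetched, never used afterwards
c.insert("fp-b"); c.lookup("fp-b")
c.insert("fp-c")                          # evicts fp-a, not the useful fp-b
print("fp-b" in c.entries, "fp-a" in c.entries)   # -> True False
```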