Abstract: This paper presents a novel hierarchy cache architecture for optimizing I/O performance. The main idea of the hierarchy cache is to use a few megabytes of RAM and a pagefile to form a two-level cache architecture, in which the pagefile plays the role of the cache disk in DCD (Disk Caching Disk). The pagefile outperforms the data disks because data are accessed in different units and in different ways: small writes are first collected in the RAM cache and later transferred to the pagefile in large writes; when the system is idle, the data are destaged from the pagefile to the data disks. Performance test results show that the hierarchy cache improves I/O performance dramatically for small writes, and a mail server using the hierarchy cache driver handles transactions about 2.2 times faster than a normal mail server. The hierarchy cache is implemented as a filter driver, so it is transparent to current Windows 2000/Windows XP operating systems.
Key words: hierarchy cache; pagefile; small write; disk caching disk; filter driver
CLC number: TP 311
Foundation item: Supported by the National Natural Science Foundation of China (60273073)
Biography: XIE Chang-sheng (1945-), male, Professor; research directions: storage systems, network storage.
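The abstract gives only the outline of the write path, not the driver internals. The following is a minimal Python sketch of the two-level mechanism it describes: small writes absorbed in RAM, batched into large writes to the pagefile, and destaged to the data disk at idle time. All names (HierarchyCache, append_segment, pop_segments, write_at) are hypothetical stand-ins, not the paper's actual interfaces; the real system is a Windows kernel filter driver.

    class HierarchyCache:
        """Illustrative model of the two-level cache (assumed interfaces)."""

        def __init__(self, ram_limit_bytes, pagefile, data_disk):
            self.ram_limit = ram_limit_bytes
            self.ram_cache = {}        # lba -> data: collects small writes
            self.ram_used = 0
            self.pagefile = pagefile   # level 2: log-style cache disk
            self.data_disk = data_disk # final destination

        def write(self, lba, data):
            # Level 1: absorb the small write in RAM and return immediately.
            self.ram_used += len(data) - len(self.ram_cache.get(lba, b""))
            self.ram_cache[lba] = data
            if self.ram_used >= self.ram_limit:
                self._flush_to_pagefile()

        def _flush_to_pagefile(self):
            # Level 2: batch the accumulated small writes into one large,
            # sequential write to the pagefile (one seek instead of many).
            segment = [(lba, self.ram_cache[lba]) for lba in sorted(self.ram_cache)]
            self.pagefile.append_segment(segment)
            self.ram_cache.clear()
            self.ram_used = 0

        def on_idle(self):
            # Destage: while the system is idle, copy data from the
            # pagefile back to its home locations on the data disk.
            for lba, data in self.pagefile.pop_segments():
                self.data_disk.write_at(lba, data)

The point of the batching step is that many scattered small writes, each of which would cost a seek on the data disk, collapse into one large sequential write to the pagefile, which is exactly the trade the DCD scheme makes.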
Foundation item: Supported by the National Key Basic Research and Development Program of China (No. G1999032702), the National High-Tech Research and Development Program of China (No. 2001AA111010), and the National Natural Science Foundation of China (No. 60131160743)
Abstract: At present, I/O is the performance bottleneck limiting the speed of computer systems. A large fraction of I/O operations are synchronous reads and writes of small data blocks, yet reducing the latency of synchronous I/O is a non-trivial problem. In this paper, we propose two methods to address it. The first, FastSync, uses a cache disk optimized for write operations by means of a disk-head position prediction algorithm; in this way, disk capacity is traded for synchronous I/O performance. The second, LND, uses free memory capacity in a network environment as a cache for buffering synchronous I/O operations. Data integrity in FastSync is ensured by a data log on the cache disk, whereas in LND it is ensured by storing multiple copies of each data block in distributed memory. Both methods dramatically increase the performance of synchronous I/O; the performance of LND is limited by the network speed, whereas the performance of FastSync is determined mostly by the data block size.
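The abstract states only LND's integrity rule: a synchronous write is durable once multiple copies of the block sit in the free memory of distinct nodes. Below is a minimal sketch of that rule under stated assumptions; MemoryNode, lnd_write, and the replication count are illustrative inventions, not the paper's API, and the real system would place replicas on remote machines rather than local objects.

    class MemoryNode:
        """Stand-in for a remote machine contributing free RAM (assumed)."""

        def __init__(self):
            self.blocks = {}

        def store(self, block_id, data):
            self.blocks[block_id] = data

    def lnd_write(block_id, data, nodes, copies=3):
        # Acknowledge the synchronous write only after `copies` replicas
        # are placed on distinct nodes, so the loss of any copies-1 nodes
        # still leaves an intact copy of the block.
        if len(nodes) < copies:
            raise RuntimeError("not enough memory nodes for the replication level")
        for node in nodes[:copies]:
            node.store(block_id, data)
        return True  # safe to complete the synchronous I/O now

Under this rule the write latency is a few network round trips rather than a disk seek, which is consistent with the abstract's observation that LND's performance is bounded by network speed.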