Abstract
Replication is one of the key technologies for massive data management, and maintaining data consistency among replicas is an important guarantee of fault tolerance and performance in distributed systems. Strong consistency ensures that concurrent updates do not conflict, but it limits system availability, throughput, and the practical degree of replication; weak consistency guarantees only eventual agreement among replicas, yet it provides strong fault tolerance. Starting from existing consistency-maintenance techniques and taking the characteristics of massive data into account, this paper analyzes several aspects of consistency maintenance, namely update issuance, update propagation manner, update propagation content, and update conflict resolution, and proposes feasible methods for each.
Source
《计算机科学》
CSCD
Peking University Core Journal (北大核心)
2006, Issue 4, pp. 137-140 and 161 (5 pages)
Computer Science
Funding
National Key Basic Research and Development Program of China ("973" Program) (2002CB312105)
Foundation for the Author of National Excellent Doctoral Dissertation of China (200141)
National Natural Science Foundation of China (69903011)
Keywords
Massive data
Replication
Data consistency