Journal Article

MPICH2-CMEX: Implementation Technology of a Scalable Message Passing Interface

(MPICH2-CMEX:可扩展消息传递接口实现技术)
Abstract  In large-scale parallel computing systems, a high-performance, scalable MPI implementation is crucial for parallel applications to exploit the massive parallelism of these systems effectively. CMEX is a connectionless user-level communication software interface that provides high-performance packet transmission and RDMA operations. MPICH2-CMEX is an MPI implementation built on CMEX; it implements several high-speed data transmission channels that exploit the properties of RDMA read and RDMA write operations. Near-neighbor communication, an important feature of parallel applications, is also exploited to implement a hybrid-channel data transmission method. Experimental results demonstrate the good performance and scalability of the MPICH2-CMEX system.
Source  Computer Engineering and Applications (《计算机工程与应用》, CSCD, Peking University Core), 2008, No. 2, pp. 123-125, 201 (4 pages).
Funding  National High-Tech Research and Development Program of China (863 Program), Grant No. 2006AA01A106.
Keywords  MPI; RDMA; near-neighbor communication