MVAPICH-PRISM: A proxy-based communication framework using InfiniBand and SCIF for Intel MIC clusters

High Performance Computing, Networking, Storage and Analysis (2013)

Abstract
Xeon Phi, based on the Intel Many Integrated Core (MIC) architecture, packs up to 1 TFLOP/s of performance on a single chip while providing x86_64 compatibility. InfiniBand, meanwhile, is one of the most popular interconnects for supercomputing systems. The software stack on Xeon Phi allows processes to directly access an InfiniBand HCA on the node and thus provides a low-latency path for internode communication. However, limitations in state-of-the-art chipsets such as Sandy Bridge restrict the bandwidth available for these transfers. In this paper, we propose MVAPICH-PRISM, a novel proxy-based framework to optimize communication performance on such systems. We present several designs and evaluate them using micro-benchmarks and application kernels. Our designs improve internode latency between Xeon Phi processes by up to 65% and internode bandwidth by up to five times. They improve the performance of the MPI_Alltoall operation by up to 65% with 256 processes, and they improve the performance of a 3D stencil communication kernel and the P3DFFT library by 56% and 22% with 1,024 and 512 processes, respectively.
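
To make the staging idea concrete: a proxy-based design like the one described here places a helper process on the host, which exchanges data with Xeon Phi processes over SCIF (Intel's Symmetric Communications Interface) before forwarding it over the InfiniBand HCA. The sketch below is a minimal, hypothetical host-side SCIF receiver, not the MVAPICH-PRISM code path; the port number, chunk size, and single-connection structure are illustrative assumptions.

```c
/*
 * Minimal SCIF staging sketch (not the MVAPICH-PRISM implementation).
 * Host side: listens on a SCIF port and receives one buffer staged by a
 * Xeon Phi process; a real proxy would then forward it over InfiniBand.
 * Build against the Intel MPSS SCIF library, e.g.: gcc proxy_sketch.c -lscif
 */
#include <scif.h>
#include <stdio.h>
#include <stdlib.h>

#define PROXY_PORT 2050          /* illustrative, non-reserved SCIF port */
#define CHUNK      (1 << 20)     /* 1 MiB staging buffer (assumption)    */

int main(void)
{
    scif_epd_t lep, ep;
    struct scif_portID peer;
    char *buf = malloc(CHUNK);

    /* Open an endpoint, bind it to a well-known port, and listen. */
    lep = scif_open();
    if (lep == SCIF_OPEN_FAILED) { perror("scif_open"); return 1; }
    if (scif_bind(lep, PROXY_PORT) < 0) { perror("scif_bind"); return 1; }
    if (scif_listen(lep, 1) < 0) { perror("scif_listen"); return 1; }

    /* Accept a connection from a process on the coprocessor. */
    if (scif_accept(lep, &peer, &ep, SCIF_ACCEPT_SYNC) < 0) {
        perror("scif_accept");
        return 1;
    }

    /* Receive one staged chunk over PCIe via SCIF. A proxy would now
     * post this buffer to the InfiniBand HCA on behalf of the MIC. */
    if (scif_recv(ep, buf, CHUNK, SCIF_RECV_BLOCK) < 0) {
        perror("scif_recv");
        return 1;
    }
    printf("staged %d bytes from SCIF node %u\n", CHUNK, peer.node);

    scif_close(ep);
    scif_close(lep);
    free(buf);
    return 0;
}
```

A matching sender on the coprocessor would scif_connect() to node 0 (the host) on the same port and scif_send() its buffer; the framework's contribution is hiding this staging hop behind standard MPI semantics while choosing when to proxy versus use the direct HCA path.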
Keywords
application program interfaces,message passing,multiprocessing systems,multiprocessor interconnection networks,parallel programming,1TFLOPs,3D Stencil communication kernel,InfiniBand HCA,Intel MIC clusters,Intel many integrated core architecture,MIC architecture,MPI_Alltoall operation,MVAPICH-PRISM,P3DFFT library,SCIF,Sandy Bridge,Xeon Phi processes,application kernels,chipsets,communication performance optimization,internode communication,low latency path,microbenchmarks,proxy-based communication framework,proxy-based framework,single chip,software stack,supercomputing systems,x86_64 compatibility,Clusters,InfiniBand,MIC,MPI,PCIe,RDMA