Comparative Research on High-Speed Networks of High Performance Computing Cluster Based on MPIGRAPH

IEEE International Conference on Computer and Communications (2020)

Abstract
With the rapid development of high performance computing, many fields of scientific and engineering computing have become inseparable from its support. The internal high-speed interconnect has, to a certain extent, become one of the key factors limiting the performance of a high performance computing cluster. In this paper, two high performance computing clusters with identical processors and memory, one using an InfiniBand interconnect and the other Intel Omni-Path, are taken as the test objects. The benchmark mpiGraph 1.4 is compiled with different MPI implementations, namely Intel MPI 2019 Update 5, MVAPICH2 2.3.4, and Open MPI 3.1.6, to measure the differences between the two networks and between the MPI implementations. The results show that InfiniBand performs better for small-scale parallel runs, while Intel Omni-Path performs better at large scale. Intel MPI performs similarly to Open MPI; both outperform MVAPICH2 on Intel Omni-Path and underperform it on InfiniBand. These findings are important for cluster usage and for application software compilation and optimization.
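To illustrate the kind of point-to-point bandwidth measurement that mpiGraph performs between node pairs, the following minimal MPI ping-pong sketch times repeated message exchanges between two ranks and reports bandwidth. It is not the mpiGraph 1.4 code; the message size, iteration count, and single rank pairing are illustrative assumptions (mpiGraph sweeps all sender/receiver pairs), but the same source can be compiled unchanged against Intel MPI, MVAPICH2, or Open MPI via their respective mpicc wrappers.

/*
 * Minimal ping-pong bandwidth sketch between ranks 0 and 1.
 * Illustrative only: message size, iteration count, and the single
 * rank pair are assumptions, not the mpiGraph measurement loop.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB per message (assumed) */
#define ITERS     100         /* repetitions per pair (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *buf = malloc(MSG_BYTES);
    if (!buf) MPI_Abort(MPI_COMM_WORLD, 1);

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;              /* rank 0 <-> rank 1 */
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - t0;
        if (rank == 0) {
            /* two messages per iteration (ping + pong), bytes -> MB/s */
            double mbps = (2.0 * ITERS * MSG_BYTES) / elapsed / 1e6;
            printf("ping-pong bandwidth: %.1f MB/s\n", mbps);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with each MPI stack's mpicc and launched with two ranks placed on different nodes, such a test exposes the interconnect-level differences the paper measures at larger scale with mpiGraph.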
Keywords
High performance computing, InfiniBand, Intel Omni-Path, MPI, high-speed network