A Preliminary Analysis of the InfiniPath and XD1 Network Interfaces

IPDPS (2006)

Abstract
Two recently delivered systems have begun a new trend in cluster interconnects. Both the InfiniPath network from PathScale, Inc., and the RapidArray fabric in the XD1 system from Cray, Inc., leverage commodity network fabrics while customizing the network interface in an attempt to add value specifically for the high performance computing (HPC) cluster market. Both network interfaces are compatible with standard InfiniBand (IB) switches, but neither uses the traditional programming interfaces to support MPI. Another fundamental difference between these networks and other modern network adapters is that much of the processing needed for the network protocol stack is performed on the host processor(s) rather than by the network interface itself. This approach stands in stark contrast to the current direction of most high-performance networking activities, which is to offload as much protocol processing as possible to the network interface. In this paper, we provide an initial performance comparison of the two partially custom networks (PathScale's InfiniPath and Cray's XD1) with a more commodity network (standard IB) and a more custom network (Quadrics Elan4). Our evaluation includes several micro-benchmark results as well as some initial application performance data.
Keywords
message passing, network interfaces, protocols, workstation clusters, InfiniPath network interface, Quadrics Elan4, XD1 network interface, cluster interconnection, commodity network fabrics, high performance computing cluster, host processor, InfiniBand switches, message passing interface, modern network adapters, network protocol stack, programming interface, protocol processing, RapidArray fabric