Optimizing deep learning recommender systems training on CPU cluster architectures

SC (2020)

Citations: 41 | Views: 127
Abstract
During the last two years, the goal of many researchers has been to squeeze the last bit of performance out of HPC systems for AI tasks. Often this discussion is held in the context of how fast ResNet50 can be trained. Unfortunately, ResNet50 is no longer a representative workload in 2020. Thus, we focus on recommender systems, which account for most of the AI cycles in cloud computing centers. More specifically, we focus on Facebook's DLRM benchmark. By enabling it to run on the latest CPU hardware and software tailored for HPC, we are able to achieve up to two orders of magnitude improvement in performance on a single socket compared to the reference CPU implementation, and high scaling efficiency up to 64 sockets, while fitting ultra-large datasets which cannot be held in a single node's memory. Therefore, this paper discusses and analyzes novel optimization and parallelization techniques for the various operators in DLRM. Several optimizations (e.g., tensor-contraction accelerated MLPs, framework MPI progression, BFLOAT16 training with up to 1.8x speed-up) are general and transferable to many other deep learning topologies.
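
One of the optimizations named in the abstract, BFLOAT16 (BF16) mixed-precision training, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and not the paper's implementation: the TinyDLRM model, its dimensions, and the use of torch.autocast on CPU (available in recent PyTorch releases) are illustrative assumptions only.

# Hypothetical sketch of BF16 mixed-precision training on CPU with a
# DLRM-like model. Not the paper's code; names and sizes are made up.
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy DLRM-like model: embedding lookup for sparse features,
    small MLPs for dense features and the final prediction."""
    def __init__(self, num_embeddings=1000, emb_dim=16, dense_in=13):
        super().__init__()
        self.emb = nn.EmbeddingBag(num_embeddings, emb_dim, mode="sum")           # sparse path (memory-bound)
        self.bottom_mlp = nn.Sequential(nn.Linear(dense_in, emb_dim), nn.ReLU())  # dense path (compute-bound)
        self.top_mlp = nn.Linear(2 * emb_dim, 1)                                  # simplified interaction + prediction

    def forward(self, dense, sparse_idx):
        d = self.bottom_mlp(dense)
        s = self.emb(sparse_idx)
        return self.top_mlp(torch.cat([d, s], dim=1))

model = TinyDLRM()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

dense = torch.randn(32, 13)                   # batch of dense features
sparse_idx = torch.randint(0, 1000, (32, 4))  # batch of sparse feature indices
labels = torch.randint(0, 2, (32, 1)).float()

opt.zero_grad()
# The compute-heavy MLP GEMMs run in bfloat16 under autocast while the
# optimizer keeps FP32 master weights. BF16 shares FP32's exponent range,
# so no loss scaling is required (unlike FP16).
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(dense, sparse_idx)
loss = loss_fn(logits.float(), labels)        # loss in FP32 for numerical safety
loss.backward()
opt.step()

The sketch only shows why BF16 is attractive on CPUs (FP32-like range with half the memory traffic); the paper's reported 1.8x speed-up additionally relies on hardware and library support for BF16 arithmetic.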
Keywords
magnitude improvement,single socket,ultra-large datasets,single node,parallelization techniques,BFLOAT16 training,Recommender Systems training,CPU cluster architectures,HPC system,AI tasks,ResNet50,representative workload,AI cycles,cloud computing centers,scaling efficiency,novel optimization analysis,deep learning topology,Facebook DLRM benchmark