Optimizing Sparse Tensor Times Matrix on GPUs

Journal of Parallel and Distributed Computing (2019)

Abstract
This work optimizes tensor-times-dense matrix multiply (TTM) for general sparse and semi-sparse tensors on CPU and NVIDIA GPU platforms. TTM is a computational kernel in tensor methods-based data analytics and data mining applications, such as the popular Tucker decomposition. We first design an in-place sequential SpTTM that avoids the explicit data reorganization between a tensor and a matrix required by the conventional approach. We further optimize SpTTM on NVIDIA GPU platforms. Five approaches, including fine thread granularity, coalesced memory access, rank blocking, and fast GPU shared memory, are developed for GPU-SpTTM. We also optimize semi-sparse tensor-times-dense matrix multiply (SSpTTM) to take advantage of the inside dense sub-structures. The optimized SpTTM and SSpTTM are applied to Tucker decomposition to improve its overall performance. Our sequential SpTTM is 3-120x faster than the SpTTM from the Tensor Toolbox library. GPU-SpTTM obtains 6-19x speedup over CPU-SpTTM on an NVIDIA K40c and 23-67x speedup on an NVIDIA P100. Our GPU-SpTTM is 3.9x faster than the state-of-the-art GPU implementation. Our SSpTTM implementations outperform SpTTM, which handles the input semi-sparse tensor as a general sparse tensor, by up to 4.5x. Tucker decomposition achieves up to 3.2x speedup after applying the optimized TTMs. The code will be publicly released in the ParTI! library: https://github.com/hpcgarage/ParTI. (C) 2018 Elsevier Inc. All rights reserved.
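For reference, the mode-n TTM that SpTTM computes contracts the sparse tensor's mode-n index with the rows of a dense factor matrix: for a third-order tensor X and a K-by-R matrix U, Y(i, j, r) = sum over k of X(i, j, k) * U(k, r). The sketch below is a minimal sequential version for a COO-format third-order tensor multiplied along its last mode; the names (coo3_t, spttm_mode3) are illustrative and are not the ParTI! API, and the output is stored as a fully dense array for simplicity rather than the semi-sparse layout the paper uses.

```c
#include <stddef.h>

/* Illustrative COO storage for a third-order sparse tensor X of size I x J x K.
   This mirrors the general idea behind SpTTM but is NOT the ParTI! data structure. */
typedef struct {
    size_t nnz;          /* number of nonzeros */
    size_t *i, *j, *k;   /* coordinate arrays, each of length nnz */
    double *val;         /* nonzero values */
} coo3_t;

/* Mode-3 TTM: Y(i, j, r) = sum_k X(i, j, k) * U(k, r), with U of size K x R (row-major).
   Y is stored densely (I x J x R, row-major) for simplicity; the paper instead keeps
   Y semi-sparse, dense only along the product mode. Y must be zero-initialized. */
static void spttm_mode3(const coo3_t *X, const double *U,
                        double *Y, size_t J, size_t K, size_t R)
{
    (void)K; /* K only documents U's leading dimension; it is not needed in the loop */
    for (size_t n = 0; n < X->nnz; ++n) {
        size_t i = X->i[n], j = X->j[n], k = X->k[n];
        double v = X->val[n];
        double *yrow = Y + (i * J + j) * R;   /* dense output fiber of length R */
        const double *urow = U + k * R;       /* row k of the factor matrix U */
        for (size_t r = 0; r < R; ++r)
            yrow[r] += v * urow[r];           /* accumulate the contraction over mode 3 */
    }
}
```

Read against the GPU approaches listed in the abstract, one plausible mapping is that the fine-grained variants assign each nonzero (or each nonzero-rank pair) to a thread, order the inner R-loop so consecutive threads touch consecutive elements of U and Y for coalesced accesses, block over the rank R, and stage reused U rows in shared memory; this is an interpretation of the abstract, not the paper's exact kernel.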
Keywords
Sparse tensors,Irregular algorithms,Tensor decomposition,GPU