SparseACC: A Generalized Linear Model Accelerator for Sparse Datasets

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2024)

Abstract
Stochastic gradient descent (SGD) is widely used for training generalized linear models (GLMs), such as support vector machines and logistic regression, on large industry datasets. Such training consumes substantial computing power, and therefore many accelerators have been proposed to speed up GLM training. However, real-world datasets are typically highly sparse. For example, YouTube's social network connectivity contains only 2.31% nonzero elements (NZs). It is not trivial to design an accelerator that can efficiently train on a sparse dataset stored in a compressed sparse format (e.g., the compressed sparse row (CSR) format). The design of such an accelerator faces three challenges: 1) bank conflicts, which may happen when multiple processing engines in the accelerator access multiple memory banks; 2) complex interconnections, which are necessary to allow all processing engines to access any memory bank; and 3) high synchronization overhead, since each sample in a sparse dataset has a different number of NZs with a different distribution, making it hard to overlap the gradient computation and model update of neighboring batches. To this end, we propose SparseACC, a sparsity-aware accelerator for training GLMs. SparseACC is based on two key mechanisms. First, a software/hardware co-design approach addresses the first two challenges by proposing a novel bank-conflict-free (BCF), bank-balanced CSR format. Second, a weight-aware ping-pong model addresses the third challenge, maximizing the utilization of the processing engines. SparseACC leverages these two mechanisms to orchestrate training over sparse datasets, such that the training time decreases linearly with the sparsity of the dataset. We prototype SparseACC on a Xilinx Alveo U280 FPGA (Xilinx, 2020). The experimental evaluation shows that SparseACC converges up to 3.5x, 18x, 38x, and 110x faster than state-of-the-art counterparts on a sparse accelerator, a Tesla V100 GPU, an Intel i9-10900k CPU, and a dense accelerator, respectively.
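The abstract refers to the standard CSR format as the baseline compressed representation; the sketch below only illustrates that baseline (values, column indices, and row pointers) and the per-sample variation in NZ counts that drives the load-imbalance and synchronization issues described above. The helper name to_csr is hypothetical, and the bank-conflict-free, bank-balanced variant proposed by SparseACC is not specified in the abstract, so it is not reproduced here.

```python
# Minimal sketch of the standard CSR format (not the BCF variant from the paper).
import numpy as np

def to_csr(dense):
    """Convert a dense 2-D array to (values, col_indices, row_pointers)."""
    values, col_indices, row_ptrs = [], [], [0]
    for row in dense:
        for col, v in enumerate(row):
            if v != 0:                      # keep only nonzero elements (NZs)
                values.append(v)
                col_indices.append(col)
        row_ptrs.append(len(values))        # prefix sum of NZs per row
    return np.array(values), np.array(col_indices), np.array(row_ptrs)

# Each training sample (row) may have a different number of NZs, which is the
# source of the synchronization overhead the abstract describes.
vals, cols, ptrs = to_csr(np.array([[0, 3, 0, 0],
                                    [1, 0, 0, 2],
                                    [0, 0, 0, 5]]))
print(vals, cols, ptrs)   # [3 1 2 5] [1 0 3 3] [0 1 3 4]
```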
Keywords
Accelerator, linear model, sparse, training