Automatic Mapping of the Sum-Product Network Inference Problem to FPGA-Based Accelerators
2018 IEEE 36th International Conference on Computer Design (ICCD), 2018
Abstract
In recent years, FPGAs have been successfully employed to implement efficient, application-specific accelerators for a wide range of machine learning tasks. In this work, we consider probabilistic models, namely (Mixed) Sum-Product Networks (SPN), a deep architecture that provides tractable inference for multivariate distributions over mixed data sources. We develop a fully pipelined FPGA accelerator architecture, including a pipelined interface to external memory, for inference in (mixed) SPNs. To meet the precision constraints of SPNs, all computations are conducted in double-precision floating-point arithmetic. Starting from an input description, the custom FPGA accelerator is synthesized fully automatically by our tool flow. To the best of our knowledge, this work is the first approach to offload the SPN inference problem to FPGA-based accelerators. Our evaluation shows that the SPN inference problem benefits from offloading to our pipelined FPGA accelerator architecture.
Keywords
FPGA, SPN, Machine Learning, Graphical Models, Deep Models
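The abstract describes inference in a Sum-Product Network as a single bottom-up evaluation over a DAG of sum and product nodes, which is what makes the computation pipeline-friendly. The following is a minimal Python sketch of that evaluation, assuming indicator leaves over binary variables; the structure and weights are purely illustrative and are not the paper's accelerator or tool flow.

```python
import math

# Sketch of bottom-up SPN inference: sum nodes compute weighted
# mixtures of their children, product nodes multiply children with
# disjoint variable scopes, and leaves score the queried assignment.
# Evaluating the joint probability is one pass from leaves to root.

def leaf(var, value):
    """Indicator leaf: contributes 1.0 iff the assignment sets var = value."""
    return lambda assignment: 1.0 if assignment[var] == value else 0.0

def product(*children):
    """Product node: children are assumed to cover disjoint scopes."""
    return lambda assignment: math.prod(c(assignment) for c in children)

def weighted_sum(weights, children):
    """Sum node: weights are nonnegative and sum to 1."""
    return lambda assignment: sum(
        w * c(assignment) for w, c in zip(weights, children)
    )

# Toy SPN over two binary variables X0, X1 (hypothetical structure):
spn = weighted_sum(
    [0.6, 0.4],
    [product(leaf(0, 1), leaf(1, 1)),
     product(leaf(0, 0), leaf(1, 0))],
)

p = spn({0: 1, 1: 1})  # joint probability P(X0=1, X1=1)
```

Because every node is a fixed arithmetic operation fed only by its children, the whole graph maps naturally onto a feed-forward hardware pipeline; in the paper this datapath uses double-precision floating point to satisfy the SPNs' precision constraints.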