Accelerating Stochastic Gradient Descent Based Matrix Factorization on FPGA

IEEE Transactions on Parallel and Distributed Systems (2020)

Citations: 10
Abstract
Matrix Factorization (MF) based on Stochastic Gradient Descent (SGD) is a powerful machine learning technique to derive hidden features of objects from observations. In this article, we design a highly parallel architecture based on Field-Programmable Gate Array (FPGA) to accelerate the training process of the SGD-based MF algorithm. We identify the challenges for the acceleration and propose nove...
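The abstract describes SGD-based matrix factorization, which learns low-rank user/item feature matrices from observed entries. As a minimal illustrative sketch (not the paper's FPGA implementation; all names and hyperparameters here are assumptions for demonstration), the CPU-side algorithm being accelerated looks roughly like this:

```python
import numpy as np

def sgd_mf(R, rank=2, lr=0.02, reg=0.02, epochs=500, seed=0):
    """Factorize the observed entries of R (NaN = unobserved) as P @ Q.T
    using plain stochastic gradient descent. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, rank))   # user feature matrix
    Q = 0.1 * rng.standard_normal((n_items, rank))   # item feature matrix
    obs = [(u, i, R[u, i]) for u in range(n_users)
           for i in range(n_items) if not np.isnan(R[u, i])]
    for _ in range(epochs):
        rng.shuffle(obs)                 # visit observations in random order
        for u, i, r in obs:
            pu = P[u].copy()             # snapshot so both updates use old values
            e = r - pu @ Q[i]            # prediction error on this rating
            P[u] += lr * (e * Q[i] - reg * pu)
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q

# Toy rating matrix with missing entries
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [1.0, 1.0, 5.0]])
P, Q = sgd_mf(R)
```

Each SGD step touches one observed rating and updates only the corresponding user and item feature rows, which is exactly the fine-grained dependency pattern (and update conflict hazard) that an FPGA accelerator must partition around.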
Keywords
Field programmable gate arrays,Acceleration,Training,System-on-chip,Optimization,Partitioning algorithms,Bipartite graph