FPGA Acceleration of Approximate KNN Indexing on High-Dimensional Vectors

2019 14th International Symposium on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC) (2019)

Abstract
Accurate and efficient machine learning algorithms are of vital importance to many problems, especially classification and clustering tasks. One of the most important algorithms for similarity search is the K-Nearest Neighbor algorithm (KNN), which is widely adopted for predictive analysis, text categorization, image recognition, and related tasks, but comes at the cost of high computation. Large companies that process big data in modern data centers combine this technique with algorithm-level approximations in order to serve critical workloads every second. However, high-dimensional nearest neighbor queries introduce further significant computation and energy overhead. In this paper, we deploy a hardware-accelerated approximate KNN algorithm built upon the FAISS framework (Facebook Artificial Intelligence Similarity Search) using FPGA-OpenCL platforms. The FPGA architecture in this framework addresses the problem of vector indexing, i.e., training the index and adding large-scale high-dimensional data to it. The proposed solution uses an in-memory FPGA design that outperforms other high-performance systems in terms of speed and energy efficiency. The experiments were done on a Xilinx Alveo U200 FPGA, achieving up to 115× accelerator-only speed-up over a single-core CPU and 2.4× end-to-end system speed-up over a 36-thread Xeon CPU. In addition, the performance/watt of the design was 4.1× better than the same CPU and 1.4× better than a Kepler-class GPU.
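The indexing workload the accelerator targets, training an index and adding large batches of high-dimensional vectors, corresponds to the standard train/add pipeline in FAISS. As a rough illustration only, the following minimal Python sketch builds an IVF-Flat index with the FAISS library; the index type, dimensionality, and parameters here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import faiss  # Facebook AI Similarity Search library

d = 128          # vector dimensionality (illustrative)
nlist = 1024     # number of IVF clusters (illustrative)
nb, nq, k = 100_000, 1_000, 10

# Random database and query vectors stand in for real high-dimensional data.
xb = np.random.random((nb, d)).astype('float32')
xq = np.random.random((nq, d)).astype('float32')

# IVF index: a coarse quantizer partitions the space and each database
# vector is assigned to an inverted list.
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, nlist)

index.train(xb)   # k-means training of the coarse quantizer
index.add(xb)     # add database vectors to the inverted lists

index.nprobe = 8  # lists probed per query: speed/accuracy trade-off
D, I = index.search(xq, k)   # approximate k-nearest-neighbor search
print(I[:3])
```

The train and add steps above are the large-scale indexing stages described in the abstract; the search step is included only to show how the resulting index is queried.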
Keywords
approximate KNN, nearest neighbor index, machine learning, FPGA, hardware accelerator