Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories

Journal of Signal Processing Systems (2019)

Abstract
For the past few years, Deep Neural Networks (DNNs) have achieved state-of-the-art performance in numerous challenging domains. To reach this performance, DNNs rely on large sets of parameters and complex architectures, which are trained offline on huge datasets. The complexity and size of DNN architectures make such approaches difficult to implement in budget-restricted applications such as embedded systems. Furthermore, DNNs cannot incrementally learn new data without forgetting previously acquired knowledge, which makes embedded applications even more challenging due to the need to store the whole dataset. To tackle this problem, we introduce an incremental learning method that combines pre-trained DNNs, binary associative memories, and product quantization (PQ) as a bridge between them. The resulting method has lower computational and memory requirements, and reaches good performance on challenging vision datasets. Moreover, we present a hardware implementation validated on an FPGA target, which uses few hardware resources while providing substantial processing acceleration compared to a CPU counterpart.
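The abstract only names the building blocks, so the following is a minimal sketch of how such a pipeline could fit together, not the authors' implementation: the class name PQBinaryAssociativeMemory, the random codebooks (standing in for trained PQ codebooks), and the majority-vote readout are all illustrative assumptions. A pre-trained CNN would supply the feature vectors fed to learn() and classify().

```python
import numpy as np

class PQBinaryAssociativeMemory:
    """Toy incremental classifier: product-quantize feature vectors,
    then associate each (subvector, codeword) pair with class labels
    in binary memories. Classification is a vote across sub-quantizers."""

    def __init__(self, feature_dim, n_subvectors, n_codewords, n_classes, seed=0):
        assert feature_dim % n_subvectors == 0
        self.P, self.K, self.C = n_subvectors, n_codewords, n_classes
        self.d = feature_dim // n_subvectors
        rng = np.random.default_rng(seed)
        # One codebook of K anchor points per subvector. Random anchors are
        # a placeholder; a real system would train them (e.g. with k-means).
        self.codebooks = rng.normal(size=(self.P, self.K, self.d))
        # One K x C binary association matrix per subvector.
        self.memory = np.zeros((self.P, self.K, self.C), dtype=bool)

    def _quantize(self, x):
        # Split x into P subvectors; return each one's nearest codeword index.
        subs = x.reshape(self.P, self.d)
        dists = np.linalg.norm(self.codebooks - subs[:, None, :], axis=2)
        return dists.argmin(axis=1)

    def learn(self, x, label):
        # Incremental update: set the bits linking active codewords to label.
        # No retraining, and no access to earlier samples, is needed.
        self.memory[np.arange(self.P), self._quantize(x), label] = True

    def classify(self, x):
        # Each subvector votes for every class its codeword was stored with.
        votes = self.memory[np.arange(self.P), self._quantize(x)].sum(axis=0)
        return int(votes.argmax())

# Usage with random stand-ins for CNN feature vectors (e.g. 128-D).
clf = PQBinaryAssociativeMemory(feature_dim=128, n_subvectors=8,
                                n_codewords=32, n_classes=10)
rng = np.random.default_rng(1)
protos = rng.normal(size=(10, 128))          # one synthetic prototype per class
for label, p in enumerate(protos):
    for _ in range(5):
        clf.learn(p + 0.1 * rng.normal(size=128), label)
print(clf.classify(protos[3] + 0.1 * rng.normal(size=128)))  # likely 3
```

In this sketch, learning a new example reduces to setting a handful of bits, so stored associations are never overwritten and the original training data never needs to be kept, which is the property that makes this family of methods attractive for embedded, budget-restricted targets.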
Keywords
Computer vision, Deep learning, Transfer learning, Incremental learning, Learning on chip