EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

arXiv (2023)

Abstract
With the advent of deep learning applications on edge devices, researchers are actively working to optimize the deployment of deep learning models on low-power, memory-constrained devices. Established compression methods such as quantization, pruning, and architecture search leverage commodity hardware. Beyond these conventional compression algorithms, one may redesign the operations of deep learning models themselves, leading to more efficient hardware implementations. To this end, we propose EuclidNet, an efficient computing method designed for hardware implementation that replaces multiplication with squared difference. We show that EuclidNet is aligned with matrix multiplication and can be used as a measure of similarity in convolutional layers. Furthermore, we show that under various transformations and noise scenarios, EuclidNet performs on par with deep learning models built on multiplication operations.
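To make the replacement concrete, the sketch below (not from the paper; function names and shapes are illustrative) computes a single filter response both with the usual multiply-accumulate and with a negated squared difference, as the abstract describes. The algebraic identity -(x - w)^2 = 2xw - x^2 - w^2 is why the squared-difference response behaves like a dot-product similarity up to norm terms.

```python
import numpy as np

def conv_patch_standard(x_patch, w):
    # Standard convolution response: elementwise multiply-and-accumulate.
    return np.sum(x_patch * w)

def conv_patch_euclid(x_patch, w):
    # EuclidNet-style response (illustrative): negated squared Euclidean
    # distance between the input patch and the filter, so larger values
    # still indicate greater similarity, with no input-by-weight products.
    return -np.sum((x_patch - w) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_patch = rng.standard_normal((3, 3))  # one 3x3 input patch (assumed shape)
    w = rng.standard_normal((3, 3))        # one 3x3 filter (assumed shape)

    print("dot-product response :", conv_patch_standard(x_patch, w))
    print("squared-diff response:", conv_patch_euclid(x_patch, w))
```

On hardware, the appeal is that the squared difference can be built from subtraction and squaring rather than a general multiplier array; the sketch above only illustrates the arithmetic correspondence, not the paper's implementation.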
Keywords
Efficient inference, Euclidean distance, Convolutional neural network, Hardware-efficient algorithms