MT-DLA: An Efficient Multi-Task Deep Learning Accelerator Design

GLSVLSI (2021)

Abstract
Multi-task learning systems are commonly adopted in many real-world AI applications such as intelligent robots and self-driving vehicles. Instead of improving single-network performance, this work proposes a specialized Multi-Task Deep Learning Accelerator architecture, MT-DLA, which improves the performance of concurrently running networks by exploiting the features and parameters shared across these models. Our evaluation with realistic multi-task workloads shows that MT-DLA dramatically eliminates the memory and computation overhead caused by shared parameters, activations, and computation results. In experiments with real-world multi-task learning workloads, MT-DLA delivers an approximately 1.4x-7.0x energy-efficiency boost compared to a baseline neural network accelerator without multi-task support.
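The core idea the abstract describes, reusing parameters and intermediate results shared across concurrent networks instead of recomputing them per task, can be illustrated with a minimal software sketch. This is not the MT-DLA hardware design; all names below (`run_multitask`, `shared_layers`, `task_heads`) are hypothetical, and the sketch only shows the principle of computing a shared backbone once and letting every task head reuse the cached result.

```python
def run_multitask(shared_layers, task_heads, x):
    """Run several task-specific heads over one shared backbone.

    shared_layers: list of functions applied in sequence (shared weights)
    task_heads:    dict mapping task name -> function over shared features

    Instead of each task re-running the shared layers (and re-fetching
    their parameters), the shared features are computed a single time.
    """
    features = x
    for layer in shared_layers:
        features = layer(features)
    # Every task head reuses the cached shared result.
    return {task: head(features) for task, head in task_heads.items()}


# Toy usage: two hypothetical "tasks" sharing one backbone over a scalar.
shared = [lambda v: v * 2, lambda v: v + 1]      # shared backbone: 3 -> 7
heads = {"detect": lambda f: f * 10,             # task-specific heads
         "segment": lambda f: f - 3}
print(run_multitask(shared, heads, 3))  # {'detect': 70, 'segment': 4}
```

In an accelerator, the analogous saving applies to both memory traffic (shared weights are loaded once) and compute (shared activations are produced once), which is the overhead the paper reports eliminating.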