GreenLightningAI: An Efficient AI System with Decoupled Structural and Quantitative Knowledge
CoRR (2023)
Abstract
The number and complexity of artificial intelligence (AI) applications are
growing relentlessly. As a result, even with the many algorithmic and
mathematical advances experienced over past decades as well as the impressive
energy efficiency and computational capacity of current hardware accelerators,
training the most powerful and popular deep neural networks comes at very high
economic and environmental costs. Recognising that further optimisation of
conventional neural network training is very difficult, this work takes a
radically different approach by proposing GreenLightningAI, a new AI system
design consisting of a linear model that is capable of emulating the behaviour
of deep neural networks by subsetting the model for each particular sample. The
new AI system stores the information required to select the system subset for a
given sample (referred to as structural information) separately from the linear
model parameters (referred to as quantitative knowledge). In this paper we
present a proof of concept, showing that the structural information stabilises
far earlier than the quantitative knowledge. Additionally, we show
experimentally that the structural information can be kept unmodified when
re-training the AI system with new samples while still achieving a validation
accuracy similar to that obtained when re-training a neural network of
similar size. Since the proposed AI system is based on a linear model, multiple
copies of the model, trained with different datasets, can be easily combined.
This enables faster and greener (re)-training algorithms, including incremental
re-training and federated incremental re-training.
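The decoupling described above can be illustrated with a minimal sketch. The paper does not specify the implementation, so the following assumes the structural information takes the form of a per-sample binary mask (akin to an activation pattern) that selects a subset of the weights of a linear model, while the weights themselves are the quantitative knowledge. The mask rule, dimensions, and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
n_features, n_outputs = 8, 3

# Quantitative knowledge: the parameters of a linear model.
W = rng.normal(size=(n_outputs, n_features))

def structural_mask(x):
    # Structural information: which weights are active for this sample.
    # The sign pattern of the input is used here as a stand-in for the
    # activation pattern a deep network would induce.
    return (x > 0).astype(float)  # shape (n_features,)

def predict(W, x):
    # Subset the linear model for this particular sample, then apply it.
    return (W * structural_mask(x)) @ x

x = rng.normal(size=n_features)
y = predict(W, x)

# Because the model is linear once the mask is fixed, copies trained on
# different datasets can be combined by simple parameter averaging.
W2 = rng.normal(size=(n_outputs, n_features))
W_combined = 0.5 * (W + W2)

# Linearity check: the combined model's output equals the average of the
# individual outputs for the same sample (same structural mask).
assert np.allclose(predict(W_combined, x),
                   0.5 * (predict(W, x) + predict(W2, x)))
```

The final assertion shows why combination is cheap in this setting: with the structural mask held fixed, prediction is linear in the parameters, so merging models reduces to averaging weight matrices rather than re-training.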