Cooperative Learning for Cost-Adaptive Inference
CoRR (2023)
Abstract
We propose a cooperative training framework for deep neural network
architectures that enables the runtime network depths to change to satisfy
dynamic computing resource requirements. In our framework, the number of layers
participating in computation can be chosen dynamically to meet performance-cost
trade-offs at inference runtime. Our method trains two Teammate nets and a
Leader net, and two sets of Teammate sub-networks with various depths through
knowledge distillation. The Teammate nets derive sub-networks and transfer
knowledge to them, and to each other, while the Leader net guides Teammate nets
to ensure accuracy. The approach trains the framework atomically at once
instead of individually training various sizes of models; in a sense, the
various-sized networks are all trained at once, in a "package deal." The
proposed framework is not tied to any specific architecture but can incorporate
any existing models/architectures, therefore it can maintain stable results and
is insensitive to the size of a dataset's feature map. Compared with other
related approaches, it provides comparable accuracy to its full network while
various sizes of models are available.
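
To make the described training step concrete, below is a minimal sketch of one cooperative knowledge-distillation update, assuming a PyTorch setting. The block-stacked architecture, the random sampling of sub-network depths, the uniform loss weighting, and all class and function names (`DepthAdaptiveNet`, `kd_loss`, `train_step`) are illustrative assumptions; the abstract does not specify these details.

```python
# A minimal sketch of the cooperative training idea from the abstract.
# All architectural and loss-weighting choices here are assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthAdaptiveNet(nn.Module):
    """Stack of blocks whose forward pass can stop after `depth` blocks."""
    def __init__(self, num_blocks=8, width=64, num_classes=10):
        super().__init__()
        self.stem = nn.Linear(32, width)  # assumed 32-dim inputs
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.ReLU())
            for _ in range(num_blocks)
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, x, depth=None):
        h = self.stem(x)
        for block in self.blocks[: depth or len(self.blocks)]:
            h = block(h)
        return self.head(h)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard soft-label distillation loss (Hinton et al.)."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1).detach(),
        reduction="batchmean",
    ) * T * T

leader = DepthAdaptiveNet()
teammates = [DepthAdaptiveNet(), DepthAdaptiveNet()]
params = list(leader.parameters()) + [p for t in teammates for p in t.parameters()]
opt = torch.optim.SGD(params, lr=0.1)

def train_step(x, y):
    """One joint update: all networks and sub-networks train at once."""
    opt.zero_grad()
    leader_out = leader(x)
    loss = F.cross_entropy(leader_out, y)  # Leader learns from labels.
    full = [t(x) for t in teammates]
    for i, t in enumerate(teammates):
        # Teammates learn from labels, from the Leader, and from each other.
        loss += F.cross_entropy(full[i], y)
        loss += kd_loss(full[i], leader_out)
        loss += kd_loss(full[i], full[1 - i])
        # Each Teammate distills into a randomly sampled sub-depth so that
        # shallow configurations stay accurate at inference time.
        sub_depth = random.randint(1, len(t.blocks) - 1)
        loss += kd_loss(t(x, depth=sub_depth), full[i])
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical usage on random data:
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
print(train_step(x, y))
```

At inference time, calling a trained net with a smaller `depth` argument (e.g. `teammates[0](x, depth=3)`) trades accuracy for lower compute cost, which is the runtime depth selection the abstract describes.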