Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment
arXiv (2024)
Abstract
This paper proposes an optimization of an existing Deep Neural Network (DNN)
that improves its hardware utilization and facilitates on-device training for
resource-constrained edge environments. We implement efficient parameter
reduction strategies on Xception that shrink the model size without sacrificing
accuracy, thus decreasing memory utilization during training. We evaluate our
model in two experiments: Caltech-101 image classification and PCB defect
detection, and compare its performance against the original Xception and two
lightweight models, EfficientNetV2B1 and MobileNetV2. The results of the
Caltech-101 image classification show that our model achieves a higher test
accuracy (76.21%) than the original Xception, uses less memory on average than
Xception (874.6MB), and has faster training and inference times. The
lightweight models overfit, with EfficientNetV2B1 showing a 30.52% gap between
its training and test accuracies and MobileNetV2 a 58.11% gap, despite both
having better memory usage than our model and Xception. On the PCB defect
detection, our model has the best test accuracy (90.30%) and EfficientNetV2B1
the lowest (55.25%). MobileNetV2 has the lowest average memory usage (849.4MB),
followed by our model (865.8MB), then EfficientNetV2B1 (874.8MB), with Xception
the highest (893.6MB). We further
experiment with pre-trained weights and observe that memory usage decreases,
demonstrating the benefits of transfer learning. A Pareto analysis of the
models' performance shows that our optimized model architecture satisfies both
the accuracy and the low-memory-utilization objectives.
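The abstract does not detail the parameter-reduction strategies, but a common technique in Xception-style architectures is replacing standard convolutions with depthwise-separable ones. The following sketch shows only the parameter arithmetic behind that idea; the layer sizes (128 input, 256 output channels) are hypothetical and chosen purely for illustration:

```python
def conv_params(c_in, c_out, k=3):
    # Standard k x k convolution: k*k*c_in weights per output channel, plus biases.
    return k * k * c_in * c_out + c_out

def separable_conv_params(c_in, c_out, k=3):
    # Depthwise k x k conv (one filter per input channel, with biases),
    # followed by a 1x1 pointwise conv mapping c_in -> c_out channels.
    depthwise = k * k * c_in + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

std = conv_params(128, 256)            # 295,168 parameters
sep = separable_conv_params(128, 256)  # 34,304 parameters
print(f"reduction: {std / sep:.1f}x")  # roughly 8.6x fewer parameters
```

Fewer parameters shrink the stored model and reduce the activation/gradient footprint during training, which is the memory effect the abstract reports.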
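The Pareto analysis over the two objectives (maximize accuracy, minimize memory) can be sketched as a simple dominance check: a model is on the Pareto front if no other model is at least as good on both objectives and strictly better on one. The model names and numbers below are hypothetical placeholders, not the paper's measurements:

```python
def pareto_front(models):
    """models: list of (name, accuracy_pct, memory_mb).
    Returns names of models not dominated by any other model."""
    front = []
    for name, acc, mem in models:
        dominated = any(
            a >= acc and m <= mem and (a > acc or m < mem)
            for _, a, m in models
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical accuracy/memory pairs for four models.
models = [("A", 90.3, 865.8), ("B", 70.0, 849.4),
          ("C", 55.2, 874.8), ("D", 88.0, 893.6)]
print(pareto_front(models))  # ['A', 'B']
```

Here C and D are dominated by A (lower accuracy and higher memory), while A and B trade accuracy against memory and both remain on the front, mirroring how a model can satisfy both objectives without winning either outright.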