Towards Faster Training Algorithms Exploiting Bandit Sampling From Convex to Strongly Convex Conditions

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2023)

Abstract
The training process for deep learning and pattern recognition typically relies on convex and strongly convex optimization algorithms such as AdaBelief and SAdam, which process many "uninformative" samples that could safely be ignored, thereby incurring extra computation. To address this problem, we design a bandit sampling method that makes these algorithms focus on "informative" samples during training. Our contribution is twofold: first, we propose a convex optimization algorithm with bandit sampling, termed AdaBeliefBS, and prove that it converges faster than its original version; second, we prove that bandit sampling also works for strongly convex algorithms, and propose a generalized SAdam, called SAdamBS, that converges faster than SAdam. Finally, we conduct a series of experiments on various benchmark datasets to verify the fast convergence of the proposed algorithms.
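The abstract describes bandit sampling only at a high level; the sketch below illustrates the generic idea, assuming an EXP3-style sampler over the pool of training examples. The class name Exp3Sampler, the parameters eta and mix, and the use of per-sample loss as the "informativeness" feedback are illustrative assumptions, not the paper's actual AdaBeliefBS or SAdamBS update rules.

```python
import numpy as np

class Exp3Sampler:
    """EXP3-style bandit sampler over a pool of training examples.

    Illustrative sketch only: this is NOT the paper's AdaBeliefBS/SAdamBS.
    It shows how a bandit can bias mini-batch selection toward
    "informative" samples while importance weights keep the stochastic
    gradient estimate unbiased.
    """

    def __init__(self, n_samples, eta=0.01, mix=0.1, seed=0):
        self.weights = np.ones(n_samples)   # unnormalized per-sample weights
        self.eta = eta                      # sampler step size (assumed value)
        self.mix = mix                      # uniform mixing for exploration
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        p = self.weights / self.weights.sum()
        # Mix with the uniform distribution so no sample is starved.
        return (1.0 - self.mix) * p + self.mix / len(self.weights)

    def draw(self, batch_size):
        p = self.probabilities()
        idx = self.rng.choice(len(p), size=batch_size, replace=False, p=p)
        iw = 1.0 / (len(p) * p[idx])        # importance weights for unbiasedness
        return idx, iw

    def update(self, idx, losses):
        """Reward drawn samples by their observed loss ("informativeness")."""
        p = self.probabilities()
        reward = losses / p[idx]            # importance-weighted reward estimate
        self.weights[idx] *= np.exp(self.eta * reward / len(self.weights))
        self.weights /= self.weights.max()  # rescale to avoid overflow
```

In a training loop, one would draw (idx, iw), compute per-sample losses on the selected batch, scale each gradient contribution by its importance weight before the base optimizer step, and feed the losses back via update; any base optimizer such as AdaBelief or SAdam plugs in unchanged.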
Keywords
Bandit sampling, convex optimization, image processing, training algorithm