Online Non-Convex Learning: Following the Perturbed Leader is Optimal

Arun Sai Suggala

ALT, pp. 845-861, 2020.

Abstract:

We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves an optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for...
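
To make the setting in the abstract concrete: at each round, FTPL perturbs the cumulative past loss with fresh random noise and hands the perturbed objective to the offline optimization oracle. Below is a minimal Python sketch under that reading; the names (`ftpl`, `offline_oracle`, `loss_stream`), the coordinate-wise exponential perturbation, and the parameter `eta` are illustrative assumptions, not the paper's exact pseudocode.

```python
import numpy as np

def ftpl(loss_stream, offline_oracle, dim, eta, horizon, rng=None):
    """Illustrative Follow-the-Perturbed-Leader loop with an offline optimization oracle."""
    rng = np.random.default_rng() if rng is None else rng
    past_losses = []              # f_1, ..., f_{t-1}: losses revealed in earlier rounds
    plays, cumulative_loss = [], 0.0

    for t in range(horizon):
        # Fresh random perturbation each round; coordinate-wise exponential noise
        # (scale ~ 1/eta) is one standard choice for FTPL-style algorithms.
        sigma = rng.exponential(scale=1.0 / eta, size=dim)

        # Perturbed cumulative objective: x -> sum_{s<t} f_s(x) - <sigma, x>.
        losses_so_far = tuple(past_losses)   # snapshot so the closure is fixed for this round
        def perturbed_objective(x):
            return sum(f(x) for f in losses_so_far) - sigma @ np.asarray(x)

        # The oracle returns an (approximate) minimizer over the decision set;
        # non-convexity of the f_s is fine because the oracle does the heavy lifting.
        x_t = offline_oracle(perturbed_objective)
        plays.append(x_t)

        # The adversary reveals the round-t loss; the learner suffers it and records it.
        f_t = loss_stream(t)
        cumulative_loss += f_t(x_t)
        past_losses.append(f_t)

    return plays, cumulative_loss
```

In this sketch a caller would supply `offline_oracle` as whatever solver is available for the decision set (e.g., a brute-force search or a projected first-order method) and `loss_stream(t)` as the process that reveals the round-$t$ loss; the regret guarantees discussed in the abstract depend on the oracle and perturbation choices made in the paper itself.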
