Learning functions across many orders of magnitude

arXiv: Learning (2016)

Cited by 24 | Views 95
Abstract
Learning non-linear functions can be hard when the magnitude of the target function is unknown beforehand, as most learning algorithms are not scale invariant. We propose an algorithm to adaptively normalize these targets. This is complementary to recent advances in input normalization. Importantly, the proposed method preserves the unnormalized outputs whenever the normalization is updated, to avoid instability caused by non-stationarity. It can be combined with any learning algorithm and any non-linear function approximator, including the important special case of deep learning. We empirically validate the method in supervised learning and reinforcement learning and apply it to learning how to play Atari 2600 games. Previous work on applying deep learning to this domain relied on clipping the rewards to make learning in different games more homogeneous, but this uses the domain-specific knowledge that in these games counting rewards is often almost as informative as summing them. Using our adaptive normalization we can remove this heuristic without diminishing overall performance, and even improve performance on some games, such as Ms. Pac-Man and Centipede, on which previous methods did not perform well.
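Below is a minimal NumPy sketch of the mechanism the abstract describes: targets are normalized by running statistics, and whenever those statistics change, the final linear layer is rescaled so that the unnormalized outputs are preserved exactly. The class name, the exponential-moving-average update for the statistics, and all constants are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

class AdaptiveTargetNormalizer:
    """Sketch of adaptive target normalization with output preservation.

    Keeps a running mean mu and scale sigma of the targets, trains a
    final linear layer against normalized targets, and rescales that
    layer whenever (mu, sigma) change so the unnormalized prediction
    sigma * (W h + b) + mu is unchanged for every input h.
    """

    def __init__(self, n_features, n_outputs, beta=1e-3):
        self.mu = np.zeros(n_outputs)    # running mean of targets
        self.nu = np.ones(n_outputs)     # running second moment of targets
        self.beta = beta                 # step size for the statistics
        self.W = np.random.randn(n_outputs, n_features) * 0.01
        self.b = np.zeros(n_outputs)

    def sigma(self):
        # scale = sqrt(second moment - mean^2), floored for stability
        return np.sqrt(np.maximum(self.nu - self.mu ** 2, 1e-8))

    def update_stats(self, y):
        """Fold a new target y into (mu, sigma), then rescale W and b."""
        old_mu, old_sigma = self.mu.copy(), self.sigma()
        self.mu = (1 - self.beta) * self.mu + self.beta * y
        self.nu = (1 - self.beta) * self.nu + self.beta * y ** 2
        new_sigma = self.sigma()
        # Preserve unnormalized outputs under the new normalization:
        # new_sigma * (W' h + b') + new_mu == old_sigma * (W h + b) + old_mu
        self.W *= (old_sigma / new_sigma)[:, None]
        self.b = (old_sigma * self.b + old_mu - self.mu) / new_sigma

    def predict(self, h):
        """Unnormalized prediction from a feature vector h."""
        return self.sigma() * (self.W @ h + self.b) + self.mu

    def normalized_target(self, y):
        """Target in normalized space, for use in the loss."""
        return (y - self.mu) / self.sigma()

# Quick check: updating the statistics leaves predictions unchanged,
# even when a target arrives on a very different scale.
norm = AdaptiveTargetNormalizer(n_features=4, n_outputs=1)
h = np.random.randn(4)
before = norm.predict(h)
norm.update_stats(np.array([1000.0]))
assert np.allclose(before, norm.predict(h))
```

In this sketch the learning algorithm would fit `W` and `b` (and any network producing the features `h`) against `normalized_target(y)`, while `update_stats` keeps the normalization current without perturbing what the model already predicts.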