Variational Smoothing in Recurrent Neural Network Language Models

Gábor Melis

ICLR, 2019.

Abstract:

We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus, such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time and propose natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate that the two proposed variational smoothing methods improve over data noising.
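For concreteness, below is a minimal sketch of the unigram variant of data noising (Xie et al., 2017) that the abstract reinterprets: with some probability, each input token is replaced by a draw from the corpus unigram distribution. The function name, the noising probability `gamma`, and the toy vocabulary are illustrative assumptions, not from the paper; the paper's contribution is to analyze this scheme as a Bayesian RNN whose variational distribution over embeddings is a mixture of Gaussians weighted by these unigram statistics.

```python
import numpy as np

def noise_tokens(token_ids, unigram_probs, gamma=0.1, rng=None):
    """Unigram data noising: with probability gamma, replace each token id
    with a sample from the corpus unigram distribution; otherwise keep it.

    Under the paper's analysis, this corresponds to a variational
    distribution over input embeddings that is a mixture of Gaussians
    whose weights derive from the unigram statistics.
    """
    if rng is None:
        rng = np.random.default_rng()
    token_ids = np.asarray(token_ids)
    vocab_size = len(unigram_probs)
    # Decide independently for each position whether to noise it.
    replace_mask = rng.random(token_ids.shape) < gamma
    # Replacement tokens drawn from the unigram distribution.
    samples = rng.choice(vocab_size, size=token_ids.shape, p=unigram_probs)
    return np.where(replace_mask, samples, token_ids)

# Example: a toy vocabulary of 5 word ids with a skewed unigram distribution.
unigram = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
print(noise_tokens([0, 1, 2, 3, 4], unigram, gamma=0.3))
```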
