Smaller Text Classifiers with Discriminative Cluster Embeddings
NAACL-HLT (2018)
Abstract
Word embedding parameters often dominate overall model sizes in neural methods for natural language processing. We reduce deployed model sizes of text classifiers by learning a hard word clustering in an end-to-end manner. We use the Gumbel-Softmax distribution to maximize over the latent clustering while minimizing the task loss. We propose variations that selectively assign additional parameters to words, which further improves accuracy while still remaining parameter-efficient.
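Below is a minimal sketch of the core idea, not the authors' released code: each word holds learned logits over K clusters, a straight-through Gumbel-Softmax makes the hard cluster choice differentiable during training, and at deployment only the per-word cluster ids and the small K x dim table remain. It assumes PyTorch; all names (`ClusterEmbedding`, `vocab_size`, `n_clusters`) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterEmbedding(nn.Module):
    """Each word selects one of K shared cluster embeddings (illustrative sketch)."""
    def __init__(self, vocab_size: int, n_clusters: int, dim: int):
        super().__init__()
        # Per-word logits over clusters: the latent hard clustering to be learned.
        self.cluster_logits = nn.Parameter(torch.zeros(vocab_size, n_clusters))
        # Shared cluster embedding table: the only embeddings kept at deploy time.
        self.cluster_emb = nn.Parameter(torch.randn(n_clusters, dim) * 0.1)

    def forward(self, word_ids: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.cluster_logits[word_ids]                # (..., K)
        if self.training:
            # Straight-through Gumbel-Softmax: hard one-hot in the forward
            # pass, soft gradients in the backward pass.
            onehot = F.gumbel_softmax(logits, tau=tau, hard=True)
        else:
            # At test/deploy time, take the argmax cluster deterministically.
            onehot = F.one_hot(logits.argmax(-1), logits.size(-1)).float()
        return onehot @ self.cluster_emb                      # (..., dim)

# After training, a full V x dim embedding matrix is replaced by one small
# integer (cluster id) per word plus the K x dim cluster table.
emb = ClusterEmbedding(vocab_size=50_000, n_clusters=256, dim=300)
vectors = emb(torch.tensor([[1, 42, 7]]))  # shape (1, 3, 300)
```

The parameter saving comes from the deployed model storing roughly V log2(K) bits of cluster assignments instead of V x dim floats; the per-word `cluster_logits` are training-time parameters only.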