Adversarial Subword Regularization for Robust Neural Machine Translation

EMNLP (2020)

Abstract
Exposing diverse subword segmentations to neural machine translation (NMT) models often improves the robustness of machine translation. As NMT models experience various subword candidates, they become more robust to segmentation errors. However, the distribution of subword segmentations relies heavily on subword language models, from which erroneous segmentations of unseen words are less likely to be sampled. In this paper, we present adversarial subword regularization (ADVSR) to study whether gradient signals during training can serve as a substitute criterion for choosing a segmentation among candidates. We experimentally show that our model-based adversarial samples effectively encourage NMT models to be less sensitive to segmentation errors and improve the robustness of NMT models on low-resource datasets.
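
The core idea in the abstract, picking, among several candidate segmentations of the same sentence, the one the current model handles worst and training on it, can be illustrated with a toy sketch. The code below is not the authors' implementation: the tiny model, vocabulary, candidate lists, and the loss-based scoring (rather than the paper's gradient-signal-based scoring) are all simplified stand-ins for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

VOCAB = 100  # toy subword vocabulary size


class ToyNMT(nn.Module):
    """Stand-in for an NMT model: embeds source subwords, mean-pools,
    and predicts a single target token (enough to illustrate the loop)."""

    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.out = nn.Linear(32, VOCAB)

    def forward(self, src_ids):            # src_ids: (seq_len,)
        h = self.emb(src_ids).mean(dim=0)  # (32,)
        return self.out(h)                 # (VOCAB,) logits


model = ToyNMT()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy stand-in for an n-best list of segmentations of the same source
# sentence (in practice produced by a subword language model / tokenizer).
candidates = [
    torch.tensor([5, 17, 42]),     # segmentation A
    torch.tensor([5, 17, 8, 13]),  # segmentation B (finer split)
    torch.tensor([61, 42]),        # segmentation C (coarser split)
]
target = torch.tensor(7)           # toy target token

# 1) Score each candidate by its current training loss (no grad needed here;
#    ADVSR instead uses gradient signals as a cheaper proxy for this score).
with torch.no_grad():
    losses = [
        F.cross_entropy(model(c).unsqueeze(0), target.unsqueeze(0))
        for c in candidates
    ]

# 2) Adversarial choice: keep the segmentation the model finds hardest.
worst = max(range(len(candidates)), key=lambda i: losses[i].item())

# 3) Standard training step on the selected segmentation.
opt.zero_grad()
loss = F.cross_entropy(model(candidates[worst]).unsqueeze(0), target.unsqueeze(0))
loss.backward()
opt.step()

print(f"trained on candidate {worst} with loss {loss.item():.3f}")
```

In contrast, plain subword regularization would sample a segmentation from the tokenizer's own distribution, which is exactly the distribution the abstract argues rarely produces the erroneous segmentations seen at test time.
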
Keywords
robust neural machine translation