On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models
arXiv: Computation and Language, 2019.
This paper highlights the importance of performing meaning-preserving adversarial perturbations for natural language processing models.
Adversarial examples --- perturbations to the input of a model that elicit large changes in the output --- have been shown to be an effective way of assessing the robustness of sequence-to-sequence (seq2seq) models. However, these perturbations only indicate weaknesses in the model if they do not change the input so significantly that they legitimately change the expected output.