On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models

arXiv: Computation and Language, 2019.

Summary: This paper highlights the importance of performing meaning-preserving adversarial perturbations for natural language processing models.

Abstract:

Adversarial examples --- perturbations to the input of a model that elicit large changes in the output --- have been shown to be an effective way of assessing the robustness of sequence-to-sequence (seq2seq) models. However, these perturbations only indicate weaknesses in the model if they do not change the input so significantly that it legitimately results in changes in the expected output.
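
To make the evaluation idea concrete, below is a minimal sketch of how one might score an adversarial perturbation while accounting for meaning preservation on the source side, in the spirit of the framework the abstract describes. The character n-gram F-score used as a similarity proxy and the specific success rule (the output must degrade more than the source was changed) are illustrative assumptions, not the paper's actual metrics or criterion.

"""
Illustrative sketch (not the authors' code): score an adversarial perturbation
while accounting for how much meaning the perturbed source retains. The
character n-gram F-score is a simple stand-in similarity metric, and the
success rule below is an assumption based on the abstract's framing.
"""
from collections import Counter


def char_ngram_fscore(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average character n-gram F-beta score between two strings (chrF-like proxy)."""
    scores = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not hyp_ngrams or not ref_ngrams:
            continue
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped n-gram matches
        prec = overlap / sum(hyp_ngrams.values())
        rec = overlap / sum(ref_ngrams.values())
        if prec + rec == 0:
            scores.append(0.0)
        else:
            scores.append((1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0


def attack_succeeds(src: str, adv_src: str, ref: str, out_src: str, out_adv: str) -> bool:
    """Count an attack as successful only if the output degrades more than the
    source was changed, so simply destroying the input does not count."""
    src_similarity = char_ngram_fscore(adv_src, src)    # meaning preservation proxy
    quality_before = char_ngram_fscore(out_src, ref)    # output quality on clean input
    quality_after = char_ngram_fscore(out_adv, ref)     # output quality on perturbed input
    src_degradation = 1.0 - src_similarity
    out_degradation = (quality_before - quality_after) / max(quality_before, 1e-9)
    return out_degradation > src_degradation

Under a criterion of this kind, a perturbation that merely garbles the source sentence no longer counts as a successful attack, which is exactly the failure mode the abstract warns about.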
