Robust Secret Data Hiding for Transformer-based Neural Machine Translation

IJCNN (2023)

Abstract
Hiding secret information in text is an important and challenging research area. In recent years, generation-based text information hiding techniques have advanced rapidly. Current generative methods mainly establish a correspondence between tokens and secret bits based on probability distributions given by language models. However, such methods offer weak semantic control, and their robustness has rarely been examined. In this paper, we investigate an end-to-end generation-based text information hiding scheme. The proposed method uses an adversarially trained sequence-to-sequence model as the machine translation model. It converts the secret information into an embedding vector that is added to every position of the hidden-state representation of the source-language text, allowing the model to automatically learn to produce translations carrying the embedded secret message without relying on fixed rules. The semantics of the stego text produced by translation are thus controlled by the meaning of the source-language text. Our experiments show that the proposed method can embed the secret message into the translation results with little loss of translation quality, and that it is robust to active attacks such as word deletion and synonym substitution.
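The core mechanism, projecting the secret bits into a dense embedding and adding it to every position of the encoder's hidden states before decoding, can be sketched as follows. This is a minimal illustration under assumed shapes, not the authors' code; the `SecretEmbedder` module, `num_bits`, and the linear projection are all hypothetical choices consistent with the abstract's description.

```python
import torch
import torch.nn as nn

class SecretEmbedder(nn.Module):
    """Hypothetical sketch: map a bit string to a dense vector and add it
    to every position of the encoder hidden states, per the abstract."""

    def __init__(self, num_bits: int, hidden_dim: int):
        super().__init__()
        # Linear projection from the raw bit vector to the model dimension.
        self.proj = nn.Linear(num_bits, hidden_dim)

    def forward(self, encoder_states: torch.Tensor,
                secret_bits: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, src_len, hidden_dim)
        # secret_bits:    (batch, num_bits), entries in {0, 1}
        secret_vec = self.proj(secret_bits.float())       # (batch, hidden_dim)
        # Broadcast the secret vector over every source position.
        return encoder_states + secret_vec.unsqueeze(1)   # (batch, src_len, hidden_dim)

# Usage sketch: a 16-bit secret, a 512-dim Transformer encoder (assumed sizes).
embedder = SecretEmbedder(num_bits=16, hidden_dim=512)
states = torch.randn(2, 10, 512)        # stand-in for encoder output
bits = torch.randint(0, 2, (2, 16))     # stand-in for the secret message
stego_states = embedder(states, bits)   # would then be fed to the decoder
print(stego_states.shape)               # torch.Size([2, 10, 512])
```

In the paper's end-to-end setting, the decoder and an extraction network would be trained jointly (with adversarial training) so the translation both reads naturally and lets the receiver recover the bits.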
Keywords
robust text information hiding, end-to-end, semantically controllable