Cross-Domain Audio Deepfake Detection: Dataset and Analysis
arXiv (2024)
Abstract
Audio deepfake detection (ADD) is essential for preventing the misuse of
synthetic voices that may infringe on personal rights and privacy. Recent
zero-shot text-to-speech (TTS) models pose higher risks as they can clone
voices with a single utterance. However, the existing ADD datasets are
outdated, leading to suboptimal generalization of detection models. In this
paper, we construct a new cross-domain ADD dataset comprising over 300 hours of
speech data that is generated by five advanced zero-shot TTS models. To
simulate real-world scenarios, we employ diverse attack methods and audio
prompts from different datasets. Experiments show that, through novel
attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve
equal error rates of 4.1% and 6.5% respectively. Additionally, we demonstrate
our models' outstanding few-shot ADD ability by fine-tuning with just one
minute of target-domain data. Nonetheless, neural codec compressors greatly
affect the detection accuracy, necessitating further research.
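The abstract reports detector performance as an equal error rate (EER): the operating point where the false-alarm rate on bona fide speech equals the miss rate on fake speech. As a reminder of what that metric measures, here is a minimal sketch (not from the paper; score convention and names are assumptions) that estimates EER by sweeping a threshold over detection scores:

```python
def eer(scores, labels):
    """Estimate the equal error rate.

    scores: detection scores, higher = more likely fake (assumed convention).
    labels: 1 = fake, 0 = bona fide.
    Predict "fake" when score >= threshold; EER is where the
    false-alarm rate (bona fide flagged fake) crosses the
    miss rate (fake passed as bona fide).
    """
    n_fake = sum(labels)
    n_real = len(labels) - n_fake
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        miss = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t) / n_fake
        fa = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t) / n_real
        if abs(miss - fa) < best_gap:
            best_gap, best_eer = abs(miss - fa), (miss + fa) / 2
    return best_eer
```

A 4.1% EER, as reported for Wav2Vec2-large, means that at the balanced threshold about 4.1% of bona fide utterances are falsely flagged and about 4.1% of fakes slip through.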