Audio Deepfake Detection System with Neural Stitching for ADD 2022
arXiv (Cornell University), 2022
Beike
Abstract
This paper describes our best system and methodology for ADD 2022: The First Audio Deep Synthesis Detection Challenge [1]. The same system was used for both rounds of evaluation in Track 3.2, with a similar training methodology. The first round of Track 3.2 data is generated by text-to-speech (TTS) or voice conversion (VC) algorithms, while the second round consists of fake audio generated by other participants in Track 3.1, aiming to spoof our systems. Our system uses a standard 34-layer ResNet [2] with multi-head attention pooling [3] to learn discriminative embeddings for fake audio and spoof detection. We further utilize neural stitching to boost the model's generalization capability so that it performs equally well across different tasks; more details are explained in the following sections. The experiments show that our proposed method outperforms all other systems, with a 10.1% equal error rate (EER) in Track 3.2.
Key words
ADD 2022, deepfake audio, anti-spoofing, neural stitching
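The abstract describes pooling frame-level ResNet features into a single utterance embedding via multi-head attention pooling [3]. The sketch below illustrates the general idea in plain Python: each head scores every time frame, softmax-normalizes the scores into attention weights, and the heads' attention-weighted means are concatenated. This is a simplified stand-in, not the authors' implementation: the paper's system uses learned scoring networks inside a 34-layer ResNet, whereas here each head is reduced to a fixed linear scoring vector (`w_heads`) purely for illustration.

```python
import math


def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def multi_head_attention_pool(frames, w_heads):
    """Pool T frame-level feature vectors into one utterance embedding.

    frames:  list of T vectors, each of dimension D
    w_heads: list of H scoring vectors (each length D); head h scores frame t
             as dot(w_heads[h], frames[t]) -- a simplified stand-in for the
             learned attention scoring network in the paper.
    Returns the concatenation of the H attention-weighted means (length H*D).
    """
    pooled = []
    for w in w_heads:
        # one attention score per frame for this head
        scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in frames]
        alphas = softmax(scores)
        # attention-weighted mean over the time axis
        mean = [sum(a * f[d] for a, f in zip(alphas, frames))
                for d in range(len(frames[0]))]
        pooled.extend(mean)
    return pooled
```

With a zero scoring vector a head degenerates to plain temporal average pooling; a non-zero vector biases the pooled embedding toward frames with higher attention scores, which is what lets the model emphasize the segments most indicative of spoofing.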