Probabilistically Robust Watermarking of Neural Networks
CoRR (2024)
Abstract
As deep learning (DL) models are widely and effectively used in Machine
Learning as a Service (MLaaS) platforms, there is a rapidly growing interest in
DL watermarking techniques that can be used to confirm the ownership of a
particular model. Unfortunately, these methods usually produce watermarks
susceptible to model stealing attacks. In our research, we introduce a novel
trigger set-based watermarking approach that demonstrates resilience against
functionality stealing attacks, particularly those involving extraction and
distillation. Our approach does not require additional model training and can
be applied to any model architecture. The key idea of our method is to compute
the trigger set, which is transferable between the source model and the set of
proxy models with a high probability. In our experimental study, we show that
if the probability of the set being transferable is reasonably high, it can be
effectively used for ownership verification of a stolen model. We evaluate
our method on multiple benchmarks and show that our approach outperforms
current state-of-the-art watermarking techniques in all considered experimental
setups.
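
The abstract describes selecting a trigger set whose labels transfer from the source model to a set of proxy models with high probability. The sketch below illustrates that idea in PyTorch, assuming models map a batch of inputs to class logits; the candidate pool, the thresholds, and the helper names `select_trigger_set` and `verify_ownership` are illustrative assumptions, not the paper's actual algorithm or its probabilistic guarantee.

```python
import torch

def select_trigger_set(source_model, proxy_models, candidates,
                       k=100, agreement_threshold=0.9):
    # Hypothetical sketch: keep candidate inputs whose source-model label
    # is reproduced by at least `agreement_threshold` of the proxy models,
    # i.e. inputs whose label transfers with high empirical probability.
    # No model is trained here, matching the training-free claim above.
    source_model.eval()
    for m in proxy_models:
        m.eval()

    with torch.no_grad():
        src_labels = source_model(candidates).argmax(dim=1)        # (N,)
        proxy_preds = torch.stack(
            [m(candidates).argmax(dim=1) for m in proxy_models]
        )                                                          # (P, N)
        # Fraction of proxies that agree with the source label, per input.
        agreement = (proxy_preds == src_labels).float().mean(dim=0)

    keep = agreement >= agreement_threshold
    return candidates[keep][:k], src_labels[keep][:k]

def verify_ownership(suspect_model, triggers, labels, match_threshold=0.8):
    # Flag a suspect model if it matches the stored trigger labels at a
    # rate well above chance (the threshold is an assumed parameter).
    suspect_model.eval()
    with torch.no_grad():
        preds = suspect_model(triggers).argmax(dim=1)
    return (preds == labels).float().mean().item() >= match_threshold
```

Under these assumptions, a trigger survives only if most independently trained proxies reproduce the source model's label on it; a model obtained by extraction or distillation then behaves like one more proxy, so it should match the stored labels at a rate far above chance, while an unrelated model should not.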