Guardians of the Quantum GAN
arXiv (2024)
Abstract
Quantum Generative Adversarial Networks (qGANs) are at the forefront of
image-generating quantum machine learning models. To accommodate the growing
demand for Noisy Intermediate-Scale Quantum (NISQ) devices to train and infer
quantum machine learning models, the number of third-party vendors offering
quantum hardware as a service is expected to rise. This expansion introduces
the risk of untrusted vendors potentially stealing proprietary information from
the quantum machine learning models. To address this concern, we propose a novel
watermarking technique that exploits the noise signature embedded during the
training phase of qGANs as a non-invasive watermark. The watermark is
identifiable in the images generated by the qGAN, allowing us to trace the
specific quantum hardware used during training, hence providing strong proof of
ownership. To further enhance robustness, we propose training qGANs on a
sequence of multiple quantum hardware platforms, embedding a complex watermark
that comprises the noise signatures of all the training hardware and is
difficult for adversaries to replicate. We also develop a machine learning
classifier to extract this watermark robustly, thereby identifying the training
hardware (or the suite of hardware) from the images generated by the qGAN and
validating the authenticity of the model. We note that the watermark signature
is robust to inferencing on hardware different from that used for training. We
obtain watermark extraction accuracy of 100% when training the qGAN on
individual quantum hardware, and the watermark remains reliably extractable
when the qGAN is trained across multiple hardware setups (with inferencing on
different hardware). Since parameter evolution
during training is strongly modulated by quantum noise, the proposed watermark
can be extended to other quantum machine learning models as well.
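The core extraction idea — a classifier that attributes generated images to the hardware whose noise signature they carry — can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual method: the backend names, the additive-bias noise model, and the nearest-centroid classifier (the paper trains a machine learning classifier on real qGAN outputs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each quantum backend imprints a distinct, fixed noise
# bias on the pixel statistics of images generated by a qGAN trained on it.
HARDWARE = ["backend_A", "backend_B", "backend_C"]  # illustrative names
N_PIXELS = 64  # e.g. flattened 8x8 generated images

def generate_images(hw_idx, n):
    """Simulate qGAN outputs: random image content plus a hardware-specific
    noise signature (a fixed per-backend bias pattern)."""
    bias_rng = np.random.default_rng(hw_idx)       # fixed seed per backend
    signature = bias_rng.normal(0, 0.5, N_PIXELS)  # the noise fingerprint
    content = rng.normal(0, 1.0, (n, N_PIXELS))    # varying image content
    return content + signature

# Watermark extraction as nearest-centroid classification: estimate each
# backend's mean image from labeled outputs, then assign fresh images to
# the closest centroid.
train = {hw: generate_images(i, 200) for i, hw in enumerate(HARDWARE)}
centroids = np.stack([train[hw].mean(axis=0) for hw in HARDWARE])

def identify_hardware(images):
    dists = np.linalg.norm(images[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)  # index into HARDWARE

# Evaluate attribution on fresh images from backend_B (index 1).
test_imgs = generate_images(1, 100)
pred = identify_hardware(test_imgs)
accuracy = (pred == 1).mean()
print(f"extraction accuracy on backend_B: {accuracy:.2f}")
```

In this simplified model the per-backend bias averages out of no single image but dominates the class centroids, which is why even a nearest-centroid rule recovers the training backend; the paper's setting is harder, since real device noise shapes the qGAN's parameters rather than adding a fixed pixel bias.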