Improving Robustness of Deep-Learning-Based Image Reconstruction

Ankit Raj

ICML 2020.

Keywords:
adversarial example, machine learning, Discrete Cosine Transform, deep network, compressed sensing
TL;DR:
We show that for such inverse problem solvers, one should analyze and study the effect of adversaries in the measurement-space, instead of the signal-space as in previous work.

Abstract:

Deep-learning-based methods for different applications have been shown to be vulnerable to adversarial examples. These examples make the deployment of such models in safety-critical tasks questionable. The use of deep neural networks as inverse problem solvers has generated much excitement for medical imaging, including CT and MRI, but recently a simi...

Introduction
  • Adversarial examples for deep-learning-based methods have been demonstrated for different problems (Szegedy et al., 2013; Kurakin et al., 2016; Cisse et al., 2017a; Eykholt et al., 2017; Xiao et al., 2018).
  • Image reconstruction, the recovery of an image from indirect measurements, is used in many applications, including safety-critical ones such as medical imaging, e.g., Magnetic Resonance Imaging (MRI) and Computed Tomography (CT).
  • Such applications demand that the reconstruction be stable and reliable.
Highlights
  • One of the most powerful methods for training an adversarially robust network is adversarial training (Madry et al, 2017; Tramer et al, 2017; Sinha et al, 2017; Arnab et al, 2018). It involves training the network using adversarial examples, enhancing the robustness of the network to attacks during inference. This strategy has been quite effective in classification settings, where the goal is to make the network output the correct label corresponding to the adversarial example
  • We propose a min-max formulation to build robust deep-learning-based image reconstruction models
  • We found the linear network to converge to the solution predicted by our analysis
  • Extensive experiments with non-linear deep networks for Compressive Sensing (CS), using random Gaussian and Discrete Cosine Transform (DCT) measurement matrices on the MNIST and CelebA datasets, show that the proposed scheme outperforms other methods for different perturbation radii ε ≥ 0; the behavior depends on the conditioning of the matrices, as indicated by the theory for the linear reconstruction scheme
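The measurement-space view highlighted above can be made concrete with a small numpy sketch. The dimensions, the Gaussian sensing matrix, and the least-squares reconstructor below are illustrative assumptions, not the paper's exact setup: the worst measurement perturbation of a fixed ℓ2 budget is amplified by a linear reconstructor in proportion to 1/σ_min(A), which is why conditioning of the measurement matrix matters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                # signal / measurement dims (illustrative)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian measurement matrix
x = rng.normal(size=n)                       # ground-truth signal
y = A @ x                                    # clean compressive measurements

# Worst-case measurement-space perturbation with ||delta||_2 = eps: align it
# with the left singular vector of A belonging to its smallest singular value.
eps = 0.1
U, s, Vt = np.linalg.svd(A, full_matrices=False)
delta = eps * U[:, -1]

B = np.linalg.pinv(A)                        # least-squares linear reconstructor
amplification = np.linalg.norm(B @ delta) / eps
# amplification equals 1 / s[-1], i.e. 1 / sigma_min(A): the worse the
# conditioning, the more a tiny measurement perturbation corrupts B @ (y + delta).
```

For a well-conditioned Gaussian matrix this amplification is modest; for ill-conditioned matrices it can be arbitrarily large, matching the dependence on conditioning noted above.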
Methods
  • Where L(·) represents the applicable loss function, e.g., cross-entropy for classification, and δ is the perturbation added to each sample, constrained to an ℓp-norm ball of radius ε.
  • This min-max formulation encompasses possible variants of adversarial training.
  • It consists of solving two optimization problems: an inner maximization and an outer minimization problem.
  • For an optimal θ∗ solving Equation 2, f(·; θ∗) will be robust to all xadv lying in the ε-radius ℓp-norm ball around the true x
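A minimal sketch of the inner maximization follows, assuming (for illustration only) a fixed linear least-squares reconstructor Θ in place of a trained network and an ℓ2 perturbation ball in measurement space. Projected gradient ascent on δ uses the analytic gradient 2Θᵀ(Θ(y + δ) − x); the outer minimization over θ would wrap around this loop.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, eps = 32, 16, 0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)     # measurement matrix (illustrative)
Theta = np.linalg.pinv(A)                    # stand-in "network": fixed linear reconstructor
x = rng.normal(size=n)
y = A @ x

def recon_loss(delta):
    r = Theta @ (y + delta) - x              # reconstruction residual
    return float(r @ r)

# Inner maximization: projected gradient ascent over the l2 ball ||delta|| <= eps.
delta = rng.normal(size=m)
delta *= (eps / 10) / np.linalg.norm(delta)  # small random start (gradient vanishes at 0 here)
for _ in range(50):
    grad = 2 * Theta.T @ (Theta @ (y + delta) - x)   # analytic gradient in delta
    delta += 0.1 * grad                              # ascent step
    nrm = np.linalg.norm(delta)
    if nrm > eps:
        delta *= eps / nrm                           # project back onto the ball

clean_loss, adv_loss = recon_loss(np.zeros(m)), recon_loss(delta)
```

The found δ stays inside the ε-ball while strictly increasing the reconstruction loss over the unperturbed measurements, which is exactly the quantity the outer minimization must then drive down.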
Conclusion
  • The authors propose a min-max formulation to build robust deep-learning-based image reconstruction models.
  • To make this more tractable, the authors reformulate this using an auxiliary network to generate adversarial examples for which the image reconstruction network tries to minimize the reconstruction loss.
  • The authors theoretically analyzed a simple linear network and found that, under the min-max formulation, it outputs a singular-value filter regularized solution, which reduces the effect of adversarial examples for ill-conditioned measurement matrices.
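The singular-value filtering effect can be illustrated numerically. In the sketch below a Tikhonov-style filter σ/(σ² + λ) stands in for the filter derived in the paper's analysis, and the square ill-conditioned matrix, its spectrum, and λ are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
U, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal factors
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -3, n)                      # ill-conditioned spectrum: sigma_min = 1e-3
A = U @ np.diag(s) @ V.T

x = rng.normal(size=n)
delta = 0.01 * U[:, -1]                        # small worst-case measurement perturbation
y = A @ x + delta

B_inv = V @ np.diag(1.0 / s) @ U.T             # plain inverse: amplifies delta by 1/sigma_min
lam = 1e-2                                     # regularization weight (hypothetical)
B_filt = V @ np.diag(s / (s**2 + lam)) @ U.T   # singular-value-filtered reconstructor

err_inv = np.linalg.norm(B_inv @ y - x)        # = 0.01 / 1e-3 = 10: perturbation dominates
err_filt = np.linalg.norm(B_filt @ y - x)      # filtering trades some bias for robustness
```

Shrinking the reciprocal of each small singular value caps the amplification of measurement perturbations, at the cost of a reconstruction bias in the poorly measured directions; this is the trade-off the min-max analysis makes explicit.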
References
  • Antun, V., Renna, F., Poon, C., Adcock, B., and Hansen, A. C. On instabilities of deep learning in image reconstruction - does AI come at a cost? arXiv preprint arXiv:1902.05300, 2019.
  • Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
  • Arnab, A., Miksik, O., and Torr, P. H. On the robustness of semantic segmentation models to adversarial attacks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 888–897, 2018.
  • Athalye, A., Carlini, N., and Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
  • Bora, A., Jalal, A., Price, E., and Dimakis, A. G. Compressed sensing using generative models. arXiv preprint arXiv:1703.03208, 2017.
  • Candes, E. J., Romberg, J. K., and Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
  • Choi, J.-H., Zhang, H., Kim, J.-H., Hsieh, C.-J., and Lee, J.-S. Evaluating robustness of deep image super-resolution against adversarial attacks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 303–311, 2019.
  • Cisse, M., Adi, Y., Neverova, N., and Keshet, J. Houdini: Fooling deep structured prediction models. arXiv preprint arXiv:1707.05373, 2017a.
  • Cisse, M., Bojanowski, P., Grave, E., Dauphin, Y., and Usunier, N. Parseval networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 854–863. JMLR.org, 2017b.
  • Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K. BM3D image denoising with shape-adaptive principal component analysis. In SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations, 2009.
  • Dong, W., Zhang, L., Shi, G., and Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Transactions on Image Processing, 20(7):1838–1857, 2011.
  • Donoho, D. L. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
  • Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Science & Business Media, 2010.
  • Elad, M. and Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
  • Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., and Song, D. Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945, 2017.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
  • Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., Pock, T., and Knoll, F. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055–3071, 2018.
  • Jang, Y., Zhao, T., Hong, S., and Lee, H. Adversarial defense via learning to generate diverse attacks. In The IEEE International Conference on Computer Vision (ICCV), October 2019a.
  • Jang, Y., Zhao, T., Hong, S., and Lee, H. Adversarial defense via learning to generate diverse attacks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2740–2749, 2019b.
  • Jiang, H., Chen, Z., Shi, Y., Dai, B., and Zhao, T. Learning to defense by learning to attack. arXiv preprint arXiv:1811.01213, 2018.
  • Jin, K. H., McCann, M. T., Froustey, E., and Unser, M. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9):4509–4522, 2017.
  • Krogh, A. and Hertz, J. A. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems, pp. 950–957, 1992.
  • Kurakin, A., Goodfellow, I., and Bengio, S. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
  • LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Li, C., Yin, W., and Zhang, Y. User's guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms. CAAM Report, 20(46-47):4, 2009.
  • Liu, D., Wen, B., Liu, X., Wang, Z., and Huang, T. S. When image denoising meets high-level vision tasks: A deep learning approach. arXiv preprint arXiv:1706.04284, 2017.
  • Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.
  • Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
  • Raj, A., Li, Y., and Bresler, Y. GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5602–5611, 2019.
  • Ravishankar, S. and Bresler, Y. Learning sparsifying transforms. IEEE Transactions on Signal Processing, 61(5):1072–1086, 2012.
  • Rick Chang, J., Li, C.-L., Poczos, B., Vijaya Kumar, B., and Sankaranarayanan, A. C. One network to solve them all - solving linear inverse problems using deep projection models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5888–5897, 2017.
  • Sajjadi, M. S., Scholkopf, B., and Hirsch, M. EnhanceNet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4491–4500, 2017.
  • Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pp. 5014–5026, 2018.
  • Sinha, A., Namkoong, H., and Duchi, J. Certifying some distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571, 2017.
  • Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • Tramer, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
  • Wang, H. and Yu, C.-N. A direct approach to robust deep learning using adversarial networks. arXiv preprint arXiv:1905.09591, 2019.
  • Wen, B., Ravishankar, S., and Bresler, Y. Structured overcomplete sparsifying transform learning with convergence guarantees and applications. International Journal of Computer Vision, 114(2-3):137–167, 2015.
  • Wen, B., Ravishankar, S., Pfister, L., and Bresler, Y. Transform learning for magnetic resonance image reconstruction: From model-based learning to building neural networks. arXiv preprint arXiv:1903.11431, 2019.
  • Wong, E., Schmidt, F., Metzen, J. H., and Kolter, J. Z. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems, pp. 8400–8409, 2018.
  • Xiao, C., Li, B., Zhu, J.-Y., He, W., Liu, M., and Song, D. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018.
  • Xu, W., Evans, D., and Qi, Y. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
  • Yang, G., Yu, S., Dong, H., Slabaugh, G., Dragotti, P. L., Ye, X., Liu, F., Arridge, S., Keegan, J., Guo, Y., et al. DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Transactions on Medical Imaging, 37(6):1310–1321, 2017.
  • Yang, J., Wright, J., Huang, T. S., and Ma, Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing, 19(11):2861–2873, 2010.
  • Yao, H., Dai, F., Zhang, S., Zhang, Y., Tian, Q., and Xu, C. DR2-Net: Deep residual reconstruction network for image compressive sensing. Neurocomputing, 359:483–493, 2019.
  • Zhu, B., Liu, J. Z., Cauley, S. F., Rosen, B. R., and Rosen, M. S. Image reconstruction by domain-transform manifold learning. Nature, 555(7697):487, 2018.