Imperceptible Adversarial Examples by Spatial Chroma-Shift



Aydın A., Sen D., Karli B. T., Hanoglu O., Temizel A.

1st International Workshop on Adversarial Learning for Multimedia, AdvM 2021, co-located with ACM MM 2021, Virtual, Online, China, 20 October 2021, pp.8-14

  • Publication Type: Conference Paper / Full Text
  • Doi Number: 10.1145/3475724.3483604
  • City: Virtual, Online
  • Country: China
  • Page Numbers: pp.8-14
  • Keywords: adversarial examples, computer vision, neural networks
  • Middle East Technical University Affiliated: Yes

Abstract

© 2021 ACM. Deep Neural Networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to the widely studied additive-noise-based perturbations, adversarial examples can also be created by applying a per-pixel spatial drift to input images. While spatial-transformation-based adversarial examples look more natural to human observers due to the absence of additive noise, they still exhibit visible distortions caused by the spatial transformations. Since human vision is more sensitive to distortions in the luminance channel than to those in the chrominance channels, which is one of the main ideas behind lossy visual multimedia compression standards, we propose a spatial-transformation-based perturbation method that creates adversarial examples by modifying only the color components of an input image. While achieving competitive fooling rates on the CIFAR-10 and NIPS 2017 Adversarial Learning Challenge datasets, examples created with the proposed method score better on various perceptual quality metrics. Human visual perception studies confirm that the examples look more natural and are often indistinguishable from their original counterparts.
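The core idea in the abstract (spatially shifting only the chroma channels while leaving luminance intact) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BT.601 YCbCr conversion and the integer per-pixel flow field are assumptions chosen for simplicity, whereas the paper optimizes a differentiable flow against a target network.

```python
# Minimal sketch (NOT the paper's implementation): warp only the chroma
# channels of an RGB image in YCbCr space, keeping the luminance channel
# untouched. The flow field here is an arbitrary integer displacement;
# the actual attack would optimize it to fool a classifier.
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    out = rgb @ m.T
    out[..., 1:] += 128.0  # center chroma around 128
    return out

def ycbcr_to_rgb(ycbcr):
    # Inverse BT.601 conversion
    m = np.array([[1.0,  0.0,       1.402   ],
                  [1.0, -0.344136, -0.714136],
                  [1.0,  1.772,     0.0     ]])
    tmp = ycbcr.copy()
    tmp[..., 1:] -= 128.0
    return tmp @ m.T

def chroma_shift(rgb, flow):
    """Shift Cb/Cr by an integer per-pixel flow (dy, dx); Y is unchanged.

    rgb:  float array of shape (H, W, 3) in [0, 255]
    flow: int array of shape (H, W, 2) giving per-pixel displacements
    (A real attack would also project the result back into [0, 255].)
    """
    h, w, _ = rgb.shape
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1)
    warped = ycbcr.copy()
    warped[..., 1] = ycbcr[src_y, src_x, 1]  # shifted Cb
    warped[..., 2] = ycbcr[src_y, src_x, 2]  # shifted Cr
    return ycbcr_to_rgb(warped)
```

Because only the Cb/Cr channels are resampled, converting the perturbed image back to YCbCr recovers the original luminance plane, which is what makes such perturbations hard to perceive.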