Just noticeable difference for machine perception and generation of regularized adversarial images with minimal perturbation


Akan A. K. , AKBAŞ E., YARMAN VURAL F. T.

SIGNAL IMAGE AND VIDEO PROCESSING, vol.16, no.6, pp.1595-1606, 2022 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 16 Issue: 6
  • Publication Date: 2022
  • DOI: 10.1007/s11760-021-02114-x
  • Journal Name: SIGNAL IMAGE AND VIDEO PROCESSING
  • Journal Indexes: Science Citation Index Expanded, Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp.1595-1606
  • Keywords: Adversarial image generation, Adversarial attacks, Just noticeable difference

Abstract

In this study, we introduce a measure for machine perception, inspired by the concept of Just Noticeable Difference (JND) in human perception. Based on this measure, we suggest an adversarial image generation algorithm that iteratively distorts an image with additive noise until the model detects the change by outputting a false label. The noise added to the original image is defined as the gradient of the cost function of the model. A novel cost function is defined to explicitly minimize the amount of perturbation applied to the input image while enforcing the perceptual similarity between the adversarial and input images. For this purpose, the cost function is regularized by the well-known total variation and bounded range terms so that the adversarial image retains a natural appearance. We evaluate the adversarial images generated by our algorithm both qualitatively and quantitatively on the CIFAR10, ImageNet, and MS COCO datasets. Our experiments on image classification and object detection tasks show that adversarial images generated by our JND method are both more successful in deceiving the recognition/detection models and less perturbed than the images generated by the state-of-the-art methods, namely FGV, FGSM, and DeepFool.
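The iterative scheme the abstract describes can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it uses a hypothetical two-class linear "model" and illustrative step sizes and regularization weights, ascends the cross-entropy cost penalized by the squared perturbation size and a total-variation term, and keeps pixels in range by clipping, stopping as soon as the predicted label flips (the machine-perception JND point).

```python
import numpy as np

# Hypothetical 2-class linear "model" on a 2x2 image (an illustrative
# stand-in for the deep networks attacked in the paper).
W = np.array([[1.0, 0.0, 0.0, 0.0],   # class-0 weights
              [0.0, 1.0, 0.0, 0.0]])  # class-1 weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return int(np.argmax(W @ x.ravel()))

def tv_subgrad(d):
    """Subgradient of the anisotropic total variation of perturbation d."""
    g = np.zeros_like(d)
    dv = np.sign(np.diff(d, axis=0)); g[1:, :] += dv; g[:-1, :] -= dv
    dh = np.sign(np.diff(d, axis=1)); g[:, 1:] += dh; g[:, :-1] -= dh
    return g

def jnd_attack(x0, eta=0.1, lam_l2=0.01, lam_tv=0.001, max_iter=200):
    y = predict(x0)                    # label the attack moves away from
    onehot = np.eye(W.shape[0])[y]
    x = x0.copy()
    for _ in range(max_iter):
        if predict(x) != y:            # model "notices" the change: stop
            break
        p = softmax(W @ x.ravel())
        g_ce = (W.T @ (p - onehot)).reshape(x.shape)  # ascent dir. on CE loss
        d = x - x0
        # ascend cross-entropy minus perturbation-size and smoothness penalties
        x = x + eta * (g_ce - 2.0 * lam_l2 * d - lam_tv * tv_subgrad(d))
        x = np.clip(x, 0.0, 1.0)       # bounded range enforced by projection
    return x

x0 = np.array([[0.8, 0.2],
               [0.5, 0.5]])
adv = jnd_attack(x0)
```

Here the attack flips the toy model's prediction from class 0 to class 1 while the regularizers keep the perturbation small and smooth; in the paper this stopping point defines the machine-perception analogue of the JND.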