Adversarial image generation by spatial transformation in perceptual colorspaces


Aydın A., Temizel A.

PATTERN RECOGNITION LETTERS, vol.174, pp.92-98, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 174
  • Publication Date: 2023
  • DOI: 10.1016/j.patrec.2023.09.003
  • Journal Name: PATTERN RECOGNITION LETTERS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.92-98
  • Keywords: Adversarial examples, Deep learning, Perceptual colorspace
  • Middle East Technical University Affiliated: Yes

Abstract

Deep neural networks are known to be vulnerable to adversarial perturbations. The amount of these perturbations is generally quantified using ℓp metrics, such as ℓ0, ℓ2, and ℓ∞. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since ℓp distance metrics are not representative of human perception. On the other hand, humans are less sensitive to changes in the chrominance components of an image. In addition, pixel shifts within a constrained neighborhood are hard to notice. Motivated by these observations, we propose a method that creates adversarial examples by applying spatial transformations that change pixel locations independently in the chrominance channels of perceptual colorspaces such as YCbCr and CIELAB, instead of making an additive perturbation or manipulating pixel values directly. In a targeted white-box attack setting, the proposed method is able to obtain competitive fooling rates with very high confidence. The experimental evaluations show that the proposed method has favorable results in terms of approximate perceptual distance between benign and adversarially generated images. The source code is publicly available at https://github.com/ayberkydn/stadv-torch.
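The abstract describes optimizing spatial transformations (flow fields) over the chrominance channels only, leaving luminance untouched. The following is a minimal, hypothetical PyTorch sketch of that idea, assuming a full-range BT.601 RGB/YCbCr conversion, a differentiable warp via grid_sample, and a plain targeted cross-entropy objective; the function names (chroma_spatial_attack, warp) are illustrative and do not mirror the stadv-torch API, and the flow-smoothness regularization commonly used in spatial-transformation attacks is omitted for brevity.

    import torch
    import torch.nn.functional as F

    def rgb_to_ycbcr(img):
        # img: (B, 3, H, W) RGB in [0, 1]; full-range BT.601 conversion (illustrative choice)
        r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
        cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
        return y, cb, cr

    def ycbcr_to_rgb(y, cb, cr):
        cb, cr = cb - 0.5, cr - 0.5
        r = y + 1.402 * cr
        g = y - 0.344136 * cb - 0.714136 * cr
        b = y + 1.772 * cb
        return torch.cat([r, g, b], dim=1).clamp(0, 1)

    def warp(channel, flow):
        # channel: (B, 1, H, W); flow: (B, H, W, 2) sampling offsets in pixels.
        # Each output location samples the input at (location + offset), so the warp is differentiable in flow.
        b, _, h, w = channel.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=channel.device),
            torch.linspace(-1, 1, w, device=channel.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # convert pixel offsets to normalized [-1, 1] grid coordinates
        norm = torch.stack([flow[..., 0] * 2 / (w - 1), flow[..., 1] * 2 / (h - 1)], dim=-1)
        return F.grid_sample(channel, base + norm, align_corners=True)

    def chroma_spatial_attack(model, x, target, steps=200, lr=0.01):
        # Two independent flow fields, one per chrominance channel; luminance is left untouched.
        b, _, h, w = x.shape
        y, cb, cr = rgb_to_ycbcr(x)
        flow_cb = torch.zeros(b, h, w, 2, device=x.device, requires_grad=True)
        flow_cr = torch.zeros(b, h, w, 2, device=x.device, requires_grad=True)
        opt = torch.optim.Adam([flow_cb, flow_cr], lr=lr)
        for _ in range(steps):
            adv = ycbcr_to_rgb(y, warp(cb, flow_cb), warp(cr, flow_cr))
            # targeted attack: minimizing cross-entropy on the target class drives the prediction toward it
            loss = F.cross_entropy(model(adv), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return adv.detach()

Given a classifier model, an image batch x in [0, 1], and a target label tensor, chroma_spatial_attack(model, x, target) would return images that differ from the originals only through small chrominance displacements, in the spirit of the approach summarized above.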