PATTERN RECOGNITION LETTERS, vol. 174, pp. 92-98, 2023 (SCI-Expanded)
Deep neural networks are known to be vulnerable to adversarial perturbations. These perturbations are generally quantified using $L_p$ metrics, such as $L_0$, $L_2$ and $L_\infty$. However, even when the measured perturbations are small, they tend to be noticeable to human observers, since $L_p$ distance metrics are not representative of human perception. On the other hand, humans are less sensitive to changes in the chrominance channels of an image, and pixel shifts within a constrained neighborhood are hard to notice. Motivated by these observations, we propose a method that creates adversarial examples by applying spatial transformations independently to the chrominance channels of perceptual colorspaces such as YCbCr and CIELAB, changing pixel locations rather than making an additive perturbation or manipulating pixel values directly. In a targeted white-box attack setting, the proposed method achieves competitive fooling rates with very high confidence. The experimental evaluations show that the proposed method has favorable results in terms of the approximate perceptual distance between benign and adversarially generated images. The source code is publicly available at https://github.com/ayberkydn/stadv-torch.
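To make the idea concrete, below is a minimal PyTorch sketch of a chrominance-only spatial-transformation attack. It is an illustration under stated assumptions, not the authors' implementation (see the repository above for the official code): the `rgb_to_ycbcr`/`ycbcr_to_rgb` conversions are taken from kornia, the warping uses the standard `grid_sample` flow-field formulation, and the function names, the flow penalty, and all hyperparameters are illustrative.

```python
# Minimal sketch, NOT the official stadv-torch implementation.
# Assumes kornia is installed for the colorspace conversions.
import torch
import torch.nn.functional as F
from kornia.color import rgb_to_ycbcr, ycbcr_to_rgb


def warp_chroma(img_rgb, flow):
    """Warp only the Cb/Cr channels of a batch of RGB images.

    img_rgb: (B, 3, H, W) tensor in [0, 1]
    flow:    (B, 2, H, W) per-pixel displacements in normalized coordinates
    """
    B, _, H, W = img_rgb.shape
    ycbcr = rgb_to_ycbcr(img_rgb)
    y, cbcr = ycbcr[:, :1], ycbcr[:, 1:]

    # Identity sampling grid in [-1, 1]^2, the convention grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=img_rgb.device),
        torch.linspace(-1, 1, W, device=img_rgb.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)  # shift the sampling locations

    # Resample chrominance only; luminance stays untouched.
    cbcr_warped = F.grid_sample(cbcr, grid, align_corners=True)
    return ycbcr_to_rgb(torch.cat((y, cbcr_warped), dim=1)).clamp(0, 1)


def chroma_flow_attack(model, img_rgb, target, steps=200, lr=0.01, tau=0.05):
    """Targeted white-box attack: optimize a flow field over Cb/Cr so the
    classifier predicts `target`. The `tau` term penalizes large shifts,
    a crude stand-in for constraining displacements to a small neighborhood."""
    flow = torch.zeros(img_rgb.size(0), 2, *img_rgb.shape[-2:],
                       device=img_rgb.device, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        adv = warp_chroma(img_rgb, flow)
        loss = F.cross_entropy(model(adv), target) + tau * flow.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return warp_chroma(img_rgb, flow).detach()
```

A CIELAB variant would follow the same pattern, swapping in kornia's `rgb_to_lab`/`lab_to_rgb` and warping the a/b channels; a smoothness regularizer on the flow (as in stAdv-style attacks) could replace the simple magnitude penalty used here.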