TY - GEN
T1 - A Method for Adversarial Example Generation by Perturbing Selected Pixels
AU - Kamegawa, Tomoki
AU - Kimura, Masaomi
N1 - Publisher Copyright:
© 2022 Asia-Pacific Signal and Information Processing Association (APSIPA).
PY - 2022
Y1 - 2022
N2 - Recent research has shown that the output of a deep neural network can be intentionally changed by adding a perturbation to its input. Such perturbed images are called adversarial examples. An attack method that uses sparse perturbations is the Jacobian-based Saliency Map Attack (JSMA), which finds the pixels to perturb by generating a saliency map from the gradient of the output. It deceives the neural network by setting the values of those pixels to their maximum or minimum. However, setting a pixel to its maximum or minimum value is not optimal for generating adversarial examples, because the resulting perturbation is perceptible to the human eye. In this study, we propose a new method that reduces perturbations and generates adversarial examples whose perturbations are not easily recognized by the human eye. Our method produces adversarial examples with smaller perturbations by improving JSMA's conditions for selecting the pixels to perturb and its way of adding perturbations. Experimental results show that our method generates smaller perturbations with a misclassification rate comparable to that of JSMA, making the perturbations less recognizable to the human eye.
AB - Recent research has shown that the output of a deep neural network can be intentionally changed by adding a perturbation to its input. Such perturbed images are called adversarial examples. An attack method that uses sparse perturbations is the Jacobian-based Saliency Map Attack (JSMA), which finds the pixels to perturb by generating a saliency map from the gradient of the output. It deceives the neural network by setting the values of those pixels to their maximum or minimum. However, setting a pixel to its maximum or minimum value is not optimal for generating adversarial examples, because the resulting perturbation is perceptible to the human eye. In this study, we propose a new method that reduces perturbations and generates adversarial examples whose perturbations are not easily recognized by the human eye. Our method produces adversarial examples with smaller perturbations by improving JSMA's conditions for selecting the pixels to perturb and its way of adding perturbations. Experimental results show that our method generates smaller perturbations with a misclassification rate comparable to that of JSMA, making the perturbations less recognizable to the human eye.
UR - http://www.scopus.com/inward/record.url?scp=85146297885&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146297885&partnerID=8YFLogxK
U2 - 10.23919/APSIPAASC55919.2022.9980119
DO - 10.23919/APSIPAASC55919.2022.9980119
M3 - Conference contribution
AN - SCOPUS:85146297885
T3 - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
SP - 1109
EP - 1114
BT - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
Y2 - 7 November 2022 through 10 November 2022
ER -