Multiview Robust Adversarial Stickers for Arbitrary Objects in the Physical World
DOI: https://doi.org/10.47852/bonviewJCCE2202322

Keywords: adversarial attacks, image classification, physical world

Abstract
Among the different adversarial attacks on deep learning models for image classification, physical attacks are considered easier to implement because they do not assume access to victims' devices. In this paper, we propose a practical new pipeline for launching multiview robust physical-world attacks by creating printable adversarial stickers for arbitrary objects. In particular, a 3D model of the object is used to estimate the camera pose in each photo. Then, by perturbing part of the 3D model's texture, rendering it, and overlaying the rendered perturbation onto physical images, realistic training images are obtained for training a robust adversarial sticker. Experiments with our pipeline show that highly effective adversarial stickers can be generated for many objects of different sizes and shapes, while achieving a higher attack success rate than attacks that do not use camera pose estimation and 3D models. In addition, by training with varied backgrounds and adding randomness to the training images, the resulting stickers continue to function in diverse environments. The attacks also remain robust in black-box tests.
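The training loop the abstract describes (overlay a rendered sticker perturbation onto images, randomize lighting and noise, and optimize the sticker over the expectation of these transformations) can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: it replaces the CNN classifier, 3D renderer, and pose estimation with a toy linear classifier and a fixed 1-D "overlay mask", and optimizes the sticker by projected gradient ascent on the target class's log-probability. All names (`apply_sticker`, `mask`, `target_cls`) are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

D, C = 64, 5                         # toy "image" of 64 pixels, 5 classes
W = rng.normal(size=(C, D))          # toy linear classifier (stand-in for a CNN)
base = rng.uniform(size=D)           # photo of the physical object
mask = np.zeros(D)
mask[20:36] = 1.0                    # region the printed sticker occupies
target_cls = 3                       # class the attack tries to induce

def apply_sticker(img, sticker, brightness, noise):
    """Overlay the sticker region, then simulate environment variation."""
    x = img * (1 - mask) + sticker * mask
    return np.clip(brightness * x + noise, 0.0, 1.0)

sticker = np.full(D, 0.5)            # initial gray sticker
lr = 0.05
for step in range(300):
    grad = np.zeros(D)
    for _ in range(8):               # average over random lighting/noise draws
        b = rng.uniform(0.7, 1.3)
        n = rng.normal(scale=0.05, size=D)
        x = apply_sticker(base, sticker, b, n)
        p = np.exp(W @ x)
        p /= p.sum()                 # softmax probabilities
        # gradient of log p(target) w.r.t. the sticker pixels:
        # logits are linear in the sticker with slope b on masked pixels
        grad += b * mask * (W[target_cls] - p @ W)
    # projected gradient ascent step, keeping pixel values printable in [0, 1]
    sticker = np.clip(sticker + lr * grad / 8, 0.0, 1.0)

x = apply_sticker(base, sticker, 1.0, np.zeros(D))
print("predicted class after attack:", int(np.argmax(W @ x)))
```

Averaging the gradient over random brightness and noise draws is what makes the optimized sticker robust to environment changes, mirroring the paper's use of randomized training images; the real pipeline would additionally render the perturbation under many estimated camera poses before overlaying it.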
Received: 13 July 2022 | Revised: 18 July 2022 | Accepted: 24 August 2022
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
License
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.