Multiview Robust Adversarial Stickers for Arbitrary Objects in the Physical World
Keywords: adversarial attacks, image classification, physical world
Among the various adversarial attacks on deep learning models for image classification, physical attacks are considered easier to implement because they do not assume access to the victim's device. In this paper, we propose a practical new pipeline for launching multiview-robust physical-world attacks by creating printable adversarial stickers for arbitrary objects. In particular, a 3D model of the target object is used to estimate the camera pose in each photo. By perturbing part of the 3D model's texture, rendering it, and overlaying the rendered perturbation onto the physical images, realistic training images are obtained for optimizing a robust adversarial sticker. Experiments with our pipeline show that highly effective adversarial stickers can be generated for many objects of different sizes and shapes, achieving a higher attack success rate than attacks that do not use camera pose estimation and 3D models. Moreover, by training with varied backgrounds and adding randomness to the training images, the resulting stickers continue to function in diverse environments. The attacks also remain robust in black-box tests.
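The core of the pipeline described above is compositing a rendered sticker perturbation onto real photos and randomizing the result before each training step. The sketch below illustrates that idea under stated assumptions: the function names (`overlay_sticker`, `augment`) and the specific augmentation parameters are hypothetical, and the paper's actual pose estimator and 3D renderer are not shown; the sticker is assumed to be already rendered and pixel-aligned with the photo.

```python
import numpy as np

def overlay_sticker(image, sticker, mask):
    """Composite a rendered sticker perturbation onto a photo.

    image   : HxWx3 float array in [0, 1] (the physical photo)
    sticker : HxWx3 float array, the perturbation rendered and aligned
              via the estimated camera pose (assumed precomputed)
    mask    : HxW boolean array, True where the sticker is visible
    """
    out = image.copy()
    out[mask] = sticker[mask]          # replace only the masked region
    return out

def augment(image, rng):
    """Randomize brightness and add pixel noise, a stand-in for the
    environmental randomness applied to training images."""
    brightness = rng.uniform(0.7, 1.3)
    noise = rng.normal(0.0, 0.02, size=image.shape)
    return np.clip(image * brightness + noise, 0.0, 1.0)

# Toy usage: a black photo, a gray sticker covering the center.
rng = np.random.default_rng(0)
photo = np.zeros((4, 4, 3))
sticker = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

composited = overlay_sticker(photo, sticker, mask)
training_image = augment(composited, rng)
```

In the full pipeline, `composited` would be fed to the victim classifier and the sticker texture updated by gradient descent; the augmentation step is what makes the optimized sticker transfer to varied real-world conditions.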
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.