Multiview Robust Adversarial Stickers for Arbitrary Objects in the Physical World


  • Scott Oslund University of California, USA
  • Clayton Washington Ohio State University, USA
  • Andrew So California State Polytechnic University, USA
  • Tingting Chen California State Polytechnic University, USA
  • Hao Ji California State Polytechnic University, USA



Keywords: adversarial attacks, image classification, physical world


Among the different adversarial attacks on deep learning models for image classification, physical attacks are considered easier to implement because they do not assume access to victims' devices. In this paper, we propose a practical new pipeline for launching multiview robust physical-world attacks by creating printable adversarial stickers for arbitrary objects. In particular, a 3D model of the target object is used to estimate the camera pose in each photo. Then, by perturbing part of the 3D model's texture, rendering it, and overlaying the rendered perturbation onto the physical images, realistic training images can be obtained for training a robust adversarial sticker. Experiments with our pipeline show that highly effective adversarial stickers can be generated for many objects of different sizes and shapes, while also achieving a higher attack success rate than attacks that do not utilize camera pose estimation and 3D models. In addition, by using different backgrounds in training and adding randomness to the training images, the created stickers continue to function in varied environments. The attacks also remain robust in black-box tests.
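The overlay step described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the alpha-mask interface, and the brightness jitter (standing in for the randomness the paper adds for environmental robustness) are all illustrative assumptions.

```python
import numpy as np

def composite_training_image(photo, rendered_perturb, mask, rng=None):
    """Overlay a rendered sticker perturbation onto a real photo.

    photo:            H x W x 3 floats in [0, 1], the real scene image.
    rendered_perturb: H x W x 3 floats in [0, 1], the 3D render of the
                      perturbed texture at the estimated camera pose.
    mask:             H x W floats in [0, 1]; 1 where the sticker region
                      of the 3D model is visible, 0 elsewhere.

    The global brightness jitter is a hypothetical stand-in for the
    appearance randomization used to make the sticker robust across
    environments.
    """
    if rng is None:
        rng = np.random.default_rng()
    mask3 = mask[..., None]                       # broadcast over RGB channels
    out = (1.0 - mask3) * photo + mask3 * rendered_perturb
    out = out * rng.uniform(0.8, 1.2)             # simple lighting variation
    return np.clip(out, 0.0, 1.0)

# Toy usage: a 4x4 gray "photo" with the sticker covering the top-left patch.
photo = np.full((4, 4, 3), 0.5)
perturb = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
img = composite_training_image(photo, perturb, mask)
```

Training images composited this way inherit the real background and lighting of the photo while placing the candidate perturbation at a geometrically consistent pose, which is what lets the optimized sticker stay effective across viewpoints.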


Received: 13 July 2022 | Revised: 18 July 2022 | Accepted: 24 August 2022


Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.






How to Cite

Oslund, S., Washington, C., So, A., Chen, T., & Ji, H. (2022). Multiview Robust Adversarial Stickers for Arbitrary Objects in the Physical World. Journal of Computational and Cognitive Engineering, 1(4), 152–158.



Research Articles