Block Switching: A Stochastic Approach for Deep Learning Security
DOI: https://doi.org/10.47852/bonviewJCCE2202320
Keywords: deep learning security, adversarial attack, stochastic defense, DNN robustness
Abstract
Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a trained network with high accuracy produce arbitrary incorrect predictions, while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning. Compared to other defenses, BS is also characterized by the following features: (i) BS causes a smaller drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them.
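The core mechanism described above can be sketched in a few lines: a block of layers is replaced by several parallel channels, and each forward pass routes through one channel chosen uniformly at random. The sketch below is illustrative only, not the authors' implementation; the class name, random dense-layer channels, and dimensions are all assumptions (in the paper each channel is a trained sub-model).

```python
import numpy as np

class BlockSwitch:
    """Minimal sketch of a Block Switching block: several parallel
    channels stand in for one block of layers; the active channel is
    picked at random on every forward pass, so the gradient path is
    unpredictable to an adversary. Channel weights here are random
    stand-ins, not trained parameters."""

    def __init__(self, in_dim, out_dim, n_channels=4, seed=0):
        self.rng = np.random.default_rng(seed)
        # Each channel is an independent dense layer (weight, bias).
        self.channels = [
            (self.rng.standard_normal((in_dim, out_dim)) * 0.1,
             np.zeros(out_dim))
            for _ in range(n_channels)
        ]

    def forward(self, x):
        # Randomly assign the active channel at run time.
        w, b = self.channels[self.rng.integers(len(self.channels))]
        return np.maximum(x @ w + b, 0.0)  # ReLU activation

block = BlockSwitch(in_dim=8, out_dim=4)
x = np.ones((1, 8))
# Repeated forward passes on the same input route through different
# channels, so several distinct outputs appear over many calls.
outputs = {block.forward(x).tobytes() for _ in range(50)}
```

Because the channel choice is resampled on every query, an adversary computing input gradients sees a mixture over channels rather than one fixed computation graph, which is the source of the dispersed gradient distribution reported in the paper.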
Received: 13 July 2022 | Revised: 18 July 2022 | Accepted: 24 August 2022
Conflicts of Interest
The authors declare that they have no conflicts of interest in this work.
License
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.