Block Switching: A Stochastic Approach for Deep Learning Security

Authors

  • Xiao Wang, Boston University, USA
  • Siyue Wang, Northeastern University, USA
  • Pin-Yu Chen, IBM Research, USA
  • Xue Lin, Northeastern University, USA
  • Peter Chin, Boston University, USA

DOI:

https://doi.org/10.47852/bonviewJCCE2202320

Keywords:

deep learning security, adversarial attack, stochastic defense, DNN robustness

Abstract

Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a trained network with high accuracy produce arbitrarily incorrect predictions, while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input-gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning. Compared to other defenses, BS is also characterized by the following features: (i) it causes less drop in test accuracy; (ii) it is attack-independent; and (iii) it is compatible with other defenses and can be used jointly with them.
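The switching mechanism described in the abstract can be illustrated with a short, hypothetical PyTorch sketch (this is not the authors' released code; the module and helper names below are assumptions made for illustration): a switching module holds several parallel channels implementing the same block of layers and picks one uniformly at random on every forward pass, so the sub-network an adversary probes differs from call to call.

    import random
    import torch
    import torch.nn as nn

    class BlockSwitch(nn.Module):
        """Minimal sketch of a Block Switching layer group (hypothetical name).

        Holds several parallel sub-networks ("channels") that play the role of
        the same block of layers; on each forward pass one channel is chosen
        uniformly at random, making the effective network unpredictable.
        """

        def __init__(self, channels):
            super().__init__()
            self.channels = nn.ModuleList(channels)

        def forward(self, x):
            # Randomly activate one of the parallel channels at run time.
            idx = random.randrange(len(self.channels))
            return self.channels[idx](x)

    # Example: a small CNN whose lower block is replaced by 4 parallel channels.
    def make_channel():
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    model = nn.Sequential(
        BlockSwitch([make_channel() for _ in range(4)]),  # stochastic lower block
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 10),                      # shared upper block
    )

    logits = model(torch.randn(8, 3, 32, 32))  # each call may route through a different channel

In this sketch, each channel would be trained to roughly the same accuracy, so switching among them costs little clean accuracy while decorrelating the input gradients an attacker can estimate.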


Received: 13 July 2022 | Revised: 18 July 2022 | Accepted: 24 August 2022


Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

Published

2022-10-05

How to Cite

Wang, X., Wang, S., Chen, P., Lin, X., & Chin, P. (2022). Block Switching: A Stochastic Approach for Deep Learning Security. Journal of Computational and Cognitive Engineering, 1(4), 159–164. https://doi.org/10.47852/bonviewJCCE2202320

Section

Research Articles