Fake Node Attacks on Graph Convolutional Networks


  • Xiaoyun Wang University of California, USA https://orcid.org/0000-0001-8148-9060
  • Minhao Cheng University of California, USA
  • Joe Eaton NVIDIA, USA
  • Cho-Jui Hsieh University of California, USA
  • S. Felix Wu University of California, USA




Keywords: neural networks, adversarial attacks, graph convolutional networks


Abstract

In this paper, we study the robustness of graph convolutional networks (GCNs). Previous work has shown that GCNs are vulnerable to adversarial perturbations of the adjacency or feature matrices of existing nodes; however, such attacks are usually unrealistic in real applications. In a social network application, for instance, the attacker would need to hack into either the client or the server to change existing links or features. In this paper, we propose a new type of “fake node attacks” that attack GCNs by adding malicious fake nodes. This is much more realistic than previous attacks: in a social network application, the attacker only needs to register a set of fake accounts and link them to existing ones. To conduct fake node attacks, we propose a greedy algorithm that generates the edges of the malicious nodes and their corresponding features so as to minimize classification accuracy on the target nodes. In addition, we introduce a discriminator that classifies malicious nodes against real nodes, and propose a Greedy-GAN (generative adversarial network) attack that updates the discriminator and the attacker simultaneously, making the malicious nodes indistinguishable from real ones. Our non-targeted attack decreases GCN accuracy to as low as 0.03, and our targeted attack reaches a success rate of 78% on a group of 100 nodes and 90% on average for attacking a single target node.
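The greedy edge-generation step described in the abstract can be sketched in miniature. The code below is a toy illustration, not the authors' algorithm: it uses a synthetic one-layer GCN surrogate with random weights, adds a single fake node, and greedily selects the one edge that most lowers the target node's correct-class logit (a stand-in for the paper's classification-accuracy objective). The function names, the surrogate architecture, and the single-edge budget are all assumptions made for this sketch.

```python
# Toy sketch of a greedy fake-node attack on a 1-layer GCN surrogate.
# The real paper's algorithm, loss, and architecture differ; everything
# here (gcn_logits, single-edge budget, random weights) is illustrative.
import numpy as np

def gcn_logits(A, X, W):
    """One-layer GCN surrogate: logits = D^-1 (A + I) X W."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # self-loops keep degrees > 0
    return D_inv @ A_hat @ X @ W

def greedy_fake_node_attack(A, X, W, target, y_target, fake_feat):
    """Append one fake node, then greedily pick the single edge that
    most lowers the target node's logit for its correct class."""
    n = A.shape[0]
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A                      # fake node starts isolated
    X2 = np.vstack([X, fake_feat])      # fake node's (attacker-chosen) features
    best_score, best_v = np.inf, None
    for v in range(n):                  # try linking the fake node to each real node
        A_try = A2.copy()
        A_try[n, v] = A_try[v, n] = 1.0
        score = gcn_logits(A_try, X2, W)[target, y_target]
        if score < best_score:
            best_score, best_v = score, v
    A2[n, best_v] = A2[best_v, n] = 1.0  # commit the best edge
    return A2, X2, best_v

# Usage on a tiny 3-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
fake_feat = rng.normal(size=(1, 4))
A2, X2, chosen = greedy_fake_node_attack(A, X, W, target=0, y_target=1,
                                         fake_feat=fake_feat)
```

The paper's full method repeats this kind of greedy step over many fake nodes and also optimizes the fake features; here one edge suffices to show the loop structure. Note that with a one-layer surrogate, only an edge incident to the target can change its logit, which is why deeper GCNs (two layers in typical setups) give fake nodes more reach.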






How to Cite

Wang, X., Cheng, M., Eaton, J., Hsieh, C.-J., & Wu, S. F. (2022). Fake Node Attacks on Graph Convolutional Networks. Journal of Computational and Cognitive Engineering, 1(4), 165–173. https://doi.org/10.47852/bonviewJCCE2202321



Research Articles