Fake Node Attacks on Graph Convolutional Networks
DOI: https://doi.org/10.47852/bonviewJCCE2202321

Keywords: neural networks, adversarial attacks, graph convolutional networks

Abstract
In this paper, we study the robustness of graph convolutional networks (GCNs). Previous works have shown that GCNs are vulnerable to adversarial perturbations of the adjacency or feature matrices of existing nodes; however, such attacks are usually unrealistic in real applications. For instance, in social network applications, the attacker would need to hack into either the client or the server to change existing links or features. In this paper, we propose a new type of “fake node attack” that attacks GCNs by adding malicious fake nodes. This is much more realistic than previous attacks: in social network applications, the attacker only needs to register a set of fake accounts and link them to existing ones. To conduct fake node attacks, we propose a greedy algorithm that generates the edges of the malicious nodes and their corresponding features, aiming to minimize the classification accuracy on the target nodes. In addition, we introduce a discriminator to distinguish malicious nodes from real nodes and propose a Greedy-GAN (generative adversarial network) attack that updates the discriminator and the attacker simultaneously, making the malicious nodes indistinguishable from real ones. Our non-targeted attack decreases the accuracy of the GCN to 0.03, and our targeted attack achieves a success rate of 78% on a group of 100 nodes and 90% on average when attacking a single target node.
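The greedy edge-selection idea in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the fixed two-layer GCN surrogate, the function names, and the objective of minimizing the target node's true-class logit are all assumptions made for the sketch.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_forward(A, X, W1, W2):
    # Two-layer GCN surrogate: A_n ReLU(A_n X W1) W2 (logits, no softmax)
    A_n = normalize_adj(A)
    H = np.maximum(A_n @ X @ W1, 0.0)
    return A_n @ H @ W2

def greedy_fake_node_attack(A, X, W1, W2, target, y_true, x_fake, budget):
    """Append one fake node with features x_fake, then greedily add up to
    `budget` edges, each time picking the edge that most reduces the
    target node's logit for its true class y_true."""
    n = A.shape[0]
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A                      # fake node starts isolated
    X2 = np.vstack([X, x_fake])
    for _ in range(budget):
        best_score, best_j = np.inf, None
        for j in range(n):              # candidate edge (fake, j)
            if A2[n, j]:
                continue
            A_try = A2.copy()
            A_try[n, j] = A_try[j, n] = 1.0
            score = gcn_forward(A_try, X2, W1, W2)[target, y_true]
            if score < best_score:
                best_score, best_j = score, j
        A2[n, best_j] = A2[best_j, n] = 1.0
    return A2, X2
```

In the paper's setting the fake node's features are also optimized; here they are passed in fixed (`x_fake`) to keep the sketch focused on the greedy edge search.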
Received: 13 July 2022 | Revised: 18 July 2022 | Accepted: 24 August 2022
Conflicts of Interest
The authors declare that they have no conflicts of interest in this work.
License
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.