A Study on the Efficiency of Combined Reconstruction and Poisoning Attacks in Federated Learning
DOI: https://doi.org/10.47852/bonviewJDSIS52023970
Keywords: federated learning, reconstruction, poisoning, generative adversarial network, differential privacy
Abstract
Federated learning (FL) is an approach that enables the training of shared machine learning (ML) models on the private data of multiple independent participants. While FL greatly improves data privacy, it also introduces a multitude of new threats, such as the combined reconstruction and poisoning attack (CRPA). CRPA allows any FL participant to manipulate the central model so that it misclassifies arbitrary class combinations: a participant may reconstruct samples of an arbitrary unknown class and then use these samples to deceive the central model into misclassifying them as another class. Previously proposed attack scenarios featured only a single class combination and a single static data distribution. In realistic scenarios, however, the adversary cannot know which class combinations are vulnerable to CRPA and must adapt to the existing data distribution among the participants. In this paper, we investigate whether these two parameters influence CRPA and measure their effect on attack accuracy. To this end, we examine the efficiency of CRPA, focusing on the vulnerability of all class combinations, the effect of unbalanced data distributions, and the application of differential privacy (DP). We show that the success of the attack on the MNIST and FMNIST datasets is strongly influenced by both the selected class combination and the underlying data distribution, with differences of up to 46% in attack accuracy in the worst case. Additionally, we were able to prevent the reconstruction of human-identifiable images with DP, which in our case also mitigated the poisoning attack. Our results indicate that the success of reconstruction and poisoning attacks diminishes in scenarios with an unbalanced data distribution among participants and that DP can be an effective defense against the combined attack in any type of scenario.
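To make the DP defense mentioned in the abstract concrete, the following is a minimal sketch of federated averaging with client-side update sanitization (clipping plus Gaussian noise, in the style of DP-SGD). It is an illustrative assumption, not the authors' implementation; the names dp_sanitize, fed_avg, clip_norm, and noise_multiplier are hypothetical.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update to a maximum L2 norm and add
    Gaussian noise scaled to the clipping bound (DP-SGD style)."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def fed_avg(global_weights, client_updates):
    """Apply the average of the sanitized client updates to the
    central model (plain federated averaging)."""
    return global_weights + np.mean(client_updates, axis=0)

# Usage sketch: in each federated round, every participant computes a
# local update, sanitizes it, and the server aggregates. The random
# vectors stand in for real gradient updates.
rng = np.random.default_rng(0)
global_w = np.zeros(10)
for _ in range(3):  # three federated rounds
    updates = [dp_sanitize(rng.normal(size=10), rng=rng) for _ in range(5)]
    global_w = fed_avg(global_w, updates)
```

Because the noise is added before aggregation, a malicious participant observing the shared model sees only perturbed contributions, which is what hinders GAN-based sample reconstruction and, in the scenario studied here, also weakens the subsequent poisoning step.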
Received: 26 July 2024 | Revised: 16 December 2024 | Accepted: 6 March 2025
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
Data Availability Statement
The data supporting the findings of this study are openly available in the MNIST Database at http://yann.lecun.com/exdb/mnist/ and in the Fashion-MNIST repository on GitHub at https://github.com/zalandoresearch/fashion-mnist.
Author Contribution Statement
Christian Becker: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization. José Antonio Peregrina: Conceptualization, Methodology, Resources, Writing – review & editing, Visualization, Supervision. Frauke Beccard: Conceptualization, Methodology, Resources, Writing – review & editing, Supervision, Project administration, Funding acquisition. Marisa Mohr: Conceptualization, Methodology, Resources, Writing – review & editing, Supervision, Project administration, Funding acquisition. Christian Zirpins: Conceptualization, Methodology, Resources, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition.
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution 4.0 International License.
Funding data
Bundesministerium für Bildung und Forschung, Grant 16KIS1142K (project KIWI)
European Regional Development Fund, Interreg Upper Rhine initiative (project aura.ai)