A Study on the Efficiency of Combined Reconstruction and Poisoning Attacks in Federated Learning

Authors

  • Christian Becker Karlsruhe University of Applied Sciences, Germany
  • José Antonio Peregrina Karlsruhe University of Applied Sciences, Germany
  • Frauke Beccard Inovex GmbH, Germany
  • Marisa Mohr Inovex GmbH, Germany
  • Christian Zirpins Karlsruhe University of Applied Sciences, Germany https://orcid.org/0000-0002-0838-2846

DOI:

https://doi.org/10.47852/bonviewJDSIS52023970

Keywords:

federated learning, reconstruction, poisoning, generative adversarial network, differential privacy

Abstract

Federated learning (FL) enables the training of shared machine learning (ML) models on the private data of multiple independent participants. While FL greatly improves data privacy, it also introduces a multitude of new threats, such as the combined reconstruction and poisoning attack (CRPA). CRPA allows any FL participant to manipulate the central model, causing it to misclassify arbitrary class combinations: a participant may reconstruct samples from an arbitrary unknown class and then use these samples to deceive the central model into misclassifying them as another class. Previously proposed attack scenarios featured only a single class combination and a single static data distribution. In realistic scenarios, however, the adversary cannot know which class combinations are vulnerable to CRPA and must adapt to the existing data distribution among the participants. In this paper, we investigate whether these two parameters influence CRPA by observing the attack accuracy. To this end, we examine the efficiency of CRPA, focusing on the vulnerability of all class combinations, the effect of unbalanced data distributions, and the application of differential privacy (DP). We show that, on the MNIST and FMNIST datasets, the success of the attack is strongly influenced by the selected class combination as well as the underlying data distribution, with differences of up to 46% in attack accuracy in the worst case. Additionally, we were able to prevent the reconstruction of human-identifiable images with DP, which in our case also mitigated the poisoning attack. Our results indicate that the success of reconstruction and poisoning attacks diminishes in scenarios with an unbalanced data distribution among participants and that DP can be an effective defense against the combined attack in any type of scenario.
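The DP defense mentioned in the abstract is commonly realized in FL by clipping each participant's model update and adding calibrated Gaussian noise before aggregation, which bounds the influence of any single (possibly malicious) update. The following is a minimal sketch of that pattern only; the function name `dp_aggregate` and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(updates, clip_norm=1.0, noise_mult=1.1):
    """Clip each update to clip_norm in L2 norm, average the clipped
    updates, and add Gaussian noise scaled to the clipping bound
    (the Gaussian mechanism used in DP federated averaging)."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Nine honest updates plus one boosted (large-norm) poisoned update:
# clipping bounds the poisoned update's contribution to the global model.
honest = [rng.normal(0.0, 0.1, 10) for _ in range(9)]
malicious = rng.normal(0.0, 5.0, 10)  # boosted poisoning delta
agg = dp_aggregate(honest + [malicious])
```

Because the poisoned update is scaled down to the same norm budget as honest updates and the added noise masks individual contributions, this mechanism limits both the poisoning impact and the gradient information available for reconstruction.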

 

Received: 26 July 2024 | Revised: 16 December 2024 | Accepted: 6 March 2025

 

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

 

Data Availability Statement

The data supporting the findings of this study are openly available in the MNIST Database at http://yann.lecun.com/exdb/mnist/ and in the Fashion-MNIST repository on GitHub at https://github.com/zalandoresearch/fashion-mnist.

 

Author Contribution Statement

Christian Becker: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization. José Antonio Peregrina: Conceptualization, Methodology, Resources, Writing – review & editing, Visualization, Supervision. Frauke Beccard: Conceptualization, Methodology, Resources, Writing – review & editing, Supervision, Project administration, Funding acquisition. Marisa Mohr: Conceptualization, Methodology, Resources, Writing – review & editing, Supervision, Project administration, Funding acquisition. Christian Zirpins: Conceptualization, Methodology, Resources, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition.


Published

2025-03-20

Section

Research Articles

How to Cite

Becker, C., Peregrina, J. A., Beccard, F., Mohr, M., & Zirpins, C. (2025). A Study on the Efficiency of Combined Reconstruction and Poisoning Attacks in Federated Learning. Journal of Data Science and Intelligent Systems. https://doi.org/10.47852/bonviewJDSIS52023970
