Adaptive Client-Specific Augmentation (ACSA) for Mitigating Data Heterogeneity in Federated Learning

Authors

  • Aiswariya Milan Kummaya Department of Computer Science and Engineering, Amrita Vishwa Vidyapeetham–Bengaluru, India https://orcid.org/0000-0003-0763-1705
  • Amudha Joseph Department of Computer Science and Engineering, Amrita Vishwa Vidyapeetham–Bengaluru, India
  • George Ghinea Department of Computer Science, Brunel University of London, UK

DOI:

https://doi.org/10.47852/bonviewJCCE62028199

Keywords:

federated learning, deep learning, data heterogeneity, data augmentation

Abstract

Federated learning (FL) enables the training of deep learning models across multiple distributed clients while maintaining data privacy. Although this approach effectively protects sensitive information, it often yields lower accuracy than conventional centralized training. This performance gap stems primarily from the non-independent and non-identically distributed nature of client data, a central challenge in FL known as data heterogeneity. Because each client's dataset follows a distinct distribution, local models may diverge during training, which can degrade the performance of the aggregated global model. To address this issue, various data augmentation techniques have been proposed within the FL framework. However, many of these methods still require sharing raw data or intermediate features among clients, potentially compromising privacy and conflicting with the fundamental principles of FL. To overcome this limitation, we introduce an additional module, Adaptive Client-Specific Augmentation (ACSA), within FedHetero, a refined framework that identifies the nature of the heterogeneity present in the data and recommends augmentation policies, rather than sharing any data- or feature-level information, thereby enhancing privacy. Based on these shared policies, each client autonomously performs a dynamic data augmentation search to determine and apply the strategy most appropriate for its local dataset. This client-specific adaptation not only improves local and global model performance but also ensures strict privacy preservation. Experimental results on heterogeneous FL benchmarks demonstrate that our approach outperforms existing state-of-the-art methods in both communication efficiency and overall accuracy.
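The abstract's core mechanism, in which the server shares only augmentation policy recommendations and each client runs its own local search over them, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the function names (`evaluate_policy`, `select_policy`) and the stand-in scoring function are assumptions for demonstration only.

```python
import random

def evaluate_policy(policy, local_val_data):
    """Stand-in for a client's local validation score under one augmentation
    policy. A real client would train/evaluate on its own data; here we use a
    deterministic pseudo-score so the sketch is self-contained."""
    rng = random.Random(sum(map(ord, policy)))  # stable seed per policy name
    return sum(rng.random() for _ in local_val_data) / len(local_val_data)

def select_policy(server_policies, local_val_data):
    """Client-side search: score every server-recommended policy on local
    data only, and keep the best. No raw data or features leave the client;
    only policy identifiers were ever shared."""
    scores = {p: evaluate_policy(p, local_val_data) for p in server_policies}
    return max(scores, key=scores.get)

# The server broadcasts only policy names, never data or features.
policies = ["rotate", "color-jitter", "mixup", "random-crop"]
best = select_policy(policies, local_val_data=list(range(50)))
print(best in policies)  # prints True: the chosen policy is one of the candidates
```

The privacy argument rests on the direction of information flow: policy identifiers travel from server to client, while evaluation happens entirely on the client's private validation split.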



Received: 13 November 2025 | Revised: 28 January 2026 | Accepted: 13 March 2026



Conflicts of Interest

The authors declare that they have no conflicts of interest to this work.



Data Availability Statement

The CIFAR-10 and CIFAR-100 data that support the findings of this study are openly available at https://www.cs.toronto.edu/~kriz/cifar.html; the RIGA data that support the findings of this study are openly available at https://deepblue.lib.umich.edu/data/concern/data_sets/3b591905z; the Drishti-GS data that support the findings of this study are openly available at https://www.kaggle.com/datasets/lokeshsaipureddi/drishtigs-retina-dataset-for-onh-segmentation; the RIM-ONE data that support the findings of this study are openly available at https://github.com/miag-ull/rim-one-dl; and the HRF data that support the findings of this study are openly available at https://www5.cs.fau.de/research/data/fundus-images.



Author Contribution Statement

Aiswariya Milan Kummaya: Conceptualization, Methodology, Software, Investigation, Resources, Data curation, Writing – original draft, Visualization. Amudha Joseph: Conceptualization, Methodology, Validation, Formal analysis, Writing – review & editing, Supervision, Project administration. George Ghinea: Supervision, Project administration.

Published

2026-04-13

Section

Research Articles

How to Cite

Kummaya, A. M., Joseph, A., & Ghinea, G. (2026). Adaptive Client-Specific Augmentation (ACSA) for Mitigating Data Heterogeneity in Federated Learning. Journal of Computational and Cognitive Engineering. https://doi.org/10.47852/bonviewJCCE62028199