A Fairness-Aware Machine Learning Framework for Sexual and Reproductive Health: Evaluating Algorithmic Bias Across Models

Authors

  • Efosa Osagie, Computer Science and Data Science Department, York St. John University, UK
  • Shemi Ayo-Ogbor, Ministry of Health, Al Taif, Kingdom of Saudi Arabia
  • Rebecca Balasundaram, Computer Science and Data Science Department, York St. John University, UK, https://orcid.org/0000-0001-6618-2642

DOI:

https://doi.org/10.47852/bonviewJDSIS62027678

Keywords:

sexual and reproductive health, algorithmic bias, bias mitigation, health equity, fairness-aware framework

Abstract

Advances in computational infrastructure and the widespread adoption of electronic health record (EHR) systems have accelerated the integration of artificial intelligence (AI) and machine learning (ML) into sexual and reproductive health (SRH) services. These technologies enhance diagnostic accuracy, support clinical decision-making, and enable predictive analytics using diverse healthcare data. However, biases within training datasets can produce unfair outcomes, particularly for underrepresented groups. This study proposes a fairness-aware ML framework designed to detect and mitigate algorithmic bias in SRH services. The framework is evaluated using two open-source datasets: a large SRH dataset from England (2014–2015) containing 2,126,413 records and the PCOS dataset covering the top 75 countries, enabling assessment of generalizability and intersectional fairness. It integrates pre-processing, in-processing, and post-processing techniques, including model-specific and group-specific thresholding. Results show that on the SRH England dataset, logistic regression (LR) achieved near-optimal parity fairness with minimal performance loss, improving disparate impact from 0.99 to 1.00 while maintaining 0.66 accuracy. Random Forest (RF) and Gradient Boosting (GB) exhibited larger fairness shifts, with disparate impact decreasing from 0.94 to 0.66 (RF) and 0.93 to 0.77 (GB), though accuracy remained stable. On the PCOS dataset, LR reduced bias with only a 1.96% accuracy drop, while GB improved performance but saw fairness decline, with disparate impact falling from 1.08 to 0.57. RF improved fairness but experienced a 28% accuracy reduction. Overall, the findings show that fairness-aware ML can substantially reduce bias, though equity–performance trade-offs vary across models and datasets.
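The disparate impact metric and the group-specific thresholding step described in the abstract can be illustrated with a short sketch. This is an illustrative reconstruction, not the authors' implementation: the function names, the toy score distribution, and the threshold values (0.45/0.55) are assumptions chosen for demonstration. Disparate impact here is the ratio of positive-prediction rates between the unprivileged and privileged groups, with 1.0 indicating demographic parity.

```python
import random

def positive_rate(preds, groups, g):
    """Fraction of positive predictions within group g."""
    vals = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(vals) / len(vals)

def disparate_impact(preds, groups):
    """Ratio of positive rates: unprivileged (0) over privileged (1).
    A value of 1.0 indicates demographic parity."""
    return positive_rate(preds, groups, 0) / positive_rate(preds, groups, 1)

def group_threshold_predict(scores, groups, thresholds):
    """Post-processing: apply a separate decision threshold per group."""
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

# Toy data: the privileged group (1) receives systematically higher scores,
# mimicking a biased model.
random.seed(0)
groups = [random.randint(0, 1) for _ in range(10_000)]
scores = [random.random() + 0.10 * g for g in groups]

# Single global threshold vs. group-specific thresholds (assumed values).
baseline = [1 if s >= 0.5 else 0 for s in scores]
adjusted = group_threshold_predict(scores, groups, {0: 0.45, 1: 0.55})

print(f"baseline DI: {disparate_impact(baseline, groups):.2f}")
print(f"adjusted DI: {disparate_impact(adjusted, groups):.2f}")
```

With this synthetic bias, the baseline disparate impact falls below 1.0, while the per-group thresholds move it toward parity, illustrating the equity-performance trade-off the study evaluates across models.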


Received: 16 September 2025 | Revised: 21 January 2026 | Accepted: 31 March 2026


Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.


Data Availability Statement

The PCOS dataset that supports the findings of this study is openly available on Kaggle at https://www.kaggle.com/datasets/ankushpanday1/pcos-prediction-datasettop-75-countries. The NHS Sexual and Reproductive Health (SRH) Services England 2014–15 dataset that supports the findings of this study is openly available through NHS Digital at https://digital.nhs.uk/data-and-information/publications/statistical/sexual-and-reproductive-health-services/sexual-and-reproductive-health-services-england-2014-15.


Author Contribution Statement

Efosa Osagie: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration. Shemi Ayo-Ogbor: Investigation, Writing – original draft. Rebecca Balasundaram: Investigation, Writing – original draft.


Published

2026-05-08

Section

Research Articles

How to Cite

Osagie, E., Ayo-Ogbor, S., & Balasundaram, R. (2026). A Fairness-Aware Machine Learning Framework for Sexual and Reproductive Health: Evaluating Algorithmic Bias Across Models. Journal of Data Science and Intelligent Systems. https://doi.org/10.47852/bonviewJDSIS62027678