Generative Adversarial-Based Ubiquitous Data Integration Model for Human Re-Identification

Authors

Mohammad Shabaz, Mukesh Soni

DOI:

https://doi.org/10.47852/bonviewJCCE42022872

Keywords:

ubiquitous data, human re-identification, generative adversarial networks, cross-modal pedestrian re-identification algorithms

Abstract

Person re-identification is an important problem in computer vision that faces significant challenges, including illumination changes, occlusion, pose variation, and feature variation against complex backgrounds. Traditional methods have limited ability to address these issues, whereas deep learning has opened new avenues and made the task an active research topic. Generative adversarial networks (GANs) have gained popularity in recent years for their ability to mitigate these challenges and offer a promising route to improving person re-identification. This paper introduces two approaches that improve the network's performance and efficiency. First, a dual-stream network is proposed that integrates GANs to learn features from multi-modal images. Second, an angle-based heterogeneous center triplet loss is presented that strengthens clustering in the feature space and reduces the effect of outliers on pairwise distances. Together, these approaches constrain positive and negative samples in the feature space from a new angular perspective, yielding an efficient and effective network. The proposed technique learns discriminative pedestrian features while minimizing cross-modal differences. The model is evaluated on cross-modal pedestrian re-identification datasets, SYSU-MM01 and RegDB, which contain large collections of visible-light and infrared pedestrian images captured by multiple cameras. Ablation experiments show that the dual-stream network achieves a Rank-1 accuracy of 57.27% on the SYSU-MM01 dataset in all-search mode, surpassing many existing works and demonstrating that the dual-pooling fused features obtained by feature fusion are richer and more discriminative.
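The abstract does not include code. For readers who want a concrete picture of the angle-based heterogeneous center triplet loss described above, the following PyTorch sketch illustrates one plausible form of such a loss; it is not the authors' implementation. The function name `hetero_center_triplet_loss`, the margin value, and the assumption of a batch sampled as P identities with K visible and K infrared images per identity are illustrative assumptions.

```python
# Illustrative sketch (not the paper's released code) of a heterogeneous
# center triplet loss using an angle-based (cosine) distance between
# per-identity modality centers.
import torch
import torch.nn.functional as F


def hetero_center_triplet_loss(vis_feats, ir_feats, labels, margin=0.3):
    """vis_feats, ir_feats: (P*K, D) features from the two modalities.
    labels: (P*K,) identity labels, same ordering for both modalities."""
    ids = labels.unique()
    # Per-identity centers in each modality.
    vis_centers = torch.stack([vis_feats[labels == i].mean(0) for i in ids])
    ir_centers = torch.stack([ir_feats[labels == i].mean(0) for i in ids])

    # Angle-based distance: 1 - cosine similarity of L2-normalized centers.
    vis_n = F.normalize(vis_centers, dim=1)
    ir_n = F.normalize(ir_centers, dim=1)
    dist = 1.0 - vis_n @ ir_n.t()                  # (P, P) cross-modal distances

    P = len(ids)
    pos = dist.diag()                              # same identity, other modality
    # Hardest negative: closest center belonging to a different identity.
    off_diag = ~torch.eye(P, dtype=torch.bool, device=dist.device)
    neg = dist.masked_fill(~off_diag, float("inf")).min(dim=1).values

    return F.relu(pos - neg + margin).mean()
```

Because the loss operates on identity centers rather than on all image pairs, a single outlier image shifts its center only slightly, which is one way the abstract's claim of reduced sensitivity to outliers can be realized.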

 

Received: 12 March 2024 | Revised: 15 April 2024 | Accepted: 2 May 2024

 

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

 

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

 

Author Contribution Statement

Mohammad Shabaz: Conceptualization, Methodology, Resources, Writing – original draft, Visualization, Supervision, Project administration, Funding acquisition. Mukesh Soni: Software, Validation, Formal analysis, Investigation, Data curation, Writing – review & editing.


Published

2024-05-07

Section

Research Articles

How to Cite

Shabaz, M., & Soni, M. (2024). Generative Adversarial-Based Ubiquitous Data Integration Model for Human Re-Identification. Journal of Computational and Cognitive Engineering. https://doi.org/10.47852/bonviewJCCE42022872