YOLOv5-Based Object Detection System for Visually Impaired Individuals Using Raspberry Pi

Authors

  • Shital Nivrutti Katkade, Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, India https://orcid.org/0009-0007-4588-681X
  • Ramesh R. Manza, Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, India
  • Chetan Pattebahadur, Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, India https://orcid.org/0009-0002-0192-2835

DOI:

https://doi.org/10.47852/bonviewAIA52024434

Keywords:

deep learning, Raspberry Pi, OpenCV, gTTS, YOLO

Abstract

Blindness and visual impairment affect people everywhere. According to the World Health Organization (WHO), at least 2.2 billion people worldwide live with some form of vision impairment or blindness, and in at least 1 billion of these cases the impairment could have been prevented or has yet to be addressed [1]. The advent of deep learning has brought significant advances in real-time object detection. However, achieving high accuracy with low latency on resource-constrained devices remains a critical challenge, especially for outdoor navigation applications that assist the visually impaired. This article proposes a novel hybrid detection framework that integrates the strengths of region-based and regression-based methods and is designed specifically for deployment on Raspberry Pi devices. Our method leverages an optimized version of the YOLO algorithm combined with a lightweight feature extraction mechanism to balance detection accuracy against computational efficiency. Comparative evaluations against state-of-the-art models, including Faster R-CNN, SSD, and YOLOv7, demonstrate the proposed framework’s superior performance in mean Average Precision (mAP) and frames per second (FPS) under constrained environments. The results underscore the practical feasibility of deploying deep learning models on low-power devices for real-world applications. Specifically, the proposed model employs a two-stage detection mechanism that integrates YOLOv5 and SSD, trained on a blend of standard datasets (COCO, Pascal VOC) and a specialized custom dataset [2]. This approach not only helps visually impaired users move around but also tells them which specific objects lie ahead, rather than merely signaling that obstacles are in their path.
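The pipeline described above — detect objects with YOLOv5, then speak the result with gTTS — can be sketched in outline. The helper below turns a list of detected class labels into a spoken phrase; the surrounding detection and speech calls are shown as illustrative comments and are an assumption about the setup, not the authors' exact implementation.

```python
from collections import Counter

def build_announcement(labels):
    """Turn detected class labels into a spoken phrase,
    e.g. ['person', 'car', 'car'] -> 'person ahead, 2 cars ahead'."""
    if not labels:
        return "path clear"
    counts = Counter(labels)  # insertion-ordered counts per class
    parts = [f"{n} {name}s ahead" if n > 1 else f"{name} ahead"
             for name, n in counts.items()]
    return ", ".join(parts)

# On the Raspberry Pi, detections would come from YOLOv5 and the phrase
# would be spoken with gTTS; the calls below are a hypothetical wiring,
# requiring network access and the torch/gtts packages:
#
#   import torch
#   from gtts import gTTS
#   model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # downloads weights
#   results = model(frame)                                   # frame from an OpenCV capture
#   labels = [model.names[int(cls)] for cls in results.xyxy[0][:, 5]]
#   gTTS(build_announcement(labels)).save("speech.mp3")      # then play the file aloud
```

Grouping repeated labels into counts keeps the spoken message short, which matters when announcements must keep pace with a live camera feed.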

 

Received: 25 September 2024 | Revised: 10 March 2025 | Accepted: 23 May 2025

 

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

 

Data Availability Statement

The data that support the findings of this study are openly available in the COCO dataset repository at http://cocodataset.org.

 

Author Contribution Statement

Shital N. Katkade: Conceptualization, Methodology, Software, Formal analysis, Investigation, Data curation, Writing—original draft, Writing—review & editing, Visualization, Project administration. Ramesh R. Manza: Validation, Writing—original draft, Supervision, Project administration, Funding acquisition. Chetan Pattebahadur: Resources, Software.



Published

2025-07-08

Issue

Section

Research Article

How to Cite

Katkade, S. N., Manza, R. R., & Pattebahadur, C. (2025). YOLOv5-Based Object Detection System for Visually Impaired Individuals Using Raspberry Pi. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA52024434