Dual-Model Fusion for Ultra-Accurate Embedded Object Detection

Authors

DOI:

https://doi.org/10.47852/bonviewSWT52026032

Keywords:

artificial intelligence, deep learning, ensemble AI

Abstract

Numerous industries require ultra-high accuracy from artificial intelligence (AI)-based object detection, a challenge compounded by real-world variations. This study specifically focuses on industrial manufacturing lines, where quality control is critical; the methods discussed here should be transferable to other domains with similar constraints, such as logistics or packaging. The primary objective is to achieve greater than 99.9% accuracy in real-time industrial object detection without significantly impacting latency. To that end, we refined an SSD_MobileNet model to high precision and deployed it within a dual-model system alongside a generalist surrogate trained on blurred synthetic images. Training the second model on blurred data lends it contextual depth and resilience. The two models' outputs are fused through a low-computational-cost, high-confidence selection step based on Intersection over Union (IoU >= 0.8), striking a balance between efficiency and detection reliability. This fusion outperforms model stacking or score-based thresholds because it selects the best detection by considering both the spatial overlap of detections and the agreement of class IDs. On the Nvidia Jetson Orin NX platform, the deployed system achieved 99.8% accuracy, and the fusion stage further boosted this to 99.97% without adding inference passes. This dual-model design improves precision and fault tolerance while keeping the embedded system's recalibrated latency thresholds intact. This work supports the shift toward AI-powered advanced industrial surveillance and motivates further multidisciplinary research into precise, reliable object detection.
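The fusion rule described in the abstract (accept a detection when both models agree on the class ID and their boxes overlap with IoU >= 0.8, keeping the higher-confidence one) can be sketched as follows. This is a minimal illustration, not the paper's code; the detection tuple format and function names are assumptions.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fuse(dets_a, dets_b, iou_thresh=0.8):
    """Fuse two models' detections: keep a detection only where both
    models agree on class ID and overlap spatially (IoU >= threshold),
    returning the higher-confidence box of each matched pair.
    Each detection is (box, class_id, score)."""
    fused = []
    for box_a, cls_a, score_a in dets_a:
        for box_b, cls_b, score_b in dets_b:
            if cls_a == cls_b and iou(box_a, box_b) >= iou_thresh:
                fused.append((box_a, cls_a, score_a)
                             if score_a >= score_b
                             else (box_b, cls_b, score_b))
                break  # one match per primary detection
    return fused
```

Because the matching runs over already-computed detections rather than model activations, it adds only a small constant cost per frame, which is consistent with the paper's claim of boosting accuracy without expanding inference passes.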

 

Received: 28 April 2025 | Revised: 26 June 2025 | Accepted: 15 July 2025

 

Conflicts of Interest

The author declares that he has no conflicts of interest in this work.

 

Data Availability Statement

The dataset used is proprietary and not available in the public domain.

 

Author Contribution Statement

Leendert Remmelzwaal: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration

Published

2025-08-07

Section

Research Article

How to Cite

Remmelzwaal, L. (2025). Dual-Model Fusion for Ultra-Accurate Embedded Object Detection. Smart Wearable Technology. https://doi.org/10.47852/bonviewSWT52026032