Hierarchical Brain-Inspired Deep Learning for Autonomous Decision-Making in Complex Dynamic Environments

Authors

  • Dhaya Ramakrishnan, School of Electrical and Communications Engineering, PNG University of Technology, Papua New Guinea
  • Kanthavel Radhakrishnan, School of Electrical and Communications Engineering, PNG University of Technology, Papua New Guinea

DOI:

https://doi.org/10.47852/bonviewAIA62026154

Keywords:

brain-inspired computing, hierarchical deep learning (HDL), cognitive architecture, deep reinforcement learning (DRL), decision-making, complex environments

Abstract

The intersection of artificial intelligence and neuroscience has produced brain-inspired computational frameworks that emulate the human brain's hierarchical learning and decision-making. In this work, we propose a Hierarchical Brain-Inspired Reinforcement Learning (HBRL) architecture that combines the benefits of Deep Reinforcement Learning (DRL) with a biologically inspired cognitive hierarchy. The architecture mimics cortical–subcortical information processing: a high-level policy-gradient manager performs abstract, long-term planning, while low-level Deep Q-Network (DQN) agents execute real-time, short-term actions. Its multilayer structure supports temporal abstraction, modular learning, and experience-driven policy refinement, making it well suited to dynamic and uncertain environments. To evaluate the design, we applied HBRL in three common scenarios: GridWorld, autonomous vehicle navigation, and smart-city infrastructure control. Compared with baseline approaches (DQN, Proximal Policy Optimization, and Soft Actor-Critic), HBRL achieved 15%–20% higher task-completion rates, 1.4–2.4 times faster learning, and 70–100 points higher cumulative reward. Two-tailed t-tests confirmed the significance of these improvements (p < 0.01) across all tested environments. Hierarchical task decomposition both promotes convergence and improves the agents' generalization to unseen conditions. Overall, the proposed HBRL framework offers a scalable, cognition-inspired learning paradigm for building intelligent autonomous systems with human-like adaptability and efficient decision-making in complex, nonstationary real-world environments.
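To make the two-level control flow concrete, the sketch below illustrates, in PyTorch, how a high-level policy-gradient manager could issue subgoals at a coarser timescale while goal-conditioned DQN workers select primitive actions in between. Class names, layer sizes, the subgoal dimension, and the replanning interval k are illustrative assumptions rather than the paper's implementation; training logic (policy-gradient updates for the manager, TD updates with replay for the workers) is omitted for brevity.

import torch
import torch.nn as nn

class ManagerPolicy(nn.Module):
    # High-level manager: maps a state to a distribution over abstract subgoals.
    def __init__(self, state_dim, n_subgoals, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_subgoals),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

class WorkerDQN(nn.Module):
    # Low-level worker: maps (state, one-hot subgoal) to Q-values over primitive actions.
    def __init__(self, state_dim, n_subgoals, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_subgoals, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, subgoal_onehot):
        return self.net(torch.cat([state, subgoal_onehot], dim=-1))

# Toy interaction loop: the manager re-plans every k steps (temporal abstraction);
# the worker acts greedily on its goal-conditioned Q-values in between.
state_dim, n_subgoals, n_actions, k = 8, 4, 5, 3
manager = ManagerPolicy(state_dim, n_subgoals)
worker = WorkerDQN(state_dim, n_subgoals, n_actions)

state = torch.randn(1, state_dim)              # placeholder observation
for t in range(6):
    if t % k == 0:                             # manager decides at the coarser timescale
        subgoal = manager(state).sample()
        subgoal_onehot = nn.functional.one_hot(subgoal, n_subgoals).float()
    q_values = worker(state, subgoal_onehot)
    action = q_values.argmax(dim=-1)           # epsilon-greedy in a full implementation
    print(f"t={t} subgoal={subgoal.item()} action={action.item()}")
    state = torch.randn(1, state_dim)          # placeholder environment transition

The split mirrors the cortical–subcortical analogy in the abstract: slow, abstract planning at the top and fast, reactive action selection at the bottom.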

 

Received: 14 May 2025 | Revised: 16 October 2025 | Accepted: 19 December 2025

 

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

 

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

 

Author Contribution Statement

Kanthavel Radhakrishnan: Conceptualization, Methodology, Formal analysis, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization. Dhaya Ramakrishnan: Software, Validation, Investigation, Writing – original draft, Writing – review & editing, Supervision, Project administration.

Published

2026-01-12

Section

Research Article

How to Cite

Ramakrishnan, D., & Radhakrishnan, K. (2026). Hierarchical Brain-Inspired Deep Learning for Autonomous Decision-Making in Complex Dynamic Environments. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA62026154