A Review on Bridging Brain-Inspired Mechanisms and Large-Scale Pre-trained Models: Toward Adaptive, Efficient, and Interpretable AI
DOI: https://doi.org/10.47852/bonviewFSI52026630

Keywords: large-scale pre-trained models, brain-inspired optimization, artificial intelligence, event-driven computation, continual learning, interpretability

Abstract
Large-scale pre-trained models, such as GPT, have greatly advanced numerous areas of artificial intelligence (AI), including natural language understanding, image recognition, and the integration of multiple data modalities. However, these models continue to face significant challenges, including excessive computational resource requirements, limited adaptability to dynamic environments, susceptibility to catastrophic forgetting, and poor internal interpretability. By comparison, the human brain learns efficiently from sparse data, adapts robustly across diverse contexts, operates with minimal energy consumption, retains information over extended periods, and can be explained in terms of its underlying cognitive processes. This review examines recent research and is the first to categorize brain-inspired methods along three key dimensions. It explores how mechanisms of the human brain, such as hierarchical and modular designs, biologically inspired attention mechanisms, memory enhancement strategies, synaptic plasticity, and predictive coding, could inspire optimizations for large-scale models. The review not only synthesizes the current state of the field but also proposes directions for future research. There remains a need for stronger theoretical foundations, better evaluation methods, improved hardware–software co-design, and careful consideration of ethical issues. Overcoming these challenges will require closer collaboration among neuroscientists, cognitive scientists, machine learning researchers, and engineers. Through such collaboration, the field may develop a next generation of AI systems that perform at the highest levels while being more efficient, flexible, and transparent.
Received: 28 June 2025 | Revised: 27 October 2025 | Accepted: 21 November 2025
Conflicts of Interest
Zongwei Luo is the Editor-in-Chief of FinTech and Sustainable Innovation and was not involved in the editorial review or the decision to publish this article. The authors declare that they have no conflicts of interest regarding this work.
Data Availability Statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Author Contribution Statement
Jingzhe Wang: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Yajing Wang: Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Zongwei Luo: Conceptualization, Methodology, Resources, Writing – review & editing, Supervision, Project administration, Funding acquisition.
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution 4.0 International License.