Exploration of Trust and Decision-Making in AI-Augmented Military Domain: A Framework for Human–Machine Teaming

Authors

  • Janar Pekarev, Force Transformation Command, Estonian Defence Forces, Estonia
  • Priit Värno, Department of Applied Research, Estonian Military Academy, Estonia

DOI:

https://doi.org/10.47852/bonviewAIA62025549

Keywords:

AI-augmented judgment, trust calibration, human–machine teaming, military decision-support systems

Abstract

This study presents a theoretical and methodological framework for examining trust and decision-making in AI-augmented human–machine teaming within military contexts. The framework combines a supervised learning backbone (a pre-trained Random Forest classifier) with a deterministic rule-based override layer that encodes non-negotiable constraints aligned with key laws and principles of armed conflict, including distinction, proportionality, and military necessity, as well as related rules-of-engagement logic. Using a structured generator of combat-relevant targeting scenarios, the system produces recommendations that can be accepted, escalated, vetoed, or deferred, allowing shifts in cognitive authority and reliance to be observed and measured. An interactive, scenario-driven interface exposes calibrated confidence, salient feature cues, and explicit override traces to support verification and controlled reliance in uncertain situations. On a large synthetic scenario corpus, the model exhibits stable performance and well-calibrated probability estimates, while the guardrail layer systematically redirects borderline "engage" outputs toward safer outcomes and audit-ready escalation states. The artifact is positioned as a research instrument rather than an operational decision-making authority. It is designed to elicit and quantify trust calibration behaviors, including cautious skepticism, confident alignment, and deliberative hesitation, across varying levels of complexity and ambiguity. The design is released for replication and iterative refinement, supporting interdisciplinary evaluation of transparent, doctrine-compatible AI decision support and providing a practical basis for controlled user studies on trust, bias, and ethical judgment in military human–AI teaming.
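The two-layer pipeline the abstract describes — a probabilistic classifier whose "engage" confidence is routed through a deterministic guardrail before reaching the operator as accept, escalate, veto, or defer — can be sketched as below. All names, thresholds, and rules here are illustrative assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    civilians_present: bool      # distinction constraint
    collateral_estimate: float   # proportionality proxy in [0, 1]
    target_confirmed: bool       # rules-of-engagement check

def recommend(p_engage: float, s: Scenario,
              accept_thr: float = 0.85, defer_thr: float = 0.60) -> str:
    """Map a calibrated engage probability to one of four outcomes,
    applying non-negotiable rule-based overrides first.

    Thresholds and rules are hypothetical, chosen only to demonstrate
    the accept/escalate/veto/defer flow described in the abstract."""
    # Hard overrides: doctrine-aligned constraints trump model confidence.
    if s.civilians_present and not s.target_confirmed:
        return "veto"                      # distinction violated
    if s.collateral_estimate > 0.5:
        return "escalate"                  # proportionality needs human review
    # Confidence-based routing for unconstrained cases.
    if p_engage >= accept_thr:
        return "accept"
    if p_engage >= defer_thr:
        return "escalate"                  # borderline: audit-ready escalation
    return "defer"                         # too uncertain to act on

# Even a high-confidence classifier output is redirected by the guardrail:
s = Scenario(civilians_present=True, collateral_estimate=0.2, target_confirmed=False)
print(recommend(0.95, s))  # -> veto
```

Because the override layer is deterministic and checked before the confidence thresholds, every redirected recommendation leaves an explicit, auditable trace of which rule fired — the property the framework relies on for studying trust calibration.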

 

Received: 27 February 2025 | Revised: 9 December 2025 | Accepted: 25 March 2026

 

Conflicts of Interest

The authors declare that they have no conflicts of interest in this work.

 

Data Availability Statement

Data are available from the corresponding author upon reasonable request.

 

Author Contribution Statement

Janar Pekarev: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration. Priit Värno: Validation, Investigation, Writing – review & editing.



Published

2026-04-28


Section

Research Article

How to Cite

Pekarev, J., & Värno, P. (2026). Exploration of Trust and Decision-Making in AI-Augmented Military Domain: A Framework for Human–Machine Teaming. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA62025549