Transformer Attention-Driven Concept Extraction for Efficient Smishing Detection

Authors

  • Zahriya Lawal Hassan Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia and Faculty of Computing, Sokoto State University, Nigeria https://orcid.org/0000-0003-1170-9536
  • Nor Fazlida Mohd Sani Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia
  • Muhammad Daniel Hafiz Abdullah Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia
  • Norwati Mustapha Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Malaysia

DOI:

https://doi.org/10.47852/bonviewAIA62028760

Keywords:

concept representation, smishing, SMS phishing, BERT, explainable AI

Abstract

Short Message Service (SMS) phishing (smishing) is a form of phishing attack that uses mobile messaging as its delivery medium. Conventional detection methods, which rely on static rules or shallow linguistic features, fall short in identifying smishing messages because they closely resemble spam. In this study, we propose a transformer-based, attention-driven framework for detecting smishing. We develop a concept-level representation to improve both the accuracy and the explainability of our model. The approach extracts the message signatures found in smishing attacks and sorts them into three conceptual categories (textual, structural, and behavioral), each of which captures a different aspect of how smishing attacks operate. We utilize pretrained Bidirectional Encoder Representations from Transformers (BERT) to construct conceptual representations from the SMS messages. By leveraging BERT’s attention weights over the smishing concept categories, we identify informative tokens and patterns that distinguish smishing from benign messages. The final classification is performed using both a fully connected neural network layer and three classical machine learning baseline models trained on the same features. The results demonstrate that our model achieves an F1-score of 98.71% and an accuracy of 99.32%, outperforming the baseline models. Ablation studies further confirm that each concept category makes a meaningful contribution to classification performance, with behavioral concept features having the highest impact. This work highlights the potential of attention-driven concept modeling for robust and explainable smishing detection.
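The abstract's core idea of using attention weights to surface informative tokens can be illustrated with a minimal sketch. This is not the authors' pipeline: the function name, the synthetic attention values, and the example tokens below are hypothetical. It only assumes attentions in the per-example shape that a BERT model returns with `output_attentions=True` (layers, heads, sequence, sequence), and ranks tokens by the attention that the [CLS] position assigns to them.

```python
import numpy as np

def rank_tokens_by_cls_attention(attentions, tokens, top_k=3):
    """Rank tokens by the attention the [CLS] position pays to them.

    attentions: array of shape (num_layers, num_heads, seq_len, seq_len),
    i.e., the per-example attention stack a BERT model yields when run
    with output_attentions=True.
    """
    # Average over layers and heads, then take the row for [CLS] (index 0):
    # how much [CLS] attends to every position, aggregated across the model.
    cls_attn = attentions.mean(axis=(0, 1))[0]
    # Exclude special tokens from the ranking.
    scores = [(tok, float(s)) for tok, s in zip(tokens, cls_attn)
              if tok not in ("[CLS]", "[SEP]")]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Toy example: 2 layers, 2 heads, 5 tokens. The attention rows are
# hand-crafted so that "urgent" receives the most [CLS] attention,
# mimicking a behavioral-urgency cue in a smishing message.
tokens = ["[CLS]", "urgent", "account", "verify", "[SEP]"]
attn = np.full((2, 2, 5, 5), 0.1)
attn[:, :, 0, 1] = 0.5  # [CLS] attends strongly to "urgent"
attn[:, :, 0, 3] = 0.3  # and somewhat to "verify"

top = rank_tokens_by_cls_attention(attn, tokens)
print(top)  # "urgent" ranked first, then "verify"
```

In practice the ranked tokens would then be mapped onto the paper's textual, structural, and behavioral concept categories before classification; how that mapping is performed is specific to the paper's method and not reproduced here.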

 

Received: 16 December 2025 | Revised: 3 February 2026 | Accepted: 25 February 2026

 

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

 

Data Availability Statement 

The data that support the findings of this study are openly available in the UCI Machine Learning Repository at https://doi.org/10.1145/2034691.2034742, reference number [27]. 

 

Author Contribution Statement

Zahriya Lawal Hassan: Conceptualization, Methodology, Resources, Writing – original draft, Visualization. Nor Fazlida Mohd Sani: Validation, Writing – review & editing, Supervision, Project administration, Funding acquisition. Muhammad Daniel Hafiz Abdullah: Validation, Writing – review & editing, Supervision, Project administration. Norwati Mustapha: Writing – review & editing, Supervision, Project administration.



Published

2026-03-10


Section

Research Article

How to Cite

Hassan, Z. L., Sani, N. F. M., Abdullah, M. D. H., & Mustapha, N. (2026). Transformer Attention-Driven Concept Extraction for Efficient Smishing Detection. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA62028760
