CT-γ-Net: A Hybrid Model Based on Convolutional Encoder-Decoder and Transformer Encoder for Brain Tumor Localization
DOI: https://doi.org/10.47852/bonviewJDSIS42022514
Keywords: brain tumor segmentation, transformer, convolutional encoder–decoder, deep learning, disease diagnosis using artificial intelligence
Abstract
A brain tumor is a life-threatening disease, and its early diagnosis can save lives. Computer-aided brain tumor segmentation and localization in magnetic resonance imaging (MRI) images have emerged as pivotal approaches for expediting disease diagnosis. In recent years, various researchers have combined the strengths of convolutional networks and transformers to perform brain tumor segmentation. However, these models require a large number of trainable parameters, and their performance still leaves room for improvement. To bridge these research gaps, this paper proposes a novel hybrid model named "CT-γ-Net" for effective and efficient brain tumor localization. The proposed CT-γ-Net model follows an encoder-decoder structure in which a convolutional encoder (CE) and a transformer encoder (TE) are used for encoding, whereas a convolutional decoder (CD) decodes the combined output of the CE and TE to generate segmentation masks. In the CE and CD components of the CT-γ-Net model, conventional convolutional layers are replaced by depth-wise separable convolutional layers, as these layers significantly reduce the number of trainable parameters. The proposed model achieves 95.5% MeanIoU, a 94.82% Dice score, and 99.24% pixel accuracy on a publicly available dataset from the Cancer Imaging Archive. These experimental results demonstrate that the CT-γ-Net model outperforms other state-of-the-art works, despite using roughly 28% fewer trainable parameters. Hence, the proposed model's lightweight nature and high performance make it a suitable candidate for deployment on mobile devices, facilitating the precise localization of brain tumor regions in MRI images.
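The abstract attributes the model's lightweight nature to replacing standard convolutions with depth-wise separable convolutions. A minimal sketch of the underlying parameter arithmetic (the channel sizes below are illustrative assumptions, not the paper's actual layer dimensions):

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def sep_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Depth-wise separable convolution: one k x k filter per input
    channel (depth-wise step) plus a 1 x 1 point-wise convolution."""
    return k * k * c_in + c_in * c_out

# Hypothetical 3x3 layer mapping 64 -> 128 channels
std = conv_params(3, 64, 128)      # 73,728 weights
sep = sep_conv_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
print(f"standard: {std}, separable: {sep}, "
      f"reduction: {1 - sep / std:.1%}")
```

Factoring the spatial filtering and channel mixing into separate steps is what shrinks the weight count; the savings grow with kernel size and channel width, which is why the paper can report a roughly 28% smaller model overall while keeping accuracy high.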
Received: 23 January 2024 | Revised: 18 March 2024 | Accepted: 29 March 2024
Conflicts of Interest
The authors declare that they have no conflicts of interest in this work.
Data Availability Statement
The data that support the findings of this study are openly available in Kaggle at https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation.
Author Contribution Statement
Punam Bedi: Conceptualization, Validation, Writing - review & editing, Supervision, Project administration. Ningyao Ningshen: Conceptualization, Methodology, Software, Formal analysis, Writing - original draft, Visualization. Surbhi Rani: Conceptualization, Methodology, Software, Formal analysis, Writing - original draft, Visualization. Pushkar Gole: Conceptualization, Validation, Writing - review & editing. Veenu Bhasin: Conceptualization, Validation, Writing - review & editing, Supervision.
License
Copyright (c) 2024 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.