ResNet for Histopathologic Cancer Detection, the Deeper, the Better?

Authors

  • Ziying Wang School of Medical Imaging, Fujian Medical University, China
  • Jinghong Gao School of Medical Imaging, Fujian Medical University, China
  • Hangyi Kan School of Medical Imaging, Fujian Medical University, China
  • Yang Huang School of Medical Imaging, Fujian Medical University, China
  • Furong Tang School of Medicine, Tsinghua University, China
  • Wen Li Department of Pathology, Fujian Medical University Union Hospital, China
  • Fenglong Yang School of Medical Technology and Engineering, Fujian Medical University, China

DOI:

https://doi.org/10.47852/bonviewJDSIS3202744

Keywords:

histopathological cancer, image classification, residual neural network, transfer learning

Abstract

Histopathological image classification remains one of the most challenging tasks for researchers, owing to the varied disease categories and the subtle differences between them. In this study, we investigate the critical role of network depth in histopathological image classification using deep residual convolutional neural networks (ResNets). We evaluate the efficacy of two transfer learning strategies with ResNets of varying depth (18, 34, 50, and 152 layers) pretrained on ImageNet. Specifically, we analyze whether a deeper network, or fine-tuning all layers of a pretrained ResNet, enhances performance compared with freezing most layers and training only the last layer. Conducted on Kaggle's dataset of 220,025 labeled histopathology patches, our findings reveal that increasing the depth of ResNet does not guarantee better accuracy (ResNet-34 AUC: 0.992 vs. ResNet-152 AUC: 0.989). Instead, dataset-specific semantic features and the cost of training should guide model selection. Furthermore, deep ResNets outperform traditional logistic regression (ResNet AUC: up to 0.992 vs. logistic regression AUC: 0.775), demonstrating superior generalization and robustness. Notably, freezing most layers does not improve the accuracy or efficiency of transfer learning, and the performance of both transfer strategies depends largely on the type of data. Overall, both methods produce satisfactory results compared with models trained from scratch or conventional machine learning models.
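For readers who want to see what the two transfer strategies look like in practice, the sketch below builds an ImageNet-pretrained torchvision ResNet and either fine-tunes all layers or freezes the backbone and trains only the new final layer. This is a minimal illustration under our own assumptions (PyTorch/torchvision, a two-class head, and the helper name build_resnet are illustrative), not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two transfer strategies compared
# in the paper, using torchvision's ImageNet-pretrained ResNets.
import torch.nn as nn
from torchvision import models

def build_resnet(depth: int = 34, freeze_backbone: bool = False) -> nn.Module:
    """Return an ImageNet-pretrained ResNet adapted for binary patch classification.

    freeze_backbone=False : strategy 1, fine-tune all layers.
    freeze_backbone=True  : strategy 2, freeze everything except the new final FC layer.
    """
    constructors = {18: models.resnet18, 34: models.resnet34,
                    50: models.resnet50, 152: models.resnet152}
    model = constructors[depth](weights="IMAGENET1K_V1")

    if freeze_backbone:
        for param in model.parameters():
            param.requires_grad = False  # keep pretrained features fixed

    # Replace the 1000-class ImageNet head with a 2-class head
    # (tumor vs. non-tumor); its parameters are always trainable.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

# Example: ResNet-34 with only the last layer trained.
model = build_resnet(depth=34, freeze_backbone=True)
trainable = [p for p in model.parameters() if p.requires_grad]
```

Only the parameters left with requires_grad=True are passed to the optimizer, which is what distinguishes the frozen-backbone strategy from full fine-tuning.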

 

Received: 17 January 2023 | Revised: 27 February 2023 | Accepted: 28 February 2023

 

Conflicts of Interest

The authors declare that they have no conflicts of interest related to this work.

 

Data Availability Statement

The data that support the findings of this study are openly available in Kaggle HCD at https://www.kaggle.com/datasets/drbeane/hcd-cropped.

 

Author Contribution Statement

Ziying Wang: Methodology, Software, Formal analysis, Investigation, Data curation, Writing - original draft, Visualization. Jinghong Gao: Methodology, Software, Formal analysis, Writing - original draft, Visualization. Hangyi Kan: Validation, Investigation. Yang Huang: Formal analysis. Furong Tang: Writing - review & editing, Funding acquisition. Wen Li: Conceptualization, Resources, Data curation, Supervision, Project administration. Fenglong Yang: Conceptualization, Writing - review & editing, Supervision, Project administration, Funding acquisition.


Published

2023-03-03

Section

Research Articles

How to Cite

Wang, Z., Gao, J., Kan, H., Huang, Y., Tang, F., Li, W., & Yang, F. (2023). ResNet for Histopathologic Cancer Detection, the Deeper, the Better?. Journal of Data Science and Intelligent Systems, 2(4), 212-220. https://doi.org/10.47852/bonviewJDSIS3202744