Decision Tree Regression with Residual Outlier Detection
DOI: https://doi.org/10.47852/bonviewJDSIS42023861
Keywords: regression tree, anomaly detection, outlier, robust regression
Abstract
This paper introduces a framework for identifying outliers in predictions made by regression tree models. Existing robust regression approaches tend to focus on the construction stage, building regression models that are less sensitive to outliers. In contrast, our approach focuses on identifying outliers at the prediction stage. The proposed approach begins by building a regression tree from a training dataset. Predictions that deviate significantly from the mean within each terminal node are automatically labeled as outliers. We show how the labeled data can be explored to better understand the characteristics of the outliers, and we identify situations in which this exploration may not work well. Further, we use the outlier labels and the training data to construct an anomaly detector. Our results show that the proposed method can effectively detect outliers within datasets; when these outliers are removed, data quality improves. Insights into the method's effectiveness and potential caveats are also discussed.
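The sketch below illustrates the labelling step described in the abstract: fit a regression tree, then flag training points whose target deviates strongly from the mean of their terminal node, and finally train an anomaly detector on the resulting labels. The use of scikit-learn, the two-standard-deviation threshold, the leaf-size setting, and the choice of a random-forest classifier as the anomaly detector are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier

def label_terminal_node_outliers(X, y, threshold=2.0, min_samples_leaf=20):
    """Flag training points whose target deviates strongly from the
    mean of their terminal (leaf) node in a fitted regression tree.
    The threshold of 2 standard deviations is an assumed choice."""
    tree = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf,
                                 random_state=0).fit(X, y)
    leaf_ids = tree.apply(X)                  # terminal node of each sample
    labels = np.zeros(len(y), dtype=bool)
    for leaf in np.unique(leaf_ids):
        idx = leaf_ids == leaf
        mu, sigma = y[idx].mean(), y[idx].std()
        if sigma > 0:
            # label points far from the within-node mean as outliers
            labels[idx] = np.abs(y[idx] - mu) > threshold * sigma
    return tree, labels

# Usage sketch (X, y are NumPy arrays of training features and targets):
# tree, outlier_labels = label_terminal_node_outliers(X, y)
# # One possible anomaly detector: a supervised classifier trained on the
# # outlier labels (an assumed instantiation for illustration).
# detector = RandomForestClassifier(random_state=0).fit(X, outlier_labels)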
Received: 18 July 2024 | Revised: 27 August 2024 | Accepted: 18 September 2024
Conflicts of Interest
The author declares that he has no conflicts of interest in this work.
Data Availability Statement
The data that support the findings of this study are openly available in Kaggle at https://doi.org/10.34740/KAGGLE/DSV/9355696.
Author Contribution Statement
Swee Chuan Tan: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - original draft, Writing - review & editing, Visualization, Supervision, Project administration.
License
Copyright (c) 2024 Author
This work is licensed under a Creative Commons Attribution 4.0 International License.