Number of Instances for Reliable Feature Ranking in a Given Problem

Authors

  • Marko Bohanec, Salvirt Ltd., Ljubljana, Slovenia
  • Mirjana Kljajić Borštnar, Faculty of Organizational Sciences, University of Maribor, Kranj, Slovenia
  • Marko Robnik-Šikonja, Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia

Keywords:

machine learning, feature ranking, feature evaluation

Abstract

Background: In practical use of machine learning models, users may add new features to an existing classification model, reflecting their (changed) empirical understanding of the field. New features can increase the classification accuracy of the model or improve its interpretability. Objectives: We introduce a guideline for determining the sample size needed to reliably estimate the impact of a new feature. Methods/Approach: Our approach combines the feature evaluation measure ReliefF with bootstrap-based estimation of confidence intervals for feature ranks. Results: We test the approach on real-world qualitative business-to-business sales forecasting data and on two UCI data sets, one of which contains missing values. The results show that new features with a high or a low rank can be detected from a relatively small number of instances, whereas features ranked near the border of useful features require larger samples to determine their impact. Conclusions: Combining the feature evaluation measure ReliefF with bootstrap-based estimation of confidence intervals can be used to reliably estimate the impact of a new feature in a given problem.
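
The following Python sketch illustrates the kind of procedure the abstract describes: a Relief-style feature score combined with bootstrap resampling to obtain confidence intervals for feature ranks. It is a minimal illustration, not the authors' implementation; the function names, the simplified scoring (a stand-in for ReliefF), and parameters such as the number of bootstrap replicates are assumptions made for the example.

# Illustrative sketch (not the authors' code): bootstrap confidence intervals
# for feature ranks using a simplified Relief-style score as a stand-in for ReliefF.
import numpy as np


def relief_scores(X, y, n_samples=100, rng=None):
    # Simplified Relief: for each sampled instance, reward features that differ
    # on the nearest miss and penalize features that differ on the nearest hit.
    # Assumes numeric features on comparable scales and >= 2 instances per class.
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    m = min(n_samples, n)
    w = np.zeros(p)
    for i in rng.choice(n, size=m, replace=False):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / m


def rank_confidence_intervals(X, y, n_boot=200, alpha=0.05, seed=0):
    # Bootstrap the data set, rank features by their score in each replicate
    # (rank 1 = best), and return percentile confidence intervals of the ranks.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    ranks = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)
        scores = relief_scores(X[idx], y[idx], rng=rng)
        ranks[b] = p - scores.argsort().argsort()
    low = np.percentile(ranks, 100 * alpha / 2, axis=0)
    high = np.percentile(ranks, 100 * (1 - alpha / 2), axis=0)
    return low, high

In this sketch, a newly added feature whose rank interval lies clearly above or below a chosen usefulness cutoff can already be assessed with the available sample, whereas an interval straddling the cutoff suggests that more instances are needed, mirroring the conclusion stated in the abstract.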

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Published

2018-12-31