Deep Curious Feature Selection: A Recurrent, Intrinsic-Reward Reinforcement Learning Approach to Feature Selection

Michal Moran, Goren Gordon

Research output: Contribution to journal › Article › peer-review

Abstract

Feature selection (FS) is an important step in building machine learning-based models. The goal of the FS step is to find a small subset of features that yields good prediction results by removing noisy, irrelevant, or redundant features. Commonly used wrapper methods treat a machine learning model as a black box and use its performance as the objective function for evaluating candidate feature subsets and selecting the best one. To avoid examining all possible subsets (an NP-hard problem), heuristic search algorithms are used to choose which subsets to examine. As exhaustive search is computationally prohibitive, most methods rely on simple, greedy search strategies that yield only locally optimal results and are insensitive to possible feature interactions, meaning that a feature may be chosen at the expense of two others that are more informative together. We analyze the problem of searching the feature-subset space along two dimensions: memory of past selected features and anticipation of future selected features. We propose a new wrapper FS method based on the deep artificial curiosity framework, which implements intrinsic-reward reinforcement learning with a long short-term memory (LSTM) unit. This novel algorithm integrates both elements, memory and future steps. We show that our method, called the deep curious FS algorithm, handles feature interactions and provides a feature subset that improves the accuracy of learning models on artificial and real datasets.

Impact Statement: Feature selection can drastically improve the quality of learning models. In this contribution, we present a novel framework wherein the selection of features depends on past selected features (via a recurrent neural network architecture) and future selected features (via deep reinforcement learning with a discount factor). We show how previous algorithms fit within this framework and propose the Deep Curious Feature Selection algorithm, which combines past and future selections. We show how this novel algorithm, compared with previous ones, overcomes the challenge of feature interactions and improves learning model outcomes.
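The following is a minimal, illustrative sketch, not the authors' published implementation, of the two ingredients the abstract highlights: an LSTM policy whose recurrent state remembers which features were already selected, and a policy-gradient (REINFORCE) update in which every selection decision is credited with the final wrapper reward, standing in for the paper's discounted-return formulation. The dataset, black-box learner, architecture sizes, and hyperparameters below are placeholder assumptions.

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

torch.manual_seed(0)
# Synthetic stand-in dataset: 20 candidate features, 4 informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           n_redundant=6, random_state=0)


class LSTMSelector(nn.Module):
    """Visits features one by one; the recurrent state encodes past selections."""

    def __init__(self, n_feat, hidden=32):
        super().__init__()
        self.n_feat = n_feat
        self.hidden = hidden
        # Input: one-hot id of the current feature + the previous action.
        self.cell = nn.LSTMCell(n_feat + 1, hidden)
        self.head = nn.Linear(hidden, 1)  # P(select current feature)

    def forward(self):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        last = torch.zeros(1, 1)
        log_probs, mask = [], []
        for i in range(self.n_feat):
            one_hot = torch.zeros(1, self.n_feat)
            one_hot[0, i] = 1.0
            h, c = self.cell(torch.cat([one_hot, last], dim=1), (h, c))
            p = torch.sigmoid(self.head(h)).squeeze()
            dist = torch.distributions.Bernoulli(p)
            a = dist.sample()
            log_probs.append(dist.log_prob(a))
            mask.append(bool(a.item()))
            last = a.view(1, 1)
        return torch.stack(log_probs).sum(), mask


def reward(mask):
    # Wrapper objective: cross-validated accuracy of a black-box learner
    # on the candidate subset; an empty subset earns zero reward.
    if not any(mask):
        return 0.0
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, mask], y, cv=3).mean()


policy = LSTMSelector(n_feat=X.shape[1])
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
baseline = 0.0
for episode in range(200):
    log_prob, mask = policy()
    r = reward(mask)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average baseline
    loss = -(r - baseline) * log_prob    # REINFORCE update
    opt.zero_grad()
    loss.backward()
    opt.step()
print("selected features:", [i for i, m in enumerate(mask) if m], "reward:", r)
```

Because the validation accuracy rewards the subset as a whole rather than each feature in isolation, features that are informative only together can still be credited, which addresses the failure mode of greedy forward selection that the abstract calls out.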

Original language: English
Pages (from-to): 1174-1184
Number of pages: 11
Journal: IEEE Transactions on Artificial Intelligence
Volume: 5
Issue number: 3
State: Published - 1 Mar 2024

Keywords

  • Curiosity loop
  • deep reinforcement learning (RL)
  • feature selection (FS)
  • long short-term memory unit (LSTM)

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
