On how complexity affects the stability of a predictor

Research output: Contribution to conference › Paper › peer-review

Abstract

Given a finite random sample from a Markov chain environment, we select a predictor that minimizes a criterion function and refer to it as being calibrated to its environment. If its prediction error is not bounded by its criterion value, we say that the criterion fails. We define the predictor’s complexity to be the amount of uncertainty in detecting that the criterion fails given that it fails. We define a predictor’s stability to be the discrepancy between the average number of prediction errors that it makes on two random samples. We show that complexity is inversely proportional to the level of adaptivity of the calibrated predictor to its random environment. The calibrated predictor becomes less stable as its complexity increases or as its level of adaptivity decreases.
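The abstract states its definitions only informally. As a rough sketch of how they could be written down, the display below formalizes the complexity and stability notions; the symbols (the sample $S$, the criterion $C_n$, the failure event $F$, the detector $D$, the entropy $H$, and the per-point error $\mathrm{err}$) are illustrative assumptions and not notation taken from the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hedged sketch: symbols below are assumed for illustration, not from the paper.
Let $\hat{f}$ minimize a criterion $C_n(\cdot)$ on a sample
$S = (z_1,\dots,z_n)$ drawn from the Markov-chain environment, and let $F$
denote the failure event that the prediction error of $\hat{f}$ exceeds its
criterion value $C_n(\hat{f})$. A complexity of the kind described could be
the conditional uncertainty of a failure detector $D$ given that failure
occurs,
\[
  \mathrm{complexity}(\hat{f}) \;=\; H\!\left(D \mid F\right),
\]
and the stability could be the discrepancy between the empirical average
numbers of prediction errors on two random samples $S$ and
$S' = (z'_1,\dots,z'_n)$,
\[
  \mathrm{stability}(\hat{f}) \;=\;
  \Bigl|\,\tfrac{1}{n}\textstyle\sum_{t=1}^{n}\mathrm{err}(\hat{f}, z_t)
  \;-\; \tfrac{1}{n}\textstyle\sum_{t=1}^{n}\mathrm{err}(\hat{f}, z'_t)\,\Bigr|.
\]
\end{document}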

Original language: American English
Pages: 161-167
Number of pages: 7
State: Published - 1 Jan 2018
Externally published: Yes
Event: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain
Duration: 9 Apr 2018 - 11 Apr 2018

Conference

Conference: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Country/Territory: Spain
City: Playa Blanca, Lanzarote, Canary Islands
Period: 9/04/18 - 11/04/18

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Artificial Intelligence
