Universal Batch Learning with Log-Loss

Yaniv Fogel, Meir Feder

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we consider the problem of batch learning with log-loss in a stochastic setting where, given the data features, the outcome is generated by an unknown distribution from a class of models. Utilizing the minimax theorem and information-theoretic tools, we derive the minimax universal learning solution, a redundancy-capacity theorem, and an upper bound on the performance of the optimal solution. The resulting universal learning solution is a mixture over the models in the considered class. Furthermore, we obtain a generalization-error bound that decays as O(\log N/N), where N is the sample size, instead of the O(\sqrt{\log N/N}) rate commonly attained in statistical learning theory for the empirical risk minimizer.
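
For intuition, the classical redundancy-capacity identity says that the minimax regret of a universal predictor over a class \{p_\theta\} equals the capacity of the "channel" from \theta to the data, and that the minimax strategy is a mixture q(y) = \sum_\theta w(\theta) p_\theta(y) under the capacity-achieving prior w. The sketch below illustrates the mixture-over-models predictor in the simplest batch setting; the finite Bernoulli class, the uniform prior, and all variable names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical illustration (not the paper's exact setup): batch
# prediction with a mixture over a finite class of Bernoulli models,
# compared against the plug-in prediction of the single best-fitting
# model (the empirical risk minimizer under log-loss).

rng = np.random.default_rng(0)

thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # finite model class (assumption)
prior = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior (assumption)

true_theta = 0.7
N = 50
train = rng.binomial(1, true_theta, size=N)      # batch of N binary outcomes

# Posterior weights over the class given the batch (Bayes' rule).
k = train.sum()
log_lik = k * np.log(thetas) + (N - k) * np.log(1.0 - thetas)
log_post = np.log(prior) + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Universal (mixture) prediction for a new outcome: the posterior-weighted
# average of the models' predictions, i.e., P(next = 1 | data).
p_mix = np.dot(post, thetas)

# Plug-in / ERM prediction: the single maximum-likelihood model in the class.
theta_hat = thetas[np.argmax(log_lik)]

# Expected log-loss (cross-entropy) of each predictor against the truth.
def xent(p):
    return -(true_theta * np.log(p) + (1 - true_theta) * np.log(1 - p))

print(f"mixture log-loss: {xent(p_mix):.4f}  ERM log-loss: {xent(theta_hat):.4f}")
```

On this toy class the two predictors nearly coincide once N is moderate; the abstract's point is that, in general, the mixture's excess log-loss decays at the faster O(\log N/N) rate rather than O(\sqrt{\log N/N}).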

Original language: English
Title of host publication: 2018 IEEE International Symposium on Information Theory, ISIT 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 21-25
Number of pages: 5
ISBN (Print): 9781538647806
State: Published - 15 Aug 2018
Event: 2018 IEEE International Symposium on Information Theory, ISIT 2018 - Vail, United States
Duration: 17 Jun 2018 - 22 Jun 2018

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2018-June

Conference

Conference: 2018 IEEE International Symposium on Information Theory, ISIT 2018
Country/Territory: United States
City: Vail
Period: 17/06/18 - 22/06/18

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Modelling and Simulation
  • Applied Mathematics
