On statistical learning via the lens of compression

Ofir David, Shay Moran, Amir Yehudayoff

Research output: Contribution to journal › Article › peer-review

Abstract

This work continues the study of the relationship between sample compression schemes and statistical learning, which has been mostly investigated within the framework of binary classification. The central theme of this work is establishing equivalences between learnability and compressibility, and utilizing these equivalences in the study of statistical learning theory. We begin with the setting of multiclass categorization (zero/one loss). We prove that in this case learnability is equivalent to compression of logarithmic sample size, and that uniform convergence implies compression of constant size. We then consider Vapnik's general learning setting: we show that in order to extend the compressibility-learnability equivalence to this case, it is necessary to consider an approximate variant of compression. Finally, we provide some applications of the compressibility-learnability equivalences.
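To make the notion of a sample compression scheme concrete, below is a minimal illustrative sketch (my own example, not taken from the paper) of a size-one compression scheme for threshold classifiers on the real line, a class of VC dimension 1. The compressor keeps a single labeled example and the reconstructor recovers a hypothesis consistent with the whole realizable sample, matching the abstract's claim that uniform convergence yields compression of constant size.

```python
# Illustrative sketch (assumed example, not from the paper): a size-1 sample
# compression scheme for thresholds h_t(x) = 1 if x >= t else 0.

def compress(sample):
    """Keep at most one (x, y) pair: the leftmost positively labeled example."""
    positives = [(x, y) for x, y in sample if y == 1]
    return [min(positives)] if positives else []

def reconstruct(compressed):
    """Rebuild a threshold hypothesis from the compressed subsample."""
    if not compressed:
        return lambda x: 0  # no positives kept: predict the all-zero labeling
    t = compressed[0][0]    # threshold at the retained positive example
    return lambda x: 1 if x >= t else 0

# Usage: on any sample realizable by a threshold, the reconstructed
# hypothesis agrees with every example in the original sample.
sample = [(0.2, 0), (1.5, 1), (0.9, 0), (2.3, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```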

Original language: English
Pages (from-to): 2792-2800
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 29
State: Published - 2016
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: 5 Dec 2016 – 10 Dec 2016

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
