Improper Multiclass Boosting

Nataly Brukhim, Steve Hanneke, Shay Moran

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the setting of multiclass boosting with a possibly large number of classes. A recent work by Brukhim, Hazan, Moran, and Schapire (2021) proved a hardness result for a large class of natural boosting algorithms we call proper. These algorithms output predictors that correspond to a plurality-vote aggregation of weak hypotheses. In particular, they showed that proper boosting algorithms must incur a large cost that scales with the number of classes. In this work, we propose an efficient improper multiclass boosting algorithm that circumvents this hardness result. A key component of our algorithm is based on the technique of list learning: instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions. The resulting boosting algorithm has sample and oracle complexity bounds that are entirely independent of the number of classes. A corollary of the above is that a plurality vote over a learnable class is also learnable. We complement this result by showing that, unlike in the binary setting, other simple aggregations over hypotheses from a learnable class do not preserve learnability.
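The abstract's two central notions can be made concrete with a small sketch. The following Python snippet is an illustration only, not the paper's algorithm: the weak hypotheses, the helper names, and the menu size ell are assumptions introduced here. It shows plurality-vote aggregation of weak hypotheses, and a list predictor that outputs a short menu of candidate labels rather than a single prediction.

    # Minimal sketch (assumptions, not the paper's method): plurality-vote
    # aggregation of weak hypotheses, and a list predictor returning a
    # short menu of the ell most-voted labels.
    from collections import Counter

    def plurality_vote(hypotheses, x):
        """Predict the label receiving the most votes among the weak hypotheses."""
        votes = Counter(h(x) for h in hypotheses)
        return votes.most_common(1)[0][0]

    def list_predict(hypotheses, x, ell=3):
        """List learning: output a menu of the ell most-voted labels."""
        votes = Counter(h(x) for h in hypotheses)
        return [label for label, _ in votes.most_common(ell)]

    # Toy usage with hypothetical weak hypotheses over classes {0, 1, 2, 3}.
    hs = [lambda x: x % 4, lambda x: (x + 1) % 4, lambda x: x % 4]
    print(plurality_vote(hs, 6))       # -> 2 (two of three hypotheses agree)
    print(list_predict(hs, 6, ell=2))  # -> [2, 3]

The contrast the sketch is meant to convey: requiring only that the true label appear in a short menu is a weaker demand than exact single-label prediction, which is what lets the list-learning component sidestep the cost that scales with the number of classes.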

Original language: English
Pages (from-to): 5433-5452
Number of pages: 20
Journal: Proceedings of Machine Learning Research
Volume: 195
State: Published - 2023
Externally published: Yes
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: 12 Jul 2023 – 15 Jul 2023

Keywords

  • Compression schemes
  • List learning
  • Multiclass boosting

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
