
Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization

Daniel Vainsencher, Shie Mannor, Huan Xu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We consider learning in the presence of arbitrary noise that can overwhelm the signal in magnitude on a fraction of the observed data points (aka outliers). Standard approaches based on minimizing empirical loss can fail miserably and lead to arbitrarily bad solutions in this setting. We propose an approach that iterates between finding a solution with minimal empirical loss and re-weighting the data, reinforcing data points where the previous solution works well. We show that our approach can handle arbitrarily large noise, is robust in the sense of having a non-trivial breakdown point, and converges linearly under certain conditions. The intuitive idea of our approach is to automatically exclude "difficult" data points from model fitting. More importantly (and perhaps surprisingly), we validate this intuition by establishing guarantees for generalization and iteration complexity that essentially ignore the presence of outliers.
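The alternation described above (minimize empirical loss on the current weights, then reweight toward points the fitted model explains well) can be sketched for a least-squares regression setting. This is an illustrative simplification, not the paper's exact method: the hard-threshold reweighting rule, the `outlier_frac` parameter, and the fixed iteration count are assumptions made for the example.

```python
import numpy as np

def reweight_minimize(X, y, outlier_frac=0.2, n_iters=20):
    """Alternate between weighted least-squares fitting and reweighting.

    Sketch of the reweighting-minimization idea: at each step, fit only
    on the points the current model explains well, zeroing out the
    weights of a fixed fraction of points with the largest residuals
    (treated as potential outliers). The hard-threshold rule here is an
    illustrative choice, not the paper's exact reweighting scheme.
    """
    n = len(y)
    w = np.ones(n)  # start by trusting every point equally
    for _ in range(n_iters):
        # minimization step: weighted least squares via normal equations
        Xw = X * w[:, None]
        theta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
        # reweighting step: keep the points the current model fits best
        resid = np.abs(X @ theta - y)
        cutoff = np.quantile(resid, 1.0 - outlier_frac)
        w = (resid <= cutoff).astype(float)
    return theta, w

# Usage: clean linear data plus gross noise on 10% of the points.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=100), np.ones(100)])
theta_true = np.array([2.0, -1.0])
y = X @ theta_true + 0.01 * rng.normal(size=100)
y[:10] += 100.0  # arbitrarily large noise on a fraction of points
theta, w = reweight_minimize(X, y)
```

After the first fit, the corrupted points have residuals far above the threshold, so subsequent fits exclude them and the estimate is driven by the clean data only, matching the "automatically exclude difficult points" intuition.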
Original language: English
Title of host publication: Proceedings of the 2017 Conference on Learning Theory
Editors: Satyen Kale, Ohad Shamir
Pages: 1849-1881
Number of pages: 33
Volume: 65
State: Published - 1 Jul 2017

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
