TY - GEN
T1 - Boosting for Straggling and Flipping Classifiers
AU - Cassuto, Yuval
AU - Kim, Yongjune
N1 - Publisher Copyright: © 2021 IEEE.
PY - 2021/7/12
Y1 - 2021/7/12
N2 - Boosting is a well-known method in machine learning for combining multiple weak classifiers into one strong classifier. When used in a distributed setting, accuracy is hurt by classifiers that flip or straggle due to communication and/or computation unreliability. While unreliability in the form of noisy data is well-treated by the boosting literature, the unreliability of the classifier outputs has not been explicitly addressed. Protecting the classifier outputs with an error/erasure-correcting code requires reliable encoding of multiple classifier outputs, which is not feasible in common distributed settings. In this paper we address the problem of training boosted classifiers subject to straggling or flips at classification time. We propose two approaches: one based on minimizing the usual exponential loss, but in expectation over the classifier errors, and one based on defining and minimizing a new worst-case loss for a specified bound on the number of unreliable classifiers.
AB - Boosting is a well-known method in machine learning for combining multiple weak classifiers into one strong classifier. When used in a distributed setting, accuracy is hurt by classifiers that flip or straggle due to communication and/or computation unreliability. While unreliability in the form of noisy data is well-treated by the boosting literature, the unreliability of the classifier outputs has not been explicitly addressed. Protecting the classifier outputs with an error/erasure-correcting code requires reliable encoding of multiple classifier outputs, which is not feasible in common distributed settings. In this paper we address the problem of training boosted classifiers subject to straggling or flips at classification time. We propose two approaches: one based on minimizing the usual exponential loss, but in expectation over the classifier errors, and one based on defining and minimizing a new worst-case loss for a specified bound on the number of unreliable classifiers.
UR - http://www.scopus.com/inward/record.url?scp=85115115276&partnerID=8YFLogxK
U2 - 10.1109/ISIT45174.2021.9517745
DO - 10.1109/ISIT45174.2021.9517745
M3 - Conference contribution
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 2441
EP - 2446
BT - 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Proceedings
T2 - 2021 IEEE International Symposium on Information Theory, ISIT 2021
Y2 - 12 July 2021 through 20 July 2021
ER -