TY - GEN

T1 - Dueling Convex Optimization

AU - Saha, Aadirupa

AU - Koren, Tomer

AU - Mansour, Yishay

N1 - Publisher Copyright: Copyright © 2021 by the author(s)

PY - 2021

Y1 - 2021

N2 - We address the problem of convex optimization with preference (dueling) feedback. As in the traditional optimization objective, the goal is to find the optimal point with the least possible query complexity, however without the luxury of even zeroth-order feedback. Instead, the learner can only observe a single noisy bit of win-loss feedback for a pair of queried points, based on their function values. The problem is of great practical relevance, as in many real-world scenarios, such as recommender systems or learning from customer preferences, the system feedback is often restricted to a single binary bit of preference information. We consider the problem of online convex optimization (OCO) solely by actively querying {0, 1} noisy-comparison feedback of decision-point pairs, with the objective of finding a near-optimal point (function minimizer) with the least possible number of queries. For the non-stationary OCO setup, where the underlying convex function may change over time, we prove an impossibility result for achieving the above objective. We then focus on the stationary OCO problem, and our main contribution lies in designing a normalized-gradient-descent-based algorithm for finding an ε-best optimal point. Our algorithm is shown to yield a convergence rate of Õ(dβ/(εν²)) (ν being the noise parameter) when the underlying function is β-smooth. Further, we show an improved convergence rate of just Õ((dβ/(αν²)) log(1/ε)) when the function is additionally α-strongly convex.

AB - We address the problem of convex optimization with preference (dueling) feedback. As in the traditional optimization objective, the goal is to find the optimal point with the least possible query complexity, however without the luxury of even zeroth-order feedback. Instead, the learner can only observe a single noisy bit of win-loss feedback for a pair of queried points, based on their function values. The problem is of great practical relevance, as in many real-world scenarios, such as recommender systems or learning from customer preferences, the system feedback is often restricted to a single binary bit of preference information. We consider the problem of online convex optimization (OCO) solely by actively querying {0, 1} noisy-comparison feedback of decision-point pairs, with the objective of finding a near-optimal point (function minimizer) with the least possible number of queries. For the non-stationary OCO setup, where the underlying convex function may change over time, we prove an impossibility result for achieving the above objective. We then focus on the stationary OCO problem, and our main contribution lies in designing a normalized-gradient-descent-based algorithm for finding an ε-best optimal point. Our algorithm is shown to yield a convergence rate of Õ(dβ/(εν²)) (ν being the noise parameter) when the underlying function is β-smooth. Further, we show an improved convergence rate of just Õ((dβ/(αν²)) log(1/ε)) when the function is additionally α-strongly convex.

UR - http://www.scopus.com/inward/record.url?scp=85139899354&partnerID=8YFLogxK

M3 - Conference contribution

T3 - Proceedings of Machine Learning Research

SP - 9245

EP - 9254

BT - Proceedings of the 38th International Conference on Machine Learning, ICML 2021

PB - ML Research Press

T2 - 38th International Conference on Machine Learning, ICML 2021

Y2 - 18 July 2021 through 24 July 2021

ER -