We study a variant of online convex optimization where the player is permitted to switch decisions at most S times in expectation throughout T rounds. Similar problems have been addressed in prior work for the discrete decision set setting, and more recently in the continuous setting, but only with an adaptive adversary. In this work, we aim to fill the gap and present computationally efficient algorithms in the more prevalent oblivious setting, establishing a regret bound of O(T/S) for general convex losses and Õ(T/S²) for strongly convex losses. In addition, for stochastic i.i.d. losses, we present a simple algorithm that performs log T switches with only a multiplicative log T factor overhead in its regret, in both the general and strongly convex settings. Finally, we complement our algorithms with lower bounds that match our upper bounds in some of the cases we consider.
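A standard way to trade regret for switches in this setting is the blocking technique: split the T rounds into roughly S blocks, hold the played point fixed within each block, and update only at block boundaries, so at most S switches occur. The sketch below illustrates this idea for linear losses with projected gradient updates; it is an illustrative assumption-laden toy, not the paper's algorithm, and the step size `eta` and the ball-projection domain are arbitrary choices.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def blocked_ogd(grads, S, radius=1.0, eta=0.1):
    """Blocked online gradient descent (illustrative sketch).

    Plays linear losses <g_t, x>: the point is held fixed for a block of
    about T/S rounds, and a projected gradient step on the block's average
    gradient is taken only between blocks, bounding the number of switches.
    """
    T = len(grads)
    block = max(T // S, 1)           # rounds per block
    d = grads[0].shape[0]
    x = np.zeros(d)                  # current played point
    plays, switches = [], 0
    acc = np.zeros(d)                # gradient accumulated in current block
    for t in range(T):
        plays.append(x.copy())
        acc += grads[t]
        if (t + 1) % block == 0:     # block boundary: update and maybe switch
            new_x = project_ball(x - eta * acc / block, radius)
            if not np.allclose(new_x, x):
                switches += 1
            x, acc = new_x, np.zeros(d)
    return np.array(plays), switches
```

With T = 100 rounds and S = 10, the played point changes only at the 10-round block boundaries, so the number of switches is at most S rather than T.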
|Title of host publication||Proceedings of Thirty Fourth Conference on Learning Theory|
|Editors||Mikhail Belkin, Samory Kpotufe|
|Number of pages||17|
|State||Published - 2021|
|Name||Proceedings of Machine Learning Research|