Private Online Learning via Lazy Algorithms

Hilal Asi, Daogao Liu, Tomer Koren, Kunal Talwar

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the problem of private online learning, focusing on online prediction from experts (OPE) and online convex optimization (OCO). We propose a new transformation that translates lazy, low-switching online learning algorithms into private algorithms. We apply our transformation to differentially private OPE and OCO using existing lazy algorithms for these problems. The resulting algorithms attain regret bounds that significantly improve over prior art in the high privacy regime, where ε ≪ 1, obtaining O(√(T log d) + T^{1/3} log(d)/ε^{2/3}) regret for DP-OPE and O(√T + T^{1/3} √d/ε^{2/3}) regret for DP-OCO. We complement our results with a lower bound for DP-OPE, showing that these rates are optimal for a natural family of low-switching private algorithms.
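To make the lazy-to-private idea concrete, the following is a minimal sketch of one natural member of the low-switching family the abstract alludes to: a batched, noisy follow-the-leader for prediction from experts. The chosen expert only changes at block boundaries (so the number of switches is small), and each released statistic is perturbed with Laplace noise. This is an illustrative assumption, not the paper's actual transformation; the function name, block-size parameter, and noise calibration are all hypothetical.

```python
import numpy as np

def lazy_private_ftl(losses, block_size, eps, rng=None):
    """Hypothetical sketch of a lazy, privatized follow-the-leader.

    losses: array of shape (T, d) with per-round losses in [0, 1] for d experts.
    The played expert is updated only every `block_size` rounds, from
    cumulative losses perturbed with Laplace noise of scale 1/eps
    (an assumed, simplified calibration -- not the paper's analysis).
    Returns the total loss incurred and the number of expert switches.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, d = losses.shape
    cum = np.zeros(d)          # running cumulative loss per expert
    expert = 0                 # currently played expert
    total_loss = 0.0
    switches = 0
    for t in range(T):
        total_loss += losses[t, expert]
        cum += losses[t]
        if (t + 1) % block_size == 0:
            # privatize the released statistic before selecting a leader
            noisy = cum + rng.laplace(scale=1.0 / eps, size=d)
            new_expert = int(np.argmin(noisy))
            switches += int(new_expert != expert)
            expert = new_expert
    return total_loss, switches
```

By construction the algorithm switches at most T/block_size times, which is the low-switching property the paper's transformation exploits: fewer switches mean fewer noisy releases, and hence less privacy cost at a given noise scale.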

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: 9 Dec 2024 – 15 Dec 2024

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
