Localization and Adaptation in Online Learning Through Relaxations
The traditional worst-case analysis of online learning problems is often too pessimistic for real-world applications. We would like to design adaptive online learning algorithms that enjoy much better regret bounds (faster rates) against "nicer" data sequences while still preserving the worst-case guarantees against arbitrary sequences. While in previous work such algorithms have been designed for specific problems, in this talk I shall describe a generic methodology for designing adaptive algorithms for general online learning problems. Specifically, I shall introduce the idea of adaptive relaxations and the concept of localization in online learning, and use these to give a general recipe for designing adaptive online learning algorithms. I shall illustrate the utility of these concepts through several examples, including new adaptive algorithms against i.i.d. adversaries and algorithms that adapt to the geometry of the data.
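As background for the kind of guarantee the abstract refers to, here is a minimal sketch of a classic worst-case online learner: exponential weights (Hedge) over a finite set of experts, whose regret is at most sqrt((T/2) ln N) for any loss sequence in [0,1]. The loss sequence and learning rate below are illustrative assumptions, not taken from the talk; the talk's contribution is algorithms that improve on such worst-case rates when the data is nicer.

```python
import math
import random

def hedge(losses, eta):
    """Exponential weights (Hedge) over N experts.

    `losses` is a list of rounds, each a list of N expert losses in [0, 1].
    Returns (learner's cumulative loss, best expert's cumulative loss).
    """
    n = len(losses[0])
    cum = [0.0] * n   # cumulative loss of each expert
    total = 0.0       # learner's cumulative loss
    for round_losses in losses:
        # Predict with weights based on losses seen so far, then observe.
        w = [math.exp(-eta * c) for c in cum]
        z = sum(w)
        p = [wi / z for wi in w]
        total += sum(pi * li for pi, li in zip(p, round_losses))
        cum = [c + l for c, l in zip(cum, round_losses)]
    return total, min(cum)

random.seed(0)
T = 2000
eta = math.sqrt(8 * math.log(2) / T)  # standard tuning for N = 2 experts
# An illustrative "i.i.d. adversary": expert 0 is slightly better on average.
seq = [[0.8 * random.random(), random.random()] for _ in range(T)]
learner, best = hedge(seq, eta)
regret = learner - best
```

The worst-case bound sqrt((T/2) ln 2) holds here regardless of the sequence; on benign (e.g. i.i.d.) data the realized regret is typically far smaller, which is the gap adaptive algorithms aim to exploit with provably faster rates.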