
Some Recent Bandit Results

May 28, 2013 · 4029 views

Research on multi-armed bandits is expanding in several directions. In this talk I will cover a number of recent results which reflect the variety of current approaches. The first part will be devoted to the analysis of stochastic bandits when reward distributions have heavy tails, preventing the use of standard statistical estimators. Next, I will consider bandits with switching costs and show new upper and lower bounds under different assumptions on the reward processes. The last part of the talk will focus on combinatorial bandits, which include several interesting special cases like ranking and multiple pulls. In this setting I will discuss the merits and limitations of the three main existing algorithmic approaches (Mirror Descent, Exp2, FPL).

Joint work with: Sebastien Bubeck, Ofer Dekel, Elad Hazan, Sham Kakade, Gabor Lugosi, Ohad Shamir
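To make the heavy-tailed part concrete: when rewards may have infinite variance, the empirical mean concentrates too slowly for standard UCB indices, so robust estimators such as median-of-means are used instead. The sketch below is an illustrative UCB-style loop built around a median-of-means estimate; it is a minimal toy, not the algorithm analyzed in the talk, and the arm distributions, block counts, and confidence radius are all assumptions chosen for the demo.

```python
import math
import random
import statistics

def median_of_means(samples, num_blocks):
    """Split samples into num_blocks groups, average each group,
    and return the median of the group means. The median step makes
    the estimate robust to a few extreme heavy-tailed draws."""
    k = max(1, min(num_blocks, len(samples)))
    block = len(samples) // k
    means = [statistics.fmean(samples[i * block:(i + 1) * block])
             for i in range(k)]
    return statistics.median(means)

def robust_ucb(arms, horizon, seed=0):
    """Toy UCB loop using median-of-means as the mean estimator.

    arms: list of callables rng -> reward sample.
    Returns the number of pulls of each arm after `horizon` rounds.
    (The block count and exploration bonus here are illustrative,
    not the tuned constants from the heavy-tail analysis.)"""
    rng = random.Random(seed)
    history = [[] for _ in arms]
    for i, arm in enumerate(arms):          # pull each arm once
        history[i].append(arm(rng))
    for t in range(len(arms), horizon):
        def index(i):
            n = len(history[i])
            est = median_of_means(history[i],
                                  num_blocks=max(1, int(math.log(t + 1))))
            return est + math.sqrt(2.0 * math.log(t + 1) / n)
        best = max(range(len(arms)), key=index)
        history[best].append(arms[best](rng))
    return [len(h) for h in history]

# Two heavy-tailed arms: Pareto(1.5) has finite mean but infinite
# variance, so the plain empirical mean is a poor estimator here.
arms = [
    lambda rng: rng.paretovariate(1.5) - 2.0,  # mean 1.0
    lambda rng: rng.paretovariate(1.5) - 2.5,  # mean 0.5
]
counts = robust_ucb(arms, horizon=2000)
```

The same loop with the empirical mean in place of `median_of_means` is exactly where heavy tails break the standard analysis: a single enormous reward can inflate an arm's index for a long stretch of rounds.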

