BErMin: A Model Selection Algorithm for Reinforcement Learning Problems
We consider the problem of model selection in the batch (offline, non-interactive) reinforcement learning setting when the goal is to find an action-value function with the smallest Bellman error among a countable set of candidate functions. We propose a complexity regularization-based model selection algorithm, BErMin, and prove that it enjoys an oracle-like property: the estimator's error differs from that of an oracle, who selects the candidate with the minimum Bellman error, by only a constant factor and a small remainder term that vanishes at a parametric rate as the number of samples increases.
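The selection rule described above — choose the candidate whose estimated error plus a complexity penalty is smallest — can be sketched as follows. This is only a conceptual illustration of complexity regularization; the actual BErMin error estimator and penalty terms are defined in the paper, and the function and variable names here are hypothetical.

```python
def select_candidate(estimated_errors, penalties):
    """Pick the candidate minimizing estimated error plus complexity penalty.

    estimated_errors[k]: data-based estimate of candidate k's (Bellman) error.
    penalties[k]: complexity penalty for candidate k, typically larger for
                  richer candidate classes and shrinking with the sample size.
    (Both are placeholders; BErMin's actual quantities differ.)
    """
    scores = [e + p for e, p in zip(estimated_errors, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)

# Candidate 1 has a slightly higher estimated error than candidate 0 but a
# much smaller penalty, so the regularized rule selects it.
best = select_candidate([0.10, 0.12, 0.30], [0.20, 0.05, 0.01])
```

The penalty term is what yields the oracle-like guarantee: it offsets the optimistic bias of selecting by estimated error alone, so the chosen candidate's true error is within a constant factor of the best candidate's, up to a remainder vanishing at a parametric rate.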