Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko.
In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, 2013.
Abstract: |
We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order O(T^{2/3}), with an additive term that is constant in T yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after T time steps is O(\sqrt{T}), with all constants reasonably small. This is optimal in T, since O(\sqrt{T}) is the optimal regret in the setting of learning in a (single discrete) MDP. |
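For concreteness, the average-regret criterion mentioned in the abstract can be sketched as follows; this is a minimal sketch in standard notation, and the symbols ρ^* (optimal average reward under the best MDP representation) and r_t (reward received at step t) are our own shorthand, not spelled out in the abstract:

\Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t .

In these terms, the result above says that the proposed algorithm achieves \Delta(T) = O(\sqrt{T}), improving on the earlier O(T^{2/3}) bounds for this setting.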
You can download the paper from the ICML website (here), and a corrected version from the HAL open-access archive (here) or from arXiv (here); the correction is minor and changes only a constant 2 into 2\sqrt{2}. See also a talk presenting this work here.
Bibtex: |
@inproceedings{MaillardNguyenOrtnerRyabko13,
  author    = {Maillard, O. and Nguyen, P. and Ortner, R. and Ryabko, D.},
  title     = {Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning},
  booktitle = {International Conference on Machine Learning},
  series    = {JMLR W\&CP 28(1)},
  address   = {Atlanta, USA},
  year      = {2013},
  pages     = {543--551}
} |
Related Publications: |
Selecting the State-Representation in Reinforcement Learning. Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos. In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011. |