Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko.
In Algorithmic Learning Theory, 2014.
Abstract: |
We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting, and more importantly generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation but only approximations of it. We also give improved error bounds for state aggregation.
|
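To make the setting of the abstract concrete, the following is a minimal, hypothetical Python sketch (not the paper's algorithm): the learner only sees a history of observations and actions, together with several candidate models that map histories to states; all names here (History, last_obs_model, window_model) are illustrative assumptions.

# Illustrative sketch of the state-representation setting, assuming simple
# integer observations and actions; none of this comes from the paper's code.
from typing import Callable, List, Tuple

Observation = int
Action = int
History = List[Tuple[Observation, Action]]   # past (observation, action) pairs
StateModel = Callable[[History], int]        # maps a history to a discrete state

def last_obs_model(history: History) -> int:
    """Candidate model phi_1: the state is just the most recent observation."""
    return history[-1][0] if history else 0

def window_model(history: History) -> int:
    """Candidate model phi_2: the state encodes the last two observations."""
    obs = [o for o, _ in history[-2:]]
    while len(obs) < 2:
        obs.insert(0, 0)   # pad short histories with a default observation
    return obs[0] * 10 + obs[1]

# The learner is handed a set of such models; as in the paper, none of them is
# guaranteed to induce an exact MDP, only an approximation of it.
candidate_models: List[StateModel] = [last_obs_model, window_model]

history: History = [(3, 1), (7, 0), (7, 1)]
for j, phi in enumerate(candidate_models, start=1):
    print(f"phi_{j} maps the current history to state {phi(history)}")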
You can download the paper from the ALT website (here) or from the HAL online open repository (here).
Bibtex: |
@inproceedings{ortner2014selecting,
  title={Selecting near-optimal approximate state representations in reinforcement learning},
  author={Ortner, Ronald and Maillard, Odalric-Ambrym and Ryabko, Daniil},
  booktitle={Algorithmic Learning Theory},
  pages={140--154},
  year={2014},
  organization={Springer}
}
|
Related Publications: |
Competing with an infinite set of models in reinforcement learning, Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, and Ronald Ortner, in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), volume 31 of JMLR W&CP, pages 463–471, Arizona, USA, 2013.
Optimal regret bounds for selecting the state representation in reinforcement learning, Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, and Daniil Ryabko, in Proceedings of the International Conference on Machine Learning (ICML), JMLR W&CP, 2013.
Selecting the State-Representation in Reinforcement Learning, Odalric-Ambrym Maillard, Rémi Munos, and Daniil Ryabko, in Advances in Neural Information Processing Systems 24 (NIPS), 2011. |