# Selecting Near-Optimal Approximate State Representations in Reinforcement Learning.

Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko.
In Algorithmic Learning Theory, 2014.

 Abstract: We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.

You can download the paper from the ALT website (here) or from the HAL online open depository* (here).

 Bibtex: @inproceedings{ortner2014selecting, title={Selecting near-optimal approximate state representations in reinforcement learning}, author={Ortner, Ronald and Maillard, Odalric-Ambrym and Ryabko, Daniil}, booktitle={Algorithmic Learning Theory}, pages={140--154}, year={2014}, organization={Springer} }
 Related Publications:
- Competing with an infinite set of models in reinforcement learning. Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, and Ronald Ortner. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), volume 31 of JMLR W&CP, pages 463–471, Arizona, USA, 2013.
- Optimal regret bounds for selecting the state representation in reinforcement learning. Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, 2013.
- Selecting the State-Representation in Reinforcement Learning. Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos. In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

# Competing with an Infinite Set of Models in Reinforcement Learning.

Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, Ronald Ortner.
In International Conference on Artificial Intelligence and Statistics, 2013.

 Abstract: We consider a reinforcement learning setting where the learner also has to deal with the problem of finding a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to have small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed, which achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.

You can download the paper from the JMLR website (here) or from the HAL online open depository* (soon).

 Bibtex: @InProceedings{Nguyen13, author = "Nguyen, P. and Maillard, O. and Ryabko, D. and Ortner, R.", title = "Competing with an Infinite Set of Models in Reinforcement Learning", booktitle = "AISTATS", series = {JMLR W\&CP 31}, address = "Arizona, USA", year = "2013", pages = "463--471" }
 Related Publications:
- Optimal regret bounds for selecting the state representation in reinforcement learning. Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, 2013.
- Selecting the State-Representation in Reinforcement Learning. Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos. In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

# Optimal regret bounds for selecting the state representation in reinforcement learning.

Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko.
In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, 2013.

 Abstract: We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order O(T^{2/3}), with an additive term that is constant in T yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after T time steps is O(\sqrt{T}), with all constants reasonably small. This is optimal in T, since O(\sqrt{T}) is the optimal regret in the setting of learning in a (single discrete) MDP.

You can download the paper from the ICML website (here), a corrected version from the HAL online open depository* (here), or from arXiv (here) (the correction is minor and changes only a constant 2 into 2\sqrt{2}). See also a talk presenting this work here.

 Bibtex: @inproceedings{MaillardNguyenOrtnerRyabko13, author = "Maillard, O. and Nguyen, P. and Ortner, R. and Ryabko, D.", title = "Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning", booktitle = "International Conference on Machine Learning", series = {JMLR W\&CP 28(1)}, address = "Atlanta, USA", year = "2013", pages = "543--551" }
 Related Publications:
- Selecting the State-Representation in Reinforcement Learning. Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos. In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

# Selecting the State-Representation in Reinforcement Learning.

Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.