Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko.
In Algorithmic Learning Theory, 2014.
[Download]
Abstract: 
We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting, and more importantly generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.

You can download the paper from the ALT website (here) or from the HAL open-access repository (here).
Bibtex: 
@inproceedings{ortner2014selecting,
  title={Selecting near-optimal approximate state representations in reinforcement learning},
  author={Ortner, Ronald and Maillard, Odalric-Ambrym and Ryabko, Daniil},
  booktitle={Algorithmic Learning Theory},
  pages={140--154},
  year={2014},
  organization={Springer}
}
Related Publications: 
Competing with an infinite set of models in reinforcement learning, Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, and Ronald Ortner, in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), volume 31 of JMLR W&CP, pages 463–471, Arizona, USA, 2013.
Optimal regret bounds for selecting the state representation in reinforcement learning.
Selecting the State-Representation in Reinforcement Learning.