Selecting the State-Representation in Reinforcement Learning.

2011, Discussing articles

Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.


Abstract:

The problem of selecting the right state-representation in a reinforcement learning problem is considered. Several models (functions mapping past observations to a finite set) of the observations are given, and it is known that for at least one of these models the resulting state dynamics are indeed Markovian. Without knowing which of the models is the correct one, or what the probabilistic characteristics of the resulting MDP are, it is required to obtain as much reward as the optimal policy for the correct model (or for the best of the correct models, if there are several). We propose an algorithm that achieves this, with a regret of order T^{2/3}, where T is the time horizon.
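To make the setting concrete, below is a minimal, self-contained Python sketch of the problem interface: several candidate representations map the observation history to a finite state set, and only one of them induces Markovian dynamics. The stage-wise loop that scores each candidate with a tabular epsilon-greedy Q-learner is only an illustration; it is not the algorithm of the paper, which obtains the T^{2/3} regret bound through explicit optimism and confidence bounds. The toy environment and all names here are assumptions made for the example.

import random
from typing import Callable, List, Sequence

Phi = Callable[[Sequence[int]], int]

class Model:
    """A state-representation: a function mapping past observations
    to a finite state set of known size."""
    def __init__(self, phi: Phi, n_states: int):
        self.phi, self.n_states = phi, n_states

def env_step(history: List[int], action: int, rng: random.Random):
    """Toy environment whose dynamics depend only on the last observation,
    so the order-1 representation below induces a genuine MDP."""
    s = history[-1]
    nxt = (s + action + 1) % 4 if rng.random() < 0.9 else rng.randrange(4)
    return nxt, (1.0 if nxt == 3 else 0.0)

# Candidate representations: only the first is Markov for this environment.
models = [
    Model(lambda h: h[-1], n_states=4),       # last observation (correct)
    Model(lambda h: len(h) % 2, n_states=2),  # parity of time (wrong)
]

def run_stage(model: Model, steps: int, rng: random.Random) -> float:
    """Act for one stage with an epsilon-greedy tabular Q-learner on the
    states induced by `model`; return the average reward collected."""
    q = [[0.0, 0.0] for _ in range(model.n_states)]
    history, total = [0], 0.0
    for _ in range(steps):
        s = model.phi(history)
        a = rng.randrange(2) if rng.random() < 0.1 else max((0, 1), key=lambda x: q[s][x])
        obs, r = env_step(history, a, rng)
        history.append(obs)
        q[s][a] += 0.1 * (r + 0.9 * max(q[model.phi(history)]) - q[s][a])
        total += r
    return total / steps

rng = random.Random(0)
# Run each candidate over stages of increasing length; the representation
# with Markovian dynamics should deliver the higher average reward.
for stage, steps in enumerate([500, 1000, 2000], start=1):
    scores = [round(run_stage(m, steps, rng), 3) for m in models]
    print(f"stage {stage}: average rewards per model = {scores}")

A raw score comparison like this carries no regret guarantee; the point is only to show how several representations of the same observation stream can be run and compared online.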

You can download the paper from the NIPS website (here) or from the HAL online open repository (here).

Bibtex:
@INPROCEEDINGS{Maillard2011,
  author = {Odalric-Ambrym Maillard and Daniil Ryabko and R{\'e}mi Munos},
  title = {Selecting the State-Representation in Reinforcement Learning},
  booktitle = {Proceedings of the 24th Conference on Advances in Neural Information Processing Systems},
  year = {2011},
  pages = {2627--2635},
  keywords = {Reinforcement Learning},
  location = {Granada, Spain}
}
