“How hard is my MDP?” The distribution-norm to the rescue.

2014, Discussing articles

Odalric-Ambrym Maillard, Timothy A. Mann, Shie Mannor.
In Advances in Neural Information Processing Systems, 2014.

[Download]

Abstract:
In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many problems, a good approximation of p is not needed. For instance, if from one state-action pair (s,a) one can only transition to states with the same value, learning p(·|s,a) accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision Processes (MDPs) that we call the distribution-norm. The distribution-norm w.r.t. a measure ν is defined on zero ν-mean functions f by the standard deviation of f with respect to ν. We first provide a concentration inequality for the dual of the distribution-norm. This allows us to replace the generic but loose ||·||_1 concentration inequalities used in most previous analyses of RL algorithms, and to benefit from this new hardness measure. We then show that several common RL benchmarks have low hardness when measured using the new norm. The distribution-norm captures finer properties than the number of states or the diameter and can be used to assess the difficulty of MDPs.
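
As a quick, informal illustration of the definition (a toy sketch, not taken from the paper), the distribution-norm of a function f w.r.t. a probability measure ν is just the standard deviation of f under ν. The snippet below mirrors the abstract's intuition: when all successor states of (s,a) share the same value, the value function has zero norm under p(·|s,a), so estimation errors in p(·|s,a) cannot hurt.

import numpy as np

# Toy sketch (not from the paper): distribution-norm of f w.r.t. a
# probability vector nu = standard deviation of f under nu.
def distribution_norm(f, nu):
    f, nu = np.asarray(f, dtype=float), np.asarray(nu, dtype=float)
    mean = np.dot(nu, f)                          # nu-mean of f
    return np.sqrt(np.dot(nu, (f - mean) ** 2))   # sqrt of nu-variance

p = np.array([0.2, 0.5, 0.3])         # toy transition probabilities p(.|s,a)
V_flat = np.array([1.0, 1.0, 1.0])    # all successor states have the same value
V_varied = np.array([0.0, 1.0, 5.0])  # successor values spread out

print(distribution_norm(V_flat, p))   # 0.0 -> errors in p(.|s,a) are harmless here
print(distribution_norm(V_varied, p)) # 2.0 -> errors in p(.|s,a) matter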

You can download the paper from the NIPS website (here) or from the HAL online open repository* (soon).

Bibtex:
@incollection{MaiManMan14,
title = {``How hard is my MDP?'' The distribution-norm to the rescue},
author = {Maillard, Odalric-Ambrym and Mann, Timothy A and Mannor, Shie},
booktitle = {Advances in Neural Information Processing Systems 27},
editor = {Z. Ghahramani and M. Welling and C. Cortes and N.D. Lawrence and K.Q. Weinberger},
pages = {1835--1843},
year = {2014},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/5441-how-hard-is-my-mdp-the-distribution-norm-to-the-rescue.pdf}
}

Selecting Near-Optimal Approximate State Representations in Reinforcement Learning.

2014, Discussing articles

Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko.
In Algorithmic Learning Theory, 2014.

[Download]

Abstract:
We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over the known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.
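
To make the setting more tangible, here is a small toy example (purely illustrative, with made-up models that are not those of the paper) of what "models mapping histories of past interactions to states" can look like; only some such maps induce Markovian state dynamics.

# Toy illustration: a state-representation model maps a history of
# (action, observation) pairs to a discrete state. The learner is given
# several such maps; only some of them make the induced dynamics Markovian.
def phi_last_obs(history):
    # Model 1: the state is the most recent observation (0 for an empty history).
    return history[-1][1] if history else 0

def phi_window(history, k=2):
    # Model 2: the state is the tuple of the last k observations.
    return tuple(o for (_, o) in history[-k:])

history = [(0, 1), (1, 0), (0, 1)]    # [(action, observation), ...]
print(phi_last_obs(history))          # state under model 1: 1
print(phi_window(history))            # state under model 2: (0, 1)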

You can download the paper from the ALT website (here) or from the HAL online open repository* (here).

Bibtex:
@inproceedings{ortner2014selecting,
  title={Selecting near-optimal approximate state representations in reinforcement learning},
  author={Ortner, Ronald and Maillard, Odalric-Ambrym and Ryabko, Daniil},
  booktitle={Algorithmic Learning Theory},
  pages={140--154},
  year={2014},
  organization={Springer}
}
Related Publications:
Competing with an Infinite Set of Models in Reinforcement Learning.
Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, Ronald Ortner.
In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), volume 31 of JMLR W&CP, pages 463–471, Arizona, USA, 2013.

Optimal regret bounds for selecting the state representation in reinforcement learning.
Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko.
In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), 2013.

Selecting the State-Representation in Reinforcement Learning.
Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

Competing with an Infinite Set of Models in Reinforcement Learning.

2013, Discussing articles

Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, Ronald Ortner.
In International Conference on Artificial Intelligence and Statistics, 2013.

[Download]

Abstract:

We consider a reinforcement learning setting where the learner also has to deal with the problem of finding a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to have small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed, which achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.

You can download the paper from the JMLR website (here) or from the HAL online open repository* (soon).

Bibtex:
@InProceedings{Nguyen13,
author = {Nguyen, P. and Maillard, O. and Ryabko, D. and Ortner, R.},
title = {Competing with an Infinite Set of Models in Reinforcement Learning},
booktitle = {AISTATS},
series = {JMLR W\&CP 31},
address = {Arizona, USA},
year = {2013},
pages = {463--471} }
Related Publications:
Optimal regret bounds for selecting the state representation in reinforcement learning.
Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko.
In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), 2013.

Selecting the State-Representation in Reinforcement Learning.
Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

Optimal regret bounds for selecting the state representation in reinforcement learning.

2012, Discussing articles

Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko.
In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), 2013.

[Download]

Abstract:

We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order O(T^{2/3}), with an additive term that is constant in T yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after T time steps is O(\sqrt{T}), with all constants reasonably small. This is optimal in T, since O(\sqrt{T}) is the optimal regret in the setting of learning in a (single discrete) MDP.

You can download the paper from the ICML website (here), a corrected version from the HAL online open repository* (here), or from arXiv (here) (the correction is minor and only changes a constant 2 into 2\sqrt{2}). See also a talk presenting this work here.

Bibtex:
@inproceedings{MaillardNguyenOrtnerRyabko13,
author = {Maillard, O. and Nguyen, P. and Ortner, R. and Ryabko, D.},
title = {Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning},
booktitle = {International Conference on Machine Learning},
series = {JMLR W\&CP 28(1)},
address = {Atlanta, USA},
year = {2013},
pages = {543--551}
}
Related Publications:
Selecting the State-Representation in Reinforcement Learning.
Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

Selecting the State-Representation in Reinforcement Learning.

2011, Discussing articles

Odalric-Ambrym Maillard, Daniil Ryabko, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems, NIPS ’11, pages 2627–2635, 2011.

[Download]

Abstract:

The problem of selecting the right state-representation in a reinforcement learning problem is considered. Several models (functions mapping past observations to a finite set) of the observations are given, and it is known that for at least one of these models the resulting state dynamics are indeed Markovian. Without knowing which of the models is the correct one, or the probabilistic characteristics of the resulting MDP, the goal is to obtain as much reward as the optimal policy for the correct model (or for the best of the correct models, if there are several). We propose an algorithm that achieves this, with a regret of order T^{2/3}, where T is the time horizon.

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@INPROCEEDINGS{Maillard2011,
author = {Odalric-Ambrym Maillard and Daniil Ryabko and R{\'e}mi Munos},
title = {Selecting the State-Representation in Reinforcement Learning},
booktitle = {Proceedings of the 24th Conference on Advances in Neural Information Processing Systems},
year = {2011},
pages = {2627--2635},
keywords = {Reinforcement Learning},
location = {Granada, Spain}
}

Finite-Sample Analysis of Bellman Residual Minimization.

2010, Discussing articles

Odalric-Ambrym Maillard, Rémi Munos, Alessandro Lazaric,
Mohammad Ghavamzadeh.
In ACML 2010, volume 13 of JMLR W&CP, pages 299-314, 2010.

[Download]

Abstract:

We consider the Bellman residual minimization approach for solving discounted Markov decision problems, where we assume that a generative model of the dynamics and rewards is available. At each policy iteration step, an approximation of the value function for the current policy is obtained by minimizing an empirical Bellman residual defined on a set of n states drawn i.i.d. from a distribution \mu, the immediate rewards, and the next states sampled from the model. Our main result is a generalization bound for the Bellman residual in linear approximation spaces. In particular, we prove that the empirical Bellman residual approaches the true (quadratic) Bellman residual in \mu-norm with a rate of order O(1/\sqrt{n}). This result implies that minimizing the empirical residual is indeed a sound approach for the minimization of the true Bellman residual, which guarantees a good approximation of the value function for each policy. Finally, we derive performance bounds for the resulting approximate policy iteration algorithm in terms of the number of samples n and a measure of how well the function space is able to approximate the sequence of value functions.
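
For readers who want a concrete picture, the following sketch (a simplified single-sample version under my own assumptions, not the paper's exact estimator) shows what minimizing an empirical Bellman residual over a linear function space looks like for a fixed policy.

import numpy as np

# Simplified sketch: least-squares minimization of an empirical Bellman
# residual over a linear space span{phi_1, ..., phi_d}, for a fixed policy,
# with one sampled next state per state from a generative model.
def bellman_residual_minimization(phi, phi_next, rewards, gamma=0.95):
    # phi:      (n, d) features of n states s_i drawn i.i.d. from mu
    # phi_next: (n, d) features of next states s_i' ~ p(.|s_i, pi(s_i))
    # rewards:  (n,)   sampled immediate rewards r_i
    # Residual at s_i for weights w: phi(s_i).w - r_i - gamma * phi(s_i').w
    A = phi - gamma * phi_next
    w, *_ = np.linalg.lstsq(A, rewards, rcond=None)
    return w  # value estimate: V(s) ~ phi(s).w

# Toy usage with random features, just to show the shapes involved.
rng = np.random.default_rng(0)
n, d = 200, 5
phi, phi_next = rng.normal(size=(n, d)), rng.normal(size=(n, d))
rewards = rng.normal(size=n)
print(bellman_residual_minimization(phi, phi_next, rewards).shape)  # (5,)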

You can download the paper from the JMLR website (here) or from the HAL online open repository* (here).

Bibtex:
@inproceedings{MaillardMLG10,
author = {O. A. Maillard and R. Munos and A. Lazaric and M. Ghavamzadeh},
title = {Finite Sample Analysis of {B}ellman Residual Minimization},
booktitle = {Asian Conference on Machine Learning. JMLR: Workshop and Conference Proceedings},
volume = 13,
editor = {Masashi Sugiyama and Qiang Yang},
pages = {299--314},
year = {2010}
}

LSTD with Random Projections.

2010, Discussing articles

Mohammad Ghavamzadeh, Alessandro Lazaric,
Odalric-Ambrym Maillard, Rémi Munos.
In NIPS’10, pages 721–729, 2010.

[Download]

Abstract:

We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is bigger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a space of low dimension is generated with a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections is propagated through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
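
As a rough illustration of the idea (an assumption-level sketch, not the paper's precise algorithm or constants), one can draw a random Gaussian matrix to project the D-dimensional features down to d dimensions and then run standard LSTD on the projected features.

import numpy as np

# Sketch: random projection of D-dimensional features to d << D dimensions,
# followed by standard LSTD on the projected features.
def lstd_random_projections(Phi, Phi_next, rewards, d, gamma=0.95, seed=0):
    # Phi, Phi_next: (n, D) high-dimensional features of states and next states
    # rewards:       (n,)   observed rewards along the trajectory
    rng = np.random.default_rng(seed)
    n, D = Phi.shape
    R = rng.normal(scale=1.0 / np.sqrt(d), size=(D, d))  # random projection matrix
    Psi, Psi_next = Phi @ R, Phi_next @ R                # projected features
    A = Psi.T @ (Psi - gamma * Psi_next)                 # LSTD matrix
    b = Psi.T @ rewards                                  # LSTD vector
    w = np.linalg.solve(A + 1e-6 * np.eye(d), b)         # small ridge for stability
    return R, w   # value estimate: V(s) ~ (phi(s) @ R) @ w

# Toy usage in the regime studied in the paper: more features than samples (D > n).
rng = np.random.default_rng(1)
n, D, d = 100, 500, 20
Phi, Phi_next = rng.normal(size=(n, D)), rng.normal(size=(n, D))
rewards = rng.normal(size=n)
R, w = lstd_random_projections(Phi, Phi_next, rewards, d)
print(w.shape)  # (20,)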

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{GhavamzadehLMM2010,
title = {LSTD with Random Projections},
author = {Ghavamzadeh, Mohammad and Lazaric, Alessandro and Maillard, Odalric-Ambrym and Munos, R{\'e}mi},
booktitle = {Advances in Neural Information Processing Systems 23},
editor = {J.D. Lafferty and C.K.I. Williams and J. Shawe-Taylor and R.S. Zemel and A. Culotta},
pages = {721--729},
year = {2010},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/3994-lstd-with-random-projections.pdf}
}