LSTD with Random Projections.

2010, Discussing articles

Mohammad Ghavamzadeh, Alessandro Lazaric,
Odalric-Ambrym Maillard, Rémi Munos.
In NIPS’10, pages 721–729, 2010.


Abstract:

We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is bigger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a space of low dimension is generated with a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections is propagated through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
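To make the idea concrete, here is a minimal numerical sketch of the scheme the abstract describes: project D-dimensional features down to d dimensions with a random Gaussian matrix, then run standard LSTD in the projected space. Function and variable names, the Gaussian scaling, and the small ridge term are my own illustrative choices, not the paper's exact construction.

```python
import numpy as np

def lstd_random_projections(Phi, Phi_next, rewards, d, gamma=0.95, reg=1e-6, seed=0):
    """LSTD run in a d-dimensional space obtained by randomly projecting
    D-dimensional features (an illustrative sketch, not the paper's exact setup).

    Phi, Phi_next : (n, D) feature matrices for states s_t and s_{t+1}
    rewards       : (n,) observed rewards r_t
    d             : target (low) dimension, d << D
    """
    n, D = Phi.shape
    rng = np.random.default_rng(seed)
    # Random projection: i.i.d. Gaussian entries scaled by 1/sqrt(d), so that
    # inner products are approximately preserved (Johnson-Lindenstrauss style).
    P = rng.standard_normal((D, d)) / np.sqrt(d)
    Psi, Psi_next = Phi @ P, Phi_next @ P            # projected features
    # Standard LSTD linear system  A w = b  in the projected space.
    A = Psi.T @ (Psi - gamma * Psi_next) / n
    b = Psi.T @ rewards / n
    # Small ridge term keeps the solve well-posed when n or d is small.
    w = np.linalg.solve(A + reg * np.eye(d), b)
    return P, w
```

The estimated value of a state s is then `(phi(s) @ P) @ w`; inside an LSPI loop one would re-estimate w for each policy evaluated, which is the setting the paper's propagated-error bound covers.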

You can download the paper from the NIPS website (here) or from the HAL open-access repository (here).

Bibtex:
@incollection{GhavamzadehLMM2010,
title = {LSTD with Random Projections},
author = {Ghavamzadeh, Mohammad and Lazaric, Alessandro and Maillard, Odalric-Ambrym and Munos, R\'{e}mi},
booktitle = {Advances in Neural Information Processing Systems 23},
editor = {J.D. Lafferty and C.K.I. Williams and J. Shawe-Taylor and R.S. Zemel and A. Culotta},
pages = {721--729},
year = {2010},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/3994-lstd-with-random-projections.pdf}
}