Odalric-Ambrym Maillard, Timothy A. Mann, Shie Mannor.
In Advances in Neural Information Processing Systems, 2014.
[Download]
Abstract: 
In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many problems, a good approximation of p is not needed. For instance, if from one state-action pair (s,a) one can only transit to states with the same value, learning p(⋅|s,a) accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision Processes (MDPs) that we call the distribution-norm. The distribution-norm w.r.t. a measure ν is defined on zero ν-mean functions f by the standard deviation of f with respect to ν. We first provide a concentration inequality for the dual of the distribution-norm. This allows us to replace the generic but loose ‖⋅‖₁ concentration inequalities used in most previous analyses of RL algorithms, to benefit from this new hardness measure. We then show that several common RL benchmarks have low hardness when measured using the new norm. The distribution-norm captures finer properties than the number of states or the diameter and can be used to assess the difficulty of MDPs.
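For a discrete measure ν, the distribution-norm described above is straightforward to compute: it is the standard deviation of f under ν. The sketch below is illustrative only (the function name and example values are mine, not from the paper); it also shows the point made in the abstract that a function which is constant on the support of ν has norm zero.

```python
import numpy as np

def distribution_norm(f, nu):
    """Distribution-norm of f w.r.t. a discrete measure nu:
    the standard deviation of f under nu (f is centered to zero nu-mean)."""
    f = np.asarray(f, dtype=float)
    nu = np.asarray(nu, dtype=float)
    f_centered = f - np.dot(nu, f)          # enforce zero nu-mean
    return np.sqrt(np.dot(nu, f_centered ** 2))

# If all next states reachable under nu share the same value,
# the norm is 0 no matter how nu spreads its mass:
nu = np.array([0.3, 0.5, 0.2])
print(distribution_norm([1.0, 1.0, 1.0], nu))  # 0.0
```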

You can download the paper from the NIPS website (here) or from the HAL online open repository (soon).
Bibtex: 
@incollection{MaiManMan14,
  title = {``How hard is my MDP?'' The distribution-norm to the rescue},
  author = {Maillard, Odalric-Ambrym and Mann, Timothy A. and Mannor, Shie},
  booktitle = {Advances in Neural Information Processing Systems 27},
  editor = {Z. Ghahramani and M. Welling and C. Cortes and N. D. Lawrence and K. Q. Weinberger},
  pages = {1835--1843},
  year = {2014},
  publisher = {Curran Associates, Inc.},
  url = {http://papers.nips.cc/paper/5441-how-hard-is-my-mdp-the-distribution-norm-to-the-rescue.pdf}
}