Sub-sampling for multi-armed bandits.

Discussing articles

Akram Baransi, Odalric-Ambrym Maillard, Shie Mannor.
In European Conference on Machine Learning, 2014.

[Download]

Abstract:
The stochastic multi-armed bandit problem is a popular model of the exploration/exploitation trade-off in sequential decision problems. We introduce a novel algorithm that is based on sub-sampling. Despite its simplicity, we show that the algorithm demonstrates excellent empirical performance against state-of-the-art algorithms, including Thompson sampling and KL-UCB. The algorithm is very flexible: it does not need to know a set of reward distributions in advance, nor the range of the rewards. It is not restricted to Bernoulli distributions and is also invariant under rescaling of the rewards. We provide a detailed experimental study comparing the algorithm to the state of the art, present the main intuition that explains the striking results, and conclude with a finite-time regret analysis for this algorithm in the simplified two-arm bandit setting.
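
For intuition, here is a minimal, illustrative Python sketch of a sub-sampling duel between two arms, in the spirit of the comparison described above; the function name and the tie-breaking rule are my own simplifications, not the paper's exact procedure.

import random

def subsample_duel(history_a, history_b, rng=random):
    # Compare two arms by sub-sampling each reward history (without
    # replacement) down to the size of the shorter one, then comparing
    # the resulting empirical means.  Returns 0 or 1, the arm to pull next.
    # Illustrative only: the paper's algorithm may differ in its exact
    # sub-sampling and tie-breaking rules.
    n = min(len(history_a), len(history_b))
    mean_a = sum(rng.sample(history_a, n)) / n
    mean_b = sum(rng.sample(history_b, n)) / n
    if mean_a != mean_b:
        return 0 if mean_a > mean_b else 1
    # Break ties in favour of the less-sampled arm so it keeps being explored.
    return 0 if len(history_a) <= len(history_b) else 1

Because the comparison only involves empirical means of equally sized sub-samples, such a procedure is invariant under rescaling of the rewards and needs no prior knowledge of their range, which matches the flexibility highlighted in the abstract.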

You can download the paper from the ECML website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{baransi2014sub,
title={Sub-sampling for Multi-armed Bandits},
author={Baransi, Akram and Maillard, Odalric-Ambrym and Mannor, Shie},
booktitle={Machine Learning and Knowledge Discovery in Databases},
pages={115--131},
year={2014},
publisher={Springer}
}

Latent Bandits.

2013, Discussing articles

Odalric-Ambrym Maillard, Shie Mannor
In International Conference on Machine Learning, 2014.

[Download]

Abstract:

We consider a multi-armed bandit problem where the reward distributions are indexed by two sets, one for arms and one for types, and can be partitioned into a small number of clusters according to the type. First, we consider the setting where all reward distributions are known and all types have the same underlying cluster; the type's identity is, however, unknown. Second, we study the case where types may come from different classes, which is significantly more challenging. Finally, we tackle the case where the reward distributions are completely unknown. In each setting, we introduce specific algorithms and derive non-trivial regret guarantees. Numerical experiments show that, in the most challenging agnostic case, the proposed algorithm achieves excellent performance in several difficult scenarios.

You can download the paper from the JMLR website (here) or from the HAL online open repository* (here).

You can download the Java code used to generate the experiments here.

Bibtex:
@inproceedings{maillard2014latent,
  title={Latent Bandits.},
  author={Maillard, Odalric-Ambrym and Mannor, Shie},
  booktitle={Proceedings of The 31st International Conference on Machine Learning},
  pages={136--144},
  year={2014}
}

Robust Risk-averse Multi-armed Bandits.

2013, Discussing articles

Odalric-Ambrym Maillard.
In Algorithmic Learning Theory, 2013.

[Download]

Abstract:

We study a variant of the standard stochastic multi-armed bandit problem in which one is not interested in the arm with the best mean, but instead in the arm maximizing some coherent risk measure criterion. Further, we study the deviations of the regret, instead of the less informative expected regret. We provide an algorithm, called RA-UCB, to solve this problem, together with a high-probability bound on its regret.
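
As an illustration of the kind of criterion involved, the following Python sketch computes the empirical conditional value-at-risk (CVaR), one standard example of a coherent risk measure; it is a generic helper for intuition only, not the specific criterion or index optimized by RA-UCB.

def empirical_cvar(rewards, alpha=0.05):
    # Empirical conditional value-at-risk at level alpha: the average of the
    # worst alpha-fraction of the observed rewards.  Shown only as a generic
    # example of a coherent risk measure.
    worst_first = sorted(rewards)
    k = max(1, int(alpha * len(worst_first)))
    return sum(worst_first[:k]) / k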

You can download the paper from the ALT website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{Maillard2013,
year={2013},
isbn={978-3-642-40934-9},
booktitle={Algorithmic Learning Theory},
volume={8139},
series={Lecture Notes in Computer Science},
editor={Jain, Sanjay and Munos, Rémi and Stephan, Frank and Zeugmann, Thomas},
title={Robust Risk-Averse Stochastic Multi-armed Bandits},
publisher={Springer Berlin Heidelberg},
author={Maillard, Odalric-Ambrym},
pages={218--233}
}
Related Publications:
Kullback-Leibler Upper Confidence Bounds for Optimal Sequential Allocation.
Olivier Cappé, Aurelien Garivier, Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz.
In The Annals of Statistics, 2013.

Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences.
Odalric-Ambrym Maillard, Gilles Stoltz, Rémi Munos.
In Proceedings of the 24th annual Conference On Learning Theory, COLT, 2011.

Kullback-Leibler Upper Confidence Bounds for Optimal Sequential Allocation.

2013, Discussing articles

Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz.
In The Annals of Statistics, 2013.

[Download]

Abstract:

We consider optimal sequential allocation in the context of the so-called stochastic multi-armed bandit model. We describe a generic index policy, in the sense of Gittins (1979), based on upper confidence bounds of the arm payoffs computed using the Kullback-Leibler divergence. We consider two classes of distributions for which instances of this general idea are analyzed: the kl-UCB algorithm is designed for one-parameter exponential families, and the empirical KL-UCB algorithm for bounded and finitely supported distributions. Our main contribution is a unified finite-time analysis of the regret of these algorithms that asymptotically matches the lower bounds of Lai and Robbins (1985) and Burnetas and Katehakis (1996), respectively. We also investigate the behavior of these algorithms when used with general bounded rewards, showing in particular that they provide significant improvements over the state of the art.
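
For concreteness, here is a small Python sketch of the kl-UCB index in the Bernoulli case: the largest mean q above the empirical mean whose Kullback-Leibler divergence from it stays below an exploration level, found by bisection. The exploration level log(t)/n used below is a common textbook choice and may not match the exact constants analyzed in the paper.

import math

def bernoulli_kl(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clipped away from 0 and 1.
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(emp_mean, n_pulls, t, precision=1e-6):
    # Largest q in [emp_mean, 1] such that n_pulls * KL(emp_mean, q) <= log(t),
    # computed by bisection; this is the Bernoulli instance of the kl-UCB index.
    level = math.log(max(t, 2)) / n_pulls
    low, high = emp_mean, 1.0
    while high - low > precision:
        mid = (low + high) / 2
        if bernoulli_kl(emp_mean, mid) <= level:
            low = mid
        else:
            high = mid
    return low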

You can download the paper from the Annals of Statistics (here) or from the HAL online open repository* (here).

Bibtex:
@article{CaGaMaMuSt2013,
AUTHOR = {Olivier Capp\'{e} and Aur\'{e}lien Garivier and Odalric-Ambrym Maillard and R\'{e}mi Munos and Gilles Stoltz},
TITLE = {Kullback–Leibler upper confidence bounds for optimal sequential allocation},
JOURNAL = {Ann. Statist.},
FJOURNAL = {Annals of Statistics},
YEAR = {2013},
VOLUME = {41},
NUMBER = {3},
PAGES = {1516--1541},
ISSN = {0090-5364},
DOI = {10.1214/13-AOS1119} }
Related Publications:
Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences.
Odalric-Ambrym Maillard, Gilles Stoltz, Rémi Munos.
In Proceedings of the 24th annual Conference On Learning Theory, COLT, 2011.

Hierarchical Optimistic Region Selection driven by Curiosity.

2012, Discussing articles

Odalric-Ambrym Maillard.
In Proceedings of the 25th Conference on Advances in Neural Information Processing Systems, NIPS '12, 2012.

[Download]

Abstract:

This paper aims to take a step towards making the term "intrinsic motivation" from reinforcement learning theoretically well founded, focusing on curiosity-driven learning. To that end, we consider the setting where, a fixed partition \P of a continuous space \X being given, and a process \nu defined on \X being unknown, we are asked to sequentially decide which cell of the partition to select as well as where to sample \nu in that cell, in order to minimize a loss function inspired by previous work on curiosity-driven learning. The loss on each cell consists of one term measuring a simple worst-case quadratic sampling error, and a penalty term proportional to the range of the variance in that cell. The corresponding problem formulation extends the setting known as active learning for multi-armed bandits to the case where each arm is a continuous region, and we show how an adaptation of recent algorithms for that problem and of hierarchical optimistic sampling algorithms for optimization can be used to solve this problem. The resulting procedure, called Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C), is provided together with a finite-time regret analysis.

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@inproceedings{Maillard12,
title ={Hierarchical Optimistic Region Selection driven by Curiosity},
author={Odalric-Ambrym Maillard},
booktitle = {Advances in Neural Information Processing Systems 25},
editor = {P. Bartlett and F.C.N. Pereira and C.J.C. Burges and L. Bottou and K.Q. Weinberger},
pages = {1457--1465},
year = {2012}
}

Online allocation and homogeneous partitioning for piecewise constant mean-approximation.

2012, Discussing articles

Alexandra Carpentier, Odalric-Ambrym Maillard.
In Proceedings of the 25th Conference on Advances in Neural Information Processing Systems, NIPS '12, 2012.

[Download]

Abstract:

In the setting of active learning for the multi-armed bandit, where the goal of a learner is to estimate with equal precision the mean of a finite number of arms, recent results show that it is possible to derive strategies based on finite-time confidence bounds that are competitive with the best possible strategy. We here consider an extension of this problem to the case when the arms are the cells of a finite partition P of a continuous sampling space X \subset \Real^d. Our goal is now to build a piecewise constant approximation of a noisy function (where each piece is one region of P and P is fixed beforehand) in order to maintain the local quadratic error of approximation on each cell equally low. Although this extension is not trivial, we show that a simple algorithm based on upper confidence bounds can be proved to be adaptive to the function itself in a near-optimal way, when |P| is chosen to be of minimax-optimal order on the class of \alpha-Hölder functions.
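
As a rough illustration of the allocation principle (not the paper's exact index), one can keep an optimistic estimate of the variance inside each cell and sample next wherever the estimated quadratic error per sample is currently largest; the Hoeffding-style confidence width in the Python sketch below is an assumption made for the sketch, not the bound used in the paper.

import math

def next_cell(counts, emp_vars, t, delta=0.05):
    # counts[i]   : number of samples drawn so far in cell i
    # emp_vars[i] : empirical variance of the samples in cell i
    # Returns the index of the cell to sample next.
    best, best_score = 0, -float("inf")
    for i, (n, v) in enumerate(zip(counts, emp_vars)):
        if n < 2:
            return i  # sample barely-explored cells first
        bonus = math.sqrt(2 * math.log(t / delta) / n)  # generic confidence width
        score = (v + bonus) / n  # optimistic quadratic error after n samples
        if score > best_score:
            best, best_score = i, score
    return best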

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@inproceedings{CarpentierMaillard12,
title = {Online allocation and homogeneous partitioning for piecewise constant mean-approximation.},
author = {Carpentier, Alexandra and Maillard, Odalric-Ambrym},
booktitle = {Advances in Neural Information Processing Systems 25},
editor = {P. Bartlett and F.C.N. Pereira and C.J.C. Burges and L. Bottou and K.Q. Weinberger},
pages = {1970--1978},
year = {2012}
}

Apprentissage Séquentiel : Bandits, Statistique et Renforcement.

2011, Discussing articles

Odalric-Ambrym Maillard.
PhD thesis, Université de Lille 1, October 2011.
[AFIA PhD Prize 2012]

[Download]

Abstract:

This thesis studies the following topics in Machine Learning: Bandit theory, Statistical learning and Reinforcement learning. The common underlying thread is the non-asymptotic study of various notions of adaptation: to an environment or an opponent in part I about bandit theory, to the structure of a signal in part II about statistical theory, to the structure of states and rewards or to some state-model of the world in part III about reinforcement learning.
First, we derive a non-asymptotic analysis of a Kullback-Leibler-based algorithm for the stochastic multi-armed bandit that makes it possible to match, in the case of distributions with finite support, the asymptotic distribution-dependent lower bound known for this problem. Then, for a multi-armed bandit with a possibly adaptive opponent, we introduce history-based models to capture some weakness of the opponent, and show how one can benefit from such models to design algorithms adaptive to this weakness.
Then we contribute to the regression setting and show how the use of random matrices can be beneficial both theoretically and numerically when the considered hypothesis space has a large, possibly infinite, dimension. We also use random matrices in the sparse recovery setting to build sensing operators that allow for recovery when the basis is far from being orthogonal.
Finally, we combine parts I and II to first provide a non-asymptotic analysis of reinforcement learning algorithms such as Bellman-residual minimization and a version of least-squares temporal difference that uses random projections, and then, upstream of the Markov Decision Problem setting, discuss the practical problem of choosing a good model of states.

You can download my Ph.D. manuscript from the University website (here).

Bibtex:
@phdthesis{maillard2011apprentissage,
title={APPRENTISSAGE S{\'E}QUENTIEL: Bandits, Statistique et Renforcement.},
author={Maillard, Odalric-Ambrym},
year={2011},
school={Universit{\'e} des Sciences et Technologie de Lille -- Lille I}
}

Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences.

2011, Discussing articles

Odalric-Ambrym Maillard, Gilles Stoltz, Rémi Munos.
In Proceedings of the 24th Annual Conference on Learning Theory, COLT 2011.

[Download]

Abstract:

We consider a Kullback-Leibler-based algorithm for the stochastic multi-armed bandit problem in the case of distributions with finite supports (not necessarily known beforehand), whose asymptotic regret matches the lower bound of Burnetas and Katehakis (1996). Our contribution is to provide a finite-time analysis of this algorithm; we get bounds whose main terms are smaller than those of previously known algorithms with finite-time analyses (like UCB-type algorithms).

You can download the paper from the JMLR website (here) or from the HAL online open repository* (here).

Bibtex:
@INPROCEEDINGS{MaiMunSto2011KL,
author = {Odalric-Ambrym Maillard and R{\'e}mi Munos and Gilles Stoltz},
title = {Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler
Divergences},
booktitle = {Proceedings of the 24th annual Conference On Learning Theory},
year = {2011},
series = {COLT '11},
keywords = {Bandit Theory},
location = {Budapest, Hungary},
numpages = {13}
}

Adaptive bandits: Towards the best history-dependent strategy

2011, Discussing articles

Odalric-Ambrym Maillard, Rémi Munos.
In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, AI&Statistics 2011, volume 15 of JMLR W&CP, 2011.

[Download]

Abstract:

We consider multi-armed bandit games with possibly adaptive opponents. We introduce models Theta of constraints based on equivalence classes on the common history (information shared by the player and the opponent), which define two learning scenarios: (1) the opponent is constrained, i.e. she provides rewards that are stochastic functions of equivalence classes defined by some model theta* in Theta; the regret is measured with respect to (w.r.t.) the best history-dependent strategy. (2) The opponent is arbitrary and we measure the regret w.r.t. the best strategy among all mappings from classes to actions (i.e. the best history-class-based strategy) for the best model in Theta. This makes it possible to model opponents (case 1) or strategies (case 2) that handle finite memory, periodicity, standard stochastic bandits and other situations. When Theta={theta}, i.e. only one model is considered, we derive tractable algorithms achieving a tight regret (at time T) bounded by Õ(\sqrt{TAC}), where C is the number of classes of theta. Now, when many models are available, all known algorithms achieving a nice regret O(\sqrt{T}) are unfortunately not tractable and scale poorly with the number of models |Theta|. Our contribution here is to provide tractable algorithms with regret bounded by T^{2/3}C^{1/3}log(|Theta|)^{1/2}.
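
To make the notion of a history-class-based strategy concrete, here is a minimal Python sketch for the single-model case Theta={theta}: the shared history is mapped to its equivalence class and one independent index policy (a generic UCB1, chosen here purely for illustration) is run per class. The paper's tractable algorithms, in particular for the many-models case, are more involved.

import math

class PerClassUCB:
    # One UCB1 instance per history equivalence class, for a single known
    # model theta.  UCB1 is only a stand-in for a per-class strategy.
    def __init__(self, n_arms, class_of_history):
        self.n_arms = n_arms
        self.class_of = class_of_history  # maps a history to a class id
        self.counts = {}                  # (class, arm) -> number of pulls
        self.sums = {}                    # (class, arm) -> cumulated reward
        self.rounds = {}                  # class -> number of rounds seen

    def select(self, history):
        c = self.class_of(history)
        self.rounds[c] = self.rounds.get(c, 0) + 1
        best_arm, best_index = 0, -float("inf")
        for a in range(self.n_arms):
            n = self.counts.get((c, a), 0)
            if n == 0:
                return a  # pull every arm once within each class
            index = self.sums[(c, a)] / n + math.sqrt(2 * math.log(self.rounds[c]) / n)
            if index > best_index:
                best_arm, best_index = a, index
        return best_arm

    def update(self, history, arm, reward):
        c = self.class_of(history)
        self.counts[(c, arm)] = self.counts.get((c, arm), 0) + 1
        self.sums[(c, arm)] = self.sums.get((c, arm), 0.0) + reward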

You can download the paper from the JMLR website (here) or from the HAL online open repository* (here).

Bibtex:
@inproceedings{MaillardM11,
author = {Odalric{-}Ambrym Maillard and
R{\'{e}}mi Munos},
title = {Adaptive Bandits: Towards the best history-dependent strategy},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, {AISTATS} 2011, Fort Lauderdale, USA, April 11-13, 2011},
year = {2011},
pages = {570--578},
editor = {Geoffrey J. Gordon and David B. Dunson and Miroslav Dud{\'{\i}}k},
series = {{JMLR} Proceedings},
volume = {15}
}