Please follow this link for an up-to-date version of this site:

http://odalricambrymmaillard.neowordpress.fr

On these pages, you will find information about my research activities in the broad fields of Mathematics > Statistical Theory and Computer Science > Machine Learning. If you prefer a more visual version of this site, follow this link:

http://odalricambrymmaillard.neowordpress.fr

You may want to read and comment on my publications, attend the Séminaires d’Apprentissage et de Statistique de l’Université Paris-Saclay, subscribe to the Probability and Statistics news mailing list, or follow many other interesting links.

If you are a student looking for a research internship, please read this page.

If you are interested in actively supporting academic research in France, you may ask your university to open a “Travail de Communication de la Recherche” (T.C.R.), a Teaching Unit (Unité d’Enseignement) in which students practice communicating research activities.

🙂 Have a good day. 🙂

In case you

- believe that understanding the dynamics of complex systems, and how to act optimally in them, can have a huge positive impact on every aspect of human society that requires careful management of natural, energetic, human and computational resources, and that it is thus our duty to address this challenge,
- consider that, given the limits of human capabilities for processing large amounts of data, we should pursue the long-term development of an optimal and automatic method that can, from mere observations of and interactions with a complex system, understand its dynamics and learn how to act optimally in it,
- want to attack this problem using any combination of the following four pillar domains: Machine Learning, Mathematical Statistics, Dynamical Systems and Optimization,

then do not hesitate to contact me; I will be very happy to help you achieve this goal.

##### Workshops

2014 NIPS Workshop “From Bad Models to Good Policies” (Sequential Decision Making under Uncertainty)

##### Conference articles

**“How hard is my MDP?” Distribution-norm to the rescue**, Odalric-Ambrym Maillard, Timothy A. Mann and Shie Mannor, in *Proceedings of the 27th conference on advances in Neural Information Processing Systems (NIPS)*, 2014.

**Selecting Near-Optimal Approximate State Representations in Reinforcement Learning**, R. Ortner, O.-A. Maillard and D. Ryabko, in *Proceedings of the International Conference on Algorithmic Learning Theory (ALT)*, 2014.

**Sub-sampling for multi-armed bandits**, A. Baransi, O.-A. Maillard and S. Mannor, in *European Conference on Machine Learning (ECML)*, 2014.

**Latent bandits**, O.-A. Maillard and S. Mannor, in Proceedings of the International Conference on Machine Learning (ICML), 2013.

**Robust risk-averse stochastic multi-armed bandits**, O.-A. Maillard, in Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors, Proceedings of the International Conference on Algorithmic Learning Theory (ALT), volume 8139 of Lecture Notes in Computer Science, pages 218–233. Springer Berlin Heidelberg, 2013.

**Competing with an infinite set of models in reinforcement learning**, P. Nguyen, O.-A. Maillard, D. Ryabko, and R. Ortner, in Proceedings of the International Conference on Artificial Intelligence and Statistics (AI&STATS), volume 31 of JMLR W&CP , pages 463–471, Arizona, USA, 2013.

**Optimal regret bounds for selecting the state representation in reinforcement learning**, O.-A. Maillard, P. Nguyen, R. Ortner, and D. Ryabko. In Proceedings of the International conference on Machine Learning (ICML), volume 28 of JMLR W&CP, pages 543–551, Atlanta, USA, 2013.

**Hierarchical optimistic region selection driven by curiosity**, O.-A. Maillard, in P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Proceedings of the conference on advances in Neural Information Processing Systems 25 (NIPS), pages 1457–1465, 2012.

**Online allocation and homogeneous partitioning for piecewise constant mean-approximation**, A. Carpentier and O.-A. Maillard, in P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Proceedings of the conference on advances in Neural Information Processing Systems 25 (NIPS), pages 1970–1978, 2012.

**Finite-time analysis of multi-armed bandits problems with Kullback-Leibler divergences**, O.-A. Maillard, R. Munos, and G. Stoltz, in Proceedings of the 24th annual Conference On Learning Theory (COLT), 2011.

**Selecting the state-representation in reinforcement learning**, O.-A. Maillard, D. Ryabko, and R. Munos, in Proceedings of the 24th conference on advances in Neural Information Processing Systems (NIPS), pages 2627–2635, 2011.

**Sparse recovery with Brownian sensing**, A. Carpentier, O.-A. Maillard and R. Munos, in Proceedings of the 24th conference on advances in Neural Information Processing Systems (NIPS), 2011.

**Online learning in adversarial Lipschitz environments**, O.-A. Maillard and R. Munos, in Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, (ECML-PKDD), pages 305–320, Berlin, Heidelberg, 2010. Springer-Verlag.

**Finite sample analysis of Bellman residual minimization**, O.-A. Maillard, R. Munos, A. Lazaric, and M. Ghavamzadeh, in Proceedings of the Asian Conference on Machine Learning (ACML), 2010.

**Adaptive bandits: Towards the best history-dependent strategy**, O.-A. Maillard and R. Munos, in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AI&STATS), volume 15 of JMLR W&CP, 2011.

**Scrambled objects for least-squares regression**, O.-A. Maillard and R. Munos, in J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Proceedings of the 23rd conference on advances in Neural Information Processing Systems (NIPS), NIPS ’10, pages 1549–1557, 2010.

**LSTD with random projections**, M. Ghavamzadeh, A. Lazaric, O.-A. Maillard, and R. Munos, in J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Proceedings of the 23rd conference on advances in Neural Information Processing Systems (NIPS), pages 721–729, 2010.

**Compressed least-squares regression**, O.-A. Maillard and R. Munos, in Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Proceedings of the 22nd conference on advances in Neural Information Processing Systems (NIPS), pages 1213–1221, 2009.

**Complexity versus Agreement for Many Views**, O.-A. Maillard and N. Vayatis in Proceedings of the International Conference on Algorithmic Learning Theory (ALT), 2009.

##### Journal articles

**Concentration inequalities for sampling without replacement**, R. Bardenet and O.-A. Maillard, in Bernoulli, 2014 (In press).

**Kullback–Leibler upper confidence bounds for optimal sequential allocation**, O. Cappé, A. Garivier, O.-A. Maillard, R. Munos and G. Stoltz, in The Annals of Statistics, 41(3):1516–1541, 2013.

**Linear Regression with Random Projections**, O.-A. Maillard and R. Munos, in Journal of Machine Learning Research (JMLR), 13:2735–2772, 2012.

##### Workshop articles

**Parallelization of the TD(λ) Learning Algorithm**, O.-A. Maillard, R. Coulom and P. Preux in European Workshop on Reinforcement Learning (EWRL), 2005.