Sparse recovery with Brownian sensing.

2011, Discussing articles

Alexandra Carpentier, Odalric-Ambrym Maillard, Rémi Munos.
In Proceedings of the 24th Conference on Advances in Neural Information Processing Systems (NIPS ’11), 2011.

[Download]

Abstract:

We consider the problem of recovering the parameter α ∈ R^K of a sparse function f (i.e. the number of non-zero entries of α is small compared to the number K of features) given noisy evaluations of f at a set of well-chosen sampling points. We introduce an additional randomization process, called Brownian sensing, based on the computation of stochastic integrals, which produces a Gaussian sensing matrix for which good recovery properties are proven, independently of the number of sampling points N, even when the features are arbitrarily non-orthogonal. Under the assumption that f is Hölder continuous with exponent at least 1/2, we provide an estimate â of the parameter such that ||α − â||_2 = O(||η||_2 / sqrt(N)), where η is the observation noise. The method uses a set of sampling points uniformly distributed along a one-dimensional curve selected according to the features. We report numerical experiments illustrating our method.
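
As a rough illustration of the construction described in the abstract, the following Python/NumPy snippet is a minimal sketch, not the authors' exact procedure: the features, problem sizes, noise level and the l1 solver (a plain ISTA loop) are all assumptions made for the example. It approximates the stochastic integrals of the features and of the noisy observations against M independent Brownian motions by Riemann sums, which yields a Gaussian-like sensing matrix, and then recovers the sparse parameter by l1 minimization.

import numpy as np

rng = np.random.default_rng(0)

K, S, N, M = 50, 3, 200, 30   # features, sparsity, sampling points, Brownian motions
sigma = 0.01                  # observation-noise level (assumed for the example)

# Non-orthogonal features evaluated at N points of the one-dimensional curve [0, 1].
t = np.linspace(0.0, 1.0, N)
features = np.array([np.cos(2 * np.pi * (k + 1) * t) + 0.5 for k in range(K)]).T  # N x K

# Sparse parameter alpha and noisy evaluations y_n = f(t_n) + eta_n.
alpha = np.zeros(K)
alpha[rng.choice(K, S, replace=False)] = rng.normal(size=S)
y = features @ alpha + sigma * rng.normal(size=N)

# Brownian increments dB_m over the N sampling intervals; the stochastic integrals
# of phi_k and of f against each B_m are approximated by Riemann sums over the samples.
dB = rng.normal(scale=np.sqrt(1.0 / N), size=(M, N))
A = dB @ features   # M x K Gaussian-like sensing matrix
b = dB @ y          # M observations in the sensing domain

# Sparse recovery by l1 minimization, here via plain iterative soft-thresholding (ISTA).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the quadratic part
x = np.zeros(K)
for _ in range(5000):
    z = x - A.T @ (A @ x - b) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("l2 recovery error:", np.linalg.norm(x - alpha))

Any l1 solver (basis pursuit, LASSO) could replace the ISTA loop; the point of the sketch is only the construction of A and b from Brownian increments.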

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{CarpentierMM11,
title = {Sparse Recovery with Brownian Sensing},
author = {Alexandra Carpentier and Odalric-Ambrym Maillard and R\'{e}mi Munos},
booktitle = {Advances in Neural Information Processing Systems 24},
editor = {J. Shawe-Taylor and R.S. Zemel and P.L. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {1782--1790},
year = {2011},
publisher = {Curran Associates, Inc.}
}

Compressed Least Squares Regression

2009, Discussing articles

[see  Linear regression with random projections, 2012 for corrections]

Odalric-Ambrym Maillard, Rémi Munos.
In NIPS ’09, pages 1213–1221, 2009.

[Download]

Abstract:

We consider the problem of learning, from K data points, a regression function in a linear space of high dimension N using projections onto a random subspace of lower dimension M. From any algorithm minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate computed in the projected subspace (compressed domain) in terms of the excess risk of the estimate built in the high-dimensional space (initial domain). We show that solving the problem in the compressed domain instead of the initial domain reduces the estimation error at the price of an increased (but controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting “Compressed Least Squares Regression” (CLSR) in terms of N, K, and M. When we choose M = O(\sqrt{K}), we show that CLSR has an estimation error of order O(\log K / \sqrt{K}) and provides an interesting alternative to usual penalization techniques: one first selects a random subspace of lower dimension and then performs empirical risk minimization in this subspace.
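
The following Python/NumPy sketch illustrates the scheme on assumed synthetic data (the feature matrix, noise level and problem sizes are made up for the example; this is not the paper's experimental setup): the N-dimensional features are projected onto a random M-dimensional subspace with M of order sqrt(K), and ordinary least squares is then run in the compressed domain.

import numpy as np

rng = np.random.default_rng(0)

K, N = 400, 1000          # number of data points, dimension of the initial domain
M = int(np.sqrt(K))       # compressed dimension, M = O(sqrt(K))

# Synthetic regression problem in the initial (high-dimensional) domain.
Phi = rng.normal(size=(K, N))           # feature matrix
w = rng.normal(size=N) / np.sqrt(N)     # underlying parameter
y = Phi @ w + 0.1 * rng.normal(size=K)  # noisy targets

# Random projection of the features onto the compressed domain.
G = rng.normal(scale=1.0 / np.sqrt(M), size=(N, M))
Phi_c = Phi @ G   # K x M compressed features

# Ordinary least squares in the compressed domain (empirical risk minimization).
beta, *_ = np.linalg.lstsq(Phi_c, y, rcond=None)

# The compressed estimate predicts via the projected features.
y_hat = Phi_c @ beta
print("empirical risk (training MSE):", np.mean((y_hat - y) ** 2))

Solving the compressed least-squares system costs roughly O(K M^2) instead of O(K N^2) in the initial domain, which is where the reduction in numerical complexity discussed in the abstract comes from.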

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{MaillardM09,
title = {Compressed Least-Squares Regression},
author = {Odalric Maillard and R\'{e}mi Munos},
booktitle = {Advances in Neural Information Processing Systems 22},
editor = {Y. Bengio and D. Schuurmans and J.D. Lafferty and C.K.I. Williams and A. Culotta},
pages = {1213--1221},
year = {2009},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/3698-compressed-least-squares-regression.pdf}
}