Linear regression with random projections.

2012, Discussing articles

Odalric-Ambrym Maillard, Rémi Munos.
In Journal of Machine Learning Research, volume 13, pages 2735–2772, 2012.

[Download]

Abstract:

We investigate a method for regression that makes use of a randomly generated subspace G_P (of finite dimension P) of a given large (possibly infinite) dimensional function space F, for example, L_{2}([0,1]^d). G_P is defined as the span of P random features that are linear combinations of basis functions of F weighted by random Gaussian i.i.d.~coefficients. We show practical motivation for the use of this approach, detail the link that this random projection method shares with RKHS and Gaussian objects theory, and prove, both in deterministic and random design, approximation error bounds when searching for the best regression function in G_P rather than in F, and derive excess risk bounds for a specific regression algorithm (least-squares regression in G_P). This paper stresses the motivation for studying such methods; the analysis is therefore kept simple for explanatory purposes and leaves room for future developments.
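
As a rough illustration of this construction, here is a minimal NumPy sketch that builds P random features as i.i.d. Gaussian combinations of a truncated cosine basis of L_2([0,1]) and then performs least-squares regression in their span. The basis choice, the dimensions and the target function below are illustrative assumptions, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)
F_DIM, P, N = 200, 20, 500          # truncated basis size, number of random features, samples

def basis(x):
    # phi_0, ..., phi_{F_DIM-1}: cosine basis functions evaluated at x (shape: len(x) x F_DIM)
    k = np.arange(F_DIM)
    return np.cos(np.pi * np.outer(x, k))

A = rng.standard_normal((P, F_DIM)) / np.sqrt(F_DIM)   # i.i.d. Gaussian mixing coefficients

def random_features(x):
    # psi_p(x) = sum_j A[p, j] * phi_j(x): the P random features spanning G_P
    return basis(x) @ A.T

# Noisy samples of an assumed target function on [0, 1]
x = rng.uniform(0.0, 1.0, N)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(N)

# Ordinary least squares over the P random features (regression in G_P)
Psi = random_features(x)
theta, *_ = np.linalg.lstsq(Psi, y, rcond=None)

x_test = np.linspace(0.0, 1.0, 5)
print(random_features(x_test) @ theta)   # predictions of the least-squares estimate in G_P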

You can download the paper from the JMLR website (here) or from the HAL online open repository* (here).

Bibtex:
@article{MaillardMunos12,
title = {{Linear Regression with Random Projections}},
author = {Maillard, Odalric-Ambrym and Munos, R{\'e}mi},
affiliation = {SEQUEL – INRIA Lille – Nord Europe},
pages = {2735-2772},
journal = {Journal of Machine Learning Research},
volume = {13},
year = {2012}
}
Related Publications:
Scrambled Objects for Least-Squares Regression.
Odalric-Ambrym Maillard, Rémi Munos.
In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Proceedings of the 23rd conference on Advances in Neural Information Processing Systems, NIPS ’10, pages 1549–1557, 2010.

Compressed Least Squares Regression.
Odalric-Ambrym Maillard, Rémi Munos.
In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Proceedings of the 22nd conference on Advances in Neural Information Processing Systems, NIPS ’09, pages 1213–1221, 2009.

Scrambled Objects for Least-Squares Regression.

2010, Discussing articles

Odalric-Ambrym Maillard, Rémi Munos.
In NIPS ’10, pages 1549–1557, 2010.

[Download]

Abstract:

We consider least-squares regression using a randomly generated subspace G_P \subset F of finite dimension P, where F is a function space of infinite dimension, e.g. L_2([0, 1]^d). G_P is defined as the span of P random features that are linear combinations of the basis functions of F weighted by random Gaussian i.i.d. coefficients. In particular, we consider multi-resolution random combinations at all scales of a given mother function, such as a hat function or a wavelet. In this latter case, the resulting Gaussian objects are called scrambled wavelets, and we show that they make it possible to approximate functions in Sobolev spaces H_s([0, 1]^d). As a result, given N data points, the least-squares estimate \hat g built from P scrambled wavelets has excess risk \|f^* − \hat g\|_P^2 = O(\|f^*\|^2_{H_s([0,1]^d)} (\log N)/P + P (\log N)/N) for target functions f^* \in H_s([0, 1]^d) of smoothness order s > d/2. An interesting aspect of the resulting bounds is that they do not depend on the distribution P from which the data are generated, which is important in the statistical regression setting considered here: randomization makes it possible to adapt to any possible distribution. We conclude by describing an efficient numerical implementation using lazy expansions with numerical complexity \tilde O(2^d N^{3/2} \log N + N^2), where d is the dimension of the input space.
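
The following one-dimensional NumPy sketch illustrates the scrambled multi-resolution idea: each random feature is an i.i.d. Gaussian combination of hat functions at all scales up to a truncation level J. The 2^{-js} scale weighting, the truncation level, the dimensions and the target function are illustrative assumptions, not the paper's exact construction or its lazy-expansion implementation.

import numpy as np

rng = np.random.default_rng(0)
J, P, N, s = 6, 40, 600, 1.0     # max resolution level, random features, samples, assumed scale weight

def hat(u):
    # triangular "hat" mother function supported on [0, 1]
    return np.maximum(0.0, 1.0 - np.abs(2.0 * u - 1.0))

def atoms(x):
    # all dilated/translated hat functions up to level J, evaluated at x
    cols = [np.ones_like(x)]
    for j in range(J + 1):
        for k in range(2 ** j):
            cols.append(2.0 ** (-j * s) * hat(2.0 ** j * x - k))
    return np.stack(cols, axis=1)

n_atoms = 1 + sum(2 ** j for j in range(J + 1))
A = rng.standard_normal((n_atoms, P)) / np.sqrt(n_atoms)   # i.i.d. Gaussian "scrambling" matrix

# Noisy samples of an assumed (non-smooth) target function on [0, 1]
x = rng.uniform(0.0, 1.0, N)
y = np.abs(x - 0.3) + 0.05 * rng.standard_normal(N)

Psi = atoms(x) @ A                                  # scrambled features at the data points
theta, *_ = np.linalg.lstsq(Psi, y, rcond=None)     # least-squares estimate in G_P

x_test = np.linspace(0.0, 1.0, 5)
print(atoms(x_test) @ A @ theta)                    # predictions of the estimate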

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{MaillardM10b,
title = {Scrambled Objects for Least-Squares Regression},
author = {Odalric Maillard and R\'{e}mi Munos},
booktitle = {Advances in Neural Information Processing Systems 23},
editor = {J.D. Lafferty and C.K.I. Williams and J. Shawe-Taylor and R.S. Zemel and A. Culotta},
pages = {1549--1557},
year = {2010},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/3973-scrambled-objects-for-least-squares-regression.pdf}
}

Compressed Least Squares Regression.

2009, Discussing articles

[see Linear regression with random projections, 2012 for corrections]

Odalric-Ambrym Maillard, Rémi Munos.
In NIPS ’09, pages 1213–1221, 2009.

[Download]

Abstract:

We consider the problem of learning, from K data points, a regression function in a linear space of high dimension N using projections onto a random subspace of lower dimension M. From any algorithm minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate computed in the projected subspace (compressed domain) in terms of the excess risk of the estimate built in the high-dimensional space (initial domain). We show that solving the problem in the compressed domain instead of the initial domain reduces the estimation error at the price of an increased (but controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting “Compressed Least Squares Regression” (CLSR) in terms of N, K, and M. When we choose M = O(\sqrt{K}), we show that CLSR has an estimation error of order O(\log K / \sqrt{K}) and provides an interesting alternative to usual penalization techniques, where one first selects a random subspace of lower dimension and then performs empirical risk minimization in this subspace.
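
A minimal NumPy sketch of the compressed-domain idea under illustrative assumptions (a truncated cosine feature map of dimension N and a toy target function): the K data points are regressed on M = O(\sqrt{K}) i.i.d. Gaussian random projections of the initial high-dimensional features.

import numpy as np

rng = np.random.default_rng(0)
K, N_FEAT = 2000, 500                       # data points, initial feature dimension N
M = int(np.sqrt(K))                         # compressed dimension, M = O(sqrt(K))

def features(x):
    # high-dimensional feature map in the initial domain (assumed: cosine basis on [0, 1])
    k = np.arange(N_FEAT)
    return np.cos(np.pi * np.outer(x, k))

# Noisy samples of an assumed target function
x = rng.uniform(0.0, 1.0, K)
y = np.exp(-3.0 * x) + 0.1 * rng.standard_normal(K)

Proj = rng.standard_normal((N_FEAT, M)) / np.sqrt(M)   # random projection (compression) matrix

Z = features(x) @ Proj                                  # features in the compressed domain
theta, *_ = np.linalg.lstsq(Z, y, rcond=None)           # least squares on M coordinates only

x_test = np.linspace(0.0, 1.0, 5)
print(features(x_test) @ Proj @ theta)                  # compressed-domain predictions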

You can download the paper from the NIPS website (here) or from the HAL online open repository* (here).

Bibtex:
@incollection{MaillardM09,
title = {Compressed Least-Squares Regression},
author = {Odalric Maillard and R\'{e}mi Munos},
booktitle = {Advances in Neural Information Processing Systems 22},
editor = {Y. Bengio and D. Schuurmans and J.D. Lafferty and C.K.I. Williams and A. Culotta},
pages = {1213--1221},
year = {2009},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/3698-compressed-least-squares-regression.pdf}
}