Section: New Results
The distribution of the Lasso: Uniform control over sparse balls and adaptive parameter tuning
This is joint work with Andrea Montanari. The Lasso is a popular regression method for high-dimensional problems in which the number of parameters, θ_1, …, θ_N, is larger than the number of samples: N > n. A useful heuristic relates the statistical properties of the Lasso estimator to those of a simple soft-thresholding denoiser, in a denoising problem in which the parameters θ_i are observed in Gaussian noise, with a carefully tuned variance. Earlier work confirmed this picture in the limit n, N → ∞, pointwise in the parameters θ, and in the value of the regularization parameter.
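To make the heuristic concrete, here is a minimal numerical sketch (not taken from the paper): the Lasso is solved by ISTA, a standard iterative soft-thresholding scheme, and its coordinates are compared with the soft-thresholding denoiser applied to the parameters observed in Gaussian noise. The problem sizes, sparsity level, regularization value lam, and denoiser parameters tau and zeta below are arbitrary placeholders, not the carefully tuned values characterized by the theory.

# Sketch of the Lasso / soft-thresholding heuristic (illustrative values only).
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding denoiser eta(x; t) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Solve min_w 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w + X.T @ (y - X @ w) / L, lam / L)
    return w

rng = np.random.default_rng(0)
n, N = 300, 600                              # fewer samples than parameters
theta = rng.choice([0.0, 1.0], size=N, p=[0.9, 0.1])    # sparse signal
X = rng.standard_normal((n, N)) / np.sqrt(n)             # standard random design
y = X @ theta + 0.5 * rng.standard_normal(n)

theta_hat = lasso_ista(X, y, lam=0.1)

# Gaussian denoising model: observe theta in Gaussian noise of variance tau^2
# and apply the same denoiser with threshold zeta (placeholder values; the
# theory characterizes the correct tau, zeta via fixed-point equations).
tau, zeta = 0.7, 0.2
denoised = soft_threshold(theta + tau * rng.standard_normal(N), zeta)
# The prediction: the empirical distributions of theta_hat and `denoised`
# are close for the correctly tuned tau, zeta; compare e.g. their quantiles.
print(np.quantile(theta_hat, [0.25, 0.5, 0.75, 0.9]))
print(np.quantile(denoised, [0.25, 0.5, 0.75, 0.9]))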
Here, we consider a standard random design model and prove exponential concentration of the empirical distribution of the Lasso estimator around the prediction provided by the Gaussian denoising model. Crucially, our results are uniform with respect to θ belonging to ℓ_q balls, q ∈ [0, 1], and with respect to the regularization parameter. This allows us to derive sharp results for the performance of various data-driven procedures to tune the regularization.
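As one illustration of a data-driven tuning rule (a generic hold-out procedure, not necessarily one of the specific procedures analyzed in the paper), the regularization parameter can be chosen by minimizing prediction error on a validation split over a grid of candidate values; the grid and the split ratio below are arbitrary choices.

# Sketch of generic hold-out tuning of the regularization parameter.
import numpy as np

def tune_lambda(X, y, lambdas, solver, split=0.8, seed=0):
    """Pick the lambda minimizing squared prediction error on a held-out set."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    n_train = int(split * X.shape[0])
    tr, va = perm[:n_train], perm[n_train:]
    errors = []
    for lam in lambdas:
        w = solver(X[tr], y[tr], lam)        # fit the Lasso on the training split
        errors.append(np.mean((y[va] - X[va] @ w) ** 2))
    return lambdas[int(np.argmin(errors))]

# Example usage with the ISTA solver sketched above:
# lam_star = tune_lambda(X, y, np.geomspace(1e-3, 1.0, 20), lasso_ista)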
Our proofs make use of Gaussian comparison inequalities, and in particular of a version of Gordon's minimax theorem developed by Thrampoulidis, Oymak, and Hassibi, which controls the optimal value of the Lasso optimization problem. Crucially, we prove a stability property of the minimizer in Wasserstein distance, which allows us to characterize properties of the minimizer itself.
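Schematically, the comparison inequality at work can be stated as follows (a sketch of the convex Gaussian minimax theorem of Thrampoulidis, Oymak, and Hassibi, with measurability and compactness conditions omitted). For G ∈ R^{m×n} with i.i.d. standard Gaussian entries and independent standard Gaussian vectors g ∈ R^m, h ∈ R^n, define the primary and auxiliary values

\Phi(G) = \min_{w \in S_w} \max_{u \in S_u} \big( \langle u, G w \rangle + \psi(w, u) \big),
\qquad
\phi(g, h) = \min_{w \in S_w} \max_{u \in S_u} \big( \|w\|_2 \langle g, u \rangle + \|u\|_2 \langle h, w \rangle + \psi(w, u) \big).

Then \mathbb{P}(\Phi(G) < c) \le 2\, \mathbb{P}(\phi(g, h) \le c) for every c, and when S_w, S_u are convex compact and \psi is convex-concave, also \mathbb{P}(\Phi(G) > c) \le 2\, \mathbb{P}(\phi(g, h) \ge c). Applied to the Lasso, this transfers concentration of the auxiliary value to the optimal value of the Lasso problem.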