

Section: Overall Objectives

Sparsity in Imaging

Sparse ℓ¹ regularization.

Besides image warping and registration in medical image analysis, a key problem in nearly all imaging applications is the reconstruction of high-quality data from low-resolution observations. This field, commonly referred to as “inverse problems”, is very often concerned with the precise location of features such as point sources (modeled as Dirac masses) or sharp contours of objects (modeled as gradients that are Dirac masses along curves). The underlying intuition behind these ideas is the so-called sparsity model (either of the data itself, of its gradient, or of more complicated representations such as wavelets, curvelets, bandlets [149] and learned representations [184]).
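In variational form, this amounts to a regularized inversion. The following generic formulation, in notation of our own (y, Φ, x₀, w, λ, J are illustrative placeholders, not taken from the cited works), underlies the rest of this section:

```latex
% Generic sparse inverse problem: recover x_0 from noisy observations y.
% \Phi is the forward (acquisition) operator, w the noise, \lambda > 0 the
% regularization strength, and J a sparsity-promoting regularizer
% (e.g. the \ell^1 norm, or the total variation of the image).
y = \Phi x_0 + w,
\qquad
\hat{x} \in \operatorname*{argmin}_{x} \;
  \tfrac{1}{2}\,\| y - \Phi x \|^2 + \lambda\, J(x).
```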

The huge interest in these ideas arose mostly from the introduction of convex methods serving as proxies for these sparse regularizations. The most well known is the ℓ¹ norm, introduced independently in imaging by Donoho and co-workers under the name “Basis Pursuit” [108] and in statistics by Tibshirani [175] under the name “Lasso”. A more recent resurgence of this interest dates back about ten years, to the introduction of the so-called “compressed sensing” acquisition techniques [90], which make use of randomized forward operators and ℓ¹-type reconstruction.
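As a concrete illustration, the Lasso (equivalently, Basis Pursuit denoising) can be solved by iterative soft-thresholding (ISTA). The sketch below uses a random Gaussian forward operator in the spirit of compressed sensing; all names and parameter values are illustrative choices of ours, not taken from [108], [175] or [90].

```python
# Minimal sketch of l1-regularized recovery (Lasso) via ISTA,
# with a randomized (compressed-sensing-style) forward operator.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, n_iter=500):
    """Solve min_x 0.5 * ||Phi @ x - y||^2 + lam * ||x||_1 by ISTA."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)       # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                        # measurements, unknowns, sparsity
Phi = rng.standard_normal((n, p)) / np.sqrt(n)   # randomized forward operator
x0 = np.zeros(p)
x0[rng.choice(p, s, replace=False)] = rng.standard_normal(s)   # sparse signal
y = Phi @ x0 + 0.01 * rng.standard_normal(n)                   # noisy data
x_hat = ista(Phi, y, lam=0.02)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
print("true support:     ", np.sort(np.nonzero(x0)[0]))
```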

Regularization over measure spaces.

However, the theoretical analysis of sparse reconstructions involving real-life acquisition operators (such as those found in seismic imaging, neuro-imaging, astrophysical imaging, etc.) is still mostly an open problem. A recent research direction, triggered by a paper of Candès and Fernandez-Granda [92], is to study directly the infinite-dimensional problem of reconstructing sparse measures (i.e. sums of Dirac masses) using the total variation of measures (not to be mistaken for the total variation of 2-D functions). Several works [91], [119], [116] have used this framework to provide theoretical performance guarantees, essentially by studying how the distance between neighboring spikes impacts noise stability.
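The prototypical problem in this line of work replaces the finite-dimensional unknown by a Radon measure; a sketch of the formulation, in generic notation of our own (this problem is often referred to as the BLASSO in the literature), reads:

```latex
% Sparse-measure recovery over the space \mathcal{M}(X) of Radon measures on X.
% |m|(X) denotes the total variation (total mass) of the measure m,
% and the sought-after measure is a sum of Dirac masses.
\min_{m \in \mathcal{M}(X)} \; \tfrac{1}{2}\,\| \Phi m - y \|^2 + \lambda\, |m|(X),
\qquad \text{seeking } m_0 = \sum_{i} a_i \, \delta_{x_i}.
```

Recovery guarantees for this problem typically depend on the minimal separation between the spike locations x_i, which is precisely the quantity studied in the works cited above.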

Figure 3. Two examples of application of the total variation regularization of functions. Left: image segmentation into homogeneous color regions. Right: image zooming (increasing the number of pixels while keeping the edges sharp).

Low complexity regularization and partial smoothness.

In image processing, one of the most popular methods is total variation regularization [169], [84]. It favors low-complexity images that are piecewise constant; see Figure 3 for examples of its use on image processing problems, and the sketch below for a minimal instance. Besides applications in image processing, sparsity-related ideas have also had a deep impact in statistics [175] and machine learning [48]. As a typical example, for applications to recommendation systems, it makes sense to consider sparsity of the singular values of matrices, which can be relaxed using the so-called nuclear norm (a.k.a. trace norm) [49]. The underlying methodology is to make use of low-complexity regularization models, which turn out to be equivalent to the use of partly smooth regularization functionals [141], [177] that enforce the solution to belong to a low-dimensional manifold.
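For concreteness, the following minimal sketch implements a standard instance of total variation regularization, the ROF denoising model min_u ½‖u − f‖² + λ TV(u), via Chambolle's dual projection algorithm; function names and parameter values are illustrative choices of ours, not taken from [169] or [84].

```python
# Minimal sketch of TV (ROF) denoising via Chambolle's dual projection algorithm.
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=0.1, n_iter=200, tau=0.125):
    """Solve min_u 0.5 * ||u - f||^2 + lam * TV(u); tau <= 1/8 ensures convergence."""
    px = np.zeros_like(f); py = np.zeros_like(f)   # dual variable p
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)  # projection onto ||p|| <= 1
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

# Usage: denoise a noisy piecewise-constant image; TV keeps the edges sharp.
rng = np.random.default_rng(0)
f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
u = tv_denoise(f + 0.2 * rng.standard_normal(f.shape), lam=0.15)
```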