

Section: Research Program

Self-Paced Learning with Missing Information

Many tasks in artificial intelligence are solved by building a model whose parameters encode the prior domain knowledge and the likelihood of the observed data. In order to use such models in practice, we need to estimate their parameters automatically from training data. The most prevalent paradigm of parameter estimation is supervised learning, which requires the collection of the inputs xi together with their desired outputs yi. However, such an approach has two main disadvantages. First, obtaining ground-truth annotations for high-level applications, such as a tight bounding box around every object present in an image, is often expensive. This prohibits the use of a large training dataset, which is essential for learning complex models. Second, in many applications, particularly in the field of medical image analysis, obtaining ground-truth annotations may not even be feasible. For example, experts may disagree on the correct segmentation of a microscopy image due to the similarity between the appearance of the foreground and the background.

In order to address these deficiencies of supervised learning, researchers have started to focus on the problem of parameter estimation with data that contains hidden variables. The hidden variables model the missing information in the annotations. Obtaining such data is far more practical: image-level labels (`contains car', `does not contain person') instead of tight bounding boxes, or partial segmentations of medical images. Formally, the parameters w of the model are learned by minimizing the following objective:

$$\min_{\mathbf{w} \in \mathcal{W}} \; R(\mathbf{w}) + \sum_{i=1}^{n} \Delta\bigl(y_i, y_i(\mathbf{w}), h_i(\mathbf{w})\bigr). \qquad (5)$$

Here, 𝒲 represents the space of all parameters, n is the number of training samples, R(·) is a regularization function, and Δ(·) is a measure of the difference between the ground-truth output yi and the predicted output and hidden variable pair (yi(𝐰),hi(𝐰)).
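
To make the structure of objective (5) concrete, the sketch below evaluates it for a toy linear model over a small discrete space of outputs and hidden variables. The feature map phi, the loss delta and the candidate sets are illustrative placeholders under assumed signatures, not the models developed in the team.

import numpy as np

# Sketch of objective (5) for a toy linear model: the prediction (y_i(w), h_i(w))
# is obtained by maximizing a linear score w . phi(x, y, h) over a small discrete
# set of candidate outputs and hidden variables. All names are illustrative.

def predict(w, x, labels, hiddens, phi):
    """Return the (y, h) pair maximizing the linear score under parameters w."""
    return max(((y, h) for y in labels for h in hiddens),
               key=lambda yh: w @ phi(x, yh[0], yh[1]))

def objective(w, data, labels, hiddens, phi, delta, lam=0.1):
    """R(w) + sum_i Delta(y_i, y_i(w), h_i(w)), with R(w) = lam * ||w||^2."""
    reg = lam * float(np.dot(w, w))
    loss = 0.0
    for x_i, y_i in data:
        y_pred, h_pred = predict(w, x_i, labels, hiddens, phi)
        loss += delta(y_i, y_pred, h_pred)
    return reg + loss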

Previous attempts at minimizing the above objective function treat all the training samples equally. This is in stark contrast to how a child learns: first focus on easy samples (`learn to add two natural numbers') before moving on to more complex ones (`learn to add two complex numbers'). In our work, we capture this intuition using a novel, iterative algorithm called self-paced learning (SPL). At iteration t, SPL minimizes the following objective function:

$$\min_{\mathbf{w} \in \mathcal{W},\, \mathbf{v} \in \{0,1\}^n} \; R(\mathbf{w}) + \sum_{i=1}^{n} v_i \Delta\bigl(y_i, y_i(\mathbf{w}), h_i(\mathbf{w})\bigr) - \mu_t \sum_{i=1}^{n} v_i. \qquad (6)$$

Here, samples with vi=0 are discarded during iteration t, since the corresponding loss is multiplied by 0. The term μt is a threshold that governs how many samples are discarded. It is annealed at each iteration, allowing the learner to estimate the parameters using more and more samples, until all samples are used. Our results already demonstrate that SPL estimates accurate parameters for various applications such as image classification, discriminative motif finding, handwritten digit recognition and semantic segmentation. We will investigate the use of SPL to estimate the parameters of the models for medical imaging applications, such as segmentation and registration, that are being developed in the GALEN team. The ability to handle missing information is extremely important in this domain due to the similarity between foreground and background appearances (which results in ambiguous annotations). We will also develop methods capable of minimizing more general loss functions that depend on the (unknown) value of the hidden variables, that is,
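
A minimal sketch of the alternation behind objective (6) is given below: for fixed w, the optimal vi simply selects the samples whose loss falls below the threshold μt, and the threshold is then annealed so that harder samples are admitted. The routines train and loss, the initial threshold and the annealing factor are assumed placeholders, not the team's implementation.

# Sketch of the self-paced loop in objective (6). For fixed w, the optimal
# v_i equals 1 exactly when Delta_i < mu_t, so the v-step reduces to selecting
# the "easy" samples; w is then re-estimated on that subset and mu_t is annealed.

def self_paced_learning(samples, train, loss, mu=1.0, growth=1.3, max_iters=20):
    w = train(samples)                      # initial estimate (e.g. on all samples)
    for _ in range(max_iters):
        selected = [s for s in samples if loss(w, s) < mu]   # v_i = 1 iff Delta_i < mu_t
        if selected:
            w = train(selected)             # re-estimate w on the easy samples only
        if len(selected) == len(samples):   # every sample is used: standard learning
            break
        mu *= growth                        # anneal the threshold to admit harder samples
    return w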

$$\min_{\mathbf{w} \in \mathcal{W},\, \theta \in \Theta} \; R(\mathbf{w}) + \sum_{i=1}^{n} \sum_{h_i} \Pr(h_i \mid x_i, y_i; \theta)\, \Delta\bigl(y_i, h_i, y_i(\mathbf{w}), h_i(\mathbf{w})\bigr). \qquad (7)$$

Here, θ is the parameter vector of the distribution of the hidden variables hi given the input xi and the output yi, and needs to be estimated together with the model parameters w. The use of a more general loss function will allow us to better exploit the freely available data with missing information. For example, consider the case where yi is a binary indicator for the presence of a type of cell in a microscopy image, and hi is a tight bounding box around the cell. While the loss function Δ(yi,yi(𝐰),hi(𝐰)) can be used to learn to classify an image as containing a particular cell or not, the more general loss function Δ(yi,hi,yi(𝐰),hi(𝐰)) can be used to learn to detect the cell as well (since hi models its location).
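
As an illustration of the loss term inside objective (7), the sketch below averages the general loss over a distribution on the hidden variable; hidden_posterior, predict and delta are hypothetical callables supplied by the model, introduced here only for the example.

# Sketch of the inner sum of objective (7): since the true hidden value h_i is
# unknown, the loss is taken in expectation under Pr(h_i | x_i, y_i; theta).

def expected_loss(w, theta, x_i, y_i, hiddens, hidden_posterior, predict, delta):
    y_pred, h_pred = predict(w, x_i)            # (y_i(w), h_i(w))
    probs = hidden_posterior(theta, x_i, y_i)   # one probability per candidate h in hiddens
    return sum(p * delta(y_i, h, y_pred, h_pred)
               for h, p in zip(hiddens, probs))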