Section: New Results
Machine Learning & Optimization
Participants: Andreas Argyriou, Matthew Blaschko, Pawan Kumar.
Sparse Prediction & Convex Optimization Decomposition [Andreas Argyriou]
In [36], we have introduced a new regularization penalty for sparse prediction, the k-support norm. This norm corresponds to the tightest convex relaxation of sparsity (cardinality at most k) combined with an ℓ2 penalty. We have shown that this new norm provides a tighter relaxation than the elastic net, and is thus a good replacement for the Lasso or the elastic net in sparse prediction problems. In [41], motivated by learning problems, we proposed a novel optimization algorithm for minimizing a convex objective which decomposes into three parts: a smooth part, a simple non-smooth Lipschitz part, and a simple non-smooth non-Lipschitz part.
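For intuition, the k-support norm interpolates between the ℓ1 norm (k = 1) and the ℓ2 norm (k equal to the dimension), and its dual norm has a simple closed form: the ℓ2 norm of the k largest-magnitude entries. A minimal sketch of that dual norm (an illustration only, not the code accompanying [36]):

```python
import numpy as np

def k_support_dual(u, k):
    # Dual of the k-support norm: the l2 norm of the k entries of u
    # with largest magnitude.
    top_k = np.sort(np.abs(u))[-k:]
    return float(np.linalg.norm(top_k))
```

For k = 1 this recovers the ℓ∞ norm (the dual of ℓ1), and for k equal to the dimension it recovers the ℓ2 norm (self-dual), reflecting the interpolation property of the primal norm.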
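The three-part algorithm of [41] generalizes proximal splitting methods. As a simpler, standard illustration of the splitting idea (a two-part smooth + non-smooth objective handled by proximal gradient, i.e. ISTA for the Lasso — not the algorithm of [41]):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (the simple non-smooth part).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    # Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1 by alternating a
    # gradient step on the smooth part with the prox of the non-smooth part.
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)                         # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # prox of the non-smooth part
    return w
```

The scheme requires only that the non-smooth term has an inexpensive proximal operator; the contribution of [41] is handling a second non-smooth term that need not be Lipschitz.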
Learning Optimization for NP-complete Inference [Matthew Blaschko]
In [14], an optimization strategy is given for learning to optimize Boolean satisfiability (SAT) solvers. Applications to real-world SAT problems show improved computational performance as a result of the learning algorithm.
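To fix ideas on where learning enters, a SAT solver repeatedly chooses which variable to branch on, and that choice largely determines running time; [14] learns such choices. A toy DPLL-style sketch with a pluggable, hand-coded branching heuristic (an illustration only, not the solver of [14]):

```python
from collections import Counter

def simplify(clauses, lit):
    # Assign lit = True: drop satisfied clauses, delete the negated literal.
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [x for x in c if x != -lit]
        if not reduced:
            return None  # empty clause: conflict under this assignment
        out.append(reduced)
    return out

def dpll(clauses, choose):
    # Clauses in DIMACS style: lists of nonzero ints, sign = polarity.
    if not clauses:
        return True
    var = choose(clauses)              # the branching heuristic
    for lit in (var, -var):
        reduced = simplify(clauses, lit)
        if reduced is not None and dpll(reduced, choose):
            return True
    return False

def most_frequent(clauses):
    # A simple hand-coded heuristic: branch on the most frequent variable.
    counts = Counter(abs(l) for c in clauses for l in c)
    return counts.most_common(1)[0][0]
```

Replacing `most_frequent` with a learned scoring function is the kind of intervention studied in [14], applied there to full-scale SAT solvers rather than this toy procedure.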
Max-Margin Min-Entropy Models & Dissimilarity Coefficient based Learning [Pawan Kumar]
In [22], we proposed the family of max-margin min-entropy (M3E) models, which predict a structured output for a given input by minimizing the Rényi entropy of a corresponding distribution over the missing information. The parameters of an M3E model are learned by minimizing an upper bound on a user-defined loss. We demonstrated the efficacy of M3E on two problems using publicly available datasets: motif finding and image classification. In [19], we proposed a novel structured prediction framework for weakly supervised datasets. The framework minimizes a dissimilarity coefficient between the predictor and a conditional distribution over the missing information. We demonstrated the efficacy of our approach on two problems using publicly available datasets: object detection and action detection.
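The Rényi entropy used by M3E models generalizes Shannon entropy through a parameter α, recovering Shannon entropy as α → 1 and min-entropy as α → ∞. A minimal sketch of the quantity itself (the M3E prediction rule of [22] is not reproduced here):

```python
import math

def renyi_entropy(p, alpha):
    # Renyi entropy of a discrete distribution p (natural log).
    if alpha == 1.0:
        # Limit alpha -> 1: Shannon entropy.
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    if math.isinf(alpha):
        # Limit alpha -> inf: min-entropy, -log of the largest probability.
        return -math.log(max(p))
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)
```

For the uniform distribution over n outcomes the value is log n for every α, which is a convenient sanity check.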
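The dissimilarity coefficient in [19] follows Rao's diversity-based construction. One common form, assuming a user-specified loss `delta` over outputs, compares the expected cross-loss between two distributions with their self-diversities (a sketch of the quantity only, not the learning algorithm of [19]):

```python
def diversity(p, q, delta):
    # Expected loss between independent samples of p and q:
    # sum_{i,j} p[i] * q[j] * delta(i, j).
    return sum(p[i] * q[j] * delta(i, j)
               for i in range(len(p)) for j in range(len(q)))

def dissimilarity(p, q, delta, gamma=0.5):
    # Rao's dissimilarity coefficient: cross-diversity minus a weighted
    # combination of the two self-diversities.
    return (diversity(p, q, delta)
            - gamma * diversity(p, p, delta)
            - (1.0 - gamma) * diversity(q, q, delta))
```

With gamma = 0.5 the coefficient vanishes when the two distributions coincide, which makes it a natural training objective for matching a predictor to a distribution over missing information.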