Section: Research Program

Optimization and Meta-optimization

TAO, with first-rank worldwide expertise in stochastic black-box optimization, has now been split into the new team RANDOPT and the present team TAU. While RANDOPT further investigates single- and multi-objective continuous stochastic optimization, TAU continues to focus on the fruitful hybridization of ML and stochastic optimization, with the dual perspectives of using ML for better-informed Optimization, and using Optimization to improve ML performance.

One long-term research perspective in the former context is to apprehend the black-box optimization (BBO) process as a sequential optimal decision process, along the lines of the learning-to-learn framework [58]. A policy that is effective in expectation can be trained on a representative set of benchmark problems, noting that comparison-based BBO methods offer good generalization thanks to their invariance properties, opening the road to Riemannian geometric approaches [11]. Another research perspective concerns interactive optimization, where the initially unknown optimization objective is gradually estimated from the feedback of the human in the loop, and tackled accordingly [62], [56]. This requires a trade-off between the optimization search space (rich enough to contain good solutions) and the preference search space (simple enough to support effective preference learning from a limited number of queries).
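The invariance property mentioned above can be illustrated with a textbook comparison-based method (not the team's own algorithm): a (1+1)-ES with the 1/5th success rule only ever uses the ordering of objective values, so its trajectory is unchanged by any strictly increasing transformation of the objective. A minimal sketch, with a made-up sphere objective and a monotone warping of it:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=200, seed=0):
    # Comparison-based (1+1)-ES with the 1/5th success rule.
    # Only the comparison f(y) <= f(x) is used, never the raw values,
    # so the search is invariant under any strictly increasing
    # transformation of the objective function.
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                 # comparison, not value
            x, fx = y, fy
            sigma *= 1.5             # success: widen the step
        else:
            sigma *= 1.5 ** (-0.25)  # failure: shrink (1/5th rule)
    return x

sphere = lambda x: sum(xi * xi for xi in x)
# A strictly increasing (monotone) transform of the sphere.
warped = lambda x: 2.0 ** sphere(x) - 7.0

x_a = one_plus_one_es(sphere, [3.0, -2.0], seed=42)
x_b = one_plus_one_es(warped, [3.0, -2.0], seed=42)
assert x_a == x_b  # identical trajectories: invariance in action
```

Because the number of random draws per iteration is fixed, both runs see the same mutations and accept the same offspring, hence the identical final points.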

On the other hand, the meta-optimization problem, concerned with selecting a nearly optimal algorithm and its hyper-parameters for the problem instance at hand, has been identified as a key issue in both ML [61], [59] and Optimization [71]. This issue becomes a bottleneck for the transfer to industry, due to the acknowledged shortage of data scientists and the increasing complexity of ML/Optimization toolboxes. A priori algorithm selection and calibration in ML is hindered by the lack of appropriate meta-features to describe a problem instance [10]; the state of the art thus relies on Bayesian optimization, which iteratively builds a surrogate model of algorithm performance on the instance at hand [82]. The search for meta-features can be revisited by exploiting latent representations derived from Collaborative Filtering [10] and Domain Adaptation approaches based on adversarial networks [68], [67].
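The surrogate-based loop can be caricatured in a few lines. The sketch below is not Bayesian optimization proper: instead of a Gaussian-process posterior it uses a nearest-neighbour predictor with a distance-based exploration bonus (a crude lower confidence bound), and the hyper-parameter response `toy_validation_error` is entirely made up. It only illustrates the alternation between fitting a cheap model of algorithm performance and querying the most promising configuration:

```python
def toy_validation_error(log_lr):
    # Hypothetical validation error as a function of log10(learning rate);
    # minimised at log_lr = -2.  Stands in for an expensive training run.
    return (log_lr + 2.0) ** 2 + 0.1

def surrogate_search(objective, candidates, budget=10, kappa=0.5):
    # Start from the two extreme configurations, then repeatedly query
    # the candidate whose optimistic surrogate prediction is lowest.
    evaluated = {}
    for c in (candidates[0], candidates[-1]):
        evaluated[c] = objective(c)
    while len(evaluated) < budget:
        def lcb(c):
            # Optimistic estimate: error at the nearest observed point,
            # minus an exploration bonus growing with the distance to it.
            dist, fx = min((abs(c - x), fx) for x, fx in evaluated.items())
            return fx - kappa * dist
        best_cand = min((c for c in candidates if c not in evaluated), key=lcb)
        evaluated[best_cand] = objective(best_cand)
    return min(evaluated, key=evaluated.get)

grid = [i / 2 - 5.0 for i in range(17)]   # log10(lr) in [-5, 3]
best = surrogate_search(toy_validation_error, grid, budget=10)
# With this toy response, 10 evaluations locate the optimum at log_lr = -2.
```

A real system would replace the nearest-neighbour predictor with a probabilistic surrogate (e.g. a Gaussian process or random forest) whose predictive uncertainty drives the exploration term.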

Note that Isabelle Guyon was the main organizer of the AutoML challenge, whose purpose was to automate the use of ML methods.