Section: New Results
Continuous Optimization
Participants: Ouassim Ait Elhara, Asma Atamna, Anne Auger, Alexandre Chotard, Nikolaus Hansen, Yann Ollivier, Marc Schoenauer, Michèle Sebag, Olivier Teytaud, Luigi Malago, Emmanuel Benazera.
Our main expertise in continuous optimization is in stochastic search algorithms. We address theory, algorithm design, and applications. The methods we investigate are adaptive techniques that iteratively learn the parameters of the distribution used to sample solutions. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays one of the most powerful methods for derivative-free continuous optimization. We work on different variants of CMA-ES to improve it in various contexts, as described below. We are well recognized in the field and were invited to write a book chapter on the design of continuous stochastic search algorithms [50].
- Online adaptation of CMA-ES hyperparameters
CMA-ES uses clever mechanisms, based on the evolution path, to adapt the covariance matrix and the step-size. These mechanisms in turn rely on learning parameters, which were adjusted by trial and error in the seminal algorithm; thanks to the invariance properties of the algorithm, these values have proven very robust. An original mechanism has been proposed to adapt these hyperparameters online, by maximizing the likelihood of the samples selected at time t with respect to the hyperparameters at time t-1. The corresponding paper, published at PPSN, received the Best Paper Award [36]. The flavor of likelihood-based online adaptation is sketched below.
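As a rough illustration, here is a minimal, hypothetical sketch: a toy (mu/mu, lambda)-ES whose step-size change factor is chosen at each iteration by maximizing the log-likelihood of the just-selected samples. The functions `ml_es` and `log_likelihood`, the candidate factors, and all constants are ours for illustration only; this is not the mechanism of [36].

```python
import numpy as np

def sphere(x):
    """Simple test function."""
    return float(np.dot(x, x))

def log_likelihood(points, mean, sigma):
    # log-density of N(mean, sigma^2 I) summed over the rows of `points`
    # (additive constants dropped)
    sq = np.sum((points - mean) ** 2, axis=1)
    return float(np.sum(-0.5 * sq / sigma**2 - mean.size * np.log(sigma)))

def ml_es(f, x0, sigma0, lam=10, mu=5, iters=200,
          factors=(0.8, 1.0, 1.25), seed=0):
    # Toy (mu/mu, lambda)-ES: the step-size change factor is the candidate
    # under which the just-selected samples are most likely (hypothetical
    # illustration, NOT the algorithm of [36]).
    rng = np.random.default_rng(seed)
    mean, sigma = np.asarray(x0, dtype=float), float(sigma0)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((lam, mean.size))
        sel = pop[np.argsort([f(x) for x in pop])[:mu]]  # truncation selection
        sigma *= max(factors, key=lambda c: log_likelihood(sel, mean, c * sigma))
        mean = sel.mean(axis=0)
    return mean

print(sphere(ml_es(sphere, np.ones(10), 1.0)))
```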
- Linear Time and Space Complexity CMA-ES for Large-Scale Optimization
We have proposed a large-scale version of CMA-ES in which the covariance matrix is restricted to a linear number of parameters. The update of the covariance matrix has been derived using the Information Geometric Optimization (IGO) framework, and cumulation concepts borrowed from the original CMA-ES have been included as well [14]. This work is part of a joint project between the TAO team and Shinshu University in Japan, funded by the Japanese government. In this context, Luigi Malago is visiting the team to work on extending the proposed algorithm to a richer model. The idea of a linearly parametrized covariance model is sketched below.
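For illustration, the following minimal sketch uses the simplest covariance model with a linear number of parameters, a diagonal one, updated with a natural-gradient-flavored rule in the spirit of IGO (close to SNES). The function `sep_es` and all constants are assumptions of ours; the model of [14] is richer than a diagonal matrix.

```python
import numpy as np

def sep_es(f, x0, lam=12, mu=6, iters=300, eta=0.2, seed=1):
    # ES with a diagonal covariance model: O(n) strategy parameters
    # instead of O(n^2); illustrative sketch only.
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    n = mean.size
    d = np.ones(n)                              # per-coordinate step-sizes
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                # recombination weights
    for _ in range(iters):
        z = rng.standard_normal((lam, n))
        x = mean + z * d                        # sample with diagonal covariance
        zsel = z[np.argsort([f(xi) for xi in x])[:mu]]
        mean = mean + d * (w @ zsel)
        # grow d_i where selected steps are larger than expected under
        # the current model, shrink otherwise (natural-gradient flavor)
        d *= np.exp(0.5 * eta * (w @ (zsel**2 - 1.0)))
    return mean

print(sep_es(lambda x: np.sum(100.0 * x[:5]**2 + x[5:]**2), np.ones(10)))
```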
- Evaluation of Black-Box Optimizers
We have focused on appraising the performance of step-size adaptation mechanisms for stochastic adaptive algorithms. We have shown that an overly restrictive choice of test functions during the design of a method leads to misleading conclusions, and proposed a thorough framework for evaluating step-size adaptation mechanisms [29]. We have pursued our effort towards thorough and rigorous benchmarking of black-box algorithms by organizing two more Black-Box Optimization Benchmarking workshops, which will take place at CEC 2015 and GECCO 2015. These workshops are based on the COCO platform, which we develop in the context of the ANR NumBBO project. The kind of measurement involved is sketched below.
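A self-contained sketch of such an evaluation: a (1+1)-ES with the classical one-fifth success rule, whose evaluations-to-target are recorded on two function classes. All names and constants below are ours for illustration, not part of the COCO platform.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, target=1e-8, budget=50_000, seed=0):
    # (1+1)-ES with the classical one-fifth success rule; returns the
    # number of evaluations used to reach the target, or None on failure.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, sigma, n = f(x), float(sigma0), x.size
    for evals in range(2, budget + 1):          # one evaluation spent on x0
        y = x + sigma * rng.standard_normal(n)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= np.exp(0.8 / n)            # expand on success ...
        else:
            sigma *= np.exp(-0.2 / n)           # ... shrink on failure
        if fx <= target:
            return evals
    return None

def sphere(x):
    return float(np.sum(x**2))

def ellipsoid(x):                               # condition number 1e6
    scales = 10.0 ** (6 * np.arange(x.size) / (x.size - 1))
    return float(np.sum(scales * x**2))

# the message of [29] in miniature: a rule that looks good on the sphere
# alone may still fail on ill-conditioned problems
for name, f in [("sphere", sphere), ("ellipsoid", ellipsoid)]:
    print(name, one_plus_one_es(f, np.ones(10)))
```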
- Theoretical Analysis of Stochastic Adaptive Algorithms
We have analyzed the CSA-ES algorithm with resampling for constrained optimization on a linear function with a linear constraint. We have studied the behavior of the algorithm and proven its success or failure depending on its internal parameters [22]; a simplified sketch of this setting is given below. We have extended a previous analysis on a linear function from the standard normal distribution to more general sampling distributions [23]. The published paper was invited for extension in an ECJ special issue; the extended version was submitted in December 2014. We have also provided a general methodology to prove linear convergence of comparison-based step-size adaptive randomized search on scaling-invariant functions, by analyzing the stability of the underlying Markov chains [57].
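A minimal sketch of the constrained setting, assuming a (mu/mu, lambda)-ES with standard-flavor CSA constants (the exact algorithm, constants, and constraint parametrization of [22] differ):

```python
import numpy as np

def csa_linear_constraint(theta=np.pi / 4, lam=10, mu=5, n=5, iters=500, seed=2):
    # (mu/mu, lambda)-ES with cumulative step-size adaptation (CSA),
    # minimizing f(x) = x[0] under the linear constraint
    # cos(theta)*x[0] + sin(theta)*x[1] >= 0; infeasible offspring resampled.
    rng = np.random.default_rng(seed)
    mean, sigma, path = np.zeros(n), 1.0, np.zeros(n)
    mean[0] = 1.0                                   # start strictly feasible
    c, d = 4.0 / (n + 4), 1.0 + np.sqrt(mu / n)     # standard-flavor constants
    chi_n = np.sqrt(n) * (1 - 1.0 / (4 * n))        # approximates E||N(0, I)||
    feasible = lambda x: np.cos(theta) * x[0] + np.sin(theta) * x[1] >= 0.0
    for _ in range(iters):
        zs = []
        while len(zs) < lam:                        # resampling: the mean stays
            z = rng.standard_normal(n)              # feasible (convexity), so the
            if feasible(mean + sigma * z):          # acceptance prob. is >= 1/2
                zs.append(z)
        zs = np.asarray(zs)
        zsel = zs[np.argsort(zs[:, 0])[:mu]]        # select on f(x) = x[0]
        zmean = zsel.mean(axis=0)
        mean = mean + sigma * zmean
        path = (1 - c) * path + np.sqrt(c * (2 - c) * mu) * zmean
        sigma *= np.exp((c / d) * (np.linalg.norm(path) / chi_n - 1))
    return mean[0], sigma

print(csa_linear_constraint())  # growth vs. collapse of sigma signals success/failure
```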
- CMA-ES Library
Besides our ongoing work on implementations of CMA-ES (see e.g. GitHub, PyPI), we have created a new library in C++11 (libcmaes). As part of the ANR SIMINOLE project, the library has been coupled with ROOT, the data analysis framework used at CERN and, more generally, in physics.
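For reference, a minimal usage example of the Python implementation distributed on PyPI (package `cma`, assuming a recent version of its API):

```python
# pip install cma
import cma

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.3)   # x0 in R^8, initial step-size 0.3
es.optimize(cma.ff.sphere)                      # built-in sphere test function
print(es.result.xbest, es.result.fbest)
```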