Section: New Results
Continuous Optimization
Participants : Ouassim Ait Elhara, Yohei Akimoto, Asma Atamna, Anne Auger, Alexandre Chotard, Nikolaus Hansen, Ilya Loshchilov, Yann Ollivier, Marc Schoenauer, Michèle Sebag, Olivier Teytaud.
Our expertise in continuous optimization is focused on stochastic search algorithms. We address theory, algorithm design, and applications. The methods we investigate are adaptive techniques that iteratively learn the parameters of the distribution used to sample new solutions. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays one of the most powerful methods for derivative-free continuous optimization. We work on different variants of CMA-ES to improve it in various contexts, as described below. We have previously proven the convergence of simplified variants of the CMA-ES algorithm using the theory of stochastic approximation, and have provided the first proofs of convergence on composites of twice continuously differentiable functions. More recently, we have used Markov chain analysis to analyze the step-size adaptation rules of evolution strategies related to the CMA-ES algorithm.
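To fix ideas, the sketch below shows the generic sample-rank-recombine loop that underlies such adaptive algorithms. It is a deliberately simplified illustration, not CMA-ES itself: the adapted covariance matrix and online step-size adaptation are replaced by isotropic sampling with a placeholder step-size schedule, and all names and constants are hypothetical.

```python
import numpy as np

def simplified_es(f, x0, sigma0=1.0, lam=10, iterations=300, seed=1):
    """Sample-rank-recombine loop of a comparison-based evolution strategy.

    Illustration only: CMA-ES additionally adapts a full covariance matrix
    and the step-size online; here sampling is isotropic and the step-size
    follows a fixed geometric schedule (placeholder).
    """
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    sigma = sigma0
    mu = lam // 2                            # number of selected samples
    weights = np.full(mu, 1.0 / mu)          # equal recombination weights
    for _ in range(iterations):
        z = rng.standard_normal((lam, mean.size))       # sample candidates
        candidates = mean + sigma * z
        order = np.argsort([f(x) for x in candidates])  # comparison-based ranking
        mean = mean + sigma * weights @ z[order[:mu]]   # weighted recombination
        sigma *= 0.98                                   # placeholder for adaptation
    return mean

# usage: minimize a sphere function in dimension 5
x_best = simplified_es(lambda x: float(np.dot(x, x)), x0=3.0 * np.ones(5))
```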
- Surrogate models for CMA-ES.
In the context of his PhD thesis defended in January 2013 [4], Ilya Loshchilov has proposed different surrogate variants of CMA-ES based on ranking SVMs that preserve the invariance of CMA-ES to monotonic transformations of the objective function. As a follow-up, he has proposed an original over-exploitation mechanism for the case of an accurate surrogate [44]. Several of these models have been benchmarked in the BBOB-2013 workshop [43]. A further research direction, using the KL divergence between successive search distributions as a trigger for a new surrogate learning phase, has been proposed [45].
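The pre-selection idea can be illustrated as follows: a surrogate is learned from previously evaluated points and used only to rank new candidates, so that the true (expensive) objective is evaluated on the most promising ones. The sketch below is a rough illustration with a generic regression surrogate standing in for the ranking SVM of the actual work; note that only a ranking model, unlike the regression stand-in here, preserves invariance under monotonic transformations of the objective. Function names and constants are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR  # stand-in surrogate; the actual work relies on a ranking SVM

def surrogate_preselection(f, candidates, archive_X, archive_y, keep_fraction=0.3):
    """Rank candidates with a surrogate, then evaluate only the best-ranked ones.

    Illustrative sketch: the surrogate's predictions are used solely to order
    the candidate array; the kept candidates are evaluated on the true objective f.
    """
    surrogate = SVR(kernel="rbf").fit(archive_X, archive_y)
    order = np.argsort(surrogate.predict(candidates))       # surrogate ranking only
    n_keep = max(1, int(round(keep_fraction * len(candidates))))
    chosen = candidates[order[:n_keep]]                      # most promising candidates
    true_values = np.array([f(x) for x in chosen])           # expensive evaluations
    return chosen, true_values
```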
- Step-size adaptive methods.
We have proposed a new step-size adaptation mechanism that can loosely be interpreted as a variant of the 1/5 success rule for comma (non-elitist) strategies and that is applicable with large population sizes [21]. The rule measures success by comparing the median fitness of the current population to a (different) fitness percentile of the previous population, as sketched below.
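The following is a loose sketch of this idea, not the exact update rule of [21]: the step-size is increased when the median fitness of the current population improves on a chosen fitness percentile of the previous population, and decreased otherwise. The percentile and damping constant are illustrative placeholders.

```python
import numpy as np

def median_success_update(sigma, current_fitness, previous_fitness,
                          percentile=30.0, damping=2.0):
    """Loose sketch of a median-success step-size update (minimization).

    Success: the median fitness of the current population is better than a
    chosen percentile of the previous population's fitness values.
    """
    success = np.median(current_fitness) < np.percentile(previous_fitness, percentile)
    sign = 1.0 if success else -1.0
    return sigma * np.exp(sign / damping)   # increase on success, decrease otherwise
```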
- Principles of Stochastic Optimization.
Based on the framework of Information-Geometric Optimization (IGO), theoretical guarantees have been obtained for continuous optimization algorithms: using the natural gradient provides improvement guarantees even with finite step sizes [22]. We have also considered the principles of designing effective stochastic optimization algorithms from both bottom-up and top-down perspectives [56]. The top-down perspective takes the information-geometric viewpoint and largely confirms the bottom-up construction.
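For reference, the generic IGO update of the distribution parameter, written here up to notational conventions, performs natural-gradient ascent on a rank-weighted log-likelihood of the sampled points:

```latex
% Generic IGO update (up to notational conventions)
\theta^{\,t+\delta t} \;=\; \theta^{\,t} \;+\; \delta t \,
  I(\theta^{\,t})^{-1} \sum_{i=1}^{\lambda} \widehat{w}_i \,
  \nabla_{\theta} \ln P_{\theta}(x_i) \Big|_{\theta=\theta^{\,t}}
```

where $\delta t$ is the step size (learning rate), $I(\theta)$ is the Fisher information matrix of the family $P_\theta$, and the weights $\widehat{w}_i$ depend only on the ranking of the objective values of the sampled points $x_i$, which makes the update invariant under monotonic transformations of the objective.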
- Benchmarking.
We have continued our effort to improve standards in benchmarking and pursued the development of the COCO (COmparing Continuous Optimizers) platform (see Section 5.4). We have organized the ACM GECCO 2013 workshop on Black-Box Optimization Benchmarking (see http://coco.gforge.inria.fr/doku.php?id=bbob-2013) and benchmarked different surrogate-based variants of the CMA-ES algorithm [26], [44], [43]. Our newly started ANR project NumBBO, centered on the COCO platform, aims at extending it to large-scale, expensive, constrained, and multi-objective optimization.
- Theoretical proofs of convergence.
We have established the connection between the convergence of comparison-based step-size adaptive randomized search and the stability analysis of underlying Markov chains. This connection heavily exploits the invariance properties of the algorithms. In a first paper, we establish this connection for scaling-invariant functions and prove sufficient conditions for linear convergence, expressed as stability conditions [63]. Using this methodology, we have proven the linear convergence of a well-known algorithm introduced independently by several researchers, the (1+1)-ES with one-fifth success rule [62]. In [32], we have proven the linear convergence of a modified evolutionary algorithm without assuming quasi-convexity.
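For clarity, linear convergence here means, stated informally with $x^*$ the optimum and $X_t$ the iterates, an almost sure geometric decrease of the distance to the optimum at some rate $\mathrm{CR} > 0$:

```latex
% Almost sure linear (geometric) convergence at rate CR > 0 (informal statement)
\lim_{t \to \infty} \frac{1}{t}
  \ln \frac{\lVert X_t - x^* \rVert}{\lVert X_0 - x^* \rVert} \;=\; -\,\mathrm{CR}
  \qquad \text{almost surely.}
```

Loosely speaking, the stability conditions on the underlying Markov chain are what allow a law-of-large-numbers argument to yield such a limit, with an analogous statement for the step-size sequence.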