Section: New Results

Continuous Optimization

  • Markov Chain Analysis of Evolution Strategies. The theory of Markov chains with discrete time and continuous state space turns out to be very useful for analysing the convergence of adaptive evolution strategies, including simplified versions of the state-of-the-art CMA-ES. Exploiting invariance properties of the objective function and of a wide variety of comparison-based optimisation algorithms, we have developed a general methodology to prove global linear convergence [4]. The constructed Markov chains also reveal the connection between comparison-based adaptive stochastic algorithms and Markov chain Monte Carlo algorithms. Furthermore, we have continued to work on new theoretical tools that exploit deterministic control models to prove the irreducibility and T-chain property of general Markov chains. These tools promise to simplify considerably some of the stability proofs for the Markov chains we are interested in analysing.
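
The kind of behaviour these proofs target can be illustrated on the simplest instance. The sketch below (illustrative only, not the analysed algorithms) runs a (1+1)-ES with 1/5th-success-rule step-size adaptation on the sphere function; the pair (x/||x||, sigma/||x||) is the normalized Markov chain whose stability underlies linear-convergence arguments, and the log of the distance to the optimum decreases at a roughly constant rate.

```python
import numpy as np

rng = np.random.default_rng(42)

def one_plus_one_es(dim=10, iters=4000):
    """(1+1)-ES with 1/5th-success-rule step-size adaptation on the
    sphere function f(x) = ||x||^2. Acceptance is comparison-based:
    only the ranking of f-values matters, not their magnitude."""
    f = lambda x: float(np.dot(x, x))
    x = rng.standard_normal(dim)
    sigma = 1.0
    hist = []
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(dim)
        if f(y) <= f(x):                        # success: accept, enlarge step
            x, sigma = y, sigma * np.exp(0.8 / dim)
        else:                                   # failure: shrink step
            sigma *= np.exp(-0.2 / dim)
        hist.append(np.linalg.norm(x))
    return np.array(hist)

h = one_plus_one_es()
# linear convergence: log ||x_t|| decreases at a near-constant rate
rate = (np.log(h[-1]) - np.log(h[1000])) / (len(h) - 1001)
print(rate)   # negative and roughly constant across the run
```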

  • Large-scale Optimisation Algorithms. We have been working on (improved) variants of CMA-ES with more favorable scaling properties with respect to the search-space dimension. While computing and using the natural gradient in appropriate subspaces turned out to be considerably more difficult than expected, we explored variants that restrict the covariance matrix via projection, the so-called VkD-CMA-ES [21]. We derived a computationally efficient way to update the restricted covariance matrix, where the richness of the model is controlled by an integer parameter k. This parameter provides a smooth transition between the case where only the diagonal elements are adapted and the case where the full covariance matrix is adapted; in the latter case, the update is equivalent to the original CMA-ES. To get rid of this control parameter, we proposed an adaptation of k which turns out to be surprisingly efficient [20].
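
A sketch of the underlying idea (the model family, not the published update rule): the covariance is restricted to C = D (I + V V^T) D with a diagonal D and a d-by-k matrix V, so the model has d(k+1) parameters instead of d(d+1)/2, and sampling costs O(dk) per draw because u + V w has covariance I + V V^T when u and w are standard normal.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_restricted(D, V, rng):
    """Draw z ~ N(0, C) with C = diag(D) (I + V V^T) diag(D) in O(d*k)
    time, without forming the d x d matrix C: for u ~ N(0, I_d) and
    w ~ N(0, I_k), the vector u + V w has covariance I + V V^T."""
    d, k = V.shape
    return D * (rng.standard_normal(d) + V @ rng.standard_normal(k))

d, k = 6, 2                          # d*(k+1) = 18 model parameters
D = 1.0 + rng.random(d)              # positive diagonal scaling
V = rng.standard_normal((d, k)) / np.sqrt(d)

# empirical check against the explicitly formed covariance matrix
C = np.diag(D) @ (np.eye(d) + V @ V.T) @ np.diag(D)
samples = np.stack([sample_restricted(D, V, rng) for _ in range(100_000)])
err = np.max(np.abs(np.cov(samples.T) - C))
print(err)   # only sampling error remains, no model mismatch
```

With k = 0 the model is purely diagonal; as k grows towards d, it can represent increasingly rich covariance matrices.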

  • Analysis of Lagrangian-based Constraint Handling in Evolution Strategies. We have addressed the question of linear convergence of evolution strategies on constrained optimisation problems with one linear constraint. Building on previous work, we considered an adaptive augmented Lagrangian approach for the simple (1+1)-ES [23] and for the CMA-ES [24]. By design, both algorithms derive from a framework with an underlying homogeneous Markov chain, which paves the way to proving linear convergence on a comparatively large class of functions. For the time being, the stability of the Markov chain, associated with linear convergence, has been shown empirically on convex-quadratic and ill-conditioned functions.
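
The flavour of the approach can be sketched as follows (a toy illustration, not the algorithms of [23], [24]; the damping in the multiplier update is an assumption made here for simplicity): a (1+1)-ES minimizes the augmented Lagrangian of a problem with one linear inequality constraint while the Lagrange multiplier is adapted alongside the search.

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x, f, g, gam, om):
    """Augmented Lagrangian for one inequality constraint g(x) <= 0
    with multiplier gam >= 0 and penalty parameter om > 0."""
    gx = g(x)
    if gam + om * gx >= 0:
        return f(x) + gam * gx + 0.5 * om * gx ** 2
    return f(x) - gam ** 2 / (2.0 * om)

def es_on_augmented_lagrangian(d=5, iters=3000, om=10.0):
    """(1+1)-ES with 1/5th success rule on the augmented Lagrangian of
    min ||x||^2 s.t. x[0] >= 1. The KKT point is x = (1, 0, ..., 0)
    with multiplier gam = 2."""
    f = lambda x: float(np.dot(x, x))
    g = lambda x: 1.0 - x[0]          # g(x) <= 0  <=>  x[0] >= 1
    x, sigma, gam = np.full(d, 2.0), 0.3, 0.0
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(d)
        if h(y, f, g, gam, om) <= h(x, f, g, gam, om):
            x, sigma = y, sigma * np.exp(0.8 / d)
        else:
            sigma *= np.exp(-0.2 / d)
        # illustrative damped multiplier update (assumption of this sketch)
        gam = max(0.0, gam + om * g(x) / (2.0 * d))
    return x, gam

x, gam = es_on_augmented_lagrangian()
print(x[0], gam)   # should approach the KKT values 1 and 2
```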

  • Benchmarking of Continuous Optimisers. We have been pursuing our efforts towards improving the standards in benchmarking of continuous optimisers [65], [66], [64]. Three new testbeds have been developed and implemented: (i) a bi-objective testbed [74], for which a corresponding performance assessment procedure has also been devised [62]; in this context, a new version of MO-CMA-ES has been developed and benchmarked on this testbed [44]; (ii) a large-scale testbed, as a straightforward extension of the standard testbed, based on a general methodology we have developed to construct non-trivial but scalable test functions [19]; (iii) a constrained testbed (unpublished).
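
Bi-objective performance assessment is commonly built on hypervolume-type quality indicators. As a minimal illustration (not the actual assessment procedure of [62]), the dominated hypervolume of a set of 2-D objective vectors with respect to a reference point can be computed with a single sweep:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D objective vectors (both objectives
    minimized) relative to a reference point. Dominated points contribute
    nothing; the sweep keeps only the nondominated front."""
    # keep only points that strictly dominate the reference point
    pts = sorted((a, b) for a, b in points if a < ref[0] and b < ref[1])
    hv, best_b = 0.0, ref[1]
    for a, b in pts:                 # sweep in increasing first objective
        if b < best_b:               # nondominated so far
            hv += (ref[0] - a) * (best_b - b)
            best_b = b
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 3.0)]  # last is dominated
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # → 12.0
```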