Section: New Results

Benchmarking and Understanding Optimizers

Benchmarking is an important task in optimization: it helps to understand the working principles behind existing solvers, to identify their weaknesses, and ultimately to recommend good ones. The COCO platform, developed in the Randopt team since 2007, aims at automating these numerical benchmarking experiments and the visual presentation of their results. We regularly use the platform to initiate workshop papers at the ACM-GECCO conference and also held a workshop this year (see numbbo.github.io/workshops/BBOB-2019/).
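As an illustration, the following minimal sketch shows how a benchmarking experiment could be run with the platform's Python interface (the cocoex and cocopp packages). The result folder name, the evaluation budget, and the simple random search used as solver are illustrative assumptions, not the team's actual experimental setup.

```python
# Minimal COCO experiment sketch: benchmark a simple random search on the
# bbob suite and post-process the recorded data. Budget, folder name and
# solver are illustrative choices only.
import numpy as np
import cocoex   # experimentation part of the COCO platform
import cocopp   # post-processing part (figures, tables)

def random_search(problem, budget):
    """Evaluate `budget` uniform samples in [-5, 5]^dimension."""
    for _ in range(budget):
        x = np.random.uniform(-5, 5, problem.dimension)
        problem(x)  # each evaluation is logged automatically by the observer

suite = cocoex.Suite("bbob", "", "")  # the 24 noiseless single-objective functions
observer = cocoex.Observer("bbob", "result_folder: random_search_demo")

for problem in suite:
    problem.observe_with(observer)          # attach the data logger
    random_search(problem, budget=100 * problem.dimension)
    problem.free()

# Generate the standard COCO plots and tables from the recorded data.
cocopp.main(observer.result_folder)
```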

In this context, members of the team published several workshop papers; we also proposed extensions of the platform and updated its documentation.

Two papers addressed single-objective unconstrained problems. One investigated the impact of the sample volume of a simple random search on the bbob test suite of COCO [2], while the other benchmarked all solvers available in Python's scipy.optimize module [9], rediscovering SLSQP as a very well-performing solver for small budgets.
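For reference, a scipy solver can be hooked into the same kind of experiment loop simply by calling it on the COCO problem. The sketch below shows SLSQP, the solver that performed very well for small budgets in [9]; the result folder name and iteration budget are hypothetical choices.

```python
# Sketch: benchmarking scipy's SLSQP on the bbob suite via cocoex.
# Result folder name and iteration budget are hypothetical choices.
import cocoex
import scipy.optimize

suite = cocoex.Suite("bbob", "", "")
observer = cocoex.Observer("bbob", "result_folder: scipy_slsqp_demo")

for problem in suite:
    problem.observe_with(observer)  # log all evaluations
    scipy.optimize.minimize(problem, problem.initial_solution,
                            method="SLSQP",
                            options={"maxiter": 50 * problem.dimension})
    problem.free()
```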

Two additional papers addressed multiobjective problems in the context of the bbob-biobj test suite of COCO: “Benchmarking Algorithms from the platypus Framework on the Biobjective bbob-biobj Testbed” [3] compared several baseline algorithms from the literature, such as NSGA-II, MOEA/D, SPEA2, and IBEA, while “Benchmarking MO-CMA-ES and COMO-CMA-ES on the Bi-objective bbob-biobj Testbed” [4] compared our new COMO-CMA-ES solver with its predecessor, MO-CMA-ES.

Regarding extensions of the COCO platform, we released new test suites this year, as described earlier. For the large-scale test suite of [12], 11 algorithm variants of the CMA-ES and L-BFGS solvers were compared in the paper “Benchmarking Large Scale Variants of CMA-ES and L-BFGS-B on the bbob-largescale Testbed” [10].

Overall, we collected 54 new algorithm data sets within the COCO platform in 2019—the highest number in a single year since the release of COCO.

Finally, we updated our documentation on the biobjective test suite that we introduced in 2019. The corresponding journal paper “Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites” [11] is now under revision for the Evolutionary Computation journal.