Section: New Software and Platforms
COCO
COmparing Continuous Optimizers
Keywords: Benchmarking - Numerical optimization - Black-box optimization - Stochastic optimization
Scientific Description
COmparing Continuous Optimizers (COCO) is a tool for benchmarking algorithms for black-box optimization. COCO facilitates systematic experimentation in the field of continuous optimization. COCO provides: (1) an experimental framework for testing algorithms, (2) post-processing facilities for generating publication-quality figures and tables, (3) LaTeX templates of articles that present the figures and tables in a single document.
The COCO software is composed of two parts: (i) an interface available in different programming languages (C/C++, Java, Matlab/Octave, R, Python) which allows users to run and log experiments on multiple testbeds of test functions (both noisy and noiseless testbeds are provided), and (ii) a Python tool for generating the figures and tables that can be used in the LaTeX templates.
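As a concrete illustration of part (i), the following is a minimal sketch of a benchmarking experiment using the Python interface. The module and function names (`cocoex.Suite`, `cocoex.Observer`, `observe_with`) follow the `cocoex` package from the numbbo/coco repository and may differ between versions of the platform; the random-search loop is only a placeholder for the algorithm under test.

import numpy as np
import cocoex

# select the noiseless single-objective testbed and attach a logger
suite = cocoex.Suite("bbob", "", "")
observer = cocoex.Observer("bbob", "result_folder: random-search")

for problem in suite:                 # loop over all test functions
    problem.observe_with(observer)    # log evaluations for post-processing
    # placeholder optimizer: uniform random search in the suggested region
    for _ in range(100 * problem.dimension):
        x = problem.lower_bounds + (
            problem.upper_bounds - problem.lower_bounds
        ) * np.random.rand(problem.dimension)
        problem(x)                    # evaluate (and log) the candidate

The logged data in the result folder is then the input to part (ii), the Python post-processing tool.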
Functional Description
The COCO platform provides the functionality to automatically benchmark optimization algorithms for unbounded, unconstrained optimization problems in continuous domains. Benchmarking is a vital part of algorithm engineering and a necessary path to recommending algorithms for practical applications. The COCO platform relieves algorithm developers and practitioners alike from (re-)writing test functions, logging, and plotting facilities by providing an easy-to-handle interface in several programming languages. The COCO platform has been developed since 2007 and has been used extensively within the “Blackbox Optimization Benchmarking (BBOB)” workshop series since 2009. Overall, 123 algorithms and algorithm variants by contributors from all over the world have been benchmarked with the platform so far (up to and including the submissions to BBOB-2013), and all data is publicly available to the research community.
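To illustrate the plotting facilities mentioned above, here is a hedged sketch of the post-processing step that turns logged experiment data into the figures and tables used in the LaTeX templates. The `cocopp` module name and its `main` entry point are taken from recent releases of the platform and are an assumption here (older releases shipped a similar tool under a different name); the folder name matches the `result_folder` option used in the experiment sketch.

import cocopp

# generate figures and tables from the logged data; output is written
# to a ppdata/ folder ready for inclusion in the LaTeX templates
cocopp.main("exdata/random-search")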