Keywords
 A6.2. Scientific computing, Numerical Analysis & Optimization
 A8.2. Optimization
 A8.9. Performance evaluation
 B4.4. Energy delivery
 B9.5.1. Computer science
 B9.5.2. Mathematics
 B9.8. Reproducibility
1 Team members, visitors, external collaborators
Research Scientists
 Anne Auger [Team leader, Inria, Senior Researcher, HDR]
 Dimo Brockhoff [Inria, Researcher]
 Nikolaus Hansen [Inria, Senior Researcher, HDR]
PhD Students
 Alann Cheral [École polytechnique]
 Marie-Ange Dahito [Groupe PSA, CIFRE]
 Paul Dufossé [Thales, CIFRE]
 Eugénie Marescaux [École polytechnique]
 Cheikh Saliou Touré [Inria]
 Konstantinos Varelas [Thales, CIFRE]
Interns and Apprentices
 Armand Gissler [École normale supérieure Paris-Saclay, from Apr 2020 until Aug 2020]
 Baptiste Plaquevent-Jourdain [Inria, from Mar 2020 until Aug 2020]
 Jingyun Yang [Inria, from Jun 2020 until Sep 2020]
Administrative Assistant
 Marie Enee [Inria]
2 Overall objectives
2.1 Scientific Context
Critical problems of the 21st century, like the search for highly energy-efficient or even carbon-neutral and cost-efficient systems, or the design of new molecules against extensively drug-resistant bacteria, crucially rely on the resolution of challenging numerical optimization problems. Such problems typically depend on noisy experimental data or involve complex numerical simulations where derivatives are not useful or not available and the function is seen as a black-box.
Many of those optimization problems are in essence multi-objective—one needs to optimize simultaneously several conflicting objectives like minimizing the cost of an energy network and maximizing its reliability—and most of the challenging black-box problems are non-convex and non-smooth, combining difficulties related to ill-conditioning, non-separability, and ruggedness (a term that characterizes functions that can be non-smooth but also noisy or multimodal). Additionally, the objective function can be expensive to evaluate, that is, one function evaluation can take several minutes to hours (it can involve for instance a CFD simulation).
In this context, the use of randomness combined with proper adaptive mechanisms that particularly satisfy several invariance properties (affine invariance, invariance to monotonic transformations) has proven to be one key component for the design of robust global numerical optimization algorithms 43, 31.
The field of adaptive stochastic optimization algorithms has witnessed some important progress over the past 15 years. On the one hand, subdomains like medium-scale unconstrained optimization may be considered as “solved” (particularly, the CMA-ES algorithm, an instance of Evolution Strategy (ES) algorithms, stands out as the state-of-the-art method) and considerably better standards have been established in the way benchmarking and experimentation are performed. On the other hand, multi-objective population-based stochastic algorithms became the method of choice to address multi-objective problems when a set of the best possible compromises is sought. In all cases, the resulting algorithms have been naturally transferred to industry (the CMA-ES algorithm is now regularly used in companies such as Bosch, Total, ALSTOM, ...) or to other academic domains where difficult problems need to be solved, such as physics, biology 47, geoscience 39, or robotics 41.
Very recently, ES algorithms attracted quite some attention in Machine Learning with the OpenAI article Evolution Strategies as a Scalable Alternative to Reinforcement Learning. It is shown that the training time for difficult reinforcement learning benchmarks could be reduced from 1 day (with standard RL approaches) to 1 hour using ES 44. A few years ago, another impressive application of CMA-ES, a computer simulation that teaches itself to walk upright (published at the conference SIGGRAPH Asia 2013), was presented in the press in the UK.
Several of these important advances around adaptive stochastic optimization algorithms rely to a great extent on work initiated or achieved by the founding members of RandOpt, particularly related to the CMA-ES algorithm and to the Comparing Continuous Optimizers (COCO) platform.
Yet, the field of adaptive stochastic algorithms for black-box optimization is relatively young compared to the “classical optimization” field that includes convex and gradient-based optimization. For instance, the state-of-the-art algorithms for unconstrained gradient-based optimization, like quasi-Newton methods (e.g. the BFGS method), date from the 1970s 30, while the stochastic derivative-free counterpart, CMA-ES, dates from the early 2000s 33. Consequently, in some subdomains with important practical demands, not even the most fundamental and basic questions are answered:
 This is the case of constrained optimization, where one needs to find a solution ${x}^{*}\in {\mathbb{R}}^{n}$ solving ${min}_{x\in {\mathbb{R}}^{n}}f\left(x\right)$ while respecting $m$ constraints, typically formulated as ${g}_{i}\left({x}^{*}\right)\le 0$ for $i=1,...,m$. Only recently has the fundamental requirement of linear convergence, as in the unconstrained case, been clearly stated 22.
 In multi-objective optimization, most of the research so far has focused on how to select candidate solutions from one iteration to the next. The difficult question of how to effectively generate new solutions has not yet been properly answered, and we know today that simply applying operators from single-objective optimization may not be effective with the current best selection strategies. As a comparison, in the single-objective case, the question of selecting candidate solutions was already solved in the 1980s, and 15 more years were needed to solve the trickier question of an effective adaptive strategy to generate new solutions.
 With the current demand to solve larger and larger optimization problems (e.g. in the domain of deep learning), optimization algorithms that scale linearly with the problem dimension (in terms of internal complexity, memory, and number of function evaluations to reach an $\epsilon$-ball around the optimum) are in increasing demand. Only recently have first proposals been made for how to reduce the quadratic scaling of CMA-ES, without a clear view of what can be achieved in the best case in practice. These latter variants apply to optimization problems with thousands of variables. The question of designing randomized algorithms capable of handling problems with one or two orders of magnitude more variables effectively and efficiently is still largely open.
 For expensive optimization, the standard methods are so-called Bayesian optimization (BO) algorithms, often based on Gaussian processes. Commonly used examples of BO algorithms are EGO 38, SMAC 36, Spearmint 45, and TPE 25, which are implemented in different libraries. Yet, our experience with a popular method like EGO is that many important aspects of a good implementation rely on insider knowledge and are not standard across implementations. Two EGO implementations can differ, for example, in how they perform the initial design, which bandwidth for the Gaussian kernel is used, or which strategy is taken to optimize the expected improvement.
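To make the last point of the list concrete: the acquisition function at the heart of EGO is the expected improvement of a Gaussian prediction over the best value observed so far. A minimal sketch of its closed-form computation (function and parameter names are ours, not taken from any particular library):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a Gaussian prediction N(mu, sigma^2)
    over the best observed value f_best (minimization)."""
    if sigma <= 0.0:
        # degenerate prediction: improvement is deterministic
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

EGO then proposes the next point to evaluate by maximizing this quantity over the search space; as noted above, how that inner maximization is carried out is one of the aspects that differ across implementations.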
Additionally, the development of stochastic adaptive methods for black-box optimization has been mainly driven by heuristics and practice—rather than a general theoretical framework—validated by intensive computational simulations. Undoubtedly, this has been an asset, as the scope of possibilities for design was not restricted by mathematical frameworks for proving convergence. In effect, powerful stochastic adaptive algorithms for unconstrained optimization like the CMA-ES algorithm emerged from this approach. At the same time, naturally, theory strongly lags behind practice. For instance, the striking performance of CMA-ES observed empirically contrasts with how little is theoretically proven about the method. This situation is clearly not satisfactory. On the one hand, theory generally lifts performance assessment from an empirical level to a conceptual one, rendering results independent from the problem instances where they have been obtained. On the other hand, theory typically provides insights that change perspectives on some algorithm components. Also, theoretical guarantees generally increase the trust in the reliability of a method and make it easier to get it accepted by wider communities.
Finally, as discussed above, the development of novel black-box algorithms strongly relies on scientific experimentation, and it is quite difficult to conduct proper and meaningful experimental analysis. This has been well known for more than two decades now and is summarized in this quote from Johnson in 1996:
“the field of experimental analysis is fraught with pitfalls. In many ways, the implementation of an algorithm is the easy part. The hard part is successfully using that implementation to produce meaningful and valuable (and publishable!) research results.” 37
Since then, quite some progress has been made to set better standards in conducting scientific experiments and benchmarking. Yet, some domains still suffer from poor benchmarking standards and from the generic problem of the lack of reproducibility of results. For instance, in multi-objective optimization, it is (still) not rare to see comparisons between algorithms made by solely visually inspecting Pareto fronts after a fixed budget. In Bayesian optimization, good performance often seems to be due to insider knowledge not always well described in papers.
In the context of black-box numerical optimization previously described, the scientific positioning of the RandOpt team is at the intersection between theory, algorithm design, and applications. Our vision is that the field of stochastic black-box optimization should reach the same level of maturity as gradient-based convex mathematical optimization. This entails major algorithmic developments for constrained, multi-objective, and large-scale black-box optimization, and major theoretical developments for analyzing current methods, including the state-of-the-art CMA-ES.
The specificity of black-box optimization is that methods are intended to solve problems characterized by "non-properties"—non-linear, non-convex, non-smooth, non-Lipschitz. This contrasts with gradient-based optimization and on the one hand poses some challenges when developing theoretical frameworks, but on the other hand also makes it compulsory to complement theory with empirical investigations.
Our ultimate goal is to provide software that is useful for practitioners. We see theory as a means to this end (rather than an end in itself) and we also firmly believe that parameter tuning is part of the algorithm designer's task.
This shapes, on the one hand, four main scientific objectives for our team:
 develop novel theoretical frameworks for guiding (a) the design of novel blackbox methods and (b) their analysis, allowing to
 provide proofs of key features of stochastic adaptive algorithms, including the state-of-the-art method CMA-ES: linear convergence and learning of second-order information.
 develop stochastic numerical black-box algorithms following a principled design in domains with a strong practical need for much better methods, namely constrained, multi-objective, large-scale and expensive optimization. Implement the methods such that they are easy to use. And finally, to
 set new standards in scientific experimentation, performance assessment, and benchmarking, for optimization on both continuous and combinatorial search spaces. This should in particular allow to advance the state of reproducibility of results of scientific papers in optimization.
On the other hand, the above motivates our objectives with respect to dissemination and transfer:
 develop software packages that people can directly use to solve their problems. This means having carefully thought-out interfaces, generically applicable settings of parameters and termination conditions, proper treatment of numerical errors, properly catching various exceptions, etc.;
 have direct collaborations with industrial partners;
 publish our results both in applied mathematics and in computer science, bridging the gap between often disjoint communities.
3 Research program
The lines of research we intend to pursue are organized along four axes: developing novel theoretical frameworks, developing novel algorithms, setting novel standards in scientific experimentation and benchmarking, and applications.
3.1 Developing Novel Theoretical Frameworks for Analyzing and Designing Adaptive Stochastic Algorithms
Stochastic black-box algorithms typically optimize non-convex, non-smooth functions. This is possible because the algorithms rely on weak mathematical properties of the underlying functions: the algorithms do not use the derivatives—hence the function does not need to be differentiable—and, additionally, often do not use the exact function value but only how the objective function ranks candidate solutions (such methods are sometimes called function-value-free). (To illustrate a comparison-based update, consider an algorithm that samples $\lambda $ (with $\lambda $ an even integer) candidate solutions from a multivariate normal distribution. Let ${x}_{1},...,{x}_{\lambda}$ in ${\mathbb{R}}^{n}$ denote those $\lambda $ candidate solutions at a given iteration. The solutions are evaluated on the function $f$ to be minimized and ranked from best to worst:
$f\left({x}_{1:\lambda}\right)\le f\left({x}_{2:\lambda}\right)\le \cdots \le f\left({x}_{\lambda :\lambda}\right).$
In the previous equation $i\phantom{\rule{0.166667em}{0ex}}:\phantom{\rule{0.166667em}{0ex}}\lambda $ denotes the index of the sampled solution associated to the $i$th best solution. The new mean of the Gaussian vector from which new solutions will be sampled at the next iteration can be updated as
${m}_{t+1}=\frac{2}{\lambda}\sum _{i=1}^{\lambda /2}{x}_{i:\lambda}.$
The previous update moves the mean towards the $\lambda /2$ best solutions. Yet the update is based only on the ranking of the candidate solutions, such that the update is the same whether $f$ is optimized or $g\circ f$ where $g:\mathrm{Im}\left(f\right)\to \mathbb{R}$ is strictly increasing. Consequently, such algorithms are invariant with respect to strictly increasing transformations of the objective function. This entails that they are robust and their performances generalize well.)
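The invariance just described can be checked directly in code. The following sketch (names are ours; the random seed is fixed so both runs see the same samples) performs one rank-based mean update and verifies that optimizing $f$ or $\exp \circ f$ yields the exact same update:

```python
import math
import random

def ranked_mean_update(mean, sigma, f, lam=10, seed=1):
    """One comparison-based update: sample lam candidates from an
    isotropic Gaussian, rank them on f, and return the average of
    the best lam/2 as the new mean (f enters only via comparisons)."""
    rng = random.Random(seed)
    xs = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(lam)]
    xs.sort(key=f)                    # ranking is all that f contributes
    mu = lam // 2
    return [sum(x[i] for x in xs[:mu]) / mu for i in range(len(mean))]

sphere = lambda x: sum(xi * xi for xi in x)
# same samples, f replaced by the strictly increasing transform exp(f):
update_f = ranked_mean_update([1.0, 2.0], 0.5, sphere)
update_g_of_f = ranked_mean_update([1.0, 2.0], 0.5,
                                   lambda x: math.exp(sphere(x)))
```

Since $\exp$ is strictly increasing, both calls produce the same ranking and hence bitwise-identical updates, which is exactly the invariance property that makes the performance of such methods generalize across monotone transformations of $f$.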
Additionally, adaptive stochastic optimization algorithms typically have a complex state space which encodes the parameters of a probability distribution (e.g. mean and covariance matrix of a Gaussian vector) and other state vectors. This state space is a manifold. While the algorithms are Markov chains, the complexity of the state space means that standard Markov chain theory tools do not directly apply. The same holds for tools stemming from stochastic approximation theory or Ordinary Differential Equation (ODE) theory, where it is usually assumed that the underlying ODE (obtained by proper averaging and the limit of the learning rate going to zero) has its critical points inside the search space. In contrast, in the cases we are interested in, the critical points of the ODEs are at the boundary of the domain.
Last, since we aim at developing theory that on the one hand allows to analyze the main properties of state-of-the-art methods and on the other hand is useful for algorithm design, we need to be careful not to use simplifications that would allow a proof to be done but would not capture the important properties of the algorithms. In that respect, one tricky point is to develop theory that accounts for invariance properties.
To face those specific challenges, we need to develop novel theoretical frameworks exploiting invariance properties and accounting for peculiar state spaces. Those frameworks should allow researchers to analyze one of the core properties of adaptive stochastic methods, namely linear convergence, on the widest possible class of functions.
We are planning to approach the question of linear convergence from three different complementary angles, using three different frameworks:
 the Markov chain framework, where convergence derives from the analysis of the stability of a normalized Markov chain existing on scaling-invariant functions for translation- and scale-invariant algorithms 24. This framework allows for a fine analysis where the exact convergence rate can be given as an implicit function of the invariant measure of the normalized Markov chain. Yet it requires the objective function to be scaling-invariant. The stability analysis can be particularly tricky, as the Markov chain that needs to be studied can be written as ${\Phi}_{t+1}=F({\Phi}_{t},{W}_{t+1})$ where $\{{W}_{t}:t>0\}$ are independent identically distributed and $F$ is typically discontinuous because the algorithms studied are comparison-based. This implies that practical tools for analyzing a standard property like irreducibility, which rely on investigating the stability of underlying deterministic control models 42, cannot be used. Additionally, the construction of a drift to prove ergodicity is particularly delicate when the state space includes a (normalized) covariance matrix, as is the case for analyzing the CMA-ES algorithm.
 The stochastic approximation or ODE framework. Those are standard techniques to prove the convergence of stochastic algorithms when an algorithm can be expressed as a stochastic approximation of the solution of a mean-field ODE 27, 26, 40. What is specific and induces difficulties for the algorithms we aim at analyzing is the non-standard state space, since the ODE variables correspond to the state variables of the algorithm (e.g. ${\mathbb{R}}^{n}\times {\mathbb{R}}_{>0}$ for step-size adaptive algorithms, ${\mathbb{R}}^{n}\times {\mathbb{R}}_{>0}\times {S}_{++}^{n}$, where ${S}_{++}^{n}$ denotes the set of positive definite matrices, if a covariance matrix is additionally adapted). Consequently, the ODE can have many critical points at the boundary of its definition domain (e.g. all points corresponding to ${\sigma}_{t}=0$ are critical points of the ODE), which is not typical. Also, since we aim at proving linear convergence, it is crucial that the learning rate does not decrease to zero, which is non-standard in the ODE method.
 The direct framework, where we construct a global Lyapunov function for the original algorithm from which we deduce bounds on the hitting time to reach an $\epsilon$-ball around the optimum. For this framework, as for the ODE framework, we expect that the class of functions where we can prove linear convergence are composites $g\circ f$ where $f$ is differentiable and $g:\mathrm{Im}\left(f\right)\to \mathbb{R}$ is strictly increasing, and that we can show convergence to a local minimum.
We expect those frameworks to be complementary in the sense that the assumptions required are different. Typically, the ODE framework should allow for proofs under the assumption that learning rates are small enough, while this is not needed for the Markov chain framework. Hence the latter framework better captures the real dynamics of the algorithm, yet under the assumption of scaling-invariance of the objective functions. Also, we expect some overlap in terms of function classes that can be studied by the different frameworks (typically, convex-quadratic functions should be encompassed by all three frameworks). By studying the different frameworks in parallel, we expect to gain synergies and possibly understand which is the most promising approach for solving the holy grail question of the linear convergence of CMA-ES. We foresee, for instance, that similar approaches like the use of Foster-Lyapunov drift conditions are needed in all the frameworks and that intuition can be gained on how to establish the conditions from one framework to another.
3.2 Algorithmic developments
We are planning to develop algorithms in the subdomains with a strong practical demand for better methods: constrained, multi-objective, large-scale, and expensive optimization.
Many of the algorithm developments we propose rely on the CMA-ES method. While this seems to restrict our possibilities, we want to emphasize that CMA-ES has become a family of methods over the years that nowadays includes various techniques and developments from the literature to handle non-standard optimization problems (noisy, large-scale, ...). The core idea of all CMA-ES variants—namely the mechanism to adapt a Gaussian distribution—has furthermore been shown to derive naturally from first principles with only minimal assumptions in the context of derivative-free black-box stochastic optimization 43, 31. This is a strong justification for relying on the CMA-ES premises, while new developments naturally include new techniques typically borrowed from other fields. While CMA-ES is now a full family of methods, for visibility reasons, we continue to refer often to “the CMA-ES algorithm”.
3.2.1 Constrained optimization
Many (real-world) optimization problems have constraints related to technical feasibility, cost, etc. Constraints are classically handled in the black-box setting either via rejection of solutions violating the constraints—which can be quite costly and even lead to quasi-infinite loops—or by penalization with respect to the distance to the feasible domain (if this information can be extracted) or with respect to the constraint function value 28. However, the penalization coefficient is a sensitive parameter that needs to be adapted in order to achieve a robust and general method 29. Yet, the question of how to properly handle constraints is largely unsolved. Previous constraint handling techniques for CMA-ES were ad-hoc techniques driven by many heuristics 29. Also, only recently was it pointed out that linear convergence properties should be preserved when addressing constrained problems 22.
Promising approaches, though, rely on using augmented Lagrangians 22, 23. The augmented Lagrangian, here, is the objective function optimized by the algorithm. Yet, it depends on coefficients that are adapted online. The adaptation of those coefficients is the difficult part: the algorithm should be stable and the adaptation efficient. We believe that the theoretical frameworks developed (particularly the Markov chain framework) will be useful to understand how to design the adaptation mechanisms. Additionally, the question of invariance will also be at the core of the design of the methods: augmented Lagrangian approaches break the invariance to monotonic transformations of the objective function, yet understanding the maximal invariance that can be achieved seems to be an important step towards understanding what adaptation rules should satisfy.
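As a minimal sketch (our own simplification, for a single inequality constraint $g(x)\le 0$), the augmented Lagrangian that the algorithm would optimize in place of $f$ can be written as below; the difficult part discussed above, the online adaptation of the multiplier and the penalty coefficient, is deliberately left out and both are held fixed:

```python
def augmented_lagrangian(f, g, gamma, omega):
    """Augmented Lagrangian h for minimizing f subject to g(x) <= 0.
    gamma: Lagrange multiplier estimate; omega: penalty coefficient.
    Both would be adapted online by the optimizer; fixed here."""
    def h(x):
        gx = g(x)
        if gx >= -gamma / omega:
            # active region: linear multiplier term plus quadratic penalty
            return f(x) + gamma * gx + 0.5 * omega * gx * gx
        # inactive region: constant offset keeps h continuously differentiable
        return f(x) - gamma * gamma / (2.0 * omega)
    return h

# toy problem: minimize ||x||^2 subject to x[0] >= 1
f = lambda x: sum(xi * xi for xi in x)
g = lambda x: 1.0 - x[0]          # g(x) <= 0  iff  x[0] >= 1
h = augmented_lagrangian(f, g, gamma=1.0, omega=1.0)
```

Note that $h$ is a new black-box objective; any ranking-based optimizer can be applied to it, but the strictly-increasing-transformation invariance discussed earlier no longer holds with respect to $f$ alone, which is the invariance issue raised above.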
3.2.2 Largescale Optimization
In the large-scale setting, we are interested in optimizing problems with on the order of ${10}^{3}$ to ${10}^{4}$ variables. For one to two orders of magnitude more variables, we talk about a “very large-scale” setting.
In this context, algorithms with a quadratic scaling (internal and in terms of number of function evaluations needed to optimize the problem) cannot be afforded. In CMA-ES-type algorithms, we typically need to restrict the model of the covariance matrix to have only a linear number of parameters to learn, such that the algorithms scale linearly in terms of internal complexity, memory, and number of function evaluations to solve the problem. The main challenge is thus to have rich enough models for which we can efficiently design proper adaptation mechanisms. Some first large-scale variants of CMA-ES have been derived. They include the online adaptation of the complexity of the model 21, 20. Yet, the type of Hessian matrices they can learn is restricted and not fully satisfactory. Different restricted families of distributions are conceivable, and it is an open question which can be effectively learned and which are the most promising in practice.
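A toy illustration of such a restricted model (our own simplification, in the spirit of separable, diagonal-covariance variants): only $n$ per-coordinate variances are stored and adapted, so one iteration costs $O(\lambda n)$ time and $O(n)$ memory instead of the $O(n^{2})$ of a full covariance matrix:

```python
import random

def sample_and_adapt_diagonal(mean, variances, f, lam=10, lr=0.2, rng=None):
    """One iteration of a toy ES with a diagonal covariance model:
    sample lam candidates, select the best half, move the mean, and
    pull each coordinate variance towards the empirical variance of
    the selected steps.  Storage and cost are linear in len(mean)."""
    rng = rng or random.Random(3)
    n = len(mean)
    xs = [[mean[i] + variances[i] ** 0.5 * rng.gauss(0, 1) for i in range(n)]
          for _ in range(lam)]
    xs.sort(key=f)
    mu = lam // 2
    new_mean = [sum(x[i] for x in xs[:mu]) / mu for i in range(n)]
    new_var = []
    for i in range(n):
        emp = sum((x[i] - mean[i]) ** 2 for x in xs[:mu]) / mu
        new_var.append((1 - lr) * variances[i] + lr * emp)
    return new_mean, new_var
```

This family can only learn axis-aligned (diagonal) Hessian information, which illustrates the restriction mentioned above: richer but still linear-size models, and their adaptation mechanisms, are exactly the open design question.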
Another direction we want to pursue is exploring the use of large-scale variants of CMA-ES to solve reinforcement learning problems 44.
Last, we are interested in investigating the very large-scale setting. One approach consists in doing optimization in subspaces. This entails the efficient identification of relevant subspaces and the restriction of the optimization to those subspaces.
3.2.3 Multiobjective Optimization
Multi-objective optimization, i.e., the simultaneous optimization of multiple objective functions, differs from single-objective optimization in particular in its optimization goal. Instead of aiming at converging to the solution with the best possible function value, in multi-objective optimization a set of solutions is sought. This set, called the Pareto set, contains all trade-off solutions in the sense of Pareto optimality—no solution exists that is better in all objectives than a Pareto-optimal one. Because converging towards a set differs from converging to a single solution, it is no surprise that we might lose many good convergence properties if we directly apply search operators from single-objective methods. However, this is what has typically been done so far in the literature. Indeed, most of the research in stochastic algorithms for multi-objective optimization has focused instead on the so-called selection part, which decides which solutions should be kept during the optimization—a question that has been considered solved for many years in the case of single-objective stochastic adaptive methods.
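The notion of Pareto optimality used above is easy to state in code. A short sketch (names are ours) that extracts the non-dominated solutions from a finite set of objective vectors, assuming every objective is to be minimized:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For a finite sample of evaluated solutions, this filter is exactly the "selection" viewpoint discussed above: it says which solutions to keep, but nothing about how to generate new ones.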
We therefore aim at rethinking search operators and adaptive mechanisms to improve existing methods. We expect that we can obtain orders of magnitude better convergence rates for certain problem types if we choose the right search operators. We typically see two angles of attack: on the one hand, we will study methods based on scalarizing functions that transform the multi-objective problem into a set of single-objective problems. Those single-objective problems can then be solved with state-of-the-art single-objective algorithms. Classical methods for multi-objective optimization fall into this category, but they all solve multiple single-objective problems one after another (from scratch) instead of dynamically changing the scalarizing function during the search. On the other hand, we will improve on currently available population-based methods such as the first multi-objective versions of CMA-ES. Here, research is needed on an even more fundamental level, such as trying to understand success probabilities observed during an optimization run or how we can introduce non-elitist selection (the state of the art in single-objective stochastic adaptive algorithms) to increase robustness with respect to noisy evaluations or multimodality. The challenge here, compared to single-objective algorithms, is that the quality of a solution is no longer independent of the other sampled solutions, but can potentially depend on all known solutions (in the case of three or more objective functions), resulting in a noisier evaluation than the relatively simple function-value-based ranking within single-objective optimizers.
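The first angle of attack can be illustrated with the weighted Chebyshev scalarizing function, one classical choice among many (the sketch and its names are ours): for a fixed weight vector it turns the multi-objective problem into a single-objective one, and varying the weights during the search, rather than restarting from scratch per weight vector, is the dynamic strategy mentioned above.

```python
def chebyshev_scalarizer(weights, ref):
    """Weighted Chebyshev scalarization with reference (ideal) point ref:
    s(y) = max_i weights[i] * (y[i] - ref[i]), to be minimized.
    Different weight vectors target different Pareto-optimal solutions."""
    def s(objectives):
        return max(w * (y - r) for w, y, r in zip(weights, objectives, ref))
    return s

# two scalarizations of the same bi-objective problem
s1 = chebyshev_scalarizer([1.0, 1.0], [0.0, 0.0])
s2 = chebyshev_scalarizer([4.0, 1.0], [0.0, 0.0])
```

Each scalarized function is a standard black-box objective, so any single-objective solver, including CMA-ES, can be applied to it unchanged.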
3.2.4 Expensive Optimization
In the so-called expensive optimization scenario, a single function evaluation might take several minutes or even hours in a practical setting. Hence, the available budget in terms of number of function evaluation calls to find a solution is very limited in practice. To tackle such expensive optimization problems, one needs to exploit the first few function evaluations in the best possible way. To this end, typical methods couple the learning of a surrogate (or meta-model) of the expensive objective function with traditional optimization algorithms.
In the context of expensive optimization and CMA-ES, which usually shows its full potential when the number $n$ of variables is not too small (say larger than 3) and when the number of available function evaluations is about $100n$ or larger, several research directions emerge. The two main possibilities to integrate meta-models into the search with CMA-ES-type algorithms are (i) the successive injection of the minimum of a learned meta-model at each time step into the learning of CMA-ES's covariance matrix and (ii) the use of a meta-model to predict the internal ranking of solutions. While for the latter first results exist, the former idea is entirely unexplored for now. In both cases, a fundamental question is which type of meta-model (linear, quadratic, Gaussian process, ...) is the best choice for a given number of function evaluations (as low as one or two function evaluations) and at which time the type of the meta-model should be switched.
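Possibility (ii), using a meta-model to rank solutions so that only the most promising ones are paid for on the expensive function, can be sketched as follows. For brevity this toy version (all names ours) uses a nearest-neighbor predictor as a stand-in for the linear, quadratic, or Gaussian-process meta-models discussed above:

```python
def surrogate_preselect(candidates, archive, f, n_eval):
    """Rank candidates with a cheap surrogate built from an archive of
    (x, f(x)) pairs, then evaluate only the n_eval most promising
    candidates on the true, expensive f; return the best of those."""
    def predict(x):  # nearest-neighbor surrogate (stand-in model)
        _, fx = min(archive,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        return fx
    ranked = sorted(candidates, key=predict)
    evaluated = [(x, f(x)) for x in ranked[:n_eval]]
    archive.extend(evaluated)          # enrich the model for later calls
    return min(evaluated, key=lambda p: p[1])

sphere = lambda x: sum(xi * xi for xi in x)
archive = [((0.0, 0.0), 0.0), ((2.0, 2.0), 8.0)]
best = surrogate_preselect([(1.9, 1.9), (0.1, 0.1), (1.5, 1.5)],
                           archive, sphere, n_eval=1)
```

Here three candidates are pre-ranked but only one true evaluation is spent; how aggressive this pre-selection can be, and which model class to use at which budget, is precisely the open question stated above.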
3.3 Setting novel standards in scientific experimentation and benchmarking
Numerical experimentation is needed as a complement to theory to test novel ideas and hypotheses, to assess the stability of an algorithm, and/or to obtain quantitative estimates. Optimally, theory and experimentation go hand in hand, jointly guiding the understanding of the mechanisms underlying optimization algorithms. Though performing numerical experimentation on optimization algorithms is crucial and a common task, it is non-trivial and easy to fall into (common) pitfalls, as stated by J. N. Hooker in his seminal paper 35.
In the RandOpt team we aim at raising the standards for both scientific experimentation and benchmarking.
On the experimentation side, we are convinced that there is common ground across many (sub)domains of optimization on how scientific experimentation should be done, in particular with respect to the visualization of results, testing extreme scenarios (parameter settings, initial conditions, etc.), how to conduct understandable and small experiments, how to account for invariance properties, performing scaling-up experiments, and so forth. We therefore want to formalize and generalize these ideas in order to make them known to the entire optimization community, with the final aim that they become standards for experimental research.
Extensive numerical benchmarking, on the other hand, is a compulsory task for evaluating and comparing the performance of algorithms. It puts algorithms to a standardized test and allows one to make recommendations about which algorithms should preferably be used in practice. To ease this part of optimization research, we have been developing the Comparing Continuous Optimizers platform (COCO) since 2007, which automates the tedious task of benchmarking. It is a game changer in the sense that the freed time can now be spent on the scientific part of algorithm design (instead of implementing the experiments, visualization, statistical tests, etc.) and it has opened novel perspectives in algorithm testing. COCO implements a thorough, well-documented methodology that is based on the above-mentioned general principles for scientific experimentation.
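A central performance measure in COCO-style benchmarking is the expected running time (ERT) to reach a given target precision: the total number of function evaluations spent over all runs, divided by the number of runs that reached the target. A minimal sketch of the computation (the function name is ours):

```python
def expected_running_time(evaluations, reached_target):
    """ERT estimate from a set of runs: evaluations[i] is the number of
    function evaluations spent in run i, reached_target[i] says whether
    run i hit the target; infinite if no run succeeded."""
    n_success = sum(reached_target)
    if n_success == 0:
        return float('inf')
    return sum(evaluations) / n_success

# three runs of 100, 250, and 400 evaluations; two reached the target
ert = expected_running_time([100, 250, 400], [True, False, True])
```

Because unsuccessful runs still contribute their evaluations to the numerator, ERT penalizes unreliable algorithms, which is one reason such aggregated measures are preferable to visually inspecting single runs.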
Also due to the freely available data from 300+ algorithms benchmarked with the platform, COCO has become a quasi-standard for single-objective, noiseless optimization benchmarking. It is therefore natural to extend the reach of COCO towards other subdomains (particularly constrained optimization and many-objective optimization), which can benefit greatly from an automated benchmarking methodology and standardized tests without (much) effort. This entails particularly the design of novel test suites and rethinking the methodology for measuring performance and, more generally, evaluating the algorithms. Particularly challenging is the design of scalable non-trivial testbeds for constrained optimization where one can still control where the solutions lie. Other optimization problem types we are targeting are expensive problems (and the Bayesian optimization community in particular, see our AESOP project), optimization problems in machine learning (for example parameter tuning in reinforcement learning), and the collection of real-world problems from industry.
Another aspect of our future research on benchmarking is to investigate the large amounts of benchmarking data we have collected with COCO over the years. Extracting information about the influence of algorithms on the best-performing portfolio, clustering algorithms of similar performance, or the automated detection of anomalies in terms of good/bad behavior of algorithms on a subset of the functions or dimensions are some of the ideas here.
Last, we want to expand the focus of COCO from automated (large) benchmarking experiments towards everyday experimentation, for example by allowing the user to visually investigate algorithm internals on the fly or by simplifying the setup of studies on the influence of algorithm parameters.
4 Application domains
Applications of black-box algorithms occur in various domains. Industry, but also researchers in other academic fields, have a great need to apply black-box algorithms on a daily basis. Generally, we do not target a specific application domain and are interested in black-box applications of various origins; this is intrinsic to the nature of the methods we develop, which are general-purpose algorithms. Hence our strategy with respect to applications can be considered opportunistic. When approached by colleagues who want to develop a collaboration around an application, our main selection criterion is whether we find the application interesting and valuable: that means the application brings new challenges and/or gives us the opportunity to work on topics we already intended to work on, and it brings, in our judgement, an advancement to society in the application domain.
The concrete applications related to industrial collaborations we are currently dealing with are:
 With Thales, for the theses of Konstantinos Varelas and Paul Dufossé (DGA-CIFRE theses), related to the design of radars (shape optimization of the waveform). They investigate more specifically the development of large-scale variants of CMA-ES and constraint handling for CMA-ES, respectively.
 With Storengy, a subsidiary of the ENGIE group specialized in gas storage, for the thesis of Cheikh Touré. Different multiobjective applications are considered in this context, but the primary motivation of Storengy is to have at its disposal a better multiobjective variant of CMA-ES, which is the main objective of the developments within the thesis.
 With PSA, in the context of the OpenLab and the thesis of Marie-Ange Dahito, for the design of a car body part.
 With Onera, in the context of the thesis of Alann Cheral, related to the optimization of the choice of hyperspectral bandwidth.
5 Highlights of the year
5.1 Awards
Nikolaus Hansen and Anne Auger received the SIGEVO Impact Award for the article "Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009", published in 2010 [32]. This impact award distinguishes work that was published at the GECCO conference ten years earlier and has made a significant impact. Through this award, the impact of the COCO platform, developed first within the TAO team and then in the RandOpt team, has been acknowledged. Details are presented in the article "A SIGEVO impact award for a paper arising from the COCO platform: a summary and beyond" [19].
5.2 Software
We want to highlight the impact and transfer of our work through the following: the main algorithm developed in the team for single-objective optimization, CMA-ES, has been implemented in a lightweight version for Python 3 by Masashi Shibata (not a RandOpt team member). The implementation is based on publications of the team members, notably on [34]. By the end of 2020, the release on PyPI was being downloaded 150,000 times per month.
6 New software and platforms
6.1 New software
6.1.1 COCO
 Name: COmparing Continuous Optimizers
 Keywords: Benchmarking, Numerical optimization, Black-box optimization, Stochastic optimization

Scientific Description:
COmparing Continuous Optimizers (COCO) is a tool for benchmarking algorithms for black-box optimization. COCO facilitates systematic experimentation in the field of continuous optimization. COCO provides: (1) an experimental framework for testing the algorithms, (2) post-processing facilities for generating publication-quality figures and tables, including the easy integration of data from benchmarking experiments of 300+ algorithm variants, (3) LaTeX templates for scientific articles and HTML overview pages which present the figures and tables.
The COCO software is composed of two parts: (i) an interface available in different programming languages (C/C++, Java, Matlab/Octave, Python, external support for R) which allows running and logging experiments on several function test suites (unbounded noisy and noiseless single-objective functions, unbounded noiseless multiobjective problems, constrained problems); (ii) a Python tool for generating figures and tables that can be viewed in every web browser and used in the provided LaTeX templates to write scientific papers.
 Functional Description: The COCO platform aims at supporting the numerical benchmarking of black-box optimization algorithms in continuous domains. Benchmarking is a vital part of algorithm engineering and a necessary path to recommending algorithms for practical applications. The COCO platform releases algorithm developers and practitioners alike from (re-)writing test functions, logging, and plotting facilities by providing an easy-to-handle interface in several programming languages. The platform has been developed since 2007 and has been used extensively within the “Black-box Optimization Benchmarking (BBOB)” workshop series since 2009. Overall, 300+ algorithms and algorithm variants by contributors from all over the world have been benchmarked on the platform's supported test suites so far. The most recent extensions have been towards large-scale as well as mixed-integer problems.

 URL: https://github.com/numbbo/coco
 Contacts: Anne Auger, Dimo Brockhoff
 Participants: Anne Auger, Asma Atamna, Dejan Tusar, Dimo Brockhoff, Marc Schoenauer, Nikolaus Hansen, Ouassim Ait Elhara, Raymond Ros, Tea Tušar, Thanh-Do Tran, Umut Batu, Konstantinos Varelas
 Partners: TU Dortmund University, Charles University Prague, Jozef Stefan Institute (JSI)
6.1.2 CMA-ES
 Name: Covariance Matrix Adaptation Evolution Strategy
 Keywords: Numerical optimization, Black-box optimization, Stochastic optimization
 Scientific Description: CMA-ES is considered state-of-the-art in evolutionary computation and has been adopted as one of the standard tools for continuous optimization in many (probably hundreds of) research labs and industrial environments around the world. CMA-ES is typically applied to unconstrained or bound-constrained optimization problems with search space dimensions between three and a hundred. The method should be applied if derivative-based methods, e.g. quasi-Newton BFGS or conjugate gradient, (supposedly) fail due to a rugged search landscape (e.g. discontinuities, sharp bends or ridges, noise, local optima, outliers). If second-order derivative-based methods are successful, they are usually faster than CMA-ES: on purely convex-quadratic functions, f(x) = x^T H x, BFGS (Matlab's function fminunc) is typically faster by a factor of about ten (in terms of the number of objective function evaluations needed to reach a target function value, assuming that gradients are not available). On the simplest quadratic function, f(x) = x^T x, BFGS is faster by a factor of about 30.
 Functional Description: CMA-ES is an evolutionary algorithm for difficult nonlinear non-convex black-box optimization problems in continuous domain.

 URL: http://cma.gforge.inria.fr/cmaes_sourcecode_page.html
 Contacts: Nikolaus Hansen, Anne Auger
 Participant: Nikolaus Hansen
6.1.3 COMO-CMA-ES
 Name: Comma Multi-Objective Covariance Matrix Adaptation Evolution Strategy
 Keywords: Black-box optimization, Global optimization, Multiobjective optimization
 Scientific Description: CMA-ES is considered state-of-the-art in evolutionary computation and has been adopted as one of the standard tools for continuous optimization in many (probably hundreds of) research labs and industrial environments around the world. CMA-ES is typically applied to unconstrained or bound-constrained optimization problems with search space dimensions between three and a hundred. COMO-CMA-ES is a multiobjective optimization algorithm based on the standard CMA-ES, using the Uncrowded Hypervolume Improvement within the so-called Sofomore framework.
 Functional Description: COMO-CMA-ES is an evolutionary algorithm for difficult nonlinear non-convex black-box optimization problems with several (currently two) objectives in continuous domain.

 URL: https://github.com/CMA-ES/pycomocma
 Contacts: Nikolaus Hansen, Dimo Brockhoff
6.1.4 MOarchiving
 Name: Multiobjective Optimization Archiving Module
 Keywords: Mathematical Optimization, Multiobjective optimisation
 Scientific Description: Multiobjective optimization relies on the maintenance of a set of nondominated (and hence mutually incomparable) solutions. Performance indicator computations, and in particular the computation of the hypervolume indicator, are based on this solution set. The hypervolume computation and the update of the set of nondominated solutions are generally time-critical operations. The module computes the biobjective hypervolume in linear time and updates the nondominated solution set in logarithmic time.
 Functional Description: The module implements a biobjective nondominated archive using a Python list as parent class. The main functionality is heavily based on the bisect module. The class provides easy and fast access to the overall hypervolume, the contributing hypervolume of each element, and to the uncrowded hypervolume improvement of any given point in objective space.

 URL: https://github.com/CMA-ES/moarchiving
 Contacts: Nikolaus Hansen, Dimo Brockhoff
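To illustrate the data structure behind such an archive, here is a minimal, self-contained sketch of a bisect-based biobjective nondominated archive with a linear-time hypervolume sweep. The class name and details below are our own illustrative choices; this is not the team's moarchiving module itself.

```python
from bisect import bisect_left

class BiobjectiveArchive:
    """Nondominated archive for two minimization objectives.

    Points are kept sorted by the first objective; for a nondominated
    set the second objective is then strictly decreasing, so a binary
    search locates the insertion point, and the points dominated by a
    new candidate form a contiguous block to its right.
    """

    def __init__(self, reference):
        self.reference = reference  # point that all archived points dominate
        self.points = []            # sorted by first objective, ascending

    def add(self, point):
        """Insert `point` if it is nondominated; return True iff inserted."""
        f1, f2 = point
        i = bisect_left(self.points, (f1, f2))
        if i > 0 and self.points[i - 1][1] <= f2:
            return False            # dominated by its left neighbor
        j = i                       # drop points dominated by the new one
        while j < len(self.points) and self.points[j][1] >= f2:
            j += 1
        self.points[i:j] = [(f1, f2)]
        return True

    def hypervolume(self):
        """Hypervolume w.r.t. the reference point, in one linear sweep."""
        hv, best_f2 = 0.0, self.reference[1]
        for f1, f2 in self.points:
            hv += (self.reference[0] - f1) * (best_f2 - f2)
            best_f2 = f2
        return hv
```

For instance, adding the points (1, 3), (2, 2), and (3, 1) with reference point (4, 4) yields a hypervolume of 6.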
7 New results
7.1 Analysis of Adaptive Stochastic Search Algorithms
Participants: Anne Auger, Nikolaus Hansen, Cheikh Touré, Armand Gissler
External collaborators: Youhei Akimoto (Tsukuba University), Tobias Glasmachers (Ruhr University, Bochum), Asma Atamna (Telecom Paris)
Central to the design of adaptive Evolution Strategies are the questions of invariance and of linear convergence. In the case of constraints handled with augmented Lagrangians, sufficient conditions for linear convergence have been proposed in [10]. Quantitative estimates of convergence rates on convex-quadratic functions for algorithms using weighted recombination have been published in [8].
We have proven the global linear convergence of the (1+1)-ES with the one-fifth success rule as step-size adaptation on a class of functions that embeds smooth strongly convex functions and positively homogeneous functions. Because of the invariance to monotonic transformations, the study holds for non-continuous and non-convex functions. Arguably, our study provides the first proof of the linear convergence of an adaptive evolution strategy on such a wide class of functions without modifying its underlying updates (to make a proof work) [16].
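For illustration, the algorithm under study can be sketched in a few lines. This is a generic textbook variant of the (1+1)-ES with one-fifth success rule (the constants 0.8 and 0.2 below are common illustrative choices), not the exact setting analyzed in [16].

```python
import math
import random

def one_plus_one_es(f, x0, sigma0=1.0, budget=600, seed=3):
    """(1+1)-ES with one-fifth success rule step-size adaptation.

    The step size sigma is increased on success and decreased on
    failure with a 4:1 ratio of the log-increments, so that sigma is
    stationary when about one out of five offspring is successful.
    """
    rng = random.Random(seed)
    x, fx, sigma = list(x0), f(x0), sigma0
    for _ in range(budget):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                       # success: accept, enlarge step
            x, fx = y, fy
            sigma *= math.exp(0.8 / len(x))
        else:                              # failure: shrink step
            sigma *= math.exp(-0.2 / len(x))
    return x, fx
```

On the sphere function f(x) = x^T x in dimension 2, a few hundred iterations suffice to reach very small function values, in line with the linear (geometric) convergence proven in the article.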
Over the past years, we have developed a methodology to analyze the linear convergence of adaptive comparison-based algorithms, including Evolution Strategies, by studying the stability of underlying Markov chains. This methodology allows deriving convergence on so-called scaling-invariant functions. Yet this class of functions had not been studied in the past, so we needed to establish important mathematical properties that are required to conduct our convergence studies. In his internship, Armand Gissler derived key results connecting scaling-invariant functions to positively homogeneous ones. Those results complement results by Cheikh Touré and are now presented in an article [18].
7.2 Large-scale black-box optimization
Participants: Konstantinos Varelas, Anne Auger, Nikolaus Hansen
External collaborators: Youhei Akimoto (Tsukuba University)
One objective of the team is to advance research on black-box stochastic optimization towards large-scale problems. In the context of black-box optimization without derivatives, large scale starts when the number of variables to be optimized is of the order of a few hundred. We have designed a variant of CMA-ES capable of exploiting separability of the functions while keeping the ability to optimize fully non-separable functions. This results in the DD-CMA-ES algorithm [9].
In his thesis, Konstantinos Varelas has explored different approaches to learn the sparsity of a problem and to increase the learning rates, so as to reduce the number of function evaluations needed to solve a problem. In contrast to many large-scale variants of CMA-ES, the main idea was to not assume a fixed structure for the covariance matrix to be learned. On the one hand, he used the graphical lasso to learn a sparse precision matrix, leading to interesting improvements assuming a proper choice of the penalization coefficient [14]. On the other hand, he also explored thresholding as well as a method used for learning Markov decision processes.
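As an illustration of the thresholding idea (not Varelas' actual method), one can zero out off-diagonal entries of an estimated covariance or precision matrix whose correlation-like magnitude falls below a threshold tau. The function name and the diagonal normalization below are our own illustrative choices.

```python
def threshold_matrix(M, tau):
    """Sparsify a symmetric matrix M (list of lists) by thresholding.

    An off-diagonal entry m_ij is kept only if
    |m_ij| >= tau * sqrt(m_ii * m_jj), i.e. entries are compared after
    normalization by the diagonal. tau plays a role loosely analogous
    to the lasso penalization coefficient: larger tau, sparser matrix.
    """
    n = len(M)
    return [[M[i][j]
             if i == j or abs(M[i][j]) >= tau * (M[i][i] * M[j][j]) ** 0.5
             else 0.0
             for j in range(n)]
            for i in range(n)]
```

For example, with tau = 0.3 the entry 0.2 between variables of variances 4 and 1 is removed (0.2 < 0.3 * 2), while an entry of 2 between variables of variance 4 is kept (2 >= 0.3 * 4).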
7.3 Applications to Radar
Participants: Konstantinos Varelas, Paul Dufossé, Nikolaus Hansen
External collaborators: THALES and notably Yann Semet, Rami Kassab, Frédéric Barbaresco, Cyrille Enderli
Two of the theses in the team are funded by Thales and the DGA, which are interested in improving the resolution of different optimization problems related to the design of antennas and radars. In this context, the design of phased-array antenna patterns with CMA-ES has been investigated [13].
For the design of new-generation digital multi-mission radars, THALES has proposed a framework using in particular CMA-ES as optimization module [15].
In the article [17], different many-objective optimization solvers are empirically compared on the problem of finding optimal Pulse Repetition Intervals.
7.4 Multiobjective optimization
Participants: Cheikh Touré, Eugénie Marescaux, Baptiste PlaqueventJourdain, Anne Auger, Dimo Brockhoff, Nikolaus Hansen
External collaborators: Youhei Akimoto (Tsukuba University)
A central theme for the team is the design of multiobjective optimization algorithms. We have worked on building algorithms that converge to the entire Pareto front, both using gradients of the objectives and in a derivative-free mode. Those algorithms are based on the idea of incrementally approximating the Pareto set by single-objective optimization of an improved variant of the hypervolume indicator, called the uncrowded hypervolume improvement (UHVI) [46]. A first class of algorithms based on quasi-Newton techniques has been developed, implemented, and experimentally compared on the biobjective test functions of the COCO platform in the context of the internship of Baptiste Plaquevent-Jourdain. In parallel, we have worked on extending the COMO multiobjective solver. The publications related to those works are currently being finalized.
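For two minimization objectives, a simplified sketch of the UHVI can be written as follows. Note one simplification we make for brevity: for dominated points we use the distance to the closest archive point, whereas the definition in [46] uses the distance to the boundary of the region dominated by the archive.

```python
import math

def hypervolume(points, ref):
    """Hypervolume (two minimization objectives) w.r.t. a reference
    point `ref` that all points dominate; dominated points in the
    input are skipped during the sweep."""
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < best_f2:
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

def uhvi(p, front, ref):
    """Simplified uncrowded hypervolume improvement of point p.

    If p is nondominated w.r.t. `front`: the plain hypervolume
    improvement. If p is dominated: minus the distance to the closest
    front point (a simplification of the original definition).
    """
    if any(q[0] <= p[0] and q[1] <= p[1] for q in front):
        return -min(math.dist(p, q) for q in front)
    return hypervolume(front + [p], ref) - hypervolume(front, ref)
```

In contrast to the plain hypervolume improvement, which is zero everywhere in the dominated region, the UHVI still ranks dominated points, which is what makes it usable as a single-objective subproblem inside the Sofomore framework.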
In parallel, Eugénie Marescaux studied the convergence of the ensuing solvers and was able to prove, under suitable assumptions, that they converge towards the entire Pareto front.
Finally, we have released two Python modules related to multiobjective optimization: the MOarchiving module and the module implementing the COMOCMAES algorithm. They are both described in the New Software section.
7.5 Benchmarking: methodology and the Comparing Continuous Optimizers Platform (COCO)
Participants: Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Konstantinos Varelas
External collaborators: Olaf Mersmann (TH Köln), Raymond Ros (U ParisSaclay), Tea Tušar (Jozef Stefan Institute)
Benchmarking is an important task in optimization in order to assess and compare the performance of algorithms as well as to motivate the design of better solvers. We are leading the benchmarking of derivative-free solvers on difficult problems: we have been developing methodologies and testbeds and have assembled them into a platform automating the benchmarking process. This is a continuing effort that we are pursuing in the team.
7.5.1 Methodology
The main innovations and methodological ideas of the platform have been published in [11].
Building on the BBOB testbed of the COCO platform, the bbob-largescale testbed has been finalized and released, and the underlying methodology to build the testbed has been published [12].
7.5.2 The COCO platform
The COCO platform, developed at Inria first in the TAO team and then in RandOpt since 2007, aims at automating numerical benchmarking experiments and the visual presentation of their results. The platform consists of an experimental part to generate benchmarking data (in various programming languages) and a post-processing module (in Python), see Figure 1. At the interface between the two, we provide data sets from numerical experiments of 300+ algorithms and algorithm variants from various fields (quasi-Newton, derivative-free optimization, evolutionary computation, Bayesian optimization) and for various problem characteristics (noiseless/noisy optimization, single-/multiobjective optimization, continuous/mixed-integer, ...).
We have been using the platform in the past to initiate workshop papers during the ACM GECCO conference as well as to collect algorithm data sets from the entire optimization community. The next workshop in this series is going to take place in 2021.
In this context, we constantly improve and extend the software, and the year 2020 was no exception. Overall, 57 issues have been closed in 2020. One big change, made in 2020, was to allow for a continuous submission of data sets from benchmarking experiments to our data archive. Since the official start in November, we have already received five data sets. New visualizations of our biobjective functions have been made available as well on a new webpage at https://
8 Bilateral contracts and grants with industry
8.1 Bilateral contracts with industry
 Contract with the company Storengy partially funding the PhD thesis of Cheikh Touré (2017–2020)
 Contract with Thales in the context of the CIFRE PhD thesis of Konstantinos Varelas (2017–2020)
 Contract with PSA (now Stellantis) in the context of the CIFRE PhD thesis of MarieAnge Dahito (2018–2021)
 Contract with Thales for the CIFRE PhD thesis of Paul Dufossé (2018–2021)
9 Partnerships and cooperations
9.1 International initiatives
9.1.1 Inria international partners
Informal international partners
 Tea Tušar, Jozef Stefan Institute (JSI), Ljubljana, Slovenia
 Tobias Glasmachers, Ruhr Universität Bochum, Bochum, Germany
 Youhei Akimoto, University of Tsukuba, Tsukuba, Japan
9.2 National initiatives
ANR
 CIROQUO ("Consortium Industriel de Recherche en Optimisation et QUantification d'incertitudes pour les données Onéreuses"), participation as Inria Saclay/Ecole Polytechnique, together with six other academic and five industrial partners
 ANR project “Big Multiobjective Optimization (BigMO)”, Dimo Brockhoff participates in this project through the Inria team BONUS in Lille (2017–2020)
10 Dissemination
10.1 Promoting scientific activities
10.1.1 Scientific events: organisation
 Anne Auger: co-organizer of a Dagstuhl seminar on benchmarking, originally planned for 2021 and postponed.
 Anne Auger: co-organizer of a Dagstuhl seminar on the theory of randomized search heuristics, planned for 2022.
Member of the organizing committees
 Anne Auger, Dimo Brockhoff, and Nikolaus Hansen: co-organizers of the Black-Box Optimization Benchmarking (BBOB) workshop at the ACM GECCO 2021 conference (together with Peter Bosman, Tobias Glasmachers, Tea Tušar and Petr Pošík).
10.1.2 Scientific events: selection
Reviewer
 Dimo Brockhoff: ACM GECCO 2020, PPSN 2020, EMO 2021
 Nikolaus Hansen: ACM GECCO 2020
 Anne Auger: ACM GECCO 2020, PPSN 2020
10.1.3 Journal
The three permanent members are frequent reviewers for major journals in evolutionary computation. Anne Auger is a frequent reviewer for mathematical optimization journals (JOGO, SIAM Journal on Optimization). We additionally review optimization-related machine learning papers for JMLR and Machine Learning.
Member of the editorial boards
 Anne Auger, Dimo Brockhoff and Nikolaus Hansen: Associate Editor of the ACM Transactions on Evolutionary Learning and Optimization
 Anne Auger and Nikolaus Hansen: Associate Editor of the Evolutionary Computation Journal
 Anne Auger is guest editor of an Algorithmica special issue of papers selected from the ACM GECCO 2018 theory track
 Anne Auger is guest editor of the IEEE Transactions on Evolutionary Computation special issue on Theoretical Foundations of Evolutionary Computation
10.1.4 Invited talks
 Dimo Brockhoff: “Evolutionary Computation: From Biological to Simulated Evolution and Into Everyday Life”, keynote talk at the 18th Shanghai Forum on Software Trade, Shanghai, November 2020, online
 Nikolaus Hansen: “Ten+ Years of Benchmarking with COCO/BBOB” at the Lorentz Center workshop Benchmarked: Optimization Meets Machine Learning, November 2020.
10.1.5 Dagstuhl seminar invitations
 Dimo Brockhoff invited at the Dagstuhl Seminar 20031 on “Scalability in Multiobjective Optimization”, January 2020
10.1.6 Lorentz Center seminar invitations
 Anne Auger, Dimo Brockhoff, and Nikolaus Hansen invited to the Lorentz Center seminar “Benchmarked: Optimization Meets Machine Learning”, November 2020, held online due to COVID-19.
10.1.7 Leadership within the scientific community
 Anne Auger: elected member of the ACM SIGEVO executive board
 Anne Auger: member of the ACM SIGEVO board
10.1.8 Research administration
 Anne Auger, member of the BCEP of InriaSaclay.
 Anne Auger, member of the conseil de laboratoire of the CMAP, Ecole Polytechnique.
 Dimo Brockhoff, member of the Commission de Développement Technologique (CDT) at Inria Saclay
10.2 Teaching  Supervision  Juries
10.2.1 Teaching
 Master: Anne Auger, “Optimization without gradients”, 22.5h ETD, niveau M2 (Optimization Master of Paris-Saclay)
 Master: Dimo Brockhoff, “Algorithms and Complexity”, 36h ETD, niveau M1/M2 (joint MSc with ESSEC “Data Sciences & Business Analytics”), CentraleSupélec, France
 Master: Anne Auger and Dimo Brockhoff, “Introduction to Optimization”, 31.5h ETD, niveau M2 (MSc Informatique - Parcours Apprentissage, Information et Contenu (AIC)), U. Paris-Saclay, France
 Master: Anne Auger, “Math for Data Science”, 31.5h ETD, niveau M1 (MSc Informatique - Parcours Apprentissage, Information et Contenu (AIC)), U. Paris-Saclay, France
10.2.2 Supervision
 PhD in progress: Konstantinos Varelas, “Large-Scale Optimization, CMA-ES and Radar Applications” (Dec. 2017–Feb. 2021), supervisors: Anne Auger and Dimo Brockhoff
 PhD in progress: Cheikh Touré, “Linearly Convergent Multiobjective Stochastic Optimizers” (Dec. 2017–), supervisors: Anne Auger and Dimo Brockhoff
 PhD in progress: Paul Dufossé, “Constrained Optimization and Radar Applications” (Oct. 2018–), supervisor: Nikolaus Hansen
 PhD in progress: Marie-Ange Dahito, “Mixed-Integer Black-box Optimization for Multiobjective Problems in the Automotive Industry” (Jan. 2019–), supervisors: Dimo Brockhoff and Nikolaus Hansen
 PhD in progress: Eugénie Marescaux, “Theoretical Analysis of Convergence of Multiobjective Solvers” (2019–), supervisor: Anne Auger
 PhD in progress: Alann Cheral, “Black-box optimization for the optimization of hyperspectral bandwidth for anomaly detection” (2019–), supervisor: Anne Auger
 Jingyun Yang, Ecole Polytechnique, since June 2020
 Armand Gissler, École normale supérieure Paris-Saclay, from April 2020 till August 2020
 Baptiste Plaquevent-Jourdain, ENSTA Paris, from March 2020 till August 2020
10.2.3 Juries
 Anne Auger: co-head of the SIGEVO PhD award scientific committee.
 Anne Auger: member of the comité de sélection for a professorship in Calais (COS PR 0054).
 Anne Auger: member of the comité de sélection for a tenure-track professorship at LIMOS, Clermont-Ferrand.
 Anne Auger: member of the comité de sélection for a tenure-track assistant professor position in machine learning in the Applied Math department of Ecole Polytechnique.
11 Scientific production
11.1 Major publications
 1. Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies. Evolutionary Computation 28(3), 2020, 405-435.
 2. A SIGEVO impact award for a paper arising from the COCO platform. ACM SIGEVOlution 13(4), January 2021, 1-11.
 3. Verifiable Conditions for the Irreducibility and Aperiodicity of Markov Chains by Analyzing Underlying Deterministic Models. Bernoulli 25(1), December 2018, 112-147.
 4. A Global Surrogate Assisted CMA-ES. In GECCO 2019 - The Genetic and Evolutionary Computation Conference, ACM, Prague, Czech Republic, July 2019, 664-672.
 5. COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting. Optimization Methods and Software 36(1), 2020, 114-144 (arXiv:1603.08785).
 6. Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. Journal of Machine Learning Research 18(18), 2017, 1-65.
 7. Uncrowded Hypervolume Improvement: COMO-CMA-ES and the Sofomore framework. In GECCO 2019 - The Genetic and Evolutionary Computation Conference, Prague, Czech Republic, July 2019. (Part of this research has been conducted in the context of a research collaboration between Storengy and Inria.)
11.2 Publications of the year
International journals
 8. Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions. Theoretical Computer Science 832, 2020, 42-67.
 9. Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies. Evolutionary Computation 28(3), 2020, 405-435.
 10. On Invariance and Linear Convergence of Evolution Strategies with Augmented Lagrangian Constraint Handling. Theoretical Computer Science 832, 2020, 68-97.
 11. COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting. Optimization Methods and Software 36(1), 2020, 114-144.
 12. Benchmarking large-scale continuous optimizers: the bbob-largescale testbed, a COCO software guide and beyond. Applied Soft Computing, 2020.
International peerreviewed conferences
 13. Phased-Array Antenna Pattern Optimization with Evolution Strategies. In 2020 IEEE International Radar Conference, Florence, Italy, September 2020.
 14. Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso. In PPSN 2020 - Sixteenth International Conference on Parallel Problem Solving from Nature, Leiden, Netherlands, September 2020.
Conferences without proceedings
 15. AI-Augmented Multi-Function Radar Engineering with Digital Twin: Towards Proactivity. In 2020 IEEE International Radar Conference, Florence, Italy, September 2020.
Reports & preprints
 16. Global Linear Convergence of Evolution Strategies on More Than Smooth Strongly Convex Functions. Preprint, October 2020.
 17. Finding Optimal Pulse Repetition Intervals with Many-objective Evolutionary Algorithms. Preprint, November 2020.
 18. Scaling-invariant functions versus positively homogeneous functions. Preprint, January 2021.
11.3 Other
Scientific popularization
 19. A SIGEVO impact award for a paper arising from the COCO platform. ACM SIGEVOlution 13(4), January 2021, 1-11.
11.4 Cited publications
 20. Online model selection for restricted covariance matrix adaptation. In International Conference on Parallel Problem Solving from Nature, Springer, 2016, 3-13.
 21. Projection-based restricted covariance matrix adaptation for high dimension. In Proceedings of the 2016 Genetic and Evolutionary Computation Conference, ACM, 2016, 197-204.
 22. Towards an Augmented Lagrangian Constraint Handling Approach for the (1+1)-ES. In Genetic and Evolutionary Computation Conference, ACM Press, 2015, 249-256.
 23. Linearly Convergent Evolution Strategies via Augmented Lagrangian Constraint Handling. In Foundations of Genetic Algorithms (FOGA), 2017.
 24. Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains. SIAM Journal on Optimization 26(3), 2016, 1589-1624.
 25. Algorithms for Hyper-Parameter Optimization. In Neural Information Processing Systems (NIPS 2011), 2011.
 26. The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning. SIAM Journal on Control and Optimization 38(2), January 2000.
 27. Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press, 2008.
 28. Constraint-handling techniques used with evolutionary algorithms. In Proceedings of the 2008 Genetic and Evolutionary Computation Conference, ACM, 2008, 2445-2466.
 29. Covariance Matrix Adaptation Evolution Strategy for Multidisciplinary Optimization of Expendable Launcher Families. In 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, Proceedings, 2010.
 30. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ, 1983.
 31. Principled design of continuous stochastic search: From theory to practice. In Theory and Principled Methods for the Design of Metaheuristics, Springer, 2014, 145-180.
 32. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, 2010, 1689-1696.
 33. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation 9(2), 2001, 159-195.
 34. The CMA Evolution Strategy: A Tutorial. CoRR abs/1604.00772, 2016. URL: http://arxiv.org/abs/1604.00772
 35. Testing heuristics: We have it all wrong. Journal of Heuristics 1(1), 1995, 33-42.
 36. An Evaluation of Sequential Model-based Optimization for Expensive Blackbox Functions. In GECCO (Companion) 2013, ACM, Amsterdam, The Netherlands, 2013, 1209-1216.
 37. A theoretician's guide to the experimental analysis of algorithms. Data Structures, Near Neighbor Searches, and Methodology: Fifth and Sixth DIMACS Implementation Challenges 59, 2002, 215-250.
 38. Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13(4), 1998, 455-492.
 39. Calibrating a global three-dimensional biogeochemical ocean model (MOPS-1.0). Geoscientific Model Development 10(1), 2017, 127.
 40. Stochastic Approximation and Recursive Algorithms and Applications. Applications of Mathematics, Springer, New York, 2003. URL: http://opac.inria.fr/record=b1099801
 41. Design and Optimization of an Omnidirectional Humanoid Walk: A Winning Approach at the RoboCup 2011 3D Simulation Competition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI), Toronto, Ontario, Canada, July 2012.
 42. Markov Chains and Stochastic Stability. Springer-Verlag, New York, 1993.
 43. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, accepted, 2016.
 44. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
 45. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems (NIPS 2012), 2012, 2951-2959.
 46. Uncrowded Hypervolume Improvement: COMO-CMA-ES and the Sofomore framework. In GECCO 2019 - The Genetic and Evolutionary Computation Conference, Prague, Czech Republic, July 2019. (Part of this research has been conducted in the context of a research collaboration between Storengy and Inria.)
 47. Long-term model predictive control of gene expression at the population and single-cell levels. Proceedings of the National Academy of Sciences 109(35), 2012, 14271-14276.