

Section: New Results

Analysis of adaptive Stochastic Optimizers

New ODE method for proving the geometric convergence of adaptive Stochastic Optimizers

The ODE method is a standard technique for analyzing the convergence of stochastic algorithms defined as stochastic approximations of an ODE. In a nutshell, the convergence of the algorithm derives from the stability of the ODE and from a control of the error between the solution of the ODE and the trajectory of the stochastic algorithm. We have been developing a new ODE method in order to prove the geometric convergence of stochastic approximation algorithms that derive from the family of adaptive stochastic optimization algorithms. Standard theory did not apply in this context, as the adapted state variables typically converge to the boundary of the state-space domain, where an infinite number of points are equilibrium points of the ODE [7].
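To illustrate the basic setting (not the new method of [7]), here is a minimal Python sketch of a stochastic approximation scheme as a noisy Euler discretization of an ODE; the objective, step sizes, and noise level are illustrative choices, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of f(x) = x^2 / 2; the associated ODE is dx/dt = -x.
    return x

# Stochastic approximation: x_{k+1} = x_k - a_k * (grad(x_k) + noise),
# i.e. a noisy Euler discretization of the ODE dx/dt = -grad(x).
x, t = 2.0, 0.0
for k in range(1, 10001):
    a_k = 1.0 / k  # steps with sum a_k = inf, sum a_k^2 < inf
    x -= a_k * (grad(x) + rng.normal(scale=0.1))
    t += a_k       # elapsed "ODE time"

# Exact ODE solution started from the same point: x(t) = 2 * exp(-t).
x_ode = 2.0 * np.exp(-t)
```

Both the iterate and the ODE solution approach the equilibrium x* = 0; the ODE method turns the stability of that equilibrium, plus a bound on the discretization/noise error, into a convergence proof for the stochastic algorithm.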

Convergence and convergence rate analysis of the (1+1)-ES with one-fifth success rule

When analyzing adaptive stochastic optimizers, one is typically interested in proving linear convergence and in investigating how the convergence rate depends on the dimension. We have greatly simplified the analysis of the convergence and of the convergence rate of the (1+1)-ES with one-fifth success rule on the sphere function. We have shown that the analysis follows from applying a simple "drift" theorem, and have consequently established a hitting time of Θ(d log(1/ϵ)) to reach an ϵ-ball around the optimum, akin to linear convergence with a convergence rate scaling inversely with the dimension [4].
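The algorithm itself is short; below is a minimal Python sketch of the (1+1)-ES with one-fifth success rule on the sphere function. The particular step-size update factors are one common textbook choice, not the exact constants analyzed in [4].

```python
import numpy as np

def one_plus_one_es(dim=10, sigma=1.0, budget=5000, seed=1):
    """(1+1)-ES with the one-fifth success rule on f(x) = ||x||^2.

    Illustrative sketch: the step size is increased on success and
    decreased on failure so that the empirical success rate is pushed
    toward 1/5 (at a 1/5 success rate the expected log step-size
    change below is zero: (1/5)(1/3) + (4/5)(-1/12) = 0).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    f = x @ x
    for _ in range(budget):
        y = x + sigma * rng.normal(size=dim)   # sample one offspring
        fy = y @ y
        if fy <= f:                            # success: accept, enlarge step
            x, f = y, fy
            sigma *= np.exp(1.0 / 3.0)
        else:                                  # failure: shrink step
            sigma *= np.exp(-1.0 / 12.0)
    return f
```

Plotting log f against the iteration count exhibits the linear convergence discussed above, with a slope that flattens roughly in proportion to 1/d as the dimension grows.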

Quality-gain Analysis on Convex-quadratic functions

We have analyzed the expected decrease of the function value (related to the convergence rate) of Evolution Strategies with weighted recombination on convex-quadratic functions. We have derived different bounds and limit expressions that allow one to derive the optimal recombination weights and the optimal step-size, and found that the optimal recombination weights are independent of the Hessian of the objective function. We have moreover characterized the dependence of the optimal parameters on the dimension and the population size [1].
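For concreteness, here is a minimal Python sketch of an Evolution Strategy with weighted recombination on a convex-quadratic function. The log-rank weights used below are a common choice (as popularized by CMA-ES) and, notably, do not depend on the Hessian H; the fixed geometric step-size decay is a crude illustrative stand-in for a proper step-size adaptation rule, and none of these constants are taken from [1].

```python
import numpy as np

def weighted_recomb_es(H, iters=2000, lam=10, seed=2):
    """Minimal (mu/mu_w, lambda)-ES sketch with weighted recombination
    on the convex quadratic f(x) = 0.5 * x^T H x."""
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    mu = lam // 2
    # Log-rank recombination weights: independent of H.
    w = np.log(lam / 2 + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                          # positive weights summing to 1
    x, sigma = rng.normal(size=dim), 1.0
    for _ in range(iters):
        z = rng.normal(size=(lam, dim))   # lambda standard-normal samples
        y = x + sigma * z                 # offspring
        f = 0.5 * np.einsum('ij,jk,ik->i', y, H, y)
        order = np.argsort(f)[:mu]        # indices of the mu best offspring
        x = x + sigma * (w @ z[order])    # weighted recombination step
        sigma *= 0.995                    # crude fixed step-size decay
    return 0.5 * x @ H @ x

# Example: ill-conditioned diagonal quadratic.
print(weighted_recomb_es(np.diag(np.arange(1.0, 6.0))))
```

Because the weights depend only on the ranking of the offspring, the same weight vector is used whatever the Hessian, which is consistent with the Hessian-independence result stated above.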