

Section: New Results

Optimal Decision Making under Uncertainty

The Tao-uct-sig works mainly on mathematical programming tools for power systems. In particular, we advocate a data science approach in order to reduce the model error, which in most cases is far more critical than the optimization error. Real data are the best way to handle uncertainties. Our main works are as follows:

  • Noisy optimization. In the context of stochastic uncertainties, noisy optimization handles the model error by simulation-based optimization. Our results include:

    • A formalization of noisy optimization in continuous domains, a setting often poorly modeled in the evolutionary computation community [64], [6]. We proposed heuristic rules for reaching slope -1/2 in log-log scale [34] (illustrated by the sketch following this list), showed that in some settings the slope -1 (classical in mathematical programming) can be recovered in evolution strategies (unpublished: http://www.lri.fr/~teytaud/mca.pdf ), and provided complexity bounds [20].

    • An extension of portfolio algorithms to noisy optimization. Portfolio methods are already common in combinatorial optimization and some work exists in the continuous case; this is the first such work in the noisy case [8].

    • Pragmatic approaches to noisy optimization, aimed at improving robustness and at taking human expertise into account, including sieve methods for noisy optimization [27], paired optimization [35], and the combination of various policies [25].

    • Upper bounds on noisy optimization in discrete domains [5].
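
As a concrete illustration of the log-log slopes mentioned above, the minimal sketch below (ours, not code from the cited papers) runs a (1+1) evolution strategy with a growing resampling schedule on a noisy sphere function and fits the slope of log(simple regret) against log(number of evaluations); the resampling schedule, step-size rule, and budget are illustrative assumptions.

```python
# Minimal sketch: (1+1) evolution strategy with resampling on a noisy sphere,
# used to estimate the empirical log-log slope of simple regret vs. evaluations.
import numpy as np

def noisy_sphere(x, rng):
    """f(x) = ||x||^2 plus additive Gaussian noise (stand-in for a stochastic simulator)."""
    return float(np.dot(x, x)) + rng.normal(scale=1.0)

def one_plus_one_es(dim=5, budget=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x, sigma = rng.normal(size=dim), 1.0
    evals, history, iteration = 0, [], 0
    while evals < budget:
        iteration += 1
        k = iteration                              # resample k times to average out noise
        y = x + sigma * rng.normal(size=dim)       # Gaussian mutation of the parent
        fx = np.mean([noisy_sphere(x, rng) for _ in range(k)])
        fy = np.mean([noisy_sphere(y, rng) for _ in range(k)])
        evals += 2 * k
        if fy <= fx:                               # accept and enlarge the step size
            x, sigma = y, sigma * 1.1
        else:                                      # reject and shrink the step size
            sigma *= 0.9
        history.append((evals, float(np.dot(x, x))))   # true simple regret ||x||^2
    return np.array(history)

hist = one_plus_one_es()
logs = np.log(hist[hist[:, 1] > 0])
slope = np.polyfit(logs[:, 0], logs[:, 1], 1)[0]   # fitted slope in log-log scale
print(f"empirical log-log slope of regret vs. evaluations: {slope:.2f}")
```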

  • Quasi-random numbers. Continuous optimization is a key component of our work, hence we improve evolution strategies (valued for their simplicity and convenience) with quasi-random numbers, showing that this is beneficial even in simple cases [52] and yields great improvements in highly multimodal cases (unpublished, http://www.lri.fr/~teytaud/qrr.pdf ); a sketch of quasi-random mutations follows this item. We also developed criteria and testbeds, pointing out some key issues not widely studied in the optimization literature [26]. Finally, we extended our earlier results in parallel optimization to additional algorithms [30], and used cutting planes as in the ellipsoid method, thereby combining the best of both worlds: fast rates from cutting-plane methodologies and the parallel behavior of evolution strategies [36].
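
The sketch below illustrates, under simplifying assumptions, one way to plug quasi-random numbers into an evolution strategy: Sobol points (scipy.stats.qmc) are mapped through the inverse normal CDF to quasi-random Gaussian mutations inside a naive (mu, lambda) strategy on a sphere function, and contrasted with plain pseudo-random mutations. The population sizes and the step-size rule are placeholders, not those of the cited works.

```python
# Illustrative sketch: quasi-random Gaussian mutations inside a (mu, lambda)
# evolution strategy, contrasted with pseudo-random mutations on a sphere.
import numpy as np
from scipy.stats import norm, qmc

def qr_gaussians(sampler, n):
    """Draw n quasi-random standard Gaussian vectors from a Sobol sampler."""
    u = sampler.random(n)                          # low-discrepancy points in [0, 1)^d
    return norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))  # map to Gaussian via inverse CDF

def mu_lambda_es(dim=10, mu=4, lam=16, iters=300, quasi=True, seed=1):
    rng = np.random.default_rng(seed)
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    sphere = lambda z: float(np.dot(z, z))
    x, sigma = rng.normal(size=dim), 1.0
    for _ in range(iters):
        steps = qr_gaussians(sampler, lam) if quasi else rng.normal(size=(lam, dim))
        children = x + sigma * steps
        best = children[np.argsort([sphere(c) for c in children])[:mu]]
        x = best.mean(axis=0)                      # recombine the mu best children
        sigma *= 0.95                              # naive step-size decay (assumption)
    return sphere(x)

print("quasi-random mutations :", mu_lambda_es(quasi=True))
print("pseudo-random mutations:", mu_lambda_es(quasi=False))
```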

  • Dynamical problems. The dynamical nature of power systems is critical, as transient regimes and ramping constraints are ubiquitous in unit commitment and dispatch. Optimizing policies, with their temporal component, is challenging once high dimension and nonlinearities are taken into account. Games provide a convenient testbed for experiments and are used in several of our works. We provided:

    • An original algorithm for learning opening books, based on an unexpected use of random seeds [32]. The principle is to randomly sample policies by modifying the random seed: for any stochastic policy, we generate thousands of deterministic policies (by setting the random seed to arbitrary values) and select the best ones. This applies to games (always the most convincing application for a proof of concept) and to any control problem where stochastic policies are available; a toy sketch of this seed-selection principle is given after this list.

    • An extension of the previous work, dynamically adapting the probability distribution to specific positions [51]. This provides an MCTS variant without the scalability limitations of MCTS, and may lead to many future developments.
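
The toy sketch below makes the random-seed principle concrete; it is our illustration, not code from [32] or [51]. A stochastic controller becomes deterministic once its seed is fixed, each candidate seed is scored by simulation on a toy noisy linear system with quadratic cost, and the best seeds are retained; the controller, dynamics, and budgets are illustrative assumptions.

```python
# Toy sketch of the random-seed trick: fix the seed of a stochastic policy to
# obtain a deterministic policy, score candidate seeds by simulation, keep the best.
import numpy as np

def stochastic_policy(state, rng):
    """Toy noisy controller: proportional feedback plus seeded exploration noise."""
    return -0.5 * state + 0.1 * rng.normal()

def score_seed(seed, horizon=50, episodes=20):
    """Average return of the deterministic policy obtained by fixing `seed`."""
    total = 0.0
    for ep in range(episodes):
        policy_rng = np.random.default_rng(seed)      # fixed seed => deterministic policy
        env_rng = np.random.default_rng(10_000 + ep)  # environment noise stays random
        state, ret = env_rng.normal(), 0.0
        for _ in range(horizon):
            action = stochastic_policy(state, policy_rng)
            state = 0.9 * state + action + 0.05 * env_rng.normal()
            ret -= state ** 2                         # quadratic cost, to be maximized
        total += ret
    return total / episodes

candidate_seeds = range(200)                          # thousands of seeds in the real setting
scores = {s: score_seed(s) for s in candidate_seeds}
best_seeds = sorted(scores, key=scores.get, reverse=True)[:5]
print("best seeds:", best_seeds)
```

The same selection loop applies unchanged to any simulator and any stochastic policy whose randomness is driven by a controllable seed, which is what makes the trick generic.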