Section: New Results

Energy Management

Power Grids Daily Management

Participants: Isabelle Guyon, Marc Schoenauer

PhDs: Benjamin Donnot, Balthazar Donon, Herilalaina Rakotoarison

Collaboration: Antoine Marot, Patrick Panciatici (RTE), Olivier Teytaud (Facebook)

In the context of power grid safety (Benjamin Donnot's CIFRE PhD with RTE, to be defended in February 2019), the goal is to assess in real time the so-called "(n-1)" safety (see Section 4.2) of possible recovery actions after some problem has occurred somewhere on the grid. However, the simulator that computes the power flows in the whole network is far too slow to simulate all n-1 possible failures in real time. A simplified simulator is also available, but its accuracy is too poor to give useful results. Deep surrogate models can be trained off-line on the results of the slow simulator with high enough accuracy, but training one model per possible failure (i.e., n-1 models) obviously does not scale up: the topology of the grid must be an input of the learned model, allowing it to instantly compute the power flows, at least for grid configurations close to the usual running state of the grid.

A standard approach is the one-hot encoding of the topology, where n additional boolean inputs are added to the neural network, encoding the presence or absence of each line. An original "guided dropout" approach was proposed [24], in which the topology directly acts on the connections of the deep network: a missing line suppresses some connections. Whereas standard dropout disconnects random connections for every batch in order to improve the generalization capacity of the network, guided dropout removes connections based on the actual topology of the grid. This approach was experimentally validated against the above-mentioned approaches on small subsets of the French grid (up to 308 lines). Interestingly, and rather surprisingly, even though only examples with a single disconnected line are used in the training set, the learned model is capable of some additive generalization: predictions remain accurate enough when 2 lines are disconnected.
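The gating mechanism can be illustrated as follows. This is a minimal numpy sketch under toy assumptions, not RTE's actual architecture: the layer sizes, the `units_per_line` parameter, and the mapping from lines to hidden units are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_lines = 5          # lines in the toy grid
units_per_line = 4   # hidden units gated by each line (illustrative choice)
n_hidden = n_lines * units_per_line

W1 = rng.normal(size=(n_lines, n_hidden))   # injections -> hidden layer
W2 = rng.normal(size=(n_hidden, n_lines))   # hidden layer -> predicted flows

def guided_forward(injections, line_status):
    """line_status[i] = 1 if line i is in service, 0 if disconnected."""
    h = np.tanh(injections @ W1)
    # Guided dropout: the mask is dictated by the grid topology,
    # not drawn at random as in standard dropout.
    mask = np.repeat(line_status, units_per_line)
    return (h * mask) @ W2

x = rng.normal(size=n_lines)                          # toy injections
full = guided_forward(x, np.ones(n_lines))            # complete grid
nm1 = guided_forward(x, np.array([1, 1, 0, 1, 1]))    # "(n-1)": line 2 out
```

The same weights serve every topology: disconnecting a line only changes the mask, which is what makes a single trained model usable across the n-1 contingencies.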
The guided dropout approach was later made more robust [23] by learning to rapidly rank higher-order contingencies, including all pairs of disconnected lines, in order to prioritize the cases on which the slow simulator is run: another neural network is trained to rank all (n-1) and (n-2) contingencies in decreasing order of presumed severity.
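The resulting prioritization loop can be sketched as follows; the severity function here is a toy stand-in for the trained ranking network, and the simulation budget is an illustrative number.

```python
from itertools import combinations

n_lines = 6

def predicted_severity(outage):
    # Stand-in for the fast learned scorer: in the actual system this
    # would be the neural network trained to rank contingencies.
    return sum(outage) + 10 * len(outage)

# Enumerate all (n-1) and (n-2) contingencies: single and double outages.
contingencies = [c for k in (1, 2) for c in combinations(range(n_lines), k)]

# Rank them in decreasing order of presumed severity...
ranked = sorted(contingencies, key=predicted_severity, reverse=True)

# ...and spend the slow, accurate simulator only on the top of the list.
budget = 5
to_simulate = ranked[:budget]
```

The point of the ranking is that the expensive simulator never has to sweep all pairs: it is reserved for the contingencies the fast model flags as most severe.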

Local Grids Optimization, and the Modeling of Worst-case Scenarios

Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag

PhDs: Victor Berger, Herilalaina Rakotoarison; Post-doc: Berna Batu

Collaboration: Vincent Renaut (Artelys)

One of the goals of the ADEME Next project, in collaboration with SME Artelys (see also Section 4.2), is the sizing and capacity design of regional power grids. Though smaller than the national grid, regional and urban grids nevertheless raise scaling issues, in particular because much more fine-grained information must be taken into account for their design and for the prediction of their growth.

Provided accurate predictions of consumption (see below), off-the-shelf graph optimization algorithms can be used. Berna Batu is gathering different approaches, while Herilalaina Rakotoarison's PhD is concerned with the automatic tuning of their parameters (see Section 7.2.1). His original approach is currently applied to standard benchmarks [40], as well as to Artelys' in-house optimizer Knitro, and compared to the state of the art in parameter tuning (confidential deliverable).
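As a generic illustration of the parameter-tuning task itself (not the approach of [40], and not Knitro's actual parameters), a minimal random search over a solver's configuration space could look like this; the parameter names and the quality function are entirely hypothetical.

```python
import random

random.seed(0)

def solver_quality(params):
    # Stand-in for running the tuned optimizer on a benchmark instance
    # and measuring its performance (higher is better).
    return -(params["step"] - 0.3) ** 2 - 0.1 * (params["restarts"] - 4) ** 2

best, best_q = None, float("-inf")
for _ in range(200):
    # Sample a candidate configuration uniformly from the search space.
    cand = {"step": random.uniform(0.0, 1.0),
            "restarts": random.randint(1, 10)}
    q = solver_quality(cand)
    if q > best_q:
        best, best_q = cand, q
```

Actual tuners replace the uniform sampling with a model of which regions of the configuration space look promising, but the evaluate-and-keep-the-best loop is the common skeleton.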

In order to get accurate consumption predictions, V. Berger's PhD tackles the identification of the peak of energy consumption, defined as the level of consumption that is reached during at least a given duration with a given probability, depending on consumers (profiles and contracts) and weather conditions. The peak identification problem is currently tackled using Monte-Carlo simulations based on consumer profile- and weather-dependent individual models, at a high computational cost. The challenge is to exploit the individual models to train a generative model, aimed at sampling the collective consumption distribution in the quantiles with the highest peak consumption.
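The Monte-Carlo baseline for this peak definition can be sketched as follows, under entirely toy consumer and weather models (all sizes, distributions, and the 5% probability level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_scenarios, n_hours, n_consumers = 2000, 24, 50
duration, prob = 3, 0.05   # level sustained >= 3 h, reached with prob 5%

# Toy individual models: a base profile per consumer, plus weather-driven
# noise on the aggregated consumption of each simulated scenario.
profiles = rng.uniform(0.5, 1.5, size=n_consumers)
consumption = profiles.sum() + rng.normal(0, 2, size=(n_scenarios, n_hours))

def sustained_level(traj, d):
    # Highest level held during at least d consecutive time steps:
    # the max, over all length-d windows, of the window minimum.
    windows = np.lib.stride_tricks.sliding_window_view(traj, d)
    return windows.min(axis=1).max()

levels = np.array([sustained_level(t, duration) for t in consumption])
# Peak: the level sustained for >= `duration` hours in only a `prob`
# fraction of the simulated scenarios.
peak = np.quantile(levels, 1 - prob)
```

The cost of this baseline grows with the number of scenarios needed to populate the extreme quantiles, which is precisely what a generative model targeting those quantiles would avoid.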