Section: New Results
Energy Management
Power Grids Daily Management
Participants: Isabelle Guyon, Marc Schoenauer
PhDs: Benjamin Donnot, Balthazar Donon
Collaboration: Antoine Marot, Patrick Panciatici (RTE), Olivier Teytaud (Facebook)
Benjamin Donnot's CIFRE PhD with RTE [12] dealt with Power Grid safety: the goal is to assess in real time the so-called "(n-1) safety" (see Section 4.2) of possible recovery actions that modify the topology of the grid after a problem has occurred somewhere on the grid. However, the HADES simulator, which computes the power flows in the whole network, is far too slow to simulate in real time all n-1 possible failures of a tentative topology. A simplified simulator is also available, but its accuracy is too poor to give good results. Deep surrogate models can be trained off-line for a given topology, based on the results of the slow simulator, with high enough accuracy, but training as many models as there are possible failures (i.e., n-1) obviously does not scale up: the topology of the grid must be an input of the learned model, allowing the power flows to be computed instantly, at least for grid configurations close to the usual running state of the grid. A standard approach is the one-hot encoding of the topology, where n additional boolean inputs are added to the neural network, encoding the presence or absence of each line. Nevertheless, this approach generalizes poorly to topologies outside the distribution of those used for training.
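The following sketch illustrates this one-hot baseline, assuming a PyTorch implementation; the model, its dimensions and the variable names are hypothetical, and only meant to show how the line statuses enter the network as extra boolean inputs.

```python
# Illustrative sketch of the one-hot baseline (not RTE's actual surrogate):
# the n line statuses are appended to the injections as boolean inputs,
# and the network regresses the n power flows.
import torch
import torch.nn as nn

n_lines = 308        # hypothetical grid size
n_injections = 64    # hypothetical number of injection inputs

class OneHotFlowSurrogate(nn.Module):
    def __init__(self, n_injections, n_lines, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_injections + n_lines, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_lines),              # one predicted flow per line
        )

    def forward(self, injections, line_status):
        # line_status: 1.0 if the line is in service, 0.0 if it is disconnected
        return self.net(torch.cat([injections, line_status], dim=-1))

model = OneHotFlowSurrogate(n_injections, n_lines)
flows = model(torch.randn(32, n_injections), torch.ones(32, n_lines))
```

Since the topology only appears as extra inputs, nothing constrains such a network to behave sensibly for status vectors unseen during training, which is the generalization issue mentioned above.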
An original "guided dropout" approach was first proposed [87], in which the topology directly acts on the connections of the deep network: a missing line suppresses some connections. Whereas the standard dropout method disconnect random connections for every batch, in order to improve the generalization capacity of the network, the "guided dropout" method removes some connections based on the actual topology of the network. This approach is experimentally validated against the one-hot encoding on small subsets of the French grid (up to 308 lines). Interestingly, and rather surprisingly, even though only examples with a single disconnected line are used in the training set, the learned model is able of some additive generalization, and predictions are also accurate enough in the case 2 lines are disconnected. The guided dropout approach was later robustified [86] by learning to rapidly rank higher order contingencies including all pairs of disconnected lines, in order to prioritize the cases where the slow simulator is run: Another neural network is trained to rank all (n-1) and (n-2) contingencies in decreasing order of presumed severity.
The guided dropout approach has been further extended and generalized with the LEAP (Latent Encoding of Atypical Perturbation) architecture [30], [17], by crossing out connections between the encoder and the decoder parts of the ResNet architecture. LEAP then performs transfer learning over spaces of distributions of topology perturbations, allowing more complex actions on the topology to be handled, going beyond (n-1) and (n-2) perturbations by also including node splitting, a common action in the real world. The LEAP approach was theoretically studied in the case of additive perturbations, and experimentally validated on an actual sub-grid of the French grid with 46 consumption nodes, 122 production nodes, 387 lines and 192 substations.
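The sketch below gives one possible reading of such an architecture, with hypothetical names and dimensions (again in PyTorch, and not the exact architecture of [30], [17]): a perturbation vector tau acts between the encoder and the decoder, gating a residual modification of the latent code.

```python
# Hedged sketch of a LEAP-style network: the topology perturbation acts in the
# latent space, between the encoder and the decoder.
import torch
import torch.nn as nn

class LeapNet(nn.Module):
    def __init__(self, n_in, n_out, n_perturb, n_latent=128):
        super().__init__()
        self.n_perturb, self.n_latent = n_perturb, n_latent
        self.encoder = nn.Sequential(nn.Linear(n_in, n_latent), nn.ReLU())
        self.leaps = nn.Linear(n_latent, n_perturb * n_latent, bias=False)
        self.decoder = nn.Sequential(nn.Linear(n_latent, n_latent), nn.ReLU(),
                                     nn.Linear(n_latent, n_out))

    def forward(self, x, tau):
        # tau: (batch, n_perturb) encodes the perturbation of the reference topology
        # (e.g. one component per disconnected line or node-splitting action).
        h = self.encoder(x)
        leaps = self.leaps(h).view(-1, self.n_perturb, self.n_latent)
        h = h + (tau.unsqueeze(-1) * leaps).sum(dim=1)   # gated latent "leap"
        return self.decoder(h)

model = LeapNet(n_in=64, n_out=387, n_perturb=16)      # hypothetical sizes
y = model(torch.randn(8, 64), torch.zeros(8, 16))      # tau = 0: reference topology
```

When tau = 0, the latent leap vanishes and the model reduces to a plain encoder-decoder for the reference topology, which is what makes transfer across perturbation distributions natural in this setting.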
LEAP is also the first part of Balthazar Donon's on-going PhD, which currently develops a completely different approach to approximating the power flows on a grid, namely Graph Neural Networks (GNNs). From a Power Grid perspective, GNNs can be viewed as including the topology in the very structure of the neural network, and learning some generic transfer function amongst nodes that will perform well on any topology. First results [31] use a loss based on a large dataset of actual power flows computed using the slow HADES simulator. The results indeed generalize to topologies very different from the ones used for training, in particular power grids of very different sizes. On-going work [56] removes the need to run HADES thanks to a loss that directly aims at minimizing the violation of Kirchhoff's law on all lines.
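The sketch below illustrates both ingredients under simplifying assumptions (PyTorch, a toy message-passing scheme, hypothetical feature sizes; this is not the architecture of [31], [56]): the same message and update functions are shared by all nodes and lines, so the topology only enters through the graph structure, and an illustrative physics loss penalizes the residual of Kirchhoff's current law at each node instead of imitating simulator outputs.

```python
# Hedged sketch of a GNN power-flow surrogate with a physics-based loss.
import torch
import torch.nn as nn

class GridGNN(nn.Module):
    def __init__(self, d=32, n_iter=10):
        super().__init__()
        self.d, self.n_iter = d, n_iter
        self.msg = nn.Sequential(nn.Linear(2 * d + 1, d), nn.ReLU())   # +1 for line reactance
        self.upd = nn.Sequential(nn.Linear(2 * d + 1, d), nn.ReLU())   # +1 for node injection
        self.flow = nn.Linear(2 * d + 1, 1)                            # one flow per line

    def forward(self, edges, reactance, injection):
        # edges: (n_lines, 2) long tensor of origin/extremity node indices,
        # reactance: (n_lines, 1), injection: (n_nodes, 1).
        h = torch.zeros(injection.shape[0], self.d)
        for _ in range(self.n_iter):   # weights shared by every node and line
            m = self.msg(torch.cat([h[edges[:, 0]], h[edges[:, 1]], reactance], dim=-1))
            # For brevity, messages flow only from the origin to the extremity of each line.
            agg = torch.zeros_like(h).index_add_(0, edges[:, 1], m)
            h = self.upd(torch.cat([h, agg, injection], dim=-1))
        return self.flow(torch.cat([h[edges[:, 0]], h[edges[:, 1]], reactance], dim=-1))

def kirchhoff_loss(flows, edges, injection):
    # Penalize the violation of Kirchhoff's current law at every node: the net
    # power flowing out of a node should match its injection (sign conventions
    # are illustrative, and line losses are ignored here).
    balance = torch.zeros_like(injection)
    balance.index_add_(0, edges[:, 0], flows)    # flow leaves the origin node
    balance.index_add_(0, edges[:, 1], -flows)   # and enters the extremity node
    return ((balance - injection) ** 2).mean()

edges, reac, inj = torch.tensor([[0, 1], [1, 2], [2, 0]]), torch.rand(3, 1), torch.randn(3, 1)
loss = kirchhoff_loss(GridGNN()(edges, reac, inj), edges, inj)   # toy 3-node loop
```

Because the loss only involves the grid equations and the predicted flows, no labelled simulator run is needed, which is the point of the HADES-free training mentioned above.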
Local Grids Optimization and the Modeling of Worst-case Scenarios
Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag
PhDs: Victor Berger, Herilalaina Rakotoarison; Post-doc: Berna Batu
Collaboration: Vincent Renaut (Artelys)
One of the goals of the ADEME Next project, in collaboration with the SME Artelys (see also Section 4.2), is the sizing and capacity design of regional power grids. Though smaller than the national grid, regional and urban grids nevertheless raise scaling issues, in particular because much more fine-grained information must be taken into account for their design and for the prediction of their growth.
Regarding the design of such grids, and provided accurate predictions of consumption are available (see below), off-the-shelf graph optimization algorithms can be used; Berna Batu is gathering different approaches. Herilalaina Rakotoarison's PhD tackles the automatic tuning of their parameters (see Section 7.2.1): while the Mosaic algorithm has been validated on standard AutoML benchmarks [40], its application to Knitro, Artelys' in-house large-scale optimizer, is on-going, and is being compared to the state of the art in parameter tuning (confidential deliverable).
In order to get accurate consumption predictions, V. Berger's PhD tackles the identification of the peak of energy consumption, defined as the level of consumption that is reached during at least a given duration with a given probability, depending on consumers (profiles and contracts) and weather conditions. The peak identification problem is currently tackled using Monte-Carlo simulations based on consumer profile- and weather-dependent individual models, at a high computational cost. The challenge is to exploit the individual models to train a generative model aimed at sampling the collective consumption distribution in the quantiles with the highest peak consumption. The concept of Compositional Variational Auto-Encoder was proposed: it is amenable to multi-ensemblist operations (addition or subtraction of elements in the composition), enabled by the invariance and generality of the whole framework w.r.t., respectively, the order and the number of the elements. It has first been tested on synthetic problems [26].
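As an illustration of the peak definition and of the cost of the Monte-Carlo approach, the sketch below estimates the peak with a toy generator standing in for the individual consumer models; the reading of the definition (total rather than consecutive time above the level), the numbers and the function names are all assumptions made for the example.

```python
# Illustrative Monte-Carlo estimate of a peak defined as "the consumption level
# that is exceeded during at least `duration` time steps with probability `prob`".
import numpy as np

rng = np.random.default_rng(0)

def sample_aggregate_consumption(n_consumers=500, n_steps=24 * 365):
    """Toy stand-in for the profile- and weather-dependent individual models."""
    base = rng.gamma(shape=2.0, scale=1.0, size=(n_consumers, 1))
    noise = rng.normal(0.0, 0.3, size=(n_consumers, n_steps))
    return np.clip(base + noise, 0.0, None).sum(axis=0)   # aggregate load per time step

def peak_level(n_scenarios=200, duration=10, prob=0.95):
    # In each scenario, the level exceeded during at least `duration` time steps
    # is the `duration`-th largest value of the aggregate load series.
    levels = [np.sort(sample_aggregate_consumption())[-duration]
              for _ in range(n_scenarios)]
    # The peak is then the level reached with probability at least `prob`.
    return np.quantile(levels, 1.0 - prob)

print(peak_level())
```

The cost comes from having to simulate every individual consumer in every scenario; the generative model mentioned above is meant instead to sample directly in the upper quantiles of the aggregate consumption distribution.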