Section: New Results
Learning the optimal importance distribution
Participants: François Le Gland, Rudy Pastel.
This is a collaboration with Jérôme Morio (ONERA Palaiseau).
As explained in 3.3, multilevel splitting ideas can be useful even for solving some static problems, such as evaluating the (small) probability that a random variable exceeds some (extreme) threshold. Incidentally, a population of particles is available at each stage of the algorithm, distributed according to the original distribution conditioned on exceeding the current level. Furthermore, this conditional distribution is known to be precisely the optimal importance distribution for evaluating the probability of exceeding the current level. In other words, the optimal importance distribution is learned automatically by the algorithm, as a by–product, and can therefore be used to produce an importance sampling estimate with very low variance. Building on this idea, several other iterative methods that learn the optimal importance distribution at each stage of the algorithm have been studied, such as nonparametric adaptive importance sampling (NAIS) [69] or the cross–entropy (CE) method [65], [33]. These methods have been applied to a practical example from the aerospace industry: the evaluation of collision probabilities between two satellites, or between a satellite and space debris.
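To make the splitting mechanism concrete, the sketch below estimates a small Gaussian exceedance probability P(X > threshold) by adaptive multilevel splitting: at each stage, an empirical quantile sets the next level, the surviving particles are cloned and moved by a few Metropolis steps, and the resulting population is (approximately) distributed under the conditional law given the current level, i.e. the optimal importance distribution mentioned above. This is a minimal illustrative sketch, not the algorithm actually used in this work; the function name, parameter values, and the Gaussian random-walk Metropolis move are all assumptions made for the example.

```python
import math
import random

def splitting_estimate(threshold, n=2000, p0=0.2, mcmc_steps=10, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by adaptive multilevel
    splitting (subset simulation).  Illustrative sketch only."""
    rng = random.Random(seed)
    # Stage 0: a population drawn from the original distribution.
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(100):  # cap on the number of stages
        particles.sort()
        k = int(n * (1.0 - p0))
        level = particles[k]  # empirical (1 - p0)-quantile of the population
        if level >= threshold:
            # Last stage: fraction of the population beyond the true threshold.
            return prob * sum(1 for x in particles if x > threshold) / n
        survivors = particles[k:]
        prob *= len(survivors) / n  # conditional probability of this stage
        # Clone the survivors back up to n particles; each clone is then moved
        # by a few Metropolis steps targeting N(0, 1) conditioned on X > level,
        # so the population stays distributed under that conditional law --
        # the by-product importance distribution discussed in the text.
        particles = [rng.choice(survivors) for _ in range(n)]
        for i in range(n):
            x = particles[i]
            for _ in range(mcmc_steps):
                y = x + rng.gauss(0.0, 0.5)
                # Gaussian Metropolis acceptance ratio, restricted to y > level.
                if y > level and rng.random() < math.exp((x * x - y * y) / 2.0):
                    x = y
            particles[i] = x
    raise RuntimeError("threshold not reached within the stage cap")
```

For threshold = 4.0 the exact probability is about 3.2e-5, far too small for a plain Monte Carlo estimate with n = 2000 samples, whereas the staged estimate reaches it through a product of moderate conditional probabilities.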