Section: New Results

ML-assisted optimization

Five major contributions related to ML-assisted optimization have been achieved; they are summarized in the following. As pointed out previously in our research program, one of the major issues in surrogate-assisted optimization is how to integrate surrogates efficiently and effectively into the optimization process. This issue is addressed in the first three contributions. Another major aspect, addressed in the fourth contribution, is the investigation of surrogates within the context of combinatorial optimization. The fifth contribution focuses on landscape analysis within the context of multi-objective optimization.

  • Efficient Global Optimization Using Deep Gaussian Processes.

    Participants: A. Hebbal, E-G. Talbi and N. Melab, external collaborators: L. Brevault and M. Balesdent from ONERA (Palaiseau, Paris)

    Efficient Global Optimization (EGO) is widely used for the optimization of computationally expensive black-box functions. EGO is based on a surrogate modeling technique using Gaussian Processes (kriging). However, due to the use of a stationary covariance, kriging is not well suited for approximating non-stationary functions. Non-stationarity is generally due to the abrupt change of a physical property that often occurs in the design of launch vehicles, the subject of our collaboration with ONERA. This leads to an objective function whose smoothness varies considerably along the input space. In the spirit of deep learning using neural networks, we have investigated in [25] the integration of Deep Gaussian Processes (DGP) into the EGO framework to deal with non-stationarity. Numerical experiments are performed on analytical problems to highlight the different aspects of DGP and EGO. The experimental results show that the EGO-DGP coupling outperforms EGO-GP by a significant margin. Furthermore, the study has also highlighted some challenging issues to be investigated, including the integration of DGP in multi-objective EGO, the configuration of the network, and revisiting the training model. Ultra-scale optimization at different levels is particularly important given the large number of hyperparameters of the training model.
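    The core EGO loop can be sketched as follows: at each iteration, the surrogate's predictive mean and standard deviation are turned into an Expected Improvement (EI) score, and the candidate maximizing EI is sent to the expensive simulator. This is a minimal illustration assuming any surrogate (GP or DGP) that returns a mean/std pair; the function names are ours, not those of the code used in [25].

```python
# Minimal sketch of Expected Improvement (EI), the acquisition function
# at the heart of EGO. Any surrogate returning a predictive mean and
# standard deviation (a GP, or a deep GP as in the contribution above)
# can plug into it. Names are illustrative.
import math

def expected_improvement(mu, sigma, f_best):
    """EI of a candidate with predictive mean `mu` and std `sigma`,
    given the best objective value observed so far (minimization)."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    return (f_best - mu) * cdf + sigma * pdf

def ego_step(candidates, surrogate, f_best):
    """One EGO iteration: pick the candidate with the highest EI.
    `surrogate(x)` is assumed to return a (mean, std) pair; the chosen
    point is then evaluated on the simulator and added to the data."""
    return max(candidates,
               key=lambda x: expected_improvement(*surrogate(x), f_best))
```

    For instance, with a surrogate predicting mean `x**2` and constant uncertainty, `ego_step` over a grid of candidates selects the point near the predicted minimum, balancing exploitation (low mean) against exploration (high std).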

  • Efficient global optimization of constrained mixed variable problems.

    Participants: E-G. Talbi, external collaborators: Julien Pelamatti, Loïc Brevault, Mathieu Balesdent (ONERA) Yannick Guerin (CNES)

    Due to the increasing demand for high performance and cost reduction within the framework of complex system design, numerical optimization of computationally costly problems is an increasingly popular topic in most engineering fields [33]. In this work, several variants of the Efficient Global Optimization algorithm are proposed for costly constrained problems depending simultaneously on continuous decision variables as well as on quantitative and/or qualitative discrete design parameters. The adaptation considered is based on a redefinition of the Gaussian Process kernel as a product between the standard continuous kernel and a second kernel representing the covariance between the discrete variable values. Several parameterizations of this discrete kernel, with their respective strengths and weaknesses, have been investigated. The novel algorithms are tested on a number of analytical test cases and an aerospace design problem, and it is shown that they require fewer function evaluations to converge towards the neighborhoods of the problem optima than more commonly used optimization algorithms [38].
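    The product-kernel idea can be sketched in a few lines: the covariance between two mixed continuous/discrete points is the product of a standard continuous kernel and a discrete kernel over the categorical levels. The "compound symmetry" discrete kernel below (same level yields 1, different levels a constant) is one of the simplest parameterizations one can consider; names and default values are illustrative, not the actual parameterizations studied in [38].

```python
# Sketch of a mixed-variable GP kernel: product of a squared-exponential
# kernel on the continuous part and a compound-symmetry covariance on
# the discrete part. Illustrative only; several richer discrete-kernel
# parameterizations are compared in the work above.
import math

def continuous_kernel(x1, x2, lengthscale=1.0):
    """Squared-exponential kernel on the continuous variables."""
    sq = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-0.5 * sq / lengthscale ** 2)

def discrete_kernel(u1, u2, c=0.5):
    """Compound-symmetry covariance: 1 for equal levels, c otherwise."""
    return 1.0 if u1 == u2 else c

def mixed_kernel(p1, p2):
    """p = (continuous_vector, discrete_level); product of both kernels."""
    (x1, u1), (x2, u2) = p1, p2
    return continuous_kernel(x1, x2) * discrete_kernel(u1, u2)
```

    A point sharing the same discrete level as another thus correlates through the continuous kernel alone, while differing levels scale the covariance down by the learned constant.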

  • Adaptive Evolution Control using Confident Regions for Surrogate-assisted Optimization.

    Participants: G. Briffoteaux and N. Melab, external collaborators: M. Mezmaz and D. Tuyttens from Université de Mons (BELGIUM)

    The challenge of the efficient/effective integration of surrogates in the optimization process is to find the best trade-off between the quality (in terms of precision) of the generated solutions and the efficiency (in terms of execution time) of the resolution. In [22], we have investigated the evolution control that alternates between the real function (simulator) and the surrogate within the multi-objective optimization process. We propose an adaptive evolution control mechanism based on the distance-based concept of confident regions (hyperspheres). The approach has been integrated into an ANN-assisted NSGA-II and evaluated using the ZDT4 multi-modal benchmark function. The reported results show that the proposed approach outperforms two other existing ones.
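    The distance-based routing rule can be sketched as follows: a candidate lying inside a hypersphere of a given radius around an already-simulated point is deemed confidently predictable and sent to the surrogate; otherwise it is evaluated on the real simulator. This is a simplified illustration with a fixed radius; in the adaptive mechanism of [22] the control evolves during the search, and all names here are ours.

```python
# Sketch of distance-based evolution control with confident regions:
# route a candidate to the surrogate if it falls within `radius` of an
# already-simulated point, otherwise to the expensive simulator.
# Fixed radius for illustration; the actual mechanism adapts it.
import math

def in_confident_region(candidate, evaluated_points, radius):
    """True if `candidate` lies within `radius` of any simulated point."""
    for p in evaluated_points:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(candidate, p)))
        if dist <= radius:
            return True
    return False

def evaluate(candidate, evaluated_points, radius, simulator, surrogate):
    """Route the evaluation: surrogate inside a confident region,
    real simulator outside (which then enlarges the trusted region)."""
    if in_confident_region(candidate, evaluated_points, radius):
        return surrogate(candidate)
    value = simulator(candidate)
    evaluated_points.append(candidate)
    return value
```

    Each real evaluation adds a new hypersphere, so the share of surrogate calls grows as the search converges, which is where the quality/efficiency trade-off is played out.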

  • A surrogate model for combinatorial optimization.

    Participants: B. Derbel and A. Liefooghe, external collaborators: H. Aguirre and K. Tanaka, Shinshu University (JAPAN), S. Verel, Univ. Littoral (FRANCE)

    Extensive efforts have so far been devoted to the design of effective surrogate models for expensive black-box continuous optimization problems. There are, however, relatively few investigations on the development of such methodologies for combinatorial domains. In [31], we rely on the mathematical foundations of discrete Walsh functions to derive a surrogate model for pseudo-Boolean optimization functions. Specifically, we model such functions by means of a Walsh expansion. By conducting a comprehensive set of experiments on NK-landscapes, we provide empirical evidence on the accuracy of the proposed model. In particular, we show that a Walsh-based surrogate model can outperform the recently proposed discrete model based on kriging.
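    The principle of a Walsh expansion can be illustrated on a tiny instance: each Walsh function chi_w(x) = (-1)^(w·x) forms an orthogonal basis over {0,1}^n, and any pseudo-Boolean function is a linear combination of them. A surrogate keeps only low-order terms or fits the coefficients from samples; the sketch below computes the exact expansion by full enumeration for clarity, and is not the model construction of [31].

```python
# Sketch of a Walsh-basis model for a pseudo-Boolean f: {0,1}^n -> R.
# By orthogonality, the coefficient of chi_w is the average of
# f(x) * chi_w(x) over all x. Exact expansion for illustration;
# a surrogate would truncate to low-order terms and fit from samples.
from itertools import product

def walsh(w, x):
    """Walsh basis function: (-1) to the parity of shared bits."""
    return -1.0 if sum(a & b for a, b in zip(w, x)) % 2 else 1.0

def walsh_coefficients(f, n):
    """Coefficients a_w = 2^-n * sum_x f(x) * chi_w(x)."""
    points = list(product((0, 1), repeat=n))
    return {w: sum(f(x) * walsh(w, x) for x in points) / 2 ** n
            for w in points}

def walsh_model(coeffs):
    """Predictor rebuilt from (possibly truncated) coefficients."""
    return lambda x: sum(a * walsh(w, x) for w, a in coeffs.items())
```

    With all coefficients retained the model reproduces f exactly; the surrogate-modeling question is how accurate it remains when only a small, estimated subset of coefficients is kept.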

  • Landscape analysis for multi-objective optimization.

    Participants: B. Derbel and A. Liefooghe, external collaborators: H. Aguirre and K. Tanaka, Shinshu University (JAPAN); M. López-Ibánez, Univ. Manchester (UK); L. Paquete, Univ. Coimbra, Portugal; S. Verel, Univ. Littoral (FRANCE)

    Pareto local optimal solutions (PLOS) are believed to strongly influence the dynamics and the performance of multi-objective optimization algorithms, especially those based on local search and Pareto dominance. In [28], we introduce a PLOS network (PLOS-net) model as a step toward a fundamental understanding of multi-objective landscapes and search algorithms. Using a comprehensive set of instances, PLOS-nets are constructed by full enumeration, and selected network features are extracted and analyzed with respect to instance characteristics. A correlation and regression analysis is then conducted to capture the importance of the PLOS-net features for the runtime and effectiveness of two prototypical Pareto-based heuristics. In particular, we are able to provide empirical evidence for the relevance of the PLOS-net model in explaining algorithm performance.
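    The nodes of a PLOS-net can be identified as follows: a solution is a Pareto local optimum when no neighbor dominates it. The sketch below enumerates PLOS for a small bi-objective pseudo-Boolean problem under a one-bit-flip neighborhood; the toy objectives and names are ours, for illustration only.

```python
# Sketch of PLOS identification by full enumeration on binary strings:
# a solution is a Pareto local optimum if none of its one-bit-flip
# neighbors dominates it (maximization). Illustrative toy setting.
from itertools import product

def dominates(a, b):
    """Pareto dominance (maximization): >= everywhere, > somewhere."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def neighbors(x):
    """One-bit-flip neighborhood on a binary tuple."""
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

def pareto_local_optima(f, n):
    """All solutions not dominated by any of their neighbors."""
    return [x for x in product((0, 1), repeat=n)
            if not any(dominates(f(y), f(x)) for y in neighbors(x))]
```

    With two conflicting objectives such as (number of ones, number of zeros), every solution is a PLOS, since each bit flip improves one objective while degrading the other; with two agreeing objectives, only the all-ones string remains.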

    Additionally, we know that local search algorithms naturally stop at a local optimal set (LO-set) under given definitions of neighborhood and preference relation among subsets of solutions, such as the set-based dominance relation, hypervolume, or the epsilon indicator. It is an open question how LO-sets under different set preference relations relate to each other. In [29], we report an in-depth experimental analysis on multi-objective NK-landscapes. Our results reveal that, whatever the preference relation, the number of LO-sets typically increases with the problem non-linearity and decreases with the number of objectives. We observe that strict LO-sets of bounded cardinality under set-dominance are LO-sets under both epsilon and hypervolume, and that LO-sets under hypervolume are LO-sets under set-dominance, whereas LO-sets under epsilon are not. Nonetheless, LO-sets under set-dominance are more similar to LO-sets under epsilon than to those under hypervolume. These findings have important implications for multi-objective local search. For instance, a dominance-based approach with a bounded archive gets trapped more easily and may have difficulty identifying an LO-set under epsilon or hypervolume. In contrast, a hypervolume-based approach is expected to perform more steps before converging to better approximations.
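    One of the set preference relations above can be sketched concretely. A common definition of set-based dominance states that a set A dominates a set B when every solution of B is dominated by some solution of A; the exact variant used in [29] may differ, so the code below is a hedged illustration with names of our choosing.

```python
# Sketch of the set-based dominance relation between two solution sets
# in objective space (maximization). A common textbook definition;
# the precise variant studied in the work above may differ.
def dominates(a, b):
    """Pareto dominance between objective vectors (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def set_dominates(A, B):
    """Set A dominates set B: every point of B is dominated by
    at least one point of A."""
    return all(any(dominates(a, b) for a in A) for b in B)
```

    A local search over sets stops at an LO-set when no neighboring set is preferred under the chosen relation, which is why the relation's strictness directly shapes how many LO-sets exist and how easily the search gets trapped.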