Activity Report 2023: Project-Team BONUS

RNSR: 201722535A
  • Research center: Inria Centre at the University of Lille
  • In partnership with: Université de Lille
  • Team name: Big Optimization aNd Ultra-Scale Computing
  • In collaboration with: Centre de Recherche en Informatique, Signal et Automatique de Lille
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Optimization, machine learning and statistical methods

Keywords

Computer Science and Digital Science

  • A1.1.1. Multicore, Manycore
  • A1.1.2. Hardware accelerators (GPGPU, FPGA, etc.)
  • A1.1.5. Exascale
  • A7.1.4. Quantum algorithms
  • A8.2.1. Operations research
  • A8.2.2. Evolutionary algorithms
  • A9.6. Decision support
  • A9.7. AI algorithmics

Other Research Topics and Application Domains

  • B3.1. Sustainable development
  • B3.1.1. Resource management
  • B7. Transport and logistics
  • B8.1.1. Energy for smart buildings

1 Team members, visitors, external collaborators

Faculty Members

  • Bilel Derbel [Team leader since Nov 2023, UNIV LILLE, Professor (Inria delegation), HDR]
  • Nouredine Melab [Team leader until Oct 2023, UNIV LILLE, Professor, HDR]
  • Arnaud Liefooghe [UNIV LILLE, Associate Professor, HDR]
  • El-Ghazali Talbi [UNIV LILLE, Professor, HDR]

PhD Students

  • Guillaume Briffoteaux [UNIV MONS, ATER, until Aug 2023]
  • Lorenzo Canonne [INRIA, until Sep 2023]
  • Raphaël Cosson [UNIV LILLE, ATER, until Aug 2023]
  • Thomas Firmin [UNIV LILLE]
  • Juliette Gamot [ENS PARIS-SACLAY, ATER, from Nov 2023]
  • Juliette Gamot [INRIA, until Oct 2023]
  • Maxime Gobert [UNIV MONS]
  • Guillaume Helbecque [UNIV LILLE]
  • Julie Keisler [EDF, CIFRE]
  • Houssem Ouertatani [IRT SYSTEM X]
  • David Redon [UNIV LILLE]
  • Jérôme Rouzé [UNIV MONS]

Administrative Assistant

  • Karine Lewandowski [INRIA]

Visiting Scientists

  • Abdelmoiz Zakaria Dahi [UNIV MALAGA (Spain), from May 2023 until Jun 2023]
  • Grégoire Danoy [UNIV LUXEMBOURG, until Feb 2023]
  • Cosijopii Garcia-Garcia [CONACYT, from Nov 2023]
  • Shoichiro Tanaka [UNIV TOKYO, until Apr 2023]

2 Overall objectives

2.1 Presentation

Solving an optimization problem consists in optimizing (minimizing or maximizing) one or more objective function(s) subject to some constraints. This can be formulated as follows:

$$\min / \max \; F(\mathbf{x}) = \big(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\big) \quad \text{subject to} \quad \mathbf{x} \in \Omega,$$

where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is the decision variable vector of dimension $n$, $\Omega$ is the domain of $\mathbf{x}$ (decision space), and $F(\mathbf{x})$ is the objective function vector of dimension $m \geq 1$. The objective space is composed of all values of $F(\mathbf{x})$ corresponding to the values of $\mathbf{x}$ in the decision space.

Nowadays, in many research and application areas we are witnessing the emergence of the big era (big data, big graphs, etc.). In the optimization setting, problems are increasingly big in practice. Big optimization problems (BOPs) refer to problems composed of a large number of environmental input parameters and/or decision variables (high dimensionality), and/or many objective functions that may be computationally expensive. For instance, in smart grids, many optimization problems have to consider a large number of consumers (appliances, electrical vehicles, etc.) and multiple suppliers with various energy sources. In the area of engineering design, the optimization process must often take into account a large number of parameters from different disciplines. In addition, the evaluation of the objective function(s) often consists in the execution of an expensive simulation of a black-box complex system. This is typically the case in aerodynamics, where a CFD-based simulation may require several hours. On the other hand, to meet the fast-growing computational needs of applications in a wide range of areas including optimization, high-performance computing (HPC) technologies have undergone a revolution during the last decade (see the Top500 international ranking, edition of November 2022). Indeed, HPC is evolving toward ultra-scale supercomputers composed of millions of cores supplied in heterogeneous devices, including multi-core processors with various architectures, GPU accelerators and MIC coprocessors.

Beyond the “big buzzword”, as some say, solving BOPs raises at least four major challenges: (1) tackling their high dimensionality in the decision space; (2) handling many objectives; (3) dealing with computationally expensive objective functions; and (4) scaling up on (ultra-scale) modern supercomputers. The overall scientific objective of the Bonus project is to address these challenges efficiently. On the one hand, the focus will be put on the design, analysis and implementation of optimization algorithms that scale to high-dimensional (in decision variables and/or objectives) and/or expensive problems. On the other hand, the focus will also be put on the design of optimization algorithms able to scale on heterogeneous supercomputers including several millions of processing cores. To achieve these objectives and address the associated challenges, a program including three lines of research will be adopted (Fig. 1): decomposition-based optimization, Machine Learning (ML)-assisted optimization, and ultra-scale optimization. These research lines are developed in the following sections.

Figure 1: Research challenges/objectives and lines

From the software standpoint, our objective is to integrate the approaches we develop into our ParadisEO 2, 46 framework, in order to allow their reuse inside and outside the Bonus team. The major challenge will be to extend ParadisEO to make it more interoperable with other software, including machine learning tools, other (exact) solvers, and simulators. From the application point of view, the focus will be put on two classes of applications: complex scheduling and engineering design.

3 Research program

3.1 Decomposition-based Optimization

Given the large scale of the targeted optimization problems in terms of the number of variables and objectives, their decomposition into simplified and loosely coupled or independent subproblems is essential to address the challenge of scalability. The first line of research is to investigate the decomposition approach in the two spaces (decision and objective) and their combination, as well as their implementation on ultra-scale architectures. The motivation of the decomposition is twofold. First, decomposition allows the parallel resolution of the resulting subproblems on ultra-scale architectures; here, several issues will be addressed: the definition of the subproblems, their coding to allow their efficient communication and storage (checkpointing), their assignment to processing cores, etc. Second, decomposition is necessary for solving large problems that cannot be solved (efficiently) using traditional algorithms. For instance, with the popular NSGA-II algorithm, the number of non-dominated solutions 1 increases drastically with the number of objectives, leading to a very slow convergence to the Pareto Front 2. Therefore, decomposition-based techniques are gaining growing interest. The objective of Bonus is to investigate various decomposition schemes and cooperation protocols between the subproblems resulting from the decomposition, in order to efficiently generate global solutions of good quality. Several challenges have to be addressed: (1) how to define the subproblems (decomposition strategy); (2) how to solve them to generate local solutions (local rules); (3) how to combine these local solutions with those generated by other subproblems and how to generate global solutions (cooperation mechanism); and (4) how to combine decomposition strategies in more than one space (hybridization strategy)?

The decomposition in the decision space can be performed in different ways according to the problem at hand. Two major categories of decomposition techniques can be distinguished. The first one consists in breaking down the high-dimensional decision vector into lower-dimensional and easier-to-optimize blocks of variables (a minimal sketch is given after this paragraph). The major issue is how to define the subproblems (blocks of variables) and their cooperation protocol: randomly vs. using some learning (e.g. separability analysis), statically vs. adaptively, etc. The decomposition in the decision space can also be guided by the type of variables, i.e. discrete vs. continuous. The discrete and continuous parts are then optimized separately, using cooperative hybrid algorithms 54. The major issue of this kind of decomposition is the presence of categorical variables in the discrete part 50. The Bonus team is addressing this issue, rarely investigated in the literature, within the context of aerospace vehicle engineering design. The second category consists in decomposing according to the ranges of the decision variables (search space decomposition). For continuous problems, the idea consists in iteratively subdividing the search (e.g. design) space into subspaces (hyper-rectangles, intervals, etc.) and selecting those that are most likely to produce the lowest objective function value. Existing approaches meet increasing difficulty as the number of variables grows and are often applied to low-dimensional problems. We are investigating this scalability challenge (e.g. 9). For discrete problems, the major challenge is to find a coding (mapping) of the search space to a decomposable entity. We have proposed an interval-based coding of the permutation space for solving big permutation problems. The approach opens perspectives we are investigating 6, in terms of ultra-scale parallelization, application to multi-permutation problems, and hybridization with metaheuristics.
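As an illustration of the first category, here is a minimal, hypothetical sketch of a random block decomposition with a simple cooperative improvement loop, in the spirit of cooperative co-evolution; the function and parameter names are ours, and the team's actual algorithms use far more elaborate grouping and cooperation protocols.

```python
import numpy as np

def block_decomposition_optimize(f, x0, n_blocks=4, iters=100, rng=None):
    """Minimize f by cycling over random blocks of variables: each block is
    perturbed in turn while the remaining variables stay fixed."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    blocks = np.array_split(rng.permutation(len(x)), n_blocks)  # random grouping
    best = f(x)
    for _ in range(iters):
        for block in blocks:
            trial = x.copy()
            trial[block] += rng.normal(scale=0.1, size=len(block))  # local move
            y = f(trial)
            if y < best:  # keep the block update only if it improves f
                x, best = trial, y
    return x, best

# Usage: minimize a 20-dimensional sphere function
x, fx = block_decomposition_optimize(lambda v: float(np.sum(v ** 2)),
                                     np.ones(20), rng=42)
```

In a parallel setting, each block would be assigned to a worker and the context vector (the fixed variables) would be exchanged periodically, which is precisely where the cooperation protocol matters.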

The decomposition in the objective space consists in breaking down an original many-objective problem (MaOP) into a set of cooperative single-objective subproblems (SOPs). The decomposition strategy requires the careful definition of a scalarizing (aggregation) function and its weighting vectors (each of them corresponds to a separate SOP) to guide the search process towards the best regions. Several scalarizing functions have been proposed in the literature, including the weighted sum, weighted Tchebycheff, vector angle distance scaling, etc. These functions are widely used but have their limitations: for instance, the weighted Tchebycheff function might harm diversity maintenance, and the weighted sum is inefficient when dealing with nonconvex Pareto Fronts 44 (see the sketch after this paragraph). Defining a scalarizing function well-suited to the MaOP at hand is therefore a difficult and still open question being investigated in Bonus 4, 5. Studying/defining various functions and analyzing them in depth to better understand the differences between them is required. Regarding the weighting vectors that determine the search direction, their efficient setting is also a key and open issue. They dramatically affect the diversity performance in particular. Their setting raises two main issues: how to determine their number according to the available computational resources, and when (statically or adaptively) and how to determine their values? Weight adaptation is one of our main concerns, which we are addressing especially from a distributed perspective. These correspond to the main scientific objectives targeted by our bilateral ANR-RGC BigMO project with City University (Hong Kong). The other challenges pointed out at the beginning of this section concern the way to locally solve the SOPs resulting from the decomposition of a MaOP, and the mechanism used for their cooperation to generate global solutions. To deal with these challenges, our approach is to design the decomposition strategy and cooperation mechanism keeping in mind the parallel and/or distributed solving of the SOPs. Indeed, we favor local neighborhood-based mating selection and replacement to minimize the network communication cost while allowing an effective resolution 4. The major issues here are how to define the neighborhood of a subproblem and how to cooperatively update the best-known solution of each subproblem and its neighbors.
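For concreteness, here is a minimal sketch of the two most common scalarizing functions, written with the usual decomposition conventions (minimization, one weighting vector w per SOP, and an ideal point z* for the Tchebycheff case); these are the textbook definitions, not a specific Bonus implementation.

```python
import numpy as np

def weighted_sum(f_values, w):
    """g(x|w) = sum_i w_i * f_i(x); cheap, but cannot reach solutions
    located on nonconvex parts of the Pareto Front."""
    return float(np.dot(w, f_values))

def weighted_tchebycheff(f_values, w, z_star):
    """g(x|w,z*) = max_i w_i * |f_i(x) - z*_i|; handles nonconvex fronts,
    but may harm diversity maintenance."""
    return float(np.max(w * np.abs(np.asarray(f_values) - z_star)))

# Usage: score one bi-objective point along one search direction
f = np.array([0.4, 0.7])   # objective vector F(x)
w = np.array([0.5, 0.5])   # weighting vector (defines one SOP)
z = np.array([0.0, 0.0])   # ideal point estimate
print(weighted_sum(f, w), weighted_tchebycheff(f, w, z))
```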

To sum up, the objective of the Bonus team is to come up with scalable decomposition-based approaches in the decision and objective spaces. In the decision space, a particular focus will be put on high dimensionality and mixed continuous-discrete variables, which have received little interest in the literature. We will in particular continue to investigate, at larger scales using ultra-scale computing, the interval-based (discrete) and fractal-based (continuous) approaches. We will also deal with the rarely addressed challenge of mixed continuous-discrete variables, including categorical ones (collaboration with ONERA). In the objective space, we will investigate parallel ultra-scale decomposition-based many-objective optimization with ML-based adaptive building of scalarizing functions. A particular focus will be put on the state-of-the-art MOEA/D algorithm. This challenge is rarely addressed in the literature, which motivated the collaboration with the designer of MOEA/D (bilateral ANR-RGC BigMO project with City University, Hong Kong). Finally, the joint decision-objective decomposition, which is still in its infancy 56, is another challenge of major interest.

3.2 Machine Learning-assisted Optimization

The Machine Learning (ML) approach based on metamodels (or surrogates) is commonly used, and also adopted in Bonus, to assist optimization in tackling BOPs characterized by time-demanding objective functions. The second line of research of Bonus is focused on ML-assisted optimization, to address the challenge of expensive functions of BOPs using surrogates, but also to assist the two other research lines (decomposition-based and ultra-scale optimization) in dealing with the other challenges (high dimensionality and scalability).

Several issues have been identified to make surrogate-assisted optimization efficient and effective. First, infill criteria have to be carefully defined to adaptively select adequate sample points (in terms of surrogate precision and solution quality). The challenge is to find the best trade-off between exploration and exploitation, to efficiently refine the surrogate and guide the optimization process toward the best solutions. The most popular infill criterion is probably the Expected Improvement (EI) 49, which is based on the expected values of sample points but also, importantly, on their variance. The latter is inherently provided by the kriging model, which is why it is used in the state-of-the-art Efficient Global Optimization (EGO) algorithm 49 (a minimal sketch of EI is given after this paragraph). However, such crucial information is not provided by all surrogate models (e.g. Artificial Neural Networks) and needs to be derived; in Bonus, we are currently investigating this issue. Second, it is known that surrogates allow one to reduce the computational burden of solving BOPs with time-consuming function(s). However, using parallel computing as a complementary way is often recommended and cited as a perspective in the conclusions of related publications. Nevertheless, despite being of critical importance, parallel surrogate-assisted optimization is weakly addressed in the literature. For instance, the introduction of the survey proposed in 47 warns that, because the area is not yet mature, the paper focuses more on the potential of the surveyed approaches than on their relative efficiency. Parallel computing is required at different levels that we are investigating.
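As an illustration of the EI infill criterion mentioned above, here is a minimal sketch on top of a kriging (Gaussian Process) surrogate for minimization; only the standard closed-form EI expression and the scikit-learn GP regressor are assumed, and the sampling setup is purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, f_min):
    """EI(x) = (f_min - mu)*Phi(z) + sigma*phi(z), with z = (f_min - mu)/sigma.

    Requires the predictive variance, which kriging provides natively."""
    mu, sigma = gp.predict(np.atleast_2d(X_cand), return_std=True)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Usage: fit on a few samples of a 1-D function, then rank candidate points
X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
y = np.sin(6.0 * X).ravel()
gp = GaussianProcessRegressor().fit(X, y)
cand = np.random.default_rng(0).random((100, 1))
next_point = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
```

The balance between exploitation (low predicted mean) and exploration (high predicted variance) is visible in the two terms of the formula, which is exactly the trade-off discussed above.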

Another issue with surrogate-assisted optimization is related to high dimensionality, in the decision as well as in the objective space: it is most often applied to low-dimensional problems. The joint use of decomposition, surrogates and massive parallelism is an efficient approach to deal with high dimensionality. This approach, adopted in Bonus, has received little attention in the literature. In Bonus, we are considering a generic framework enabling a flexible coupling of existing surrogate models within the state-of-the-art decomposition-based algorithm MOEA/D. This is a first step in extending the applicability of efficient global optimization to the multi-objective setting through parallel decomposition. Another issue, which is a consequence of high dimensionality, is the mixed (discrete-continuous) nature of decision variables, which is frequent in real-world applications (e.g. engineering design). While surrogate-assisted optimization is widely applied in the continuous setting, it is rarely addressed in the literature in the discrete-continuous framework. In 50, we have identified different ways to deal with this issue that we are investigating. Non-stationary functions, frequent in real-world applications (see Section 4.1), are another major issue we are addressing using the concept of deep Gaussian Processes.

Finally, as noted at the beginning of this section, ML-assisted optimization is mainly used to deal with BOPs with expensive functions, but it will also be investigated for other optimization tasks. Indeed, ML will be useful to assist the decomposition process. In the decision space, it will help to perform the separability analysis (understanding the interactions between variables) to decompose the vector of variables. In the objective space, ML will be useful to assist a decomposition-based many-objective algorithm in dynamically selecting a scalarizing function or updating the weighting vectors according to their performance in the previous steps of the optimization process 4. Such a data-driven ML methodology would allow us to understand what makes a problem difficult or an optimization approach efficient, to predict algorithm performance 3, to select the most appropriate algorithm configuration 7, and to adapt and improve the algorithm design for unknown optimization domains and instances. Such an autonomous optimization approach would adaptively adjust its internal mechanisms in order to tackle cross-domain BOPs.

In a nutshell, to deal with expensive optimization the Bonus team will investigate the surrogate-based ML approach, with the objective of efficiently integrating surrogates into the optimization process. The focus will especially be put on high dimensionality (e.g. using decomposition) with mixed discrete-continuous variables, which is rarely investigated. The kriging metamodel (Gaussian Process, or GP) will be considered in particular for engineering design (for more reliability), addressing the above issues and other major ones, mainly non-stationarity (using emerging deep GPs) and ultra-scale parallelization (highly needed by the community). Indeed, a lot of work has been reported on deep neural network (deep learning) surrogates, but not on others including (deep) GPs. On the other hand, ML will be used to assist decomposition: importance of and interaction between variables in the decision space, dynamic building (selection of scalarizing functions, weight update, etc.) of scalarizing functions in the objective space, etc.

3.3 Ultra-scale Optimization

The third line of our research program, which accentuates our difference from other (project-)teams of the related Inria scientific theme, is ultra-scale optimization. This research line is complementary to the two others, which are sources of massive parallelism and with which it should be combined to solve BOPs. Indeed, ultra-scale computing is necessary for the effective resolution of the large number of subproblems generated by the decomposition of BOPs, the parallel evaluation of simulation-based fitness and metamodels, etc. These sources of parallelism are attractive for solving BOPs and are natural candidates for ultra-scale supercomputers 3. However, their efficient use raises a big challenge: managing efficiently a massive amount of irregular tasks on supercomputers with multiple levels of parallelism and heterogeneous computing resources (GPU, multi-core CPU with various architectures) and networks. Addressing this challenge requires tackling three major issues, discussed in the following: scalability, heterogeneity and fault tolerance.

The scalability issue requires, on the one hand, the definition of scalable data structures for efficient storage and management of the tremendous amount of subproblems generated by decomposition 52. On the other hand, achieving extreme scalability also requires the optimization of communications (in number of messages, their size and scope), especially at the inter-node level. For that, we target the design of asynchronous locality-aware algorithms, as we did in 45, 55. In addition, efficient mechanisms are needed for granularity management and for coding the work units stored and communicated during the resolution process.

Heterogeneity means harnessing various resources, including multi-core processors with different architectures and GPU devices. The challenge is therefore to design and implement hybrid optimization algorithms taking into account the difference in computational power between the various resources, as well as resource-specific issues. On the one hand, to deal with the heterogeneity in terms of computational power, we adopt in Bonus the dynamic load balancing approach based on the asynchronous Work Stealing (WS) paradigm 4, at the inter-node as well as at the intra-node level. We have already investigated such an approach, with various victim selection and work sharing strategies 55, 6 (a minimal sketch of work stealing is given after this paragraph). On the other hand, hardware-specific optimization mechanisms are required to deal with related issues such as thread divergence and memory optimization on GPUs, and data sharing and synchronization, cache locality, and vectorization on multi-core processors. These issues have been considered separately in the literature, including in our works 8. Actually, in most existing works related to GPU-accelerated optimization, only a single CPU core is used. This leads to huge resource wasting, especially as the number of processing cores integrated into modern processors increases. Using the two components jointly raises additional issues, including data and work partitioning, the optimization of CPU-GPU data transfers, etc.
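For illustration, the following is a minimal, hypothetical sketch of the work-stealing idea on a single node, using Python threads with random victim selection and stealing from the opposite end of the victim's deque; the team's actual mechanisms are inter-node, locality-aware, and include proper termination detection.

```python
import collections
import random
import threading

class Worker:
    """Each worker owns a deque of work units; idle workers steal from peers."""
    def __init__(self, wid, tasks):
        self.wid = wid
        self.deque = collections.deque(tasks)
        self.lock = threading.Lock()

    def pop_own(self):
        with self.lock:
            return self.deque.pop() if self.deque else None      # LIFO: owner end

    def steal(self):
        with self.lock:
            return self.deque.popleft() if self.deque else None  # FIFO: thief end

def run(worker, all_workers, results):
    while True:
        task = worker.pop_own()
        if task is None:
            # Out of local work: visit potential victims in random order.
            for victim in random.sample(all_workers, len(all_workers)):
                if victim is not worker:
                    task = victim.steal()
                    if task is not None:
                        break
            if task is None:
                return  # nothing to steal right now (simplistic termination)
        results.append(task * task)  # stand-in for processing a work unit

# Deliberately unbalanced initial distribution: only worker 0 has work.
workers = [Worker(0, range(100))] + [Worker(i, []) for i in range(1, 4)]
results = []
threads = [threading.Thread(target=run, args=(w, workers, results)) for w in workers]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # all 100 tasks processed cooperatively
```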

Another issue induced by scalability is the increasing probability of failures in modern supercomputers 53. Indeed, as their size grows to millions of processing cores, their Mean Time Between Failures (MTBF) tends to become shorter and shorter 51. Failures may have different sources, including hardware and software faults, silent errors, etc. In our context, we consider failures leading to the loss of work unit(s) being processed by some thread(s) during the resolution process. The major issue, which is particularly critical in exact optimization, is how to recover the failed work units to ensure a reliable execution. This issue is tackled in the literature using different approaches: algorithm-based fault tolerance, checkpoint/restart (CR), message logging, and redundancy. The CR approach can be system-level, library/user-level or application-level. Thanks to its efficiency in terms of memory footprint, the application-level approach, adopted in Bonus 1, is commonly and widely used in the literature (a minimal sketch is given after this paragraph). This approach raises several issues, mainly: (1) which critical information defines the state of the work units and allows their execution to be properly resumed? (2) when, where and how (using which data structures) should this information be stored efficiently? (3) how to deal with the two other issues, scalability and heterogeneity?
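To fix ideas, here is a minimal, hypothetical sketch of application-level checkpoint/restart for interval-coded work units, using a pickle-based atomic snapshot of the pending intervals; the file name, interval coding and checkpoint frequency are illustrative, and the mechanisms studied in Bonus are considerably more elaborate (and GPU-aware).

```python
import os
import pickle
import tempfile

CHECKPOINT = "pending_intervals.ckpt"

def save_checkpoint(pending, path=CHECKPOINT):
    """Atomically snapshot the pending work units (here: intervals) to disk."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(pending, f)
    os.replace(tmp, path)  # atomic rename: a crash never leaves a torn file

def load_checkpoint(path=CHECKPOINT):
    """Resume from the last snapshot, or start from the root work unit."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return [(0, 10_000)]  # illustrative root interval

pending = load_checkpoint()  # after a failure, restart resumes from here
steps = 0
while pending:
    lo, hi = pending.pop()
    if hi - lo > 1:                       # "explore": split the work unit
        mid = (lo + hi) // 2
        pending.extend([(lo, mid), (mid, hi)])
    # else: a leaf work unit would be evaluated here
    steps += 1
    if steps % 1000 == 0:                 # periodic application-level checkpoint
        save_checkpoint(pending)
save_checkpoint([])                       # final empty state marks completion
```

Work performed since the last snapshot is redone on restart, which is the classical trade-off between checkpointing frequency and recovery cost raised in issue (2) above.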

The last but not least major issue, which is another roadblock to exascale, is the programming of massive-scale applications for modern supercomputers. On the path to exascale, we will investigate programming environments and execution supports able to deal with exascale challenges: large numbers of threads, heterogeneous resources, etc. Various exascale programming approaches are being investigated by the parallel computing community and HPC builders: extending existing programming languages (e.g. DSL-C++) and environments/libraries (MPI+X, etc.), or proposing new solutions, mainly Partitioned Global Address Space (PGAS)-based environments (Chapel, UPC, X10, etc.). It is worth noting that our objective is not to develop a programming environment nor a runtime support for exascale computing; instead, we aim to collaborate with the research teams (inside or outside Inria) having such an objective.

To sum up, we put the focus on the design and implementation of efficient big optimization algorithms dealing jointly (which is uncommon in parallel optimization) with the major issues of ultra-scale computing: scalability up to millions of cores using scalable data structures and asynchronous locality-aware work stealing; heterogeneity, addressing multi-core- and GPU-specific issues and those related to their combination; and scalable GPU-aware fault tolerance. A strong effort will be devoted to this last challenge, for the first time to the best of our knowledge, using an application-level checkpoint/restart approach to deal with failures.

4 Application domains

4.1 Introduction

For the validation of our findings, we obviously use standard benchmarks to facilitate the comparison with related works. In addition, we also target real-world applications in the context of our collaborations and industrial contracts. From the application point of view, two classes are targeted: complex scheduling and engineering design. The objective is twofold: proposing new models for complex problems, and solving BOPs efficiently using the three lines of our research program jointly. In the following, we give some use cases that are the focus of our current industrial collaborations.

4.2 Big optimization for complex scheduling

Two application domains are targeted: energy, and transport & logistics. In the energy field, with the smart grid revolution, (multi-)house energy management is gaining growing interest. The key challenge is to make the (multi-)house energy consumption and management elastic with respect to the energy market. This kind of demand-side management will be of strategic importance for energy companies in the near future. In collaboration with the EDF energy company, we are working on the formulation and solving of optimization problems for demand-side management in smart micro-grids, in single- and multi-user frameworks. These complex problems require taking into account multiple conflicting objectives and constraints, and many (deterministic/uncertain, discrete/continuous) parameters. A representative example of such BOPs that we are addressing is the scheduling of the activation of a large number of electrical and thermal appliances for a set of homes, optimizing at least three criteria: maximizing the users' comfort, minimizing their energy bill, and minimizing peak consumption situations. On the other hand, we investigate the application of parallel Bayesian optimization to efficient energy storage, in collaboration with the energy engineering department of the University of Mons.

4.3 Big optimization for engineering design

The focus is for now put on aerospace vehicle design, a complex multidisciplinary optimization process, which we are exploring in collaboration with ONERA. The objective is to find the vehicle architecture and characteristics that provide the optimal performance (flight performance, safety, reliability, cost, etc.) while satisfying design requirements 43. A representative topic we are investigating, and will continue to investigate throughout the lifetime of the project given its complexity, is the design of launch vehicles, which involves at least four tightly coupled disciplines (aerodynamics, structure, propulsion and trajectory). Each discipline may rely on time-demanding simulations, such as Finite Element analyses (structure) and Computational Fluid Dynamics analyses (aerodynamics). Surrogate-assisted optimization is highly required to reduce the time complexity. In addition, the problem is high-dimensional (dozens of parameters and more than three objectives), requiring different decomposition schemes (coupling vs. local variables, continuous vs. discrete and even categorical variables, scalarization of the objectives). Another major issue arising in this area is the non-stationarity of the objective functions, which is generally due to the abrupt change of a physical property that often occurs in the design of launch vehicles. In the same spirit as deep learning using neural networks, we use Deep Gaussian Processes (DGPs) to deal with non-stationary multi-objective functions. Finally, the resolution of the problem using only one objective takes one week on a multi-core processor. The first way to deal with this computational burden is to investigate multi-fidelity using DGPs to efficiently combine multiple fidelity models. This approach has been investigated this year within the context of the PhD thesis of A. Hebbal. In addition, ultra-scale computing is required at different levels to speed up the search and improve reliability, which is a major requirement in aerospace design. This example shows that we need to use the synergy between the three lines of our research program to tackle such BOPs.

Finally, we recently started to investigate the application of surrogate-based optimization in the epidemiological context. In collaboration with Monash University (Australia), we address contact reduction and vaccine allocation for Covid-19 and Tuberculosis.

5 Social and environmental responsibility

Optimization is ubiquitous in countless modern engineering and scientific applications, with a deep impact on society and human beings. As such, the research of the Bonus team contributes to the establishment of high-level efficient solving techniques, improving solution quality, and addressing applications that are increasingly large-scale, complex, and beyond the solving ability of standard optimization techniques.

Furthermore, Bonus has carried out technology transfer actions through different channels: open-source software development, transfer-to-industry initiatives, and teaching.

Our team has also initiated a start-up creation project. G. Pruvost, who did his PhD thesis within Bonus (defended in Dec. 2021), co-founded the OPTIMO Technologies start-up, which deals with sustainable mobility issues (e.g. sustainable, personalized and optimized itinerary planning).

6 Highlights of the year

6.1 Awards

  • Best Paper Award at ACM GECCO'2023 26 (A. Liefooghe, G. Ochoa, S. Vérel, and B. Derbel), in the EMO (Evolutionary Multi-objective Optimization) track.
  • Best Paper Nomination at ACM GECCO'2023 24 (A. Liefooghe, M. López-Ibáñez), in the EMO (Evolutionary Multi-objective Optimization) track.

6.2 Other highlights

  • Starting from Nov. 2023, and taking over from N. Melab, B. Derbel is the new team leader of BONUS.
  • A. Liefooghe has been promoted to Full Professor at the Université du Littoral Côte d'Opale (starting in 2024).

7 New software, platforms, open data

The core activity of Bonus is focused on the design, implementation and analysis of algorithmic approaches for efficient and effective optimization. In addition, we have an increasing activity in software development, driven by our goal of making our algorithmic contributions freely available to the optimization community. On the one hand, this leads us to develop some homemade prototype codes: Permutation Branch-and-Bound (pBB) and the Python library for Surrogate-Based Optimization (pySBO). On the other hand, we started in 2020 to develop the more ambitious Python-based Parallel and distributed Evolving Objects (pyParadisEO) software framework. In addition, Bonus is strongly involved in the activities related to the Grid'5000 nation-wide distributed testbed, as scientific leader for the site located at Lille. The different software tools and the Grid'5000 testbed are described in the following.

7.1 New software

7.1.1 pBB

  • Name:
    Permutation Branch-and-Bound
  • Keywords:
    Optimisation, Parallel computing, Data parallelism, GPU, Scheduling, Combinatorics, Distributed computing
  • Functional Description:
    pBB proceeds by implicit enumeration of the search space through parallel exploration of a highly irregular search tree. It contains implementations for single-core, multi-core, GPU and heterogeneous distributed platforms. Thanks to its hierarchical work-stealing mechanism, required to deal with the strong irregularity of the search tree, pBB is highly scalable: over 90% parallel efficiency on several hundreds of GPUs has been demonstrated on the Jean Zay supercomputer located at IDRIS.
  • URL:
  • Contact:
    Jan Gmys

7.1.2 ParadisEO

  • Keyword:
    Parallelisation
  • Scientific Description:
    ParadisEO (PARallel and DIStributed Evolving Objects) is a C++ white-box object-oriented framework dedicated to the flexible design of metaheuristics. Based on EO, a template-based ANSI-C++ compliant evolutionary computation library, it is composed of four modules: Paradiseo-EO provides tools for the development of population-based metaheuristics (genetic algorithms, genetic programming, Particle Swarm Optimization (PSO), ...); Paradiseo-MO provides tools for the development of single solution-based metaheuristics (hill climbing, tabu search, simulated annealing, Iterated Local Search (ILS), incremental evaluation, partial neighborhood, ...); Paradiseo-MOEO provides tools for the design of multi-objective metaheuristics (MO fitness assignment schemes, MO diversity assignment schemes, elitism, performance metrics, easy-to-use standard evolutionary algorithms, ...); and Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics (parallel evaluation, parallel evaluation function, island model). Furthermore, ParadisEO also introduces tools for the design of distributed, hybrid and cooperative models: high-level hybrid metaheuristics (coevolutionary and relay models) and low-level hybrid metaheuristics (coevolutionary and relay models).
  • Functional Description:
    Paradiseo is a software framework for metaheuristics (optimisation algorithms aimed at solving difficult optimisation problems). It facilitates the use, development and comparison of classic, multi-objective, parallel or hybrid metaheuristics.
  • URL:
  • Contact:
    El-Ghazali Talbi
  • Partners:
    CNRS, Université de Lille

7.1.3 pyparadiseo

  • Keywords:
    Optimisation, Framework
  • Functional Description:
    pyparadiseo is a Python version of ParadisEO, a C++-based open-source white-box framework dedicated to the reusable design of metaheuristics. It allows the design and implementation of single-solution and population-based metaheuristics for mono- and multi-objective, continuous, discrete and mixed optimization problems.
  • URL:
  • Contact:
    Jan Gmys

7.1.4 pySBO

  • Name:
    PYthon library for Surrogate-Based Optimization
  • Keywords:
    Parallel computing, Evolutionary Algorithms, Multi-objective optimisation, Black-box optimization, Optimisation
  • Functional Description:
    The pySBO library aims at facilitating the implementation of parallel surrogate-based optimization algorithms. pySBO provides re-usable algorithmic components (surrogate models, evolution controls, infill criteria, evolutionary operators) as well as the foundations to ensure the components' interchangeability. Implementations of sequential and parallel surrogate-based optimization algorithms are supplied as ready-to-use tools to handle expensive single- and multi-objective problems. The illustrated documentation of pySBO is available online through a dedicated website.
  • URL:
  • Contact:
    Guillaume Briffoteaux

7.1.5 moead-framework

  • Name:
    multi-objective evolutionary optimization based on decomposition framework
  • Keywords:
    Evolutionary Algorithms, Multi-objective optimisation
  • Scientific Description:
    Moead-framework aims to provide a modular Python framework for scientists and researchers interested in experimenting with decomposition-based multi-objective optimization. The original multi-objective problem is decomposed into a number of single-objective subproblems that are optimized simultaneously and cooperatively. The library provides re-usable algorithmic components together with the state-of-the-art multi-objective evolutionary algorithm based on decomposition (MOEA/D) and some of its numerous variants.
  • Functional Description:
    The package is based on a modular architecture that makes it easy to add, update, or experiment with decomposition components, and to customize how components actually interact with each other. Documentation is available online. It contains a complete example, a detailed description of all available components, and two tutorials for users to experiment with their own optimization problem and to implement their own algorithm variants.
  • URL:
  • Contact:
    Geoffrey Pruvost

7.1.6 Zellij

  • Keywords:
    Global optimization, Partitioning, Metaheuristics, High Dimensional Data
  • Functional Description:
    The package generalizes a family of decomposition algorithms by implementing four distinct modules (geometrical objects, tree search algorithms, and exploitation and exploration algorithms such as genetic algorithms, Bayesian optimization or simulated annealing). The package comes in two versions, a regular one and a parallel one. The main target of Zellij is to tackle HyperParameter Optimization (HPO) and Neural Architecture Search (NAS). Thanks to this framework, we are able to reproduce various decomposition-based algorithms, such as DIRECT, Simultaneous Optimistic Optimization, the Fractal Decomposition Algorithm, FRACTOP, etc. Future work will focus on multi-objective problems, NAS, a distributed version, and a graphical interface for monitoring and plotting.
  • URL:
  • Contact:
    Thomas Firmin

7.2 New platforms

7.2.1 Grid'5000 testbed: major achievements in 2023

Participants: Nouredine Melab [contact person], Hugo Dominois.

  • Keywords: Experimental testbed, large-scale computing, high-performance computing, GPU computing, cloud computing, big data
  • Functional description: Grid'5000 is a project initiated in 2003 by the French government and later supported by different research organizations including Inria, CNRS, the French universities, and Renater, which provides the wide-area network. The overall objective of Grid'5000 was to build by 2007 a mutualized nation-wide experimental testbed composed of at least 5000 processing units and distributed over several sites in France (one of them located at Lille). From a scientific point of view, the aim was to promote scientific research on large-scale distributed systems. Beyond BONUS, Grid'5000 is highly important for the HPC-related communities of our three institutions (ULille, Inria and CNRS) as well as from outside.

    Within the framework of the CPER contract "Data", the equipment of Grid'5000 at Lille was renewed in 2017-2018 in terms of hardware resources (GPU-powered servers, storage, PDUs, etc.) and infrastructure (network, inverter, etc.). The renewed testbed has been used extensively by many researchers from Inria and outside. Half-day training sessions have been organized with the collaboration of Bonus to help newcomer users get started with the testbed. A new AI-dedicated CPER contract, "CornelIA", has been accepted (2021-2027). As scientific leader of Grid'5000 at Lille, N. Melab is strongly involved in the renewal of the equipment and the recruitment of engineering staff. B. Derbel is taking over the lead starting from late 2023.

  • URL: Grid'5000

8 New results

During the year 2023, we have addressed different issues/challenges related to the three lines of our research program. The major contributions are summarized in the following sections. Besides these, we produced other contributions 13, 16, 17, 28, 25 that are not discussed hereafter, to keep the presentation focused on the major achievements.

8.1 Decomposition-based optimization

We report three major contributions related to decomposition-based optimization and the solving of complex high-dimensional problems. The first contribution 33 concerns the investigation of fractal-based decomposition techniques for solving continuous optimization problems. The second one 18 concerns the design of large-scale combinatorial graybox (decomposition-based) techniques using the so-called variable interaction graph. While the first two contributions have a relatively fundamental nature, the third contribution 14, 23 concerns a more applied setting, where the goal is to derive and study new hybrid evolutionary optimization techniques to tackle a complex layout problem coming from aerospace vehicle design. These contributions are summarized in the following.

8.1.1 Fractal decomposition for continuous optimization problems

Participants: Thomas Firmin [contact person], El-Ghazali Talbi.

In 33, we present a comparative study of 24 different and unique decomposition-based algorithms derived from the Fractal Decomposition Algorithm (FDA) and Simultaneous Optimistic Optimization (SOO). These algorithms were built within a generic, flexible and unified algorithmic framework named fractal-based decomposition algorithms. This generic framework stems from previous works and is succinctly described using the Zellij software, which is used to instantiate the 24 algorithms. Under our generic framework, fractal-based decomposition algorithms are made of five independent and well-defined search components: a type of fractal, a tree search algorithm, a scoring method, and exploration and exploitation strategies. This new family of algorithms hierarchically decomposes an initial search space using a generic geometrical object, named a fractal. The decomposition forms a rooted tree, where fractals are nodes and the root corresponds to the initial search space. The tree is explored, exploited and expanded using the four other search components. The proposed algorithms were tested and compared to each other on the CEC2020 benchmark. The obtained performance results emphasize the impact of each search component and point out the scalability capacity of certain algorithms. Our results strongly suggest that some search components have a major impact on FDA- and SOO-based algorithms for large-scale problems, whereas others are used to fine-tune performance in terms of convergence.

8.1.2 Graybox evolutionary techniques for additively decomposable large-scale pseudo-boolean optimization problems

Participants: Bilel Derbel [contact person], Lorenzo Canonne, Francisco Chicano [Univ. Malaga, Spain], Gabriela Ochoa [Univ. Stirling, UK].

The DRILS (Deterministic Recombination and Iterated Local Search) algorithm is a state-of-the-art graybox solver for tackling large-scale k-bounded pseudo-boolean optimization problems. DRILS and its subsequent variants are based on different evolutionary search operators that exploit the fact that the underlying objective function can be decomposed additively on the basis of the so-called variable interaction graph (VIG). In particular, specialized graybox local search and crossover have been successfully combined within the framework of DRILS and its existing variants. However, as for any evolutionary algorithm, the initial design framework, and the underlying high-level choices and parameters, are crucially important. DRILS is no exception, and enhanced variants have recently appeared in the literature. In 18, we focused on two main aspects: (i) improving the performance of the latest variants of DRILS, and (ii) providing a better principled understanding of graybox search behavior and dynamics. On the basis of a preliminary analysis using Local Optima Networks of small-size NKQ-landscapes, we first highlight the difference between using local search with and without crossover. We then propose to pipeline these two techniques in a simple two-phase iterated local search scheme, which is shown to provide substantial improvements over the latest DRILS variants for large-size NKQ-landscapes. We further report a dedicated analysis in an attempt to provide new insights into the impact of local search and crossover on the phenotype and the genotype of the local optima encountered in the search trajectory.

8.1.3 Variable-size optimal layout problems with application to aerospace vehicles

Participants: Juliette Gamot [contact person], Mathieu Balesdent [ONERA DTIS, Palaiseau], Arnault Tremolet [ONERA DTIS, Palaiseau], Romain Wuilbercq [ONERA DTIS, Palaiseau], Nouredine Melab, El-Ghazali Talbi.

In 14, 23, we investigated the design process of complex engineering systems involving problems in which the number and type of design variables and constraints vary throughout the optimization process, based on the values of dimensional variables. This category of problems is called Variable-Size Design Space optimization. In 23, we described a two-level algorithm which combines the strength of a swarm intelligence algorithm based on a virtual-force system with a discrete Bayesian Optimization algorithm purposely adapted to tackle the dimensional aspect of this problem; this is also related to the second, ML-assisted, research axis of the team. Furthermore, in 14, we present an extended formulation to take into account a large number of architectural choices and component allocations during the design process. For a representative example of such systems, the layout of a satellite module, this reflects the fact that the optimizer has to choose between several subdivisions of the components: for instance, one large fuel tank might be placed, as well as two smaller tanks, or three even smaller tanks, for the same amount of fuel. In order to tackle this NP-hard problem, a genetic algorithm enhanced by an adapted hidden-variables mechanism is proposed. The latter is illustrated on a toy case and on an aerospace application case representative of real-world complexity, to demonstrate the performance of the proposed algorithms. The results obtained using the proposed mechanism are reported and analyzed.

8.2 ML-assisted optimization

In this axis, we investigate the ML-assisted optimization techniques following two directions: (1) efficient building of surrogates and their integration into optimization algorithms to deal with expensive black-box objective functions, and (2) automatically building/improving optimization algorithms.

Regarding the first direction, we contributed new results on integrating Walsh-based discrete surrogates for multi-objective expensive pseudo-boolean optimization problems 12, as well as first investigations on designing dedicated surrogates for single-objective permutation problems using group representation theory 19. Besides, with respect to continuous domains, we contributed in 11 the design of surrogate-based hybrid acquisition processes, with application to Covid-19 contact mitigation.

Regarding the second direction, we contributed new results on tackling the automated algorithm selection problem for multi-objective optimization 27, as well as the design and analysis of new fundamental tools for fitness landscape analysis 20, 26, 24, with the goal of enhancing the automated design of optimization algorithms in both single- and multi-objective optimization settings.

Besides, we contributed in 21 a toolkit for reliable benchmarking and research in multi-objective reinforcement learning, which can be viewed as lying at the frontier of the ML and optimization fields.

These contributions are summarized in the following.

8.2.1 Benchmarking multi-objective reinforcement learning

Participants: Florian Felten [University of Luxembourg], Lucas N. Alegre [Federal University of Rio Grande do Sul], Ana Bazzan [Federal University of Rio Grande do Sul], Ann Nowé [VUB, Brussels], Grégoire Danoy [University of Luxembourg], El-Ghazali Talbi [contact person], Bruno Castro da Silva.

Multi-objective reinforcement learning (MORL) algorithms extend standard reinforcement learning (RL) to scenarios where agents must optimize multiple, potentially conflicting, objectives, each represented by a distinct reward function. To facilitate and accelerate research and benchmarking in MORL, we introduce in 21 a comprehensive collection of software libraries that includes: (i) MO-Gymnasium, an easy-to-use and flexible API enabling the rapid construction of novel MORL environments, together with more than 20 environments under this API; this allows researchers to effortlessly evaluate any algorithm on any existing domain (a minimal usage sketch is given after this paragraph); (ii) MORL-Baselines, a collection of reliable and efficient implementations of state-of-the-art MORL algorithms, designed to provide a solid foundation for advancing research; notably, all algorithms are inherently compatible with MO-Gymnasium; and (iii) a thorough and robust set of benchmark results and comparisons of the MORL-Baselines algorithms, tested across various challenging MO-Gymnasium environments. These benchmarks were constructed to serve as guidelines for the research community, underscoring the properties, advantages, and limitations of each particular state-of-the-art method.
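The following minimal sketch is based on the MO-Gymnasium documentation at the time of writing (the environment name and the LinearReward wrapper are taken from that documentation and may have moved or been renamed in later releases); the key difference from standard Gymnasium is that step() returns a reward vector.

```python
import numpy as np
import mo_gymnasium as mo_gym

# A multi-objective environment: step() returns a *vector* reward,
# with one component per objective.
env = mo_gym.make("deep-sea-treasure-v0")
obs, info = env.reset(seed=42)
obs, vec_reward, terminated, truncated, info = env.step(env.action_space.sample())
print(vec_reward)  # e.g. a length-2 numpy array (treasure value, time penalty)

# A fixed weight vector turns it back into a standard scalar-reward env.
scalar_env = mo_gym.LinearReward(env, weight=np.array([0.7, 0.3]))
```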

8.2.2 A Walsh-based surrogate-based framework for pseudo-boolean multi-objective optimization

Participants: Bilel Derbel [contact person], Geoffrey Pruvost [contact person], Arnaud Liefooghe, Sébastien Verel [ULCO, Calais], Qingfu Zhang [City University, Hong Kong].

In 12, we were concerned with the design and analysis of surrogate-assisted algorithms for expensive multi-objective combinatorial optimization problems. Targeting pseudo-boolean domains, we provide a fine-grained analysis of an optimization framework using the Walsh basis as a core surrogate model (a minimal sketch of such a Walsh surrogate follows). The considered framework uses decomposition in the objective space (which relates to the first research axis of Bonus), and integrates three different components, namely: (i) an inner optimizer for searching promising solutions with respect to the so-constructed surrogate, (ii) a selection strategy to decide which solution is to be evaluated by the expensive objectives, and (iii) the strategy used to set up the Walsh order hyper-parameter. Based on extensive experiments using two benchmark problems, namely bi-objective NK-landscapes and unconstrained binary quadratic programming (UBQP) problems, we conduct a comprehensive in-depth analysis of the combined effects of the considered components on search performance, and provide evidence of the effectiveness of the proposed search strategies. As a by-product, we shed more light on the key challenges in designing a successful surrogate-assisted multi-objective combinatorial search process.
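To give the flavor of the core model, here is a minimal, hypothetical sketch of a low-order Walsh surrogate fitted by linear regression on already-evaluated binary solutions; it illustrates only the textbook Walsh basis, not the exact framework evaluated in 12, and the objective function below is a stand-in.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

def walsh_features(X, order=2):
    """Walsh basis up to a given order: psi_S(x) = (-1)^(sum_{i in S} x_i).

    X is an (n_samples, n_vars) 0/1 matrix; enumeration of subsets S is
    only feasible for moderate n_vars and low order."""
    n = X.shape[1]
    cols = [np.ones(len(X))]  # empty-set (constant) term
    for k in range(1, order + 1):
        for S in combinations(range(n), k):
            cols.append((-1.0) ** X[:, list(S)].sum(axis=1))
    return np.column_stack(cols)

# Usage: fit an order-2 surrogate of an "expensive" pseudo-boolean function
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))       # 200 evaluated solutions
y = X.sum(axis=1) + 0.5 * X[:, 0] * X[:, 1]  # stand-in expensive objective
model = Ridge(alpha=1e-6).fit(walsh_features(X), y)
X_new = rng.integers(0, 2, size=(5, 10))
print(model.predict(walsh_features(X_new)))  # cheap surrogate predictions
```

The surrogate order is exactly the hyper-parameter whose setup strategy is studied as component (iii) above.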

8.2.3 Fourier Transform-based Surrogates for Permutation Problems

Participants: Francisco Chicano [Univ. Malaga, contact person], Bilel Derbel, Sébastien Verel [ULCO, Calais].

In the context of pseudo-Boolean optimization, surrogate functions based on the Walsh-Hadamard transform have recently been proposed with great success. This was, for instance, the basis of the contribution described in the previous section. In particular, it has been shown that lower-order components of the Walsh-Hadamard transform usually have a larger influence on the value of the objective function. Thus, creating a surrogate model using the lower-order components of the transform can provide a good approximation to the objective function. The Walsh-Hadamard transform in pseudo-Boolean optimization is a particularization, to the binary representation, of a Fourier transform over a finite group, precisely defined in the framework of group representation theory. Using this more general definition, it is possible to define a Fourier transform for functions over permutations. In 19, we propose the use of surrogate functions based on Fourier transforms over the permutation space. We check how similar the proposed surrogate models are to the original objective function, and we also apply regression to learn a surrogate model based on the Fourier transform. The experimental setting includes two permutation problems for which the exact Fourier transform cannot be derived from the problem parameters: the Asteroid Routing Problem and the Single Machine Total Weighted Tardiness problem.

8.2.4 Surrogate-based hybrid acquisition processes with application to Covid-19 contact mitigation

Participants: Guillaume Briffoteaux, Nouredine Melab, Mohand Mezmaz [Univ. Mons], Daniel Tuyttens [Univ. Mons].

Surrogate models are built as computationally efficient substitutes for time-complex simulation-based objective functions, so as to address expensive optimization. In surrogate-assisted evolutionary computation, the surrogate model evaluates and/or filters candidate solutions produced by evolutionary operators. In surrogate-driven optimization, the surrogate is used to define the objective function of an auxiliary optimization problem whose resolution generates new candidates. In 11, the hybridization of these two types of acquisition processes is investigated, with a focus on robustness with respect to the computational budget and on parallel scalability. A new hybrid method based on the successive use of acquisition processes during the search outperforms competing approaches in these two respects on the Covid-19 contact mitigation problem. To further improve generalization to larger ranges of search landscapes, another new hybrid method based on the dispersion metric is proposed. The integration of landscape analysis tools in surrogate-based optimization seems promising according to the numerical results reported on the CEC2015 test suite.

8.2.5 Feature-based algorithm selection for multi-objective optimization

Participants: Arnaud Liefooghe [contact person], Sébastien Vérel [ULCO], Tinkle Chugh [Univ. Exeter, UK], Jonathan Fieldsend [Univ. Exeter, UK], Richard Allmendinger [Univ. Manchester, UK], Kaisa Miettinen [University of Jyvaskyla, Finland].

In 27, we consider the application of machine learning techniques to gain insights into the effect of problem features on algorithm performance, and to automate the task of algorithm selection for distance-based multi- and many-objective optimisation problems. This is the most extensive benchmark study of such problems to date. The problem features can be set directly by the problem generator, and include e.g. the number of variables, objectives, local fronts, and disconnected Pareto sets. Using 945 problem configurations (leading to instances) of varying complexity, we find that the problem features and the available optimisation budget (i) affect the considered algorithms (NSGA-II, IBEA, MOEA/D, and random search) in different ways and that (ii) it is possible to recommend a relevant algorithm based on problem features.

8.2.6 New fundamental AI tools from fitness landscape analysis

Participants: Arnaud Liefooghe [contact person], Manuel López-Ibáñez [Univ. Manchester, UK], Sébastien Vérel [ULCO], Francisco Chicano [Univ. Malaga, Spain], Bilel Derbel [contact person], Gabriela Ochoa [Univ. Stirling, UK].

Fitness landscape analysis is a very useful tool in optimization. It allows one to better understand the structure of the search space of a problem when some kind of distance or neighborhood is defined over the solutions. Moreover, it helps in designing new automated high-level AI techniques for attacking complex optimization problems, in particular by taking inspiration from popular ML approaches such as supervised learning or reinforcement learning. In this context, we contributed a number of results, discussed briefly in the following.

In 20, we enhance local optima networks to include precise information on the transition probabilities among local optima, yielding a Markov chain over the local optima visited during the search. Indeed, Local Optima Networks (LONs) have been proposed to serve as a summary of the landscape of a problem. The new analysis tool, called Local Optima Markov Chain (LOMA), is built on top of the static, problem-dependent landscape information and includes information about algorithm dynamics. In 26, we focused on the representation of multi-objective landscapes and their multi-modality. We revised and extended the network of Pareto local optimal solutions (PLOS-net). We first define a compressed PLOS-net, which enhances its readability while preserving the important notion of connectedness between local optima. We also define a number of network metrics that characterize the PLOS-net, some of which are shown to be strongly correlated with search performance. Finally, specifically for many-objective optimization problems, we challenged in 24 the commonly held assumption that combinatorial problems with many objectives are harder to optimize than problems with two or three objectives. In particular, we provide empirical evidence that increasing the number of objectives tends to reduce the difficulty of the landscape being optimized. Of course, increasing the number of objectives brings about other challenges, such as an increase in the computational effort of many operations, or in the memory requirements for storing non-dominated solutions.

8.3 Ultra-scale parallel optimization

During 2023, we contributed two main results in this parallel optimization axis. In 15, we investigate the design of a parallel distributed tree-search algorithm and its implementation using the Chapel productivity-aware programming language. In 22, we propose a generic asynchronous parallel methodology for fractal-based optimization algorithms on multi-node, multi-CPU distributed environments. These contributions are connected to our first research axis, and they are discussed in more detail in the following.

8.3.1 Parallel distributed productivity-aware tree-search using Chapel

Participants: Guillaume Helbecque [contact person], Jan Gmys, Nouredine Melab, Tiago Carneiro [University of Luxembourg], Pascal Bouvry [University of Luxembourg].

With the recent arrival of the exascale era, modern supercomputers are increasingly large, making their programming much more complex. In addition to performance, software productivity is therefore a major criterion when choosing a programming language for exascale computing, such as Chapel. In 15, we investigate the design of a parallel distributed tree-search algorithm, namely P3D-DFS, and its implementation using Chapel. The design is based on Chapel's DistBag data structure, revisited by: (i) redefining the data structure for Depth-First tree-Search (DFS), henceforth renamed DistBag-DFS; and (ii) redesigning the underlying load-balancing mechanism. In addition, we propose two instantiations of P3D-DFS considering the Branch-and-Bound (B&B) and Unbalanced Tree Search (UTS) algorithms. In order to evaluate how much performance is traded for productivity, we compare the Chapel-based implementations of B&B and UTS to their best-known counterparts based on traditional OpenMP (intra-node) and MPI+X (inter-node). For experimental validation using 4096 processing cores, we consider the permutation flow-shop scheduling problem for B&B and synthetic literature benchmarks for UTS. The reported results show that P3D-DFS competes with its OpenMP baselines in coarser-grained shared-memory scenarios, and with its MPI+X counterparts in distributed-memory settings, considering both performance and productivity-awareness. Within the scope of this work, this makes Chapel a credible alternative to OpenMP/MPI+X for exascale programming. A work-stealing sketch of the DistBag-DFS principle is given below.
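The following Python sketch conveys the work-stealing principle behind DistBag-DFS on a synthetic tree: each worker pops depth-first from a private bag and steals from a non-empty victim when it runs dry. The Chapel implementation of 15 is distributed and far more elaborate; all names and the simplified termination scheme here are assumptions for illustration only.

import random
import threading
from collections import deque

N_WORKERS, MAX_DEPTH, ARITY = 4, 8, 3
bags = [deque() for _ in range(N_WORKERS)]  # one private work bag per worker
lock = threading.Lock()
pending = 1                                 # nodes pushed but not yet processed
explored = [0] * N_WORKERS

def worker(wid):
    global pending
    while True:
        with lock:
            if bags[wid]:
                node = bags[wid].pop()      # local DFS: take the newest node
            else:
                victims = [v for v in range(N_WORKERS) if bags[v]]
                if victims:
                    # Steal the oldest node of a random victim (coarse work).
                    node = bags[random.choice(victims)].popleft()
                elif pending == 0:
                    return                  # no work anywhere: terminate
                else:
                    continue                # others are still busy: retry
        explored[wid] += 1
        children = [node + 1] * ARITY if node + 1 <= MAX_DEPTH else []
        with lock:
            bags[wid].extend(children)      # branch into child subproblems
            pending += len(children) - 1    # this node is now fully processed

bags[0].append(0)                           # root of the synthetic tree (depth 0)
threads = [threading.Thread(target=worker, args=(w,)) for w in range(N_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(explored), "nodes explored:", explored)

Stealing from the opposite end of the victim's bag tends to transfer large, shallow subtrees, which keeps steal operations rare; this is the intuition behind the underlying load-balancing mechanism.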

8.3.2 Massively Parallel Asynchronous Fractal Optimization

Participants: Thomas Firmin [contact person], El-Ghazali Talbi.

Tightly related to our first research axis on decomposition-based optimization, we consider fractal-based decomposition as a flexible framework representing a family of optimization algorithms based on a hierarchical decomposition of the search space. We have built a software framework called Zellij, in which we were able to instantiate popular decomposition-based algorithms. Our goal is to tackle optimization problems characterized by computationally expensive objective functions and high-dimensional search spaces. In 22, we propose a generic asynchronous parallel methodology for fractal-based optimization algorithms on multi-node, multi-CPU distributed environments. Our experimental results show that the asynchronous version significantly reduces computation time compared to the single-threaded one. The obtained results are also analyzed according to the various search components, such as the tree search, exploration, and exploitation strategies. The sketch below illustrates the underlying decomposition principle.
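For intuition, here is a minimal serial sketch of fractal-style decomposition: a best-first tree search that repeatedly bisects the most promising hypercube, scored by sampling. The objective, the scoring rule and all parameters are illustrative assumptions; Zellij and the asynchronous multi-node methodology of 22 generalize and parallelize this scheme.

import heapq
import itertools
import numpy as np

def objective(x):                     # stand-in for an expensive black box
    return float(np.sum(x ** 2))

def score(lo, hi, rng, n_samples=8):  # exploration: sample inside the hypercube
    pts = rng.uniform(lo, hi, size=(n_samples, lo.size))
    return min(objective(p) for p in pts)

rng = np.random.default_rng(0)
lo, hi = np.full(2, -5.0), np.full(2, 5.0)
tick = itertools.count(1)             # tie-breaker so the heap never compares arrays
heap = [(score(lo, hi, rng), 0, lo, hi)]

best = float("inf")
for _ in range(50):                   # tree search: always expand the best region
    s, _, lo, hi = heapq.heappop(heap)
    best = min(best, s)
    d = int(np.argmax(hi - lo))       # exploitation: bisect along the longest side
    mid = 0.5 * (lo[d] + hi[d])
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[d], right_lo[d] = mid, mid
    for clo, chi in ((lo, left_hi), (right_lo, hi)):
        heapq.heappush(heap, (score(clo, chi, rng), next(tick), clo, chi))

print("best value found:", best)

In the asynchronous parallel version, the region evaluations (the calls to score) are farmed out to workers, and the tree is expanded as soon as results come back, without global synchronization.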

9 Bilateral contracts and grants with industry

9.1 Bilateral grants with industry

Our current industrially funded projects are at the heart of the Bonus project. They are summarized in the following.

  • EDF (2021-2024, Paris): this joint project with EDF, a major electrical power player in France, targets the automatic design and configuration of deep neural networks applied to energy consumption forecasting. An initial budget of 62K€ was allocated in the context of the PGMO programme of the Jacques Hadamard foundation of mathematics. A further budget of 150K€ was then allocated to fund a PhD thesis (CIFRE).
  • ONERA & CNES (2016-2023, Paris): this project with major European aerospace players focuses on the design of aerospace vehicles, a high-dimensional expensive multidisciplinary problem. Tackling such a problem effectively and efficiently requires the research lines of Bonus. Two jointly supervised PhD students (J. Pelamatti and A. Hebbal) have been involved in this project. The PhD thesis of J. Pelamatti was defended in March 2020 and that of A. Hebbal  48 in January 2021. A third one (J. Gamot) started in November 2020, with the objective of designing and implementing ultra-scale multi-objective highly constrained optimization methods for solving the internal layout problem of future aerospace systems.
  • Confiance.ai project (2021-2024, Paris): this joint project with the SystemX Institute of Research and Technology (IRT) and Université Polytechnique Hauts-de-France focuses on multi-objective automated design and optimization of deep neural networks, with applications to embedded systems. A PhD student (H. Ouertatani) was hired in Oct. 2021 to work on this topic.

10 Partnerships and cooperations

10.1 International initiatives

10.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program

AnyScale
  • Title:
    Parallel Fractal-based Chaotic optimization: Application to the optimization of deep neural networks for energy management
  • Duration:
    2022 - 2024
  • Coordinator:
    Rachid Ellaia (ellaia@emi.ac.ma)
  • Partners:
    • Ecole Mohammadia d'Ingénieurs Rabat (Morocco)
  • Inria contact:
    El-Ghazali Talbi
  • Summary:

    Many scientific and industrial disciplines are increasingly concerned with big optimisation problems (BOPs). BOPs are characterised by a huge number of mixed decision variables and/or many expensive objective functions. Bridging the gap between computational intelligence, high-performance computing and big optimisation is an important challenge for the next decade in solving complex problems in science and industry.

    The goal of this associate team project is to come up with breakthroughs in nature-inspired algorithms jointly based on any-scale fractal decomposition and chaotic approaches for BOPs. These algorithms are massively parallel and can be efficiently designed and implemented on heterogeneous exascale supercomputers including millions of CPU/GPU (Graphics Processing Units) cores. The convergence between chaos, fractals and massively parallel computing will represent a novel computing paradigm for solving complex problems.

    From the application and validation point of view, we target the automatic design of deep neural networks, applied to the prediction of electrical energy consumption and production.

10.1.2 Participation in other International Programs

International associated Lab MODO

Participants: Arnaud Liefooghe, Bilel Derbel.

  • Title:
    Frontiers in Massive Optimization and Computational Intelligence (MODO)
  • Partner Institution(s):
    • Shinshu University, Japan
  • Start Date:
    2017
  • Additional info/keywords:
    The global goal of the MODO lab is to federate the French and Japanese researchers interested in tackling challenging optimization problems, where one has to deal with high dimensionality, heterogeneity and expensive objective functions, using innovative approaches at the crossroads of combinatorial optimization, fitness landscape analysis, and machine learning.
MoU RIKEN R-CCS

Participants: Bilel Derbel, Lorenzo Canonne.

  • Title:
    Memorandum of Understanding
  • Partner Institution(s):
    • RIKEN Center for Computational Science, Japan
  • Start Date:
    2021
  • Additional info/keywords:
    This MoU aims at strengthening the research collaboration with one of the world-leading institutes in HPC, targeting the solving of compute-intensive optimization problems on top of the Japanese Fugaku supercomputer facilities (ranked among the top systems of recent TOP500 lists).

10.2 International research visitors

10.2.1 Visits of international scientists

Cosijopii Garcia Garcia

  • Status
    PhD
  • Institution of origin:
    INAOE, Instituto Nacional de Astrofísica, Óptica y Electrónica
  • Country:
    Mexico
  • Dates:
    November 2023 to April 2024
  • Context of the visit:
    Designing and analysing evolutionary algorithms for single- and multi-objective Neural Architecture Search (NAS)
  • Mobility program/type of mobility:
    research stay.

10.3 European initiatives

10.3.1 Other european programs/initiatives

  • ERC Generator "Exascale Parallel Nature-inspired Algorithms for Big Optimization Problems", supported by a University of Lille call (2023-2025, Total: 99K€). The goal of this project is to come up with breakthroughs in nature-inspired algorithms jointly based on fractal decomposition and chaotic optimization approaches for BOPs. These algorithms are massively parallel and can be efficiently designed and implemented on heterogeneous exascale supercomputers including millions of CPU/GPU (Graphics Processing Units) cores, as well as neuromorphic accelerators composed of billions of spiking neurons. E-G. Talbi is the leader of this project.

10.4 National initiatives

10.4.1 ANR

  • ANR PEPR Numpex/Axis Exa-MA (2022-2027, Grant: Total: 6.5M€). The goal of the High-Performance Digital for Exascale (Numpex) program, dedicated to both scientific research and industry, is twofold: (1) designing and developing the software building blocks for the future exascale supercomputers, and (2) preparing the major application areas aimed at fully harnessing the capabilities of the latter. Numpex is composed of 5 axes including Exa-MA, which stands for Exascale computing: Methods and Algorithms, and is organized in 7 WPs including Optimize at Exascale (WP5). The overall goal of WP5 is the design and implementation of exascale algorithms to efficiently and effectively solve large optimization problems. The research topics of the Bonus team are perfectly in line with the framework of WP5. E-G. Talbi and N. Melab are respectively the leader of and a contributor to this work package.
  • ANR PIA Equipex+ MesoNet (2021-2027, Grant: Total: 14.2M€, for ULille: 1.4M€). The goal of the project is to set up a distributed infrastructure dedicated to the coordination of HPC and AI in France. This inclusive and structuring project, supported by GENCI partners (MESRI, CNRS, CEA, CPU, INRIA), aims to integrate at least one mesocenter per region, making them regional references and relays. The infrastructure, fully integrated with the European Open Science Cloud (EOSC) initiative, should have a significant impact on the appropriation by researchers of the national and regional public HPC and AI facilities. Coordinated by GENCI, MesoNet gathers 22 partners including the mesocenter located at ULille, for which N. Melab is the co-PI. The MesoNet infrastructure is highly important for the research activities of Bonus and of many other research groups, including those of Inria. In addition to the funding dedicated to hardware equipment, including a nation-wide federated supercomputer and storage, funding will be devoted to research engineers, one of them for ULille (4.5 years), as well as a PhD for Bonus.
  • Bilateral ANR-FNR France/Luxembourg PRCI UltraBO (2023-2026, Grant: 207K€ for Bonus, PI: N. Melab) in collaboration with the University of Luxembourg (Co-PI: G. Danoy). According to the Top500, modern supercomputers are increasingly large (millions of cores), heterogeneous (CPU-GPU, …) and less reliable (MTBF < 1h), making their programming more complex. The development of parallel algorithms for these ultra-scale supercomputers is in its infancy, especially in combinatorial optimization. Our objective is to investigate MPI+X and PGAS-based approaches for the exascale-aware design and implementation of hybrid algorithms combining exact methods (e.g. B&B) and metaheuristics (e.g. evolutionary algorithms) for solving challenging optimization problems. We will address in a holistic (uncommon) way three roadblocks on the road to exascale: locality-aware ultra-scalability, CPU-GPU heterogeneity and checkpointing-based fault tolerance. Our application challenge is to solve to optimality very hard benchmark instances (e.g. Flow-shop instances unsolved for 25 years). For validation, various-scale supercomputers will be used, ranging from petascale platforms for debugging, including Jean Zay (France), ULHPC (Luxembourg), SILECS/Grid'5000 (CPER CornelIA) and MesoNet (PIA Equipex+), to exascale supercomputers for real production, including the first two supercomputers of the Top500 (Frontier via our Georgia Tech partner, Fugaku via our Riken partner) as well as the two upcoming EuroHPC ones.

10.5 Regional initiatives

  • CPER CornelIA (2021-2027, Grant: 820K€): this project aims at strengthening the research and infrastructure necessary for the development of scientific research in responsible and sustainable Artificial Intelligence at the regional (Hauts-de-France) level. The scientific leader at Lille (N. Melab) is in charge of managing the renewal of the hardware equipment of the Grid'5000 nation-wide experimental testbed, and of hiring an engineer for its system & network administration, user support and development. B. Derbel took over from N. Melab the responsibility for the infrastructure management and its coordination with the other partners starting from late 2023. He is a member of the CornelIA executive board.

11 Dissemination

11.1 Promoting scientific activities

11.1.1 Scientific events: organisation

General chair, scientific chair
  • E-G. Talbi (Steering committee Chair): Intl. Conf. on Optimization and Learning (OLA).
  • E-G. Talbi (Steering committee): IEEE Workshop Parallel Distributed Computing and Optimization (IPDPS/PDCO).
  • E-G. Talbi (Steering committee): Intl. Conf. on Metaheuristics and Nature Inspired Computing (META).
  • A. Liefooghe (Steering committee): Eur. Conf. on Evolutionary Computation in Combinatorial Optimisation (EvoCOP)
  • B. Derbel (special session co-chair): Advances in Decomposition-based Evolutionary Multi-objective Optimization (ADEMO), special session at CEC/WCCI 2023 (with S. Z. Martinez, K. Li, Q. Zhang).

11.1.2 Scientific events: selection

Chair of conference program committees
  • B. Derbel (ECOM Track co-chair), ACM GECCO 2023: Genetic and Evolutionary Computation Conference (Lisbon, Portugal, 2023).
  • A. Liefooghe (Hybrid Scheduling co-chair), ACM GECCO 2023: Genetic and Evolutionary Computation Conference (Lisbon, Portugal, 2023).
Member of the conference program committees
  • The ACM Genetic and Evolutionary Computation Conference (GECCO).
  • The IEEE Congress on Evolutionary Computation (CEC).
  • European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP).
  • International Conference on Evolutionary Multi-criterion Optimization (EMO).
  • Intl. Conf. on Optimization and Learning (OLA)

11.1.3 Journal: selection

Member of the editorial boards
  • A. Liefooghe (Reproducibility board member): ACM Transactions on Evolutionary Learning and Optimization (TELO), since 2019.
  • E-G. Talbi (Editorial board member): ACM Transactions on Evolutionary Learning and Optimization (TELO), since 2023.
  • N. Melab (Associate Editor): ACM Computing Surveys, since 2019.
Reviewer - reviewing activities
  • IEEE Transactions on Evolutionary Computation (TEVC)
  • Applied Soft Computing (Elsevier)
  • ACM Transactions on Evolutionary Learning and Optimization
  • ACM Computing Surveys

11.1.4 Invited talks

  • E-G. Talbi, Optimization of deep neural networks, Keynote speaker, Int. Conf. on Services and Industry of the Future, Rabat, Morocco, Oct 2023.
  • B. Derbel, Decomposition in evolutionary multi-objective optimization: design, parallelism, and fitness landscape analysis, November 2023, University of Malaga.
  • N. Melab. Big Optimization using Ultra-scale Computing, September 2023, University of Mons, Belgium.

11.1.5 Leadership within the scientific community

  • N. Melab: Scientific leader of Grid’5000 HPC testbed at Lille, since 2004.
  • E-G. Talbi: Co-president of the working group “META: Metaheuristics - Theory and applications”, GDR RO and GDR MACS.
  • B. Derbel: Chair of the IEEE CIS Task Force on Decomposition-based Techniques in Evolutionary Computation (DTEC)
  • E-G. Talbi: Co-Chair of the IEEE Task Force on Cloud Computing within the IEEE Computational Intelligence Society.
  • A. Liefooghe: executive board member and co-secretary of the association “Artificial Evolution” (EA).
  • A. Liefooghe: member of the scientific council of GdR RO, and co-animator of the MH2PPC axis (CNRS).

11.1.6 Scientific expertise

  • E-G. Talbi: FONDECYT project expert (National Research and Development Agency, Chile), June 2023.

11.1.7 Research administration

  • N. Melab: Member of the Scientific Board for the Inria Lille research center, since Feb. 2019.
  • N. Melab: Member of the steering committee of “Maison de la Simulation”, since 2016.
  • B. Derbel: member of the CER committee (applications to Ph.D and post-doctoral positions), Inria Lille.
  • B. Derbel: Member of the Scientific Board for the MADIS doctoral school at the University of Lille, since 2022.
  • A. Liefooghe: elected member of the national council of universities (CNU 27), in charge of the evaluation of academic staff: qualification for (Associate) Professor positions, promotion decisions, etc.

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

Taught courses
  • International Master lecture: N. Melab, Supercomputing, 45h ETD, M2, Université de Lille, France.
  • Master lecture: N. Melab, Operations Research, 60h ETD, M1, Université de Lille, France.
  • Licence: A. Liefooghe, Introduction to Programming, 36h ETD, L1, Université de Lille, France.
  • Licence: A. Liefooghe, Algorithms and Programming, 36h ETD, L1, Université de Lille, France.
  • Licence: A. Liefooghe, Linear programming, 36h ETD, L3, Université de Lille, France.
  • Master: B. Derbel, Algorithms and Complexity, 35h, M1, Université de Lille, France
  • Master: B. Derbel, Optimization and machine learning, 24h, M1, Université de Lille, France
  • Engineering school: E-G. Talbi, Advanced optimization, 36h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Data mining, 36h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Operations research, 60h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Graphs, 25h, Polytech'Lille, Université de Lille, France.
Teaching responsibilities
  • Head of the international relations: E-G. Talbi, Polytech'Lille, Université de Lille, France.
  • Head of the international relations: B. Derbel, Computer Science Department, Faculty of Science and Technology, Université de Lille, France.
  • Master programme: N. Melab, co-head (with T. Rey) of the international Master 2 in High-Performance Computing and Simulation, Université de Lille, France.
  • Responsible for Communication: A. Liefooghe, Computer Science Department, Université de Lille, France.

11.2.2 Supervision

  • PhD defense: Lorenzo Canonne, Massively parallel and large scale graybox optimization 35. Supervisor: B. Derbel. Defended Dec. 2023.
  • PhD defense: Raphaël Cosson, Multi-Objective Landscape Analysis and Feature-based Algorithm Selection 36. Supervisors: B. Derbel and A. Liefooghe. Defended Dec. 2023.
  • PhD defense: Juliette Gamot, Algorithms for Conditional Search Space Optimal Layout Problems 37. Supervisors: El-Ghazali Talbi and Nouredine Melab; co-supervisors from ONERA: L. Brevault and M. Balesdent. Defended Dec. 2023.
  • PhD in progress: David Redon, Enabling Large Scale Computational Intelligence with HPC, started Oct. 2020, Bilel Derbel and Pierre Fortin (Université de Lille).
  • PhD in progress: Maxime Gobert, Parallel multi-objective global optimization with applications to simulation-based parameter exploration, started Oct. 2018, Nouredine Melab (Université de Lille) and Daniel Tuyttens (Université de Mons, Belgium).
  • PhD in progress (cotutelle): Guillaume Helbecque, Productivity-aware parallel cooperative combinatorial optimization for ultra-scale supercomputers, Oct. 2021, Nouredine Melab, co-supervisor from University of Luxembourg: P. Bouvry.
  • PhD in progress: Jérôme Rouzé, Parallel Hybrid Metaheuristics for Noisy Intermediate-Scale Quantum Machines, Nov. 2023, Nouredine Melab (Université de Lille) and Daniel Tuyttens (Université de Mons).
  • PhD in progress: Thomas Firmin, Spiking neural networks and parameter optimization for massively parallel GPU-powered clusters, started Oct. 2021, El-Ghazali Talbi, co-supervisor from the Emeraude team (CRIStAL lab): P. Boulet.
  • PhD in progress: Houssem Ouertatani, Multi-objective optimization of deep neural networks for embedded applications, Oct. 2021, El-Ghazali Talbi, co-supervisor from Université Polytechnique Hauts-de-France: S. Niar.
  • PhD in progress: Julie Keisler, Deep neural networks for spatio-temporal series forecasting, started Oct. 2022, CIFRE with EDF, El-Ghazali Talbi.

11.2.3 Juries

  • B. Derbel (Jury president): PhD thesis of Ikram Senoussaoui, Processor and memory co-scheduling of embedded real-time applications on multicore platforms, Université de Lille, defended Dec 2022.
  • E-G. Talbi (Reviewer): PhD thesis of Ali Yaddaden, Stochastic optimization in power system incorporating renewable energy resources, Ecole Nationale Supérieure des Mines d'Alès (IMT Mines Alès), defended Nov 2023.
  • E-G. Talbi (Reviewer): PhD thesis of Mohamed Elamine Athmani, Learning-based approaches for parallel machines scheduling problems, UTT (Troyes), defended Dec 2023.

11.3 Popularization

11.3.1 Internal or external Inria responsibilities

  • N. Melab: Chargé de Mission of High Performance Computing and Simulation at Université de Lille, since 2010.

12 Scientific production

12.1 Major publications

  • 1. A. Bendjoudi, N. Melab and E.-G. Talbi. FTH-B&B: A Fault-Tolerant Hierarchical Branch and Bound for Large Scale Unreliable Environments. IEEE Trans. Computers 63(9), 2014, 2302-2315.
  • 2. S. Cahon, N. Melab and E.-G. Talbi. ParadisEO: A Framework for the Reusable Design of Parallel and Distributed Metaheuristics. J. Heuristics 10(3), 2004, 357-380.
  • 3. F. Daolio, A. Liefooghe, S. Verel, H. Aguirre and K. Tanaka. Problem Features versus Algorithm Performance on Rugged Multiobjective Combinatorial Fitness Landscapes. Evolutionary Computation 25(4), 2017.
  • 4. B. Derbel. Contributions to single- and multi-objective optimization: towards distributed and autonomous massive optimization. Thesis, Université de Lille, 2017.
  • 5. B. Derbel, G. Pruvost, A. Liefooghe, S. Verel and Q. Zhang. Walsh-based surrogate-assisted multi-objective combinatorial optimization: A fine-grained analysis for pseudo-boolean functions. Applied Soft Computing 136, March 2023, 110061.
  • 6. J. Gmys, M. Mezmaz, N. Melab and D. Tuyttens. IVM-based parallel branch-and-bound using hierarchical work stealing on multi-GPU systems. Concurrency and Computation: Practice and Experience 29(9), 2017.
  • 7. A. Liefooghe, F. Daolio, S. Verel, B. Derbel, H. Aguirre and K. Tanaka. Landscape-aware performance prediction for evolutionary multi-objective optimization. IEEE Transactions on Evolutionary Computation 24(6), 2020, 1063-1077.
  • 8. T. V. Luong, N. Melab and E.-G. Talbi. GPU Computing for Parallel Local Search Metaheuristic Algorithms. IEEE Trans. Computers 62(1), 2013, 173-185.
  • 9. A. Nakib, S. Ouchraa, N. Shvai, L. Souquet and E.-G. Talbi. Deterministic metaheuristic based on fractal decomposition for large-scale optimization. Appl. Soft Comput. 61, 2017, 468-485.
  • 10. T.-T. Vu and B. Derbel. Parallel Branch-and-Bound in Multi-core Multi-CPU Multi-GPU Heterogeneous Environments. Future Generation Computer Systems 56, March 2016, 95-109.

12.2 Publications of the year

International journals

International peer-reviewed conferences

Conferences without proceedings

Scientific books

Edition (books, proceedings, special issue of a journal)

  • 33. T. Firmin and E.-G. Talbi. A Comparative Study of Fractal-Based Decomposition Optimization. In: 6th International Conference on Optimization and Learning, Communications in Computer and Information Science, vol. 1824, Springer Nature Switzerland, May 2023, 3-20.
  • 34. Editorial of the Special Issue Intelligent Solutions for Efficient Logistics and Sustainable Transportation. Applied Soft Computing 133, January 2023, 109961.

Doctoral dissertations and habilitation theses

  • 35. L. Canonne. Massively parallel and large scale graybox optimization. PhD thesis, Université de Lille, December 2023.
  • 36. R. Cosson. Multi-Objective Landscape Analysis and Feature-based Algorithm Selection. PhD thesis, Université de Lille, December 2023.
  • 37. J. Gamot. Algorithms for Conditional Search Space Optimal Layout Problems. PhD thesis, Université de Lille, December 2023.

Reports & preprints

  • 38. F. Felten, D. Gareev, E.-G. Talbi and G. Danoy. Hyperparameter Optimization for Multi-Objective Reinforcement Learning. Report, University of Luxembourg, 2023.
  • 39. F. Felten, E.-G. Talbi and G. Danoy. Multi-Objective Reinforcement Learning based on Decomposition: A taxonomy and framework. Report, University of Luxembourg, 2023.
  • 40. J. Gamot, M. Balesdent, R. Wuilbercq, A. Tremolet, N. Melab and E.-G. Talbi. A Virtual-Force Based Swarm Algorithm for Balanced Circular Bin Packing Problems. Report, Université de Lille, 2023.
  • 41. J. Keisler, E.-G. Talbi, S. Claudel and G. Cabriel. An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters. Report, Université de Lille, 2023.
  • 42. E.-G. Talbi. Metaheuristics for (Variable-Size) Mixed Optimization Problems: A Unified Taxonomy and Survey. Report, Université de Lille; Inria Lille Nord Europe; CRIStAL; BONUS, January 2024, 55 pages.

12.3 Cited publications

  • 43. M. Balesdent, L. Brevault, N. B. Price, S. Defoort, R. Le Riche, N.-H. Kim, R. T. Haftka and N. Bérend. Advanced Space Vehicle Design Taking into Account Multidisciplinary Couplings and Mixed Epistemic/Aleatory Uncertainties. In: G. Fasano and J. D. Pintér (eds.), Space Engineering: Modeling and Optimization with Case Studies, Springer International Publishing, 2016, 1-48. URL: http://dx.doi.org/10.1007/978-3-319-41508-6_1
  • 44. B. Derbel, D. Brockhoff, A. Liefooghe and S. Verel. On the Impact of Multiobjective Scalarizing Functions. In: Parallel Problem Solving from Nature (PPSN XIII), Ljubljana, Slovenia, September 13-17, 2014, Proceedings, 2014, 548-558.
  • 45. B. Derbel, A. Liefooghe, G. Marquet and E.-G. Talbi. A fine-grained message passing MOEA/D. In: IEEE Congress on Evolutionary Computation (CEC 2015), Sendai, Japan, May 25-28, 2015, 1837-1844.
  • 46. J. Dreo, A. Liefooghe, S. Verel, M. Schoenauer, J. J. Merelo, A. Quemy, B. Bouvier and J. Gmys. Paradiseo: From a Modular Framework for Evolutionary Computation to the Automated Design of Metaheuristics. In: GECCO 2021 - Genetic and Evolutionary Computation Conference Companion, ACM, Lille / Virtual, France, July 2021, 1522-1530.
  • 47. R. T. Haftka, D. Villanueva and A. Chaudhuri. Parallel surrogate-assisted global optimization with expensive functions - a survey. Structural and Multidisciplinary Optimization 54(1), 2016, 3-13.
  • 48. A. Hebbal. Deep Gaussian Processes for the Analysis and Optimization of Complex Systems - Application to Aerospace System Design. PhD thesis, Université de Lille, January 2021.
  • 49. D. R. Jones, M. Schonlau and W. J. Welch. Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization 13(4), 1998, 455-492.
  • 50. J. Pelamatti, L. Brevault, M. Balesdent, E.-G. Talbi and Y. Guerin. How to deal with mixed-variable optimization problems: An overview of algorithms and formulations. In: Advances in Structural and Multidisciplinary Optimization, Proc. of the 12th World Congress of Structural and Multidisciplinary Optimization (WCSMO12), Springer, 2018, 64-82. URL: http://dx.doi.org/10.1007/978-3-319-67988-4_5
  • 51. F. Shahzad, J. Thies, M. Kreutzer, T. Zeiser, G. Hager and G. Wellein. CRAFT: A library for easier application-level Checkpoint/Restart and Automatic Fault Tolerance. CoRR abs/1708.02030, 2017. URL: http://arxiv.org/abs/1708.02030
  • 52. N. Shavit. Data Structures in the Multicore Age. Communications of the ACM 54(3), 2011, 76-84.
  • 53. M. Snir et al. Addressing Failures in Exascale Computing. Int. J. High Perform. Comput. Appl. 28(2), May 2014, 129-173.
  • 54. E.-G. Talbi. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Annals OR 240(1), 2016, 171-215.
  • 55. T.-T. Vu and B. Derbel. Parallel Branch-and-Bound in multi-core multi-CPU multi-GPU heterogeneous environments. Future Generation Comp. Syst. 56, 2016, 95-109.
  • 56. X. Zhang, Y. Tian, R. Cheng and Y. Jin. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization. IEEE Trans. Evol. Computation 22(1), 2018, 97-112.
  1. A solution x dominates another solution y if x is better than y for all objectives and there exists at least one objective for which x is strictly better than y.
  2. The Pareto Front is the set of non-dominated solutions.
  3. In the context of Bonus, supercomputers are composed of several massively parallel processing nodes (inter-node parallelism) including multi-core processors and GPUs (intra-node parallelism).
  4. A WS mechanism is mainly defined by two components: a victim selection strategy, which selects the processing core to be stolen from, and a work sharing policy, which determines the part and amount of the work unit to be given to the thief upon a WS request.