Activity Report 2021
Project-Team BONUS
RNSR: 201722535A
Research center
In partnership with:
Université de Lille
Team name:
Big Optimization aNd Ultra-Scale Computing
In collaboration with:
Centre de Recherche en Informatique, Signal et Automatique de Lille
Domain
Applied Mathematics, Computation and Simulation
Theme
Optimization, machine learning and statistical methods
Creation of the Project-Team: June 1, 2019

Keywords

Computer Science and Digital Science

  • A1.1.1. Multicore, Manycore
  • A1.1.2. Hardware accelerators (GPGPU, FPGA, etc.)
  • A1.1.5. Exascale
  • A8.2.1. Operations research
  • A8.2.2. Evolutionary algorithms
  • A9.6. Decision support
  • A9.7. AI algorithmics

Other Research Topics and Application Domains

  • B3.1. Sustainable development
  • B3.1.1. Resource management
  • B7. Transport and logistics
  • B8.1.1. Energy for smart buildings

1 Team members, visitors, external collaborators

Faculty Members

  • Nouredine Melab [Team leader, Université de Lille, Professor, HDR]
  • Omar Abdelkafi [Université de Lille, Associate Professor]
  • Bilel Derbel [Université de Lille, Associate Professor, HDR]
  • Arnaud Liefooghe [Université de Lille, Associate Professor]
  • El-Ghazali Talbi [Université de Lille, Professor, HDR]

PhD Students

  • Brahim Aboutaib [Université de Lille, until Aug 2021]
  • Nicolas Berveglieri [Université de Lille]
  • Guillaume Briffoteaux [Université de Mons, Belgium]
  • Lorenzo Canonne [Inria]
  • Raphael Cosson [Université de Lille]
  • Thomas Firmin [Université de Lille, from Oct 2021]
  • Juliette Gamot [Inria]
  • Maxime Gobert [Université de Mons, Belgium]
  • Ali Hebbal [Université de Lille, until Feb 2021]
  • Guillaume Helbecque [Université de Lille, from Oct 2021]
  • Houssem Ouertatani [Institut de recherche technologique System X, from Oct 2021]
  • Geoffrey Pruvost [Université de Lille, until Sep 2021]
  • David Redon [Université de Lille, from Oct 2021]
  • Jeremy Sadet [Université de Valenciennes et du Hainaut-Cambrésis]

Technical Staff

  • Nassime Aslimani [Université de Lille, Engineer, until Sep 2021]
  • Dimitri Delabroye [Inria, Engineer, until Aug 2021]
  • Jan Gmys [Inria, Engineer]
  • Julie Keisler [Université de Lille, Engineer, from Oct 2021]

Interns and Apprentices

  • Ulysse Cambier [Université de Lille, from Jun 2021 until Aug 2021]
  • David Redon [Université de Lille, Mar 2021]

Administrative Assistant

  • Karine Lewandowski [Inria]

Visiting Scientists

  • Badr Abou El Majd [Université Mohammed V de Rabat, Morocco, from Sep 2021]
  • Nikolaus Frohner [University of Vienna, Austria, from Sep 2021]
  • Roberto Santana Hermida [University of the Basque Country, Spain, from Sep 2021]
  • Emanuel Vega Mena [University of Chile, Santiago, from Oct 2021]

2 Overall objectives

2.1 Presentation

Solving an optimization problem consists in optimizing (minimizing or maximizing) one or more objective function(s) subject to some constraints. This can be formulated as follows:

$$\min / \max \; F(\mathbf{x}) = \big(f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_m(\mathbf{x})\big), \quad \text{subject to } \mathbf{x} \in \Omega,$$

where $\mathbf{x} = (x_1, x_2, \dots, x_n)$ is the decision variable vector of dimension $n$, $\Omega$ is the domain of $\mathbf{x}$ (the decision space), and $F(\mathbf{x})$ is the objective function vector of dimension $m \geq 1$. The objective space is composed of all values of $F(\mathbf{x})$ corresponding to the values of $\mathbf{x}$ in the decision space.
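
As a minimal illustration of this formulation, the following Python sketch (with a hypothetical bi-objective function, not one of the problems studied by the team) evaluates an objective vector F(x) and tests Pareto dominance between two solutions:

    import numpy as np

    def F(x):
        # Hypothetical bi-objective function (m = 2): two shifted sphere functions.
        return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

    def dominates(fa, fb):
        # fa dominates fb (minimization): no worse on all objectives, strictly better on at least one.
        return np.all(fa <= fb) and np.any(fa < fb)

    x1, x2 = np.zeros(10), np.full(10, 0.5)
    print(F(x1), F(x2), dominates(F(x1), F(x2)))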

Nowadays, in many research and application areas we are witnessing the emergence of the "big" era (big data, big graphs, etc.). In the optimization setting, problems are increasingly big in practice. Big optimization problems (BOPs) refer to problems composed of a large number of environmental input parameters and/or decision variables (high dimensionality), and/or many objective functions that may be computationally expensive. For instance, in smart grids, many optimization problems have to consider a large number of consumers (appliances, electrical vehicles, etc.) and multiple suppliers with various energy sources. In the area of engineering design, the optimization process must often take into account a large number of parameters from different disciplines. In addition, the evaluation of the objective function(s) often consists in the execution of an expensive simulation of a black-box complex system. This is typically the case in aerodynamics, where a CFD-based simulation may require several hours. On the other hand, to meet the fast-growing needs of applications in terms of computational power in a wide range of areas including optimization, high-performance computing (HPC) technologies have undergone a revolution during the last decade (see the Top500 international ranking, edition of November 2021). Indeed, HPC is evolving toward ultra-scale supercomputers composed of millions of cores supplied in heterogeneous devices including multi-core processors with various architectures, GPU accelerators and MIC coprocessors.

Beyond the "big buzzword", as some say, solving BOPs raises at least four major challenges: (1) tackling their high dimensionality in the decision space; (2) handling many objectives; (3) dealing with computationally expensive objective functions; and (4) scaling up on (ultra-scale) modern supercomputers. The overall scientific objective of the Bonus project is to address these challenges efficiently. On the one hand, the focus will be put on the design, analysis and implementation of optimization algorithms that scale to high-dimensional (in decision variables and/or objectives) and/or expensive problems. On the other hand, the focus will also be put on the design of optimization algorithms able to scale on heterogeneous supercomputers including several millions of processing cores. To achieve these objectives and meet the associated challenges, a program including three lines of research will be adopted (Fig. 1): decomposition-based optimization, Machine Learning (ML)-assisted optimization and ultra-scale optimization. These research lines are developed in the following section.

Figure 1: Research challenges/objectives and lines

From the software standpoint, our objective is to integrate the approaches we will develop into our ParadisEO framework 3, 26 in order to allow their reuse inside and outside the Bonus team. The major challenge will be to extend ParadisEO to make it more interoperable with other software, including machine learning tools, other (exact) solvers and simulators. From the application point of view, the focus will be put on two classes of applications: complex scheduling and engineering design.

3 Research program

3.1 Decomposition-based Optimization

Given the large scale of the targeted optimization problems in terms of the number of variables and objectives, their decomposition into simplified and loosely coupled or independent subproblems is essential to meet the challenge of scalability. The first line of research is to investigate the decomposition approach in the two spaces (decision and objective) and their combination, as well as their implementation on ultra-scale architectures. The motivation for decomposition is twofold. First, decomposition allows the parallel resolution of the resulting subproblems on ultra-scale architectures; several issues will be addressed here: the definition of the subproblems, their coding to allow their efficient communication and storage (checkpointing), their assignment to processing cores, etc. Second, decomposition is necessary for solving large problems that cannot be solved (efficiently) using traditional algorithms. For instance, with the popular NSGA-II algorithm, the number of non-dominated solutions 1 increases drastically with the number of objectives, leading to a very slow convergence to the Pareto Front 2. Therefore, decomposition-based techniques are gaining growing interest. The objective of Bonus is to investigate various decomposition schemes and cooperation protocols between the subproblems resulting from the decomposition, to generate global solutions of good quality efficiently. Several challenges have to be addressed: (1) how to define the subproblems (decomposition strategy); (2) how to solve them to generate local solutions (local rules); (3) how to combine the latter with those generated by other subproblems and how to generate global solutions (cooperation mechanism); and (4) how to combine decomposition strategies in more than one space (hybridization strategy).

The decomposition in the decision space can be performed in different ways according to the problem at hand. Two major categories of decomposition techniques can be distinguished. The first one consists in breaking down the high-dimensional decision vector into lower-dimensional and easier-to-optimize blocks of variables. The major issue is how to define the subproblems (blocks of variables) and their cooperation protocol: randomly vs. using some learning (e.g. separability analysis), statically vs. adaptively, etc. The decomposition in the decision space can also be guided by the type of variables, i.e. discrete vs. continuous. The discrete and continuous parts are optimized separately using cooperative hybrid algorithms 49. The major issue of this kind of decomposition is the presence of categorical variables in the discrete part 45. The Bonus team is addressing this issue, rarely investigated in the literature, within the context of aerospace vehicle engineering design. The second category consists in decomposition according to the ranges of the decision variables (search space decomposition). For continuous problems, the idea consists in iteratively subdividing the search (e.g. design) space into subspaces (hyper-rectangles, intervals, etc.) and selecting those that are most likely to produce the lowest objective function values. Existing approaches meet increasing difficulty as the number of variables grows and are often applied to low-dimensional problems. We are investigating this scalability challenge (e.g. 10). For discrete problems, the major challenge is to find a coding (mapping) of the search space to a decomposable entity. We have proposed an interval-based coding of the permutation space for solving big permutation problems. The approach opens perspectives we are investigating 7, in terms of ultra-scale parallelization, application to multi-permutation problems and hybridization with metaheuristics.
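
As a minimal illustration of the first category (blocks of variables), the sketch below randomly partitions a high-dimensional decision vector into lower-dimensional subproblems and optimizes them cooperatively; the grouping rule, the local rule and the objective function are illustrative placeholders, not the team's actual algorithms:

    import numpy as np

    def random_grouping(n_vars, n_blocks, rng):
        # Decomposition strategy: random static partition of the variable indices into blocks.
        return np.array_split(rng.permutation(n_vars), n_blocks)

    def optimize_block(f, x, block, rng, iters=100):
        # Local rule (placeholder): random perturbations restricted to one block of variables.
        best = x.copy()
        for _ in range(iters):
            cand = best.copy()
            cand[block] += rng.normal(scale=0.1, size=len(block))
            if f(cand) < f(best):
                best = cand
        return best

    rng = np.random.default_rng(0)
    f = lambda x: np.sum(x ** 2)            # stand-in for an expensive objective function
    x = rng.uniform(-5, 5, size=1000)       # high-dimensional decision vector
    for block in random_grouping(len(x), n_blocks=10, rng=rng):
        x = optimize_block(f, x, block, rng)  # cooperation: all blocks share the same context vector
    print(f(x))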

The decomposition in the objective space consists in breaking down an original many-objective problem (MaOP) into a set of cooperative single-objective subproblems (SOPs). The decomposition strategy requires the careful definition of a scalarizing (aggregation) function and its weighting vectors (each of them corresponding to a separate SOP) to guide the search process towards the best regions. Several scalarizing functions have been proposed in the literature, including the weighted sum, the weighted Tchebycheff function, vector angle distance scaling, etc. These functions are widely used but have their limitations. For instance, the weighted Tchebycheff function might harm diversity maintenance, and the weighted sum is inefficient when dealing with nonconvex Pareto Fronts 41. Defining a scalarizing function well-suited to the MaOP at hand is therefore a difficult and still open question being investigated in Bonus 6, 5. Studying/defining various functions and analyzing them in depth to better understand the differences between them is required. Regarding the weighting vectors that determine the search directions, their efficient setting is also a key and open issue. They dramatically affect the diversity performance in particular. Their setting raises two main issues: how to determine their number according to the available computational resources? when (statically or adaptively) and how to determine their values? Weight adaptation is one of our main concerns, which we are addressing especially from a distributed perspective. These questions correspond to the main scientific objectives targeted by our bilateral ANR-RGC BigMO project with City University (Hong Kong). The other challenges pointed out at the beginning of this section concern the way to locally solve the SOPs resulting from the decomposition of a MaOP and the mechanism used for their cooperation to generate global solutions. To deal with these challenges, our approach is to design the decomposition strategy and cooperation mechanism keeping in mind the parallel and/or distributed solving of the SOPs. Indeed, we favor local neighborhood-based mating selection and replacement to minimize the network communication cost while allowing an effective resolution 5. The major issues here are how to define the neighborhood of a subproblem and how to cooperatively update the best-known solution of each subproblem and its neighbors.
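
For reference, the two most common scalarizing functions mentioned above can be written as follows (a minimal sketch for the minimization case; z* denotes an ideal reference point):

    import numpy as np

    def weighted_sum(f, w):
        # Weighted sum: simple, but cannot reach the nonconvex parts of a Pareto front.
        return np.dot(w, f)

    def tchebycheff(f, w, z_star):
        # Weighted Tchebycheff: can reach any Pareto-optimal point, but may harm
        # diversity maintenance depending on the weighting vectors.
        return np.max(w * np.abs(f - z_star))

    f = np.array([0.8, 0.3])      # objective vector of a candidate solution
    w = np.array([0.5, 0.5])      # weighting vector defining one single-objective subproblem (SOP)
    z_star = np.zeros(2)          # ideal point
    print(weighted_sum(f, w), tchebycheff(f, w, z_star))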

To sum up, the objective of the Bonus team is to come up with scalable decomposition-based approaches in the decision and objective spaces. In the decision space, a particular focus will be put on high dimensionality and mixed discrete-continuous variables, which have received little interest in the literature. We will in particular continue to investigate, at larger scales using ultra-scale computing, the interval-based (discrete) and fractal-based (continuous) approaches. We will also deal with the rarely addressed challenge of mixed discrete-continuous variables including categorical ones (collaboration with ONERA). In the objective space, we will investigate parallel ultra-scale decomposition-based many-objective optimization with ML-based adaptive building of scalarizing functions. A particular focus will be put on the state-of-the-art MOEA/D algorithm. This challenge is rarely addressed in the literature, which motivated the collaboration with the designer of MOEA/D (bilateral ANR-RGC BigMO project with City University, Hong Kong). Finally, the joint decision-objective decomposition, which is still in its infancy 51, is another challenge of major interest.

3.2 Machine Learning-assisted Optimization

The Machine Learning (ML) approach based on metamodels (or surrogates) is commonly used, and also adopted in Bonus, to assist optimization in tackling BOPs characterized by time-demanding objective functions. The second line of research of Bonus is focused on ML-aided optimization to raise the challenge of expensive functions of BOPs using surrogates but also to assist the two other research lines (decomposition-based and ultra-scale optimization) in dealing with the other challenges (high dimensionality and scalability).

Several issues have been identified to make surrogate-assisted optimization efficient and effective. First, infill criteria have to be carefully defined to adaptively select adequate sample points (in terms of surrogate precision and solution quality). The challenge is to find the best trade-off between exploration and exploitation to efficiently refine the surrogate and guide the optimization process toward the best solutions. The most popular infill criterion is probably the Expected Improvement (EI) 44, which is based on the expected values of sample points but also, importantly, on their variance. The latter is inherently provided by the kriging model, which is why it is used in the state-of-the-art Efficient Global Optimization (EGO) algorithm 44. However, such crucial information is not provided by all surrogate models (e.g. Artificial Neural Networks) and needs to be derived. In Bonus, we are currently investigating this issue. Second, it is known that surrogates allow one to reduce the computational burden of solving BOPs with time-consuming function(s). However, using parallel computing as a complementary way is often recommended and cited as a perspective in the conclusions of related publications. Nevertheless, despite being of critical importance, parallel surrogate-assisted optimization is weakly addressed in the literature. For instance, the introduction of the survey proposed in 43 warns that, because the area is not mature yet, the paper focuses more on the potential of the surveyed approaches than on their relative efficiency. Parallel computing is required at different levels that we are investigating.
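
As a reminder of how EI combines the surrogate's predicted value and uncertainty, the following sketch computes it from a kriging (Gaussian Process) prediction using scikit-learn; the sampled function and design of experiments are purely illustrative:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(mu, sigma, f_best):
        # EI for minimization: large when the predicted mean is low and/or the variance is high.
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    X = np.random.uniform(-2, 2, size=(20, 1))       # initial design of experiments
    y = np.sin(3 * X).ravel() + X.ravel() ** 2       # cheap stand-in for an expensive simulation
    gp = GaussianProcessRegressor().fit(X, y)        # kriging surrogate

    X_cand = np.linspace(-2, 2, 200).reshape(-1, 1)  # candidate infill points
    mu, sigma = gp.predict(X_cand, return_std=True)
    print(X_cand[np.argmax(expected_improvement(mu, sigma, y.min()))])  # next point to simulate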

Another issue with surrogate-assisted optimization is related to high dimensionality, in the decision as well as in the objective space: it is often applied to low-dimensional problems. The joint use of decomposition, surrogates and massive parallelism is an efficient approach to deal with high dimensionality. This approach, adopted in Bonus, has received little effort in the literature. In Bonus, we are considering a generic framework enabling a flexible coupling of existing surrogate models within the state-of-the-art decomposition-based algorithm MOEA/D. This is a first step in leveraging the applicability of efficient global optimization in the multi-objective setting through parallel decomposition. Another issue, which is a consequence of high dimensionality, is the mixed (discrete-continuous) nature of decision variables, which is frequent in real-world applications (e.g. engineering design). While surrogate-assisted optimization is widely applied in the continuous setting, it is rarely addressed in the literature in the discrete-continuous framework. In 45, we have identified different ways to deal with this issue that we are investigating. Non-stationary functions, frequent in real-world applications (see Section 4.1), are another major issue we are addressing using the concept of deep Gaussian Processes.

Finally, as noted at the beginning of this section, ML-assisted optimization is mainly used to deal with BOPs with expensive functions, but it will also be investigated for other optimization tasks. Indeed, ML will be useful to assist the decomposition process. In the decision space, it will help to perform the separability analysis (understanding of the interactions between variables) to decompose the vector of variables. In the objective space, ML will be useful to assist a decomposition-based many-objective algorithm in dynamically selecting a scalarizing function or updating the weighting vectors according to their performance in the previous steps of the optimization process 5. Such a data-driven ML methodology would allow us to understand what makes a problem difficult or an optimization approach efficient, to predict algorithm performance 4, to select the most appropriate algorithm configuration 8, and to adapt and improve the algorithm design for unknown optimization domains and instances. Such an autonomous optimization approach would adaptively adjust its internal mechanisms in order to tackle cross-domain BOPs.
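
As a hedged illustration of such an adaptive, data-driven mechanism (not the team's actual algorithm), the sketch below selects among candidate scalarizing functions with a simple epsilon-greedy rule, rewarded by the improvements each function produced in previous steps:

    import random

    class AdaptiveScalarizerSelector:
        # Epsilon-greedy selection among candidate scalarizing functions, driven by
        # the improvements they yielded in previous iterations of the search.
        def __init__(self, names, epsilon=0.1):
            self.stats = {name: [0.0, 1e-9] for name in names}  # [total reward, usage count]
            self.epsilon = epsilon

        def select(self):
            if random.random() < self.epsilon:
                return random.choice(list(self.stats))           # exploration
            return max(self.stats, key=lambda n: self.stats[n][0] / self.stats[n][1])

        def update(self, name, improvement):
            self.stats[name][0] += improvement
            self.stats[name][1] += 1

    selector = AdaptiveScalarizerSelector(["weighted_sum", "tchebycheff", "pbi"])
    for step in range(100):
        name = selector.select()
        improvement = random.random()    # placeholder: improvement observed with this function
        selector.update(name, improvement)
    print(selector.stats)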

In a nutshell, to deal with expensive optimization the Bonus team will investigate the surrogate-based ML approach with the objective of efficiently integrating surrogates into the optimization process. The focus will especially be put on high dimensionality (e.g. using decomposition) with mixed discrete-continuous variables, which is rarely investigated. The kriging metamodel (Gaussian Process or GP) will be considered in particular for engineering design (for more reliability), addressing the above issues and other major ones including non-stationarity (using emerging deep GPs) and ultra-scale parallelization (highly needed by the community). Indeed, a lot of work has been reported on deep neural network (deep learning) surrogates but not on the others, including (deep) GPs. On the other hand, ML will be used to assist decomposition: importance of and interaction between variables in the decision space, dynamic building (selection of scalarizing functions, weight update, etc.) of scalarizing functions in the objective space, etc.

3.3 Ultra-scale Optimization

The third line of our research program, which accentuates our difference from other (project-)teams of the related Inria scientific theme, is ultra-scale optimization. This research line is complementary to the two others, which are sources of massive parallelism and with which it should be combined to solve BOPs. Indeed, ultra-scale computing is necessary for the effective resolution of the large number of subproblems generated by the decomposition of BOPs, the parallel evaluation of simulation-based fitness and metamodels, etc. These sources of parallelism are attractive for solving BOPs and are natural candidates for ultra-scale supercomputers 3. However, their efficient use raises a big challenge: managing efficiently a massive amount of irregular tasks on supercomputers with multiple levels of parallelism, heterogeneous computing resources (GPU, multi-core CPUs with various architectures) and networks. Meeting such a challenge requires tackling three major issues, discussed in the following: scalability, heterogeneity and fault tolerance.

The scalability issue requires, on the one hand, the definition of scalable data structures for efficient storage and management of the tremendous amount of subproblems generated by decomposition 47. On the other hand, achieving extreme scalability also requires the optimization of communications (in number of messages, their size and scope), especially at the inter-node level. To this end, we target the design of asynchronous locality-aware algorithms, as we did in 42, 50. In addition, efficient mechanisms are needed for granularity management and coding of the work units stored and communicated during the resolution process.

Heterogeneity means harnessing various resources, including multi-core processors with different architectures and GPU devices. The challenge is therefore to design and implement hybrid optimization algorithms taking into account the difference in computational power between the various resources as well as the resource-specific issues. On the one hand, to deal with heterogeneity in terms of computational power, we adopt in Bonus the dynamic load balancing approach based on the asynchronous Work Stealing (WS) paradigm 4, at the inter-node as well as at the intra-node level. We have already investigated such an approach, with various victim selection and work sharing strategies, in 50, 7. On the other hand, hardware-specific optimization mechanisms are required to deal with related issues such as thread divergence and memory optimization on GPUs, and data sharing and synchronization, cache locality and vectorization on multi-core processors. These issues have been considered separately in the literature, including in our works 9, 1. Actually, in most existing works related to GPU-accelerated optimization only a single CPU core is used. This leads to a huge waste of resources, especially with the increase of the number of processing cores integrated into modern processors. Using the two components jointly raises additional issues, including data and work partitioning, the optimization of CPU-GPU data transfers, etc.
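
The following minimal, single-node sketch illustrates the work-stealing principle (an idle worker steals pending work units from the deque of a randomly selected victim); it deliberately abstracts away the inter-node communication, victim selection strategies and granularity issues discussed above:

    import collections, random, threading

    deques = [collections.deque(range(i * 100, (i + 1) * 100)) for i in range(4)]  # one deque per worker
    lock = threading.Lock()
    processed = collections.Counter()

    def worker(wid):
        while True:
            with lock:
                if deques[wid]:
                    unit = deques[wid].popleft()     # process local work units first
                else:
                    victims = [v for v in range(len(deques)) if v != wid and deques[v]]
                    if not victims:
                        return                        # no work left anywhere: terminate
                    unit = deques[random.choice(victims)].pop()  # steal from a victim's tail
            processed[wid] += 1                       # placeholder for the actual processing of 'unit'

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(processed)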

Another issue induced by scalability is the increasing probability of failures in modern supercomputers 48. Indeed, as their size increases to millions of processing cores, their Mean Time Between Failures (MTBF) tends to become shorter and shorter 46. Failures may have different sources, including hardware and software faults, silent errors, etc. In our context, we consider failures leading to the loss of work unit(s) being processed by some thread(s) during the resolution process. The major issue, which is particularly critical in exact optimization, is how to recover the failed work units to ensure a reliable execution. This issue is tackled in the literature using different approaches: algorithm-based fault tolerance, checkpoint/restart (CR), message logging and redundancy. The CR approach can be system-level, library/user-level or application-level. Thanks to its efficiency in terms of memory footprint, the application-level approach, adopted in Bonus 2, is commonly and widely used in the literature. This approach raises several issues, mainly: (1) which critical information defines the state of the work units and allows their execution to be properly resumed? (2) when, where and how (using which data structures) to store it efficiently? (3) how to deal with the two other issues, scalability and heterogeneity?
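
A minimal sketch of the application-level checkpoint/restart idea follows; the data structures, file name and work-unit processing are illustrative, not those of the team's solvers. Only the critical state (the pool of pending work units and the incumbent) is periodically serialized so that the resolution can resume after a failure:

    import os, pickle

    CHECKPOINT = "pool.ckpt"   # hypothetical checkpoint file

    def save_checkpoint(pending, best):
        with open(CHECKPOINT, "wb") as f:
            pickle.dump({"pending": pending, "best": best}, f)

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):                # restart after a failure
            with open(CHECKPOINT, "rb") as f:
                state = pickle.load(f)
            return state["pending"], state["best"]
        return list(range(1000)), float("inf")        # fresh start: initial pool of work units

    pending, best = load_checkpoint()
    while pending:
        unit = pending.pop()
        best = min(best, unit % 97)                   # placeholder for processing one work unit
        if len(pending) % 100 == 0:
            save_checkpoint(pending, best)            # periodic application-level checkpoint
    print(best)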

The last but not least major issue, which is another roadblock to exascale, is the programming of massive-scale applications for modern supercomputers. On the path to exascale, we will investigate the programming environments and execution supports able to deal with exascale challenges: large numbers of threads, heterogeneous resources, etc. Various exascale programming approaches are being investigated by the parallel computing community and HPC builders: extending existing programming languages (e.g. DSL-C++) and environments/libraries (MPI+X, etc.), or proposing new solutions including mainly Partitioned Global Address Space (PGAS)-based environments (Chapel, UPC, X10, etc.). It is worth noting here that our objective is to develop neither a programming environment nor a runtime support for exascale computing. Instead, we aim to collaborate with the research teams (inside or outside Inria) having such an objective.

To sum up, we put the focus on the design and implementation of efficient big optimization algorithms dealing jointly (which is uncommon in parallel optimization) with the major issues of ultra-scale computing: scalability up to millions of cores, using scalable data structures and asynchronous locality-aware work stealing; heterogeneity, addressing multi-core- and GPU-specific issues and those related to their combination; and scalable GPU-aware fault tolerance. A strong effort will be devoted to the latter challenge, for the first time to the best of our knowledge, using an application-level checkpoint/restart approach to deal with failures.

4 Application domains

4.1 Introduction

For the validation of our findings we obviously use standard benchmarks to facilitate the comparison with related works. In addition, we also target real-world applications in the context of our collaborations and industrial contracts. From the application point of view, two classes are targeted: complex scheduling and engineering design. The objective is twofold: proposing new models for complex problems and solving BOPs efficiently using jointly the three lines of our research program. In the following, we give some use cases that are the focus of our current industrial collaborations.

4.2 Big optimization for complex scheduling

Three application domains are targeted: energy, health, and transport and logistics. In the energy field, with the smart grid revolution, (multi-)house energy management is gaining growing interest. The key challenge is to make (multi-)house energy consumption and management elastic with respect to the energy market. This kind of demand-side management will be of strategic importance for energy companies in the near future. In collaboration with the EDF energy company, we are working on the formulation and solving of optimization problems on demand-side management in smart micro-grids for single- and multi-user frameworks. These complex problems require taking into account multiple conflicting objectives and constraints and many (deterministic/uncertain, discrete/continuous) parameters. A representative example of such BOPs that we are addressing is the scheduling of the activation of a large number of electrical and thermal appliances for a set of homes, optimizing at least three criteria: maximizing the users' comfort, minimizing their energy bill and minimizing peak consumption situations.

In the healthcare domain, we have collaborated with the Beckman & Coulter company on the design and planning of large medical laboratories. This is a hot topic resulting from the pooling (mutualisation) phenomenon which makes these laboratories bigger. As a consequence, being responsible for analyzing medical tests ordered by physicians on patients' samples, these laboratories receive large amounts of prescriptions and tubes, making their associated workflow more complex. Our aim was therefore to design and plan any medical laboratory so as to minimize the costs and time required to perform the tests. More exactly, the focus was put on the multi-objective modeling and solving of large (e.g. tens of thousands of medical test tubes to be analyzed) strategic, tactical and operational problems such as layout design, machine selection and configuration, assignment and scheduling.

4.3 Big optimization for engineering design

The focus is for now put on aerospace vehicle design, a complex multidisciplinary optimization process that we are exploring in collaboration with ONERA. The objective is to find the vehicle architecture and characteristics that provide the optimal performance (flight performance, safety, reliability, cost, etc.) while satisfying design requirements 40. A representative topic we are investigating, and will continue to investigate throughout the lifetime of the project given its complexity, is the design of launch vehicles, which involves at least four tightly coupled disciplines (aerodynamics, structure, propulsion and trajectory). Each discipline may rely on time-demanding simulations such as Finite Element analyses (structure) and Computational Fluid Dynamics analyses (aerodynamics). Surrogate-assisted optimization is highly required to reduce the time complexity. In addition, the problem is high-dimensional (dozens of parameters and more than three objectives), requiring different decomposition schemes (coupling vs. local variables, continuous vs. discrete and even categorical variables, scalarization of the objectives). Another major issue arising in this area is the non-stationarity of the objective functions, which is generally due to the abrupt change of a physical property that often occurs in the design of launch vehicles. In the same spirit as deep learning using neural networks, we use Deep Gaussian Processes (DGPs) to deal with non-stationary multi-objective functions. Finally, the resolution of the problem using only one objective takes one week on a multi-core processor. The first way to deal with this computational burden is to investigate multi-fidelity using DGPs to combine multiple fidelity models efficiently. This approach has been investigated this year within the context of the PhD thesis of A. Hebbal. In addition, ultra-scale computing is required at different levels to speed up the search and improve reliability, which is a major requirement in aerospace design. This example shows that we need to use the synergy between the three lines of our research program to tackle such BOPs.

5 Highlights of the year

5.1 Awards

  • Outstanding Paper Award at the HPCS'2020-2021 international conference 21. The winners are T. Carneiro and N. Melab from Bonus, in collaboration with A. Hayashi and V. Sarkar from the Georgia Institute of Technology, Atlanta, USA.
  • Two Best Paper nominations at ACM GECCO'2021: 24 (B. Derbel and L. Canonne), 32 (A. Liefooghe in collaboration with S. Verel from Université du Littoral - Côte d'Opale (France) and B. Lacroix, A-C. Zăvoianu and J. Mccall from Robert Gordon University (Scotland, UK)).
  • Best Paper nomination at LNCS EvoCOP'2021: 23 (R. Cosson, B. Derbel and A. Liefooghe, in collaboration with H. Aguirre and K. Tanaka, Shinshu University, Japan, and Q. Zhang, City University, Hong Kong).
  • Two team members were promoted to the second step of the exceptional class of their Full Professor positions: N. Melab and E-G. Talbi.

5.2 Other highlights

  • Involvement of Bonus in 2 projects accepted for Université de Lille: the national-scale PIA3 Equipex+ MesoNet project (N. Melab: scientific coordinator for ULille, total budget: 14.2M€ including 1.4M€ for ULille) and the regional-scale INFRANUM labeling of the regional (HdF) data center (N. Melab: member of the PI team of the project).
  • Local organization by Bonus of ACM GECCO'2021 in Lille, a flagship international conference in the research field of the team. This is the first time that GECCO has been organized in France since its creation in 1999. In addition, the conference attracted a record number of participants this year (more than 900).

6 New software and platforms

The core activity of Bonus is focused on the design, implementation and analysis of algorithmic approaches for the efficient and effective solving of BOPs. In addition, we have an increasing activity in software development, driven by our goal of making our algorithmic contributions freely available to the optimization community. On the one hand, this leads us to develop some homemade prototype codes: Permutation Branch-and-Bound (pBB) and the Python library for Surrogate-Based Optimization (pySBO). On the other hand, we started in 2020 to develop the more ambitious Python-based Parallel and distributed Evolving Objects (pyParadisEO) software framework. In addition, Bonus is strongly involved in the activities related to the Grid'5000 nation-wide distributed testbed, as scientific leader of the site located in Lille. The different software tools and the Grid'5000 testbed are described below.

6.1 New software

6.1.1 pBB

  • Name:
    Permutation Branch-and-Bound
  • Keywords:
    Optimisation, Parallel computing, Data parallelism, GPU, Scheduling, Combinatorics, Distributed computing
  • Functional Description:
    The algorithm proceeds by implicit enumeration of the search space by parallel exploration of a highly irregular search tree. pBB contains implementations for single-core, multi-core, GPU and heterogeneous distributed platforms. Thanks to its hierarchical work-stealing mechanism, required to deal with the strong irregularity of the search tree, pBB is highly scalable. Scalability with over 90% parallel efficiency on several hundreds of GPUs has been demonstrated on the Jean Zay supercomputer located at IDRIS.
  • URL:
  • Contact:
    Jan Gmys

6.1.2 ParadisEO

  • Keyword:
    Parallelisation
  • Scientific Description:
    ParadisEO (PARallel and DIStributed Evolving Objects) is a C++ white-box object-oriented framework dedicated to the flexible design of metaheuristics. Based on EO, a template-based ANSI-C++ compliant evolutionary computation library, it is composed of four modules:
    * Paradiseo-EO provides tools for the development of population-based metaheuristics (genetic algorithms, genetic programming, Particle Swarm Optimization (PSO), etc.).
    * Paradiseo-MO provides tools for the development of single solution-based metaheuristics (hill climbing, tabu search, simulated annealing, Iterated Local Search (ILS), incremental evaluation, partial neighborhood, etc.).
    * Paradiseo-MOEO provides tools for the design of multi-objective metaheuristics (MO fitness assignment schemes, MO diversity assignment schemes, elitism, performance metrics, easy-to-use standard evolutionary algorithms, etc.).
    * Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics (parallel evaluation, parallel evaluation function, island model).
    Furthermore, ParadisEO also introduces tools for the design of distributed, hybrid and cooperative models:
    * High-level hybrid metaheuristics: coevolutionary and relay models.
    * Low-level hybrid metaheuristics: coevolutionary and relay models.
  • Functional Description:
    Paradiseo is a software framework for metaheuristics (optimisation algorithms aimed at solving difficult optimisation problems). It facilitates the use, development and comparison of classic, multi-objective, parallel or hybrid metaheuristics.
  • URL:
  • Contact:
    El-Ghazali Talbi
  • Partners:
    CNRS, Université Lille 1

6.1.3 pyparadiseo

  • Keywords:
    Optimisation, Framework
  • Functional Description:
    pyparadiseo is a Python version of ParadisEO, a C++-based open-source white-box framework dedicated to the reusable design of metaheuristics. It allows the design and implementation of single-solution and population-based metaheuristics for mono- and multi-objective, continuous, discrete and mixed optimization problems.
  • URL:
  • Contact:
    Jan Gmys

6.1.4 pySBO

  • Name:
    PYthon library for Surrogate-Based Optimization
  • Keywords:
    Parallel computing, Evolutionary Algorithms, Multi-objective optimisation, Black-box optimization, Optimisation
  • Functional Description:
    The pySBO library aims at facilitating the implementation of parallel surrogate-based optimization algorithms. pySBO provides re-usable algorithmic components (surrogate models, evolution controls, infill criteria, evolutionary operators) as well as the foundations to ensure the components' interchangeability. Actual implementations of sequential and parallel surrogate-based optimization algorithms are supplied as ready-to-use tools to handle expensive single- and multi-objective problems. The illustrated documentation of pySBO is available online through a dedicated website.
  • URL:
  • Contact:
    Guillaume Briffoteaux

6.2 New platforms

6.2.1 Grid'5000 testbed: major achievements in 2021

Participants: Nouredine Melab [contact person], Dimitri Delabroye.

  • Keywords: Experimental testbed, large-scale computing, high-performance computing, GPU computing, cloud computing, big data
  • Functional description: Grid'5000 is a project initiated in 2003 by the French government to promote scientific research on large-scale distributed systems. The project was later supported by various research organizations, including Inria, CNRS, French universities and Renater, which provides the wide-area network. The overall objective of Grid'5000 was to build, by 2007, a nation-wide experimental testbed composed of at least 5000 processing units and distributed over several sites in France (one of them located in Lille).

    Within the framework of the CPER contract "Data", the equipment of Grid'5000 at Lille was renewed in 2017-2018 in terms of hardware resources (GPU-powered servers, storage, PDUs, etc.) and infrastructure (network, inverter, etc.). During the year 2021, the testbed has been used extensively by many researchers from Inria and outside. A half-day training session was organized on April 1st to help users get started with Grid'5000. A new AI-dedicated CPER contract, "CornelIA", was accepted in 2021. As scientific leader of Grid'5000 at Lille, N. Melab was involved in the preparation of the proposal (Grid'5000 part). He will be strongly involved in the renewal of the equipment and the recruitment of an engineer, after the departure of D. Delabroye, for its system and network administration.

  • URL: Grid'5000

7 New results

During the year 2021, we have addressed different issues/challenges related to the three lines of our research program. The major contributions are summarized in the following sections. Besides these major contributions, we also produced ancillary contributions within the context of national and international collaborations. In particular, we have jointly designed various multi-objective optimization models and algorithms. In 30, 13, we investigated the anytime performance and design of multi-objective algorithms. In 14, we have proposed unified polynomial dynamic programming algorithms for P-Center variants in a 2D Pareto front. In 11, we have addressed the robustness issue for fuzzy multi-objective problems. From the application point of view, we tackled vehicular edge computing using Wave and 5G networks in 12, friction-induced vibration in automotive engineering in 17, energy management in a microgrid in 16, image classification in 29 and optical flow in 31.

7.1 Decomposition-based optimization

We point out three major contributions in decomposition-based optimization. Firstly, we have investigated in 25 the design of escape mechanisms to improve the state-of-the-art decomposition-based evolutionary multi-objective Moea/d framework. Secondly, we have proposed in 23 a set of decomposition-based landscape features for multi-objective combinatorial optimization and used them for automated algorithm selection. Finally, we have proposed in 34 a decomposition-based epsilon-constraint method for solving the multi-objective 2-dimensional Vector Packing problem. These contributions are summarized in the following.

7.1.1 Enhancing Moea/d with escape mechanisms

Participants: Bilel Derbel [contact person], Geoffrey Pruvost [Université de Lille], Byung-Woo Hong [Chung-Ang University, Seoul].

In 25, we have investigated the design of escape mechanisms within the state-of-the-art decomposition-based evolutionary multi-objective Moea/d framework. We propose to track the number of improvements made with respect to the single-objective sub-problems defined by decomposition. This allows us to compute an estimated sub-problem improvement probability, which serves as an activation signal for some solution perturbation mechanism to occur. We report the benefits of such an approach by conducting a comprehensive experimental analysis on a broad range of combinatorial bi-objective bit-string landscapes with variable dimensions and ruggedness. Our empirical findings provide evidence of the effectiveness of the proposed escape mechanism and of its ability to provide substantial improvements over the conventional Moea/d. Besides, we provide a detailed analysis of the impact of parameters and of the anytime behavior, in order to better highlight the strength of the proposed techniques as a function of the available budget and problem characteristics.
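
The core idea can be sketched as follows (illustrative Python, not the exact mechanism of 25): the improvement rate of a sub-problem is tracked and, when it falls below a threshold, it triggers a perturbation of the sub-problem's incumbent solution:

    import random

    def improvement_probability(improvements, trials):
        # Estimated fraction of recent trials that improved the sub-problem's incumbent.
        return improvements / trials if trials else 1.0

    def maybe_escape(solution, improvements, trials, threshold=0.05, rate=0.2):
        # Activation signal: if the sub-problem seems stuck, apply a strong bit-flip perturbation.
        if improvement_probability(improvements, trials) < threshold:
            return [bit ^ (random.random() < rate) for bit in solution]
        return solution

    incumbent = [random.randint(0, 1) for _ in range(64)]
    print(maybe_escape(incumbent, improvements=1, trials=50))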

7.1.2 Decomposition-based multi-objective landscape features and automated algorithm selection

Participants: Raphaël Cosson [contact person], Bilel Derbel [contact person], Arnaud Liefooghe [Université de Lille], Hernán Aguirre [Shinshu University, Nagano, Japan], Kiyoshi Tanaka [Shinshu University, Nagano, Japan], Qingfu Zhang [City University, Hong Kong].

Landscape analysis is of fundamental interest for improving our understanding on the behavior of evolutionary search, and for developing general-purpose automated solvers based on techniques from statistics and machine learning. In 23, we push a step towards the development of a landscape-aware approach by proposing a set of landscape features for multi-objective combinatorial optimization, by decomposing the original multi-objective problem into a set of single-objective sub-problems. Based on a comprehensive set of bi-objective ρmnk-landscapes and three variants of the state-of-the-art MOEA/D algorithm, we study the association between the proposed features, the global properties of the considered landscapes, and algorithm performance. We also show that decomposition-based features can be integrated into an automated approach for predicting algorithm performance and selecting the most accurate one on blind instances. In particular, our study reveals that such a landscape-aware approach is substantially better than the single best solver computed over the three considered MOEA/D variants.

7.1.3 Solving the multi-objective 2-dimensional Vector Packing problem using decomposition-based epsilon-constraint method

Participants: Nadia Dahmani [College of Technological Innovation, Zayed University, Abu Dhabi, UAE and Institut Supérieur de Gestion de Tunis, Tunisia], Saoussen Krichen [Institut Supérieur de Gestion de Tunis, Tunisia], El-Ghazali Talbi [contact person], Sanaa Kaddoura [College of Technological Innovation, Zayed University, Abu Dhabi, UAE].

In 34, an exact method is designed to solve the multi-objective 2-dimensional vector packing problem. The algorithm is an adapted version of an efficient epsilon-constraint method which has proven its efficiency in solving a large variety of multi-objective optimization problems. This method is based on a clever decomposition of the initial problem into sub-problems which are iteratively solved through mathematical programming. To accelerate the search process, we propose a new integer programming model for solving the multi-objective 2-dimensional vector packing problem, based on the compact model for the bin packing problem with fragile objects. Instead of scanning all possible solutions, we consider the solutions obtained while solving a Subset-Sum problem. Hence, useless subproblems are avoided and the search space is thus reduced. An experimental study is performed based on instances from the literature. A comparison is carried out between the exact method and a well-established metaheuristic which provides good results in solving the multi-objective 2-dimensional vector packing problem.

7.2 ML-assisted optimization

As pointed out in our research program 3.2, we investigate the ML-assisted optimization following two directions: (1) efficient building of surrogates and their integration into optimization algorithms to deal with expensive black-box objective functions, and (2) automatically building and predicting/improving optimization algorithms.

Regarding surrogate-assisted optimization, we have firstly investigated the design of evolution control ensembles in 20. The latter allow one to decide whether to predict, simulate or discard newly generated solutions, in order to maintain a good exploitation/exploration balance in surrogate-assisted evolutionary algorithms. In 39, we have proposed a modular surrogate-assisted framework for expensive multi-objective combinatorial optimization. The proposed framework is based on the Walsh basis and a decomposition-based evolutionary paradigm to maintain the solution set. Finally, in 18, we have proposed a unified way to describe various optimization algorithms, focusing on common and important search components of optimization algorithms for the automated design of deep neural networks.

Regarding the second direction, we have designed optimization algorithms using ML techniques. In 32, we have investigated the transfer of landscape analysis features from combinatorial to continuous optimization for the landscape-aware design of metaheuristics. In 27, we have proposed a Q-learning-based hyper-heuristic for generating efficient Unmanned Aerial Vehicle (UAV) swarming behaviors. Finally, we have proposed in 19 a survey on the approaches for integrating Machine Learning into metaheuristics. All these major contributions are summarized in the following.

7.2.1 Evolution control ensemble models for surrogate-assisted evolutionary algorithms

Participants: Guillaume Briffoteaux [contact person], Romain Ragonnet [Monash University, Melbourne, Australia], Mohand Mezmaz [University of Mons, Belgium], Nouredine Melab [Université de Lille], Daniel Tuyttens [University of Mons, Belgium].

Finding the trade-off between exploitation and exploration in a surrogate-assisted evolutionary algorithm is challenging, as the focus on the landscape being optimized moves during the search. The balancing is mainly guided by Evolution Controls, which decide whether to simulate, predict or discard newly generated candidate solutions. Combining Evolution Controls in ensembles makes it possible to regulate the degree of exploitation and exploration during the search. In 20, we propose ensemble strategies including multiple Evolution Controls in order to adapt the trade-off for each region scrutinized during the search. Experiments carried out on benchmark problems and on a real-world application of SARS-CoV-2 transmission control reveal that favoring exploration at the beginning of the search and favoring exploitation at the end of the search is beneficial in many cases.

7.2.2 A modular surrogate-assisted framework for expensive multiobjective combinatorial optimization

Participants: Geoffrey Pruvost [Contact person], Bilel Derbel [Contact person], Arnaud Liefooghe [Université de Lille], Sébastien Verel [Université du Littoral - Côte d'Opale, France], Qingfu Zhang [City University, Hong Kong].

In 39, we aim at pushing a step towards the development of a surrogate-assisted methodology for expensive optimization problems that have both a combinatorial and a multiobjective nature. We target pseudo-boolean multiobjective functions, and we provide a comprehensive study on the design of a modular framework integrating three main configurable components. The proposed framework is based on the Walsh basis as a surrogate, and on a decomposition-based evolutionary paradigm for maintaining the solution set. The three considered components are: (i) the inner optimizer used for handling the so-constructed Walsh surrogate, (ii) the selection strategy allowing to decide which solution is to be evaluated by the expensive objectives, and (iii) the strategy used to set up the Walsh order hyper-parameter. Based on a thorough empirical analysis relying on two benchmark problems, namely bi-objective NK-landscapes and Unconstrained Binary Quadratic Programming (UBQP) problems, we show the effectiveness of the proposed framework with respect to the available budget in terms of calls to the evaluation function. More importantly, our empirical findings shed more light on the combined effects of the investigated components on search performance, thus providing a better understanding of the key challenges for designing a successful surrogate-assisted multiobjective combinatorial search process.
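
To give an idea of the surrogate component, a Walsh surrogate of low order amounts to a linear model over signed binary features; the sketch below (order 2 at most, and a toy objective) is far simpler than the configurable framework studied in 39:

    import itertools
    import numpy as np
    from sklearn.linear_model import Ridge

    def walsh_features(X, order=2):
        # Walsh basis up to the given order: signed variables and their pairwise products.
        S = 1 - 2 * X                      # map {0,1} -> {+1,-1}
        cols = [np.ones(len(X))] + [S[:, i] for i in range(X.shape[1])]
        if order >= 2:
            cols += [S[:, i] * S[:, j] for i, j in itertools.combinations(range(X.shape[1]), 2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(100, 12))            # already-evaluated bit-strings
    y = X.sum(axis=1) + 0.5 * X[:, 0] * X[:, 1]       # stand-in for the expensive objective
    surrogate = Ridge(alpha=1e-3).fit(walsh_features(X), y)
    X_new = rng.integers(0, 2, size=(5, 12))
    print(surrogate.predict(walsh_features(X_new)))   # cheap predictions guiding the search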

7.2.3 Automated Design of Deep Neural Networks

Participants: El-Ghazali Talbi [Contact person].

In recent years, research applying optimization approaches to the automatic design of deep neural networks has become increasingly popular. Although various approaches have been proposed, there is a lack of a comprehensive survey and taxonomy on this hot research topic. In 18, we propose a unified way to describe the various optimization algorithms, focusing on common and important search components: representation, objective function, constraints, initial solution(s), and variation operators. In addition to its large-scale search space, the problem is characterized by a mixed variable design space, very expensive evaluations, and multiple black-box objective functions. Hence, this unified methodology has been extended to advanced optimization approaches, such as surrogate-based, multi-objective, and parallel optimization.

7.2.4 ML-assisted design of optimization algorithms

In 32, in collaboration with ULCO - Université du Littoral Côte d'Opale (France) and RGU - Robert Gordon University (Scotland, UK), we have investigated the transfer of landscape features from combinatorial to continuous optimization. Actually, we have demonstrated the application of features from landscape analysis, initially proposed for multi-objective combinatorial optimization, to a benchmark set of 1200 randomly-generated multi-objective interpolated continuous optimization problems. We have also explored the benefits of evaluating the considered landscape features on the basis of a fixed-size sampling of the search space. This allows fine control over cost when aiming for an efficient application of feature-based automated performance prediction and algorithm selection. When combining them with a classification model for automated algorithm selection, the landscape features exhibit strong predictive power, with an accuracy of over 85%. This demonstrates the salience of the proposed features for characterizing the landscape of multi-objective continuous optimization problems, and for making viable recommendations when selecting a well-suited algorithm for unseen problems. In particular, we discuss the problem classes for which decomposition-based optimization approaches (Section 7.1.1) seem relevant for problem solving.
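
The automated algorithm selection step can be sketched as a supervised classification task mapping landscape features to the best-performing algorithm; the data below are randomly generated for illustration only and bear no relation to the benchmark of 32:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    features = rng.random((300, 6))            # landscape features sampled on each instance
    best_algo = rng.integers(0, 3, size=300)   # label: index of the best algorithm per instance

    selector = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(selector, features, best_algo, cv=5).mean())  # selection accuracy
    selector.fit(features, best_algo)
    print(selector.predict(rng.random((1, 6))))  # recommended algorithm for an unseen instance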

In 27, in collaboration with the University of Luxembourg, we have proposed a Q-learning-based hyper-heuristic for generating efficient Unmanned Aerial Vehicle (UAV) swarming behaviors. The motivation is that manually designing such algorithms can be very time-consuming and error-prone, since swarming relies on an emergent behavior which can be hard to predict from local interactions. The reported results demonstrate that it is possible to obtain efficient swarming heuristics independently of the problem size, thus allowing a fast training on small instances.

Finally, we have proposed in 19 a survey on the approaches for integrating Machine Learning (ML) into metaheuristics. We have uniformly defined the various ways of building synergies between the two. A detailed taxonomy is proposed according to the concerned search component: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to include ideas from ML into metaheuristics. We have also identified some open research issues in this topic that need further in-depth investigation. For instance, investigating the integration of ML into exact optimization techniques (e.g., mathematical programming, branch and bound, dynamic programming, constraint programming) is pointed out as an important research challenge. In 15, we have emphasized the interest of multiple types of hybridization between mathematical programming, machine learning and heuristics inside the Column Generation framework for effectively solving the Basic Technician Routing problem.

7.3 Towards ultra-scale Big Optimization

During the year 2021, we have firstly addressed the ultra-scale optimization research line using parallel tree search-based exact optimization methods. More exactly, the focus was put on the design and implementation of efficient algorithms dealing with three major challenges: GPU-aware heterogeneity, software productivity awareness, and scalability in terms of number of processing cores and size of the tackled problem instances. Two approaches have been investigated: an evolutionary approach using the Message Passing Interface (MPI+X) 38 and a revolutionary approach using the Partitioned Global Address Space (PGAS)-based Chapel language, favoring software productivity awareness 21, 22. These contributions are pioneering in the context of parallel optimization. Secondly, we have proposed a new parallel acquisition function to sample new points in Bayesian optimization, using either a single surrogate model 28 or multiple ones 33. The proposed approach is based on an adaptive binary decomposition of the decision space favoring diversification and parallelization. Finally, in 24 we introduced parallelism into the hill climbing local search algorithm to speed up its neighborhood exploration process and efficiently solve large-scale NK-landscapes.

7.3.1 Solving large permutation flow-shop scheduling problems on GPU-accelerated supercomputers

Participants: Jan Gmys [Contact person].

In 38, we have proposed a scalable GPU-accelerated parallel Branch-and-Bound algorithm designed using an MPI+X (X=PThread+CUDA) approach. Two algorithmic and implementation issues of ultra-scale computing are (uncommonly) addressed: scalability and GPU-aware heterogeneity. Actually, the major challenge consists in efficiently performing parallel depth-first traversal of a highly irregular, fine-grained search tree on large distributed systems composed of hundreds of massively parallel accelerator devices and multi-core processors. The contribution has been validated on the permutation flow-shop scheduling problem, a well-known hard combinatorial optimization problem, using the GPU-powered Jean Zay peta-scale supercomputer. Among the 120 standard benchmark flow-shop instances proposed by E. Taillard in 1993, 23 remained unsolved for almost three decades. The proposed approach allowed us to solve 11 of these instances to optimality and to improve the best known upper bounds for 8 of them, using up to 384 V100 GPUs (2 million CUDA cores) and 3840 CPU cores. The optimality proof for the largest solved instance required about 64 CPU-years of computation using 256 GPUs and over 4 million parallel search agents. Moreover, the traversal of the search tree was completed in 13 hours while exploring 339 Tera-nodes.

7.3.2 Towards Chapel-based Exascale Tree Search Algorithms: dealing with multiple GPU accelerators

Participants: Tiago Carneiro [Contact person], Nouredine Melab [Contact person], Akihiro Hayashi [Georgia Institute of Technology, Atlanta, USA], Vivek Sarkar [Georgia Institute of Technology, Atlanta, USA].

In 21, we revisited the design and implementation of distributed tree search algorithms dealing with multiple GPU accelerators, in addition to scalability and productivity awareness, using Chapel. To the best of our knowledge, these three challenges of ultra-scale computing have never been addressed jointly in the literature of parallel combinatorial optimization. A large-scale distributed backtracking algorithm for enumerating all valid solutions of the N-Queens problem was conceived using the PGAS-based approach. Actually, the proposed algorithm exploits Chapel's distributed iterators combined with pre-compiled CUDA kernels through Chapel's C-interoperability layer. According to the reported performance results, the implementation of the proposed algorithm is notably slower than a hand-optimized CUDA-C implementation when considering single-locale execution. However, in distributed scenarios, the proposed algorithm scales very well, achieving more than 90% of the linear speedup for the biggest test-cases. This contribution has been presented at the Chapel Implementers and Users Workshop (CHIUW'2021) 22, organized by the Chapel team from HPE/Cray.

7.3.3 Adaptive Space Partitioning for Parallel Bayesian Optimization

Participants: Maxime Gobert [Contact person], Jan Gmys [Contact person], Nouredine Melab, Daniel Tuyttens [University of Mons, Belgium].

Bayesian optimization and parallel computing are common ways to deal with the optimization of highly time-consuming functions/simulators. However, combining them is still an open question raising several difficulties. In 28, we have proposed a new approach to provide large and well-chosen batches of points to evaluate in parallel in the context of batch-parallel global optimization. The approach is based on a recursive design-space decomposition represented by a binary tree. A performance analysis is conducted on three benchmark functions chosen to represent known difficulties in global optimization. The results show that the proposed method challenges the state-of-the-art q-EGO approach. In addition, we have extended the approach in 33 using multiple surrogate models instead of a single one to improve diversity. The reported results show that the extended approach is promising.
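
The Python sketch below conveys the core idea of a binary-tree decomposition of the decision space yielding one candidate per leaf, i.e., a batch of well-spread points to evaluate in parallel on the expensive simulator. It deliberately omits the surrogate models and acquisition values that drive the adaptive selection and splitting of sub-regions in 28 and 33; the splitting rule, the sampling scheme and all names are illustrative assumptions.

# Recursive binary decomposition of a box-shaped decision space; one point is
# sampled per leaf to form a batch of parallel evaluations.

import random

def split(box):
    """Bisect a box (list of [low, high] intervals) along its widest side."""
    lows, highs = zip(*box)
    dim = max(range(len(box)), key=lambda d: highs[d] - lows[d])
    mid = (box[dim][0] + box[dim][1]) / 2
    left, right = [list(iv) for iv in box], [list(iv) for iv in box]
    left[dim][1], right[dim][0] = mid, mid
    return left, right

def leaves(box, depth):
    """Return the 2**depth leaf boxes of a balanced binary decomposition."""
    nodes = [box]
    for _ in range(depth):
        nodes = [child for node in nodes for child in split(node)]
    return nodes

def batch(box, depth, rng=random):
    """One candidate per leaf: a batch of well-spread points."""
    return [[rng.uniform(lo, hi) for lo, hi in leaf]
            for leaf in leaves(box, depth)]

if __name__ == "__main__":
    domain = [[-5.0, 5.0], [-5.0, 5.0]]   # 2-D toy decision space
    for x in batch(domain, depth=3):      # 8 points per optimization cycle
        print([round(v, 2) for v in x])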

7.3.4 Parallel Hill Climber for large-scale NK-landscapes using graph coloring

Participants: Bilel Derbel [Contact person], Lorenzo Canonne.

Efficient hill climbers are at the heart of the latest gray-box optimization techniques, where some structural information about the optimization problem is available. Focusing on NK-landscapes as a challenging class of k-bounded pseudo-boolean functions, we propose in 24 a quality-enhanced and time-accelerated hill climber. Our investigations are based on the idea of performing several simultaneous moves at each iteration of the search process. This is enabled by using graph coloring to structure the interacting variables of an NK-landscape and to identify subsets of possibly improving independent moves in the Hamming distance-one neighborhood. Besides being extremely competitive with respect to the state-of-the-art first- and best-ascent serial variants, our initial design exposes a natural degree of parallelism allowing us to convert our serial algorithm into a parallel hill climber without further design effort. As such, we also provide a multi-threaded implementation using up to 10 shared-memory CPU cores. Using a range of large-scale random NK-landscapes with up to one million variables, we provide evidence of the efficiency and effectiveness of the proposed hill climber and we highlight the strength of our parallel design in attaining substantial acceleration factors for the largest functions experimented with.
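
A simplified Python sketch of the coloring-based mechanism follows: the variable interaction graph of a random NK-landscape is colored greedily, and the improving bit flips found within one color class, which by construction touch disjoint subfunctions, are applied together; in 24 such independent moves are evaluated by parallel threads. The instance generator, the greedy coloring order and all names are assumptions made for the example.

# Coloring-based hill climber on a random NK-landscape (serial toy version).

import random

def random_nk(n, k, rng):
    """Random NK-landscape: each variable i depends on itself and k others."""
    deps = [sorted({i} | set(rng.sample([j for j in range(n) if j != i], k)))
            for i in range(n)]
    tables = [{} for _ in range(n)]   # lazily filled lookup tables
    def fitness(x):
        total = 0.0
        for i in range(n):
            key = tuple(x[j] for j in deps[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n
    return deps, fitness

def greedy_coloring(n, deps):
    """Color the interaction graph so that same-colored variables never
    appear in a common subfunction."""
    adj = [set() for _ in range(n)]
    for group in deps:
        for a in group:
            adj[a].update(v for v in group if v != a)
    color = [None] * n
    for v in range(n):
        used = {color[u] for u in adj[v] if color[u] is not None}
        color[v] = next(c for c in range(n) if c not in used)
    return color

def coloring_hill_climber(n=64, k=3, seed=0):
    rng = random.Random(seed)
    deps, fitness = random_nk(n, k, rng)
    classes = {}
    for v, c in enumerate(greedy_coloring(n, deps)):
        classes.setdefault(c, []).append(v)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, improved = fitness(x), True
    while improved:
        improved = False
        for variables in classes.values():
            # Improving flips inside one color class affect disjoint
            # subfunctions, so they can be applied (or, in the parallel
            # version, evaluated by separate threads) together.
            flips = []
            for v in variables:
                x[v] ^= 1
                if fitness(x) > best + 1e-12:
                    flips.append(v)
                x[v] ^= 1
            if flips:
                for v in flips:
                    x[v] ^= 1
                best, improved = fitness(x), True
    return x, best

if __name__ == "__main__":
    _, value = coloring_hill_climber()
    print(f"local optimum fitness: {value:.4f}")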

8 Bilateral contracts and grants with industry

8.1 Bilateral grants with industry

Our current industrial grants lie entirely at the heart of the Bonus project and are summarized below.

  • EDF (2021-2024, Paris): this joint project with EDF, a major electrical power player in France, targets the automatic design and configuration of deep neural networks applied to energy consumption forecasting. An initial budget of 62K€ has been allocated in the context of the PGMO programme of the Fondation Mathématique Jacques Hadamard, followed by a budget of 150K€ for funding a PhD thesis (CIFRE).
  • ONERA & CNES (2016-2023, Paris): this project with major European aerospace players focuses on the design of aerospace vehicles, a high-dimensional expensive multidisciplinary problem. Such a problem requires the research lines of Bonus to be tackled effectively and efficiently. Two jointly supervised PhD students (J. Pelamatti and A. Hebbal) have been involved in this project. The PhD thesis of J. Pelamatti was defended in March 2020 and that of A. Hebbal 36 in January 2021. A third PhD thesis (J. Gamot) started in November 2020; its objective is the design and implementation of ultra-scale multi-objective highly constrained optimization methods for solving the internal layout problem of future aerospace systems.
  • IRT SystemX project (2021-2024, Paris): Multi-objective automated design and optimization of deep neural networks - application to embedded systems. A budget of 150K€ is allocated for funding a PhD thesis.
  • Imprevo (startup creation initiative): the Imprevo startup proposal was integrated into the Inria Startup Studio program in October 2020. The project benefited in 2021 from the recruitment of a temporary engineer.

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Inria international partners

Informal international partners
  • School of Public Health and Preventive Medicine, Monash University, Australia.
  • Robert Gordon University, UK.
  • Habanero Extreme Scale Software Research Laboratory, Georgia Tech, Atlanta, USA.

9.1.2 Participation in other international programs

  • International Lab MODO
    • Title: Frontiers in Massive Optimization and Computational Intelligence (MODO)
    • International Partner (Institution - Laboratory - Researcher): Shinshu University, Japan
    • Start year: 2017
    • See also: MODO
    • Abstract: The aim of MODO is to federate French and Japanese researchers interested in the dimensionality, heterogeneity and expensive nature of massive optimization problems. The team receives yearly support for international exchanges and shared manpower (joint PhD students).
  • Memorandum of Understanding (MoU) with RIKEN
    • Title: Memorandum of Understanding (MoU)
    • International Partner (Institution - Laboratory - Researcher): RIKEN Center for Computational Science (R-CCS), Japan
    • Start year: 2021

9.2 International research visitors

Research stays abroad
  • E-G. Talbi: several one-week working stays throughout the year, at the University of Luxembourg (Luxembourg), the University of Elche (Spain) and EMI - University of Rabat (Morocco).
  • N. Melab: University of Mons, Belgium, numerous working meetings throughout the year.

9.3 European initiatives

9.3.1 Collaborations with major European organizations

  • University of Mons, Belgium, Parallel surrogate-assisted optimization, large-scale exact optimization, two joint PhDs (M. Gobert and G. Briffoteaux).
  • University of Luxembourg, Q-Learning-based Hyper-Heuristic for Generating UAV Swarming Behaviours.
  • University of Coimbra and University of Lisbon, Portugal, Exact and heuristic multi-objective search.
  • University of Elche and University of Murcia, Spain, Matheuristics for DEA.

9.4 National initiatives

9.4.1 ANR

  • ANR Equipex+ MesoNet (2021-2026, total budget: 27.3M€, of which 2.3M€ for ULille). The goal of the project is to strengthen the ongoing effort dedicated to structuring the national and regional offers in numerical simulation through high-performance computing (HPC) associated with artificial intelligence (AI) methods. Furthermore, it aims at offering academic and industrial researchers, Research Infrastructure staff and students a shared response adapted to their digital equipment needs. The long-term objective is to set up a distributed infrastructure dedicated to the coordination of HPC and AI in France. This inclusive and structuring project, supported by the GENCI partners (MESRI, CNRS, CEA, CPU, INRIA), aims to integrate at least one mesocenter per region and make them regional references and relays. The infrastructure, fully integrated with the European Open Science Cloud (EOSC) initiative, should have a significant impact on the appropriation by researchers of the national and regional public computing and AI facilities. MesoNet gathers 20 partners including the mesocenter located at Université de Lille. N. Melab is the scientific coordinator for ULille. The MesoNet infrastructure is highly important for the research activities of Bonus and many other research groups.
  • Bilateral ANR/RGC France/Hong Kong PRCI (2016-2021), “Big Multi-objective Optimization” in collaboration with City University of Hong Kong.

9.5 Regional initiatives

  • CPER ELSAT (2015-2021): in this project, focused on ecomobility, security and adaptability in transport, the Bonus team is involved in the transversal research line on planning and scheduling of maintenance logistics in transportation. The team received support for a one-year (2019-2020) engineer position (N. Aslimani); this support/contract has been extended by 10 months (December 2020 to September 2021).

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

General chair, scientific chair
  • A. Liefooghe (workshop co-chair): Landscape-aware heuristic search (LAHS), workshop at GECCO 2021, Lille, France, July 2021 (with N. Veerapen, K. Malan, S. Verel and G. Ochoa).
  • N. Melab (Workshop co-chair): Intl. Workshop on Parallel Optimization using/for Multi- and Many-core High Performance Computing (HPCS/POMCO'2020), Barcelona, Spain, Dec. 10-14, 2020 (postponed to March 22-27, 2021).
  • J. Gmys (Workshop co-chair): Intl. Workshop on the Synergy of Parallel Computing, Optimization and Simulation (HPCS/PaCOS'2020), Barcelona, Spain, Dec. 10-14, 2020 (postponed to March 22-27, 2021).
  • E-G. Talbi (Steering committee Chair): Intl. Conf. on Optimization and Learning (OLA'2021), Catania (Sicilia), Italy, June 21-23, 2021.
  • E-G. Talbi (Steering committee): 11th IEEE Workshop on Parallel/Distributed Computing and Optimization (IPDPS/PDCO'2021), held virtually, May 17, 2021.
  • B. Derbel (workshop co-chair): Advances in Decomposition-based Evolutionary Multi-objective Optimization (ADEMO), workshop at CEC 2021, Kraków, Poland (VIRTUAL) (with S. Z. Martinez, K. Li, Q. Zhang).
  • B. Derbel (workshop co-chair): Decomposition Techniques in Evolutionary Optimization (DTEO), workshop at GECCO 2021, Lille, France (VIRTUAL) (with Ke. Li, X. Li, Q. Zhang).
  • E-G. Talbi (Co-President): Intl. Metaheuristics Summer School – MESS 2020+1 on “Learning and optimization from big data”, Catania, Italy, June 15-18, 2021.
  • N. Melab (Co-chair): IBM Q-2021: 2nd IBM Q Workshop to get started with Quantum computing/programming, 25-29 Oct. 2021, Lille.
  • N. Melab (Chair): two simulation- and HPC-related seminars at Université de Lille, with the IDRIS national supercomputing center (Paris) and the regional mesocenter of Université de Lille, 2021.

10.1.2 Scientific events: selection

Member of the conference program committees
  • The ACM Genetic and Evolutionary Computation Conference (GECCO), Lille, France, July 10-14, 2021.
  • IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 2021.
  • 21st European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP), Virtual, 2021.
  • 11th Intl. Conference on Evolutionary Multi-criterion Optimization (EMO), Shenzhen, China, 2021.
  • 16th Workshop on Foundations of Genetic Algorithms (FOGA), Dornbirn, Austria, 2021.
  • Intl. Workshop on Parallel Optimization using/for Multi- and Many-core High Performance Computing (HPCS/POMCO'2020), Barcelona, Spain, Dec. 10-14, 2020 (postponed to March 22-27, 2021).
  • Intl. Workshop on the Synergy of Parallel Computing, Optimization and Simulation (HPCS/PaCOS'2020+1), Barcelona, Spain (held virtually), Mar. 22-27, 2021.
  • IEEE Intl. Workshop on Parallel/Distributed Computing and Optimization (IPDPS/PDCO), held virtually, May 17, 2021.
  • Intl. Conf. on Optimization and Learning (OLA'2021), Catania (Sicilia), Italy, June 21-23, 2021.
  • Intl. Symposium on Intelligent Distributed Computing (IDC'2021), Online, Italy, Sep. 16-18, 2021.

10.1.3 Journal

Member of the editorial boards
  • N. Melab: Associate Editor of ACM Computing Surveys (IF: 10.282), since 2019.
  • A. Liefooghe: Reproducibility Board Member of ACM Transactions on Evolutionary Learning and Optimization.
  • E-G. Talbi: Guest Editor (with L. Amodeo and F. Yalaoui) of a book on Heuristics for Optimization and Learning in Springer, Studies in Computational Intelligence (SCI), 2021.
Reviewer - reviewing activities
  • Transactions on Evolutionary Computation (IEEE TEC, IF: 11.554), IEEE.
  • ACM Computing Surveys (IF: 10.282), ACM.
  • IEEE Transactions on Services Computing (IEEE TSC, IF: 8.216), IEEE.
  • ACM Transactions on Evolutionary Learning and Optimization.
  • Journal of Computational Science (JoCS, IF: 3.976), Elsevier.

10.1.4 Invited talks

  • E-G. Talbi. An optimization vision of machine learning, Invited speaker, Polytechnic University of Mohamed VI, Benguerir, Morocco, Jan 2021.
  • E-G. Talbi. Automated design of deep neural networks, Keynote speaker, International conference on Embedded Systems and Artificial Intelligence (ESAI'21), Fez, Morocco, March 2021.
  • E-G. Talbi. Neural networks architecture search and hyperparameter optimization, Invited speaker, University of Bucharest, Bucharest, Romania, Nov 2021.

10.1.5 Leadership within the scientific community

  • N. Melab: scientific leader of Grid'5000 at Lille, since 2004.
  • E-G. Talbi: Co-president of the working group “META: Metaheuristics - Theory and applications”, GDR RO and GDR MACS.
  • E-G. Talbi: Co-Chair of the IEEE Task Force on Cloud Computing within the IEEE Computational Intelligence Society.
  • A. Liefooghe: co-secretary of the association “Artificial Evolution” (EA).
  • A. Liefooghe: member of the scientific council of GdR RO, and co-animator of the MH2PPC axis (CNRS)

10.1.6 Scientific expertise

  • N. Melab: Reviewer expert for Czech Science Foundation, 2021.
  • N. Melab: Reviewer expert for the European PRACE support to mitigate the impact of the COVID-19 pandemic, Fast Track, 2021.
  • N. Melab: Member of the advisory committee for the IT and management engineer training at Faculté Polytechnique de Mons, Belgium.

10.1.7 Research administration

  • N. Melab: Member of the Scientific Board (Bureau Scientifique du Centre) for the Inria Lille - Nord Europe research center, since Feb. 2019.
  • N. Melab: Member of the steering committee of “Maison de la Simulation”, since 2016.
  • B. Derbel: member of the CER committee, Inria Lille — Nord Europe.
  • A. Liefooghe: elected member of the national council of universities (CNU 27)

10.2 Teaching - supervision - juries

10.2.1 Teaching

Taught courses
  • International Master lecture: N. Melab, Supercomputing, 45h ETD, M2, Université de Lille, France.
  • Master lecture: N. Melab, Operations Research, 60h ETD, M1, Université de Lille, France.
  • Licence: A. Liefooghe, Introduction to Programming, 58.5h ETD, L1, Université de Lille, France.
  • Licence: A. Liefooghe, Algorithms and Programming, 36h ETD, L1, Université de Lille, France.
  • Licence: A. Liefooghe, Algorithms and Data structure, 36h ETD, L2, Université de Lille, France.
  • Licence: A. Liefooghe, Linear programming, 18h ETD, L3, Université de Lille, France.
  • Licence: A. Liefooghe, Graphs, 18h ETD, L3, Université de Lille, France.
  • Master: A. Liefooghe, Multi-criteria Decision Aid and Optimization, 30h ETD, M2, Université de Lille, France.
  • Master: B. Derbel, Algorithms and Complexity, 35h, M1, Université de Lille, France
  • Master: B. Derbel, Optimization and machine learning, 24h, M1, Université de Lille, France
  • Licence: B. Derbel, Object oriented design, 24h, L3, Université de Lille, France
  • Licence: B. Derbel, Software engineering, 24h, L3, Université de Lille, France
  • Licence: B. Derbel, Algorithms and data structures, 35h, L2, Université de Lille, France
  • Licence: B. Derbel, Object oriented programming, 35h, L2, Université de Lille, France
  • Engineering school: E-G. Talbi, Advanced optimization, 36h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Data mining, 36h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Operations research, 60h, Polytech'Lille, Université de Lille, France.
  • Engineering school: E-G. Talbi, Graphs, 25h, Polytech'Lille, Université de Lille, France.
  • Licence: O. Abdelkafi, Computer Science, 46.5h ETD, L1, Université de Lille, France.
  • Licence: O. Abdelkafi, Web Technologies, 36h ETD, L1, Université de Lille, France.
  • Licence: O. Abdelkafi, Unix System Introduction, 6h ETD, L2, Université de Lille, France.
  • Licence: O. Abdelkafi, Web Technologies, 24h ETD, L2 S3H, Université de Lille, France.
  • Licence: O. Abdelkafi, Object-Oriented Programming, 36h ETD, L2, Université de Lille, France.
  • Licence: O. Abdelkafi, Relational Databases, 36h ETD, L3, Université de Lille, France.
  • Licence: O. Abdelkafi, Algorithmics for Operations Research, 36h ETD, L3, Université de Lille, France.
Teaching responsibilities
  • Master leading: N. Melab, Co-head (with B. Merlet) of the international Master 2 in High-Performance Computing and Simulation, Université de Lille, France.
  • Master leading: B. Derbel, head of the Master MIAGE, Université de Lille, France.
  • Master leading: A. Liefooghe, supervisor of the Master 2 MIAGE IPI-NT, Université de Lille, France (until mid-2021).
  • Responsible for Communication: A. Liefooghe, Computer Science Department, Université de Lille, France.
  • Head of the international relations: E-G. Talbi, Polytech'Lille, Université de Lille, France.

10.2.2 Supervision

  • PhD defense 37: Geoffrey Pruvost, Contributions to Multi-objective optimization based on Decomposition, Dec. 3rd, 2021, Bilel Derbel and Arnaud Liefooghe.
  • PhD defense 36 (collaboration with ONERA): Ali Hebbal, Deep Gaussian Processes for the Analysis and Optimization of Complex Systems - Application to Aerospace System Design, Jan. 21st, 2021, El-Ghazali Talbi and Nouredine Melab, co-supervisors from ONERA: L. Brevault and M. Balesdent.
  • PhD in progress (cotutelle): Guillaume Briffoteaux, Parallel Surrogate-based Algorithms for Expensive Optimization Problems, Oct 2017, Defense planned for Mar. 2022, Nouredine Melab (Université de Lille) and Daniel Tuyttens (Université de Mons, Belgium).
  • PhD in progress (cotutelle): Maxime Gobert, Parallel multi-objective global optimization with applications to several simulation-based parameter exploration problems, Oct 2018, Nouredine Melab (Université de Lille) and Daniel Tuyttens (Université de Mons, Belgium).
  • PhD in progress: Jeremy Sadet, Surrogate-based optimization in automotive brake design, Oct 2018, El-Ghazali Talbi (Université de Lille), Thierry Tison (Université Polytechnique Hauts-de-France, France).
  • PhD in progress: Nicolas Berveglieri, Meta-models and machine learning for massive expensive optimization, Oct 2018, Defense planned for Dec. 2021, Bilel Derbel and Arnaud Liefooghe.
  • PhD in progress: David Redon, Enabling Large Scale Computational Intelligence with HPC, started Oct. 2020, Bilel Derbel, and Pierre Fortin (Université de Lille)
  • PhD in progress: Raphael Cosson, Design, selection and configuration of adaptive algorithms for cross-domain optimization, Nov. 2019, Bilel Derbel and Arnaud Liefooghe.
  • PhD in progress (cotutelle): Alexandre Jesus, Algorithm selection in multi-objective optimization, Bilel Derbel and Arnaud Liefooghe (Université de Lille), Luís Paquete (University of Coimbra, Portugal).
  • PhD in progress: Juliette Gamot, Multidisciplinary design and analysis of aerospace concepts with a focus on internal placement optimization, Nov. 2020, El-Ghazali Talbi and Nouredine Melab, co-supervisors from ONERA: L. Brevault and M. Balesdent.
  • PhD in progress: Lorenzo Canonne, Massively Parallel Gray-box and Large Scale Optimization, Oct. 2020, Bilel Derbel, Arnaud Liefooghe and Omar Abdelkafi.
  • PhD in progress: Guillaume Helbecque, Productivity-aware parallel cooperative combinatorial optimization for ultra-scale supercomputers, Oct. 2021, Nouredine Melab, co-supervisor from University of Luxembourg: P. Bouvry.
  • PhD in progress: Thomas Firmin, Spiking (pulse) neural networks and parameter optimization for massively parallel GPU-powered clusters, Oct. 2021, El-Ghazali Talbi, co-supervisor from Emeraude Team (CRIStAL labs): P. Boulet.
  • PhD in progress: Houssem Ouertatani, Multi-objective optimization of deep neural networks for embedded applications, Oct. 2021, El-Ghazali Talbi, co-supervisor from Université Polytechnique Hauts-de-France: S. Niar.
Juries
  • N. Melab (Reviewer): PhD thesis of Vincent Hénaux, Generation of Local Search Algorithms, Université d'Angers, France, defended on Nov. 18th, 2021.
  • N. Melab (Invited Expert): PhD thesis of Arcadi Llazan, Design of optimization methods based on fractal analysis for dynamic optimization. Application to deep learning for computer vision, Université Paris-Est, France, first-year evaluation (CST) on Sep. 10th, 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Seddik Hadjadj, Problèmes de tournées de véhicules pour la livraison de béton frais, Université de Lyon, France, defended in May 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Hend Arbaoui, Algorithms for biological data analysis for multifactorial diseases, University of Tunis El Manar, Tunisia, defended in Sep. 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Yiyi Xu, Sim-optimisation du système de traitement des bio-déchets en utilisant des systèmes multi-agents et d’optimisation multi-objectif, Université de Rouen Normandie, France, defended in Nov. 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Fatima Abderrabi, Ordonnancement de la production des repas d’un hôpital dans un contexte d’amélioration du bien-être au travail, Université de Technologie de Troyes, France, defended in Nov. 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Alice Berthier, Transition vers une usine 4.0 grâce à l’utilisation d’outils d’aide à la décision dans le secteur textile, Université de Technologie de Troyes, France, defended in Nov. 2021.
  • E-G. Talbi (Reviewer): PhD thesis of Jorg Willi Stork, Way of the fittest, Vrije Universiteit Amsterdam (Netherlands) and Cologne (Germany), defended in Dec. 2021.

10.3 Popularization

10.3.1 Internal or external Inria responsibilities

  • N. Melab: Chargé de Mission of High Performance Computing and Simulation at Université de Lille, since 2010.

11 Scientific production

11.1 Major publications

  • 1 Omar Abdelkafi, Lhassane Idoumghar and Julien Lepagnot. A Survey on the Metaheuristics Applied to QAP for the Graphics Processing Units. Parallel Processing Letters, 26(3), 2016, 1--20.
  • 2 Ahcène Bendjoudi, Nouredine Melab and El-Ghazali Talbi. FTH-B&B: A Fault-Tolerant Hierarchical Branch and Bound for Large Scale Unreliable Environments. IEEE Trans. Computers, 63(9), 2014, 2302--2315.
  • 3 Sébastien Cahon, Nouredine Melab and El-Ghazali Talbi. ParadisEO: A Framework for the Reusable Design of Parallel and Distributed Metaheuristics. J. Heuristics, 10(3), 2004, 357--380.
  • 4 Fabio Daolio, Arnaud Liefooghe, Sébastien Verel, Hernan Aguirre and Kiyoshi Tanaka. Problem Features versus Algorithm Performance on Rugged Multiobjective Combinatorial Fitness Landscapes. Evolutionary Computation, 25(4), 2017.
  • 5 Bilel Derbel. Contributions to single- and multi-objective optimization: towards distributed and autonomous massive optimization. Université de Lille, 2017.
  • 6 Bilel Derbel, Arnaud Liefooghe, Qingfu Zhang, Hernan Aguirre and Kiyoshi Tanaka. Multi-objective Local Search Based on Decomposition. Parallel Problem Solving from Nature - PPSN XIV - 14th International Conference, Edinburgh, UK, September 17-21, 2016, Proceedings, 2016, 431--441.
  • 7 Jan Gmys, Mohand Mezmaz, Nouredine Melab and Daniel Tuyttens. IVM-based parallel branch-and-bound using hierarchical work stealing on multi-GPU systems. Concurrency and Computation: Practice and Experience, 29(9), 2017.
  • 8 Arnaud Liefooghe, Bilel Derbel, Sébastien Verel, Hernan Aguirre and Kiyoshi Tanaka. Towards Landscape-Aware Automatic Algorithm Configuration: Preliminary Experiments on Neutral and Rugged Landscapes. Evolutionary Computation in Combinatorial Optimization - 17th European Conference, EvoCOP 2017, Amsterdam, The Netherlands, April 19-21, 2017, Proceedings, 2017, 215--232.
  • 9 Thé Van Luong, Nouredine Melab and El-Ghazali Talbi. GPU Computing for Parallel Local Search Metaheuristic Algorithms. IEEE Trans. Computers, 62(1), 2013, 173--185.
  • 10 Amir Nakib, S. Ouchraa, Nadiya Shvai, L. Souquet and El-Ghazali Talbi. Deterministic metaheuristic based on fractal decomposition for large-scale optimization. Appl. Soft Comput., 61, 2017, 468--485.

11.2 Publications of the year

International journals

  • 11 Oumayma Bahri and El-Ghazali Talbi. Robustness-based approach for fuzzy multi-objective problems. Annals of Operations Research, 296(1-2), January 2021, 707-733.
  • 12 Alisson Barbosa De Souza, Paulo Antonio Leal Rego, Tiago Carneiro, Paulo Henrique Gonçalves Rocha and José Neuman De Souza. A Context-Oriented Framework for Computation Offloading in Vehicular Edge Computing using WAVE and 5G Networks. Vehicular Communications, 2021.
  • 13 Alexandre Borges de Jesus, Luis Paquete and Arnaud Liefooghe. A model of anytime algorithm performance for bi-objective optimization. Journal of Global Optimization, 79, 2021, 329-350.
  • 14 Nicolas Dupin, Frank Nielsen and El-Ghazali Talbi. Unified Polynomial Dynamic Programming Algorithms for P-Center Variants in a 2D Pareto Front. Mathematics, 9(4), February 2021, 453.
  • 15 Nicolas Dupin, Rémi Parize and El-Ghazali Talbi. Matheuristics and Column Generation for a Basic Technician Routing Problem. Algorithms, 14(11), November 2021, 313.
  • 16 Zineb Garroussi, Rachid Ellaia, El-Ghazali Talbi and Jean-Yves Lucas. A hybrid backtracking search algorithm for energy management in a microgrid. International Journal of Mathematical Modelling and Numerical Optimisation, 11(2), 2021, 143.
  • 17 Jérémy Sadet, Franck Massa, Thierry Tison, Isabelle Turpin, Bertrand Lallemand and El-Ghazali Talbi. Homotopy perturbation technique for improving solutions of large quadratic eigenvalue problems: Application to friction-induced vibration. Mechanical Systems and Signal Processing, 153, May 2021, 107492.
  • 18 El-Ghazali Talbi. Automated Design of Deep Neural Networks. ACM Computing Surveys, 54(2), April 2021, 1-37.
  • 19 El-Ghazali Talbi. Machine Learning into Metaheuristics. ACM Computing Surveys, 54(6), July 2021, 1-32.

International peer-reviewed conferences

  • 20 Guillaume Briffoteaux, Romain Ragonnet, Mohand Mezmaz, Nouredine Melab and Daniel Tuyttens. Evolution Control Ensemble Models for Surrogate-Assisted Evolutionary Algorithms. High Performance Computing and Simulation 2020, Barcelona, Spain, March 2021.
  • 21 Tiago Carneiro, Nouredine Melab, Akihiro Hayashi and Vivek Sarkar. Towards Chapel-based Exascale Tree Search Algorithms: dealing with multiple GPU accelerators. HPCS 2020 - The 18th International Conference on High Performance Computing & Simulation, Barcelona / Virtual, Spain, March 2021.
  • 22 Tiago Carneiro and Nouredine Melab. Towards Ultra-scale Exact Optimization Using Chapel. The 8th Annual Chapel Implementers and Users Workshop, Seattle, United States, June 2021.
  • 23 Raphaël Cosson, Bilel Derbel, Arnaud Liefooghe, Hernán Aguirre, Kiyoshi Tanaka and Qingfu Zhang. Decomposition-based multi-objective landscape features and automated algorithm selection. EvoCOP 2022 - 22nd European Conference on Evolutionary Computation in Combinatorial Optimization, Virtual Event, Spain, March 2021, 34-50.
  • 24 Bilel Derbel and Lorenzo Canonne. A graph coloring based parallel hill climber for large-scale NK-landscapes. GECCO '21: Genetic and Evolutionary Computation Conference, Lille, France, ACM, July 2021, 216-224.
  • 25 Bilel Derbel, Geoffrey Pruvost and Byung-Woo Hong. Enhancing Moea/d with Escape Mechanisms. 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, IEEE, June 2021, 1163-1170.
  • 26 Johann Dreo, Arnaud Liefooghe, Sébastien Verel, Marc Schoenauer, Juan J. Merelo, Alexandre Quemy, Benjamin Bouvier and Jan Gmys. Paradiseo: From a Modular Framework for Evolutionary Computation to the Automated Design of Metaheuristics: 22 Years of Paradiseo. GECCO 2021 - Genetic and Evolutionary Computation Conference Companion, Lille / Virtual, France, ACM, July 2021, 1522-1530.
  • 27 Gabriel Duflo, Grégoire Danoy, El-Ghazali Talbi and Pascal Bouvry. A Q-Learning Based Hyper-Heuristic for Generating Efficient UAV Swarming Behaviours. ACIIDS - 13th Asian Conference on Intelligent Information and Database Systems, Phuket, Thailand, April 2021, 768-781.
  • 28 Maxime Gobert, Jan Gmys, Nouredine Melab and Daniel Tuyttens. Adaptive Space Partitioning for Parallel Bayesian Optimization. HPCS 2020 - The 18th International Conference on High Performance Computing & Simulation, Barcelona / Virtual, Spain, March 2021.
  • 29 Yuna Han, Bilel Derbel and Byung-Woo Hong. Convolutional Neural Networks based on Random Kernels in the Frequency Domain. 2021 International Conference on Information Networking (ICOIN), Jeju Island, South Korea, IEEE, January 2021, 671-673.
  • 30 Alexandre Jesus, Luís Paquete, Bilel Derbel and Arnaud Liefooghe. On the design and anytime performance of indicator-based branch and bound for multi-objective combinatorial optimization. GECCO 2021 - The Genetic and Evolutionary Computation Conference, Lille, France, ACM, July 2021, 234-242.
  • 31 Jaehwan Kim, Bilel Derbel and Byung-Woo Hong. Motion Estimation via Scale-Space in Unsupervised Deep Learning. 2021 International Conference on Information Networking (ICOIN), Jeju Island, South Korea, IEEE, January 2021, 730-731.
  • 32 Arnaud Liefooghe, Sébastien Verel, Benjamin Lacroix, Alexandru-Ciprian Zăvoianu and John McCall. Landscape features and automated algorithm selection for multi-objective interpolated continuous optimisation problems. GECCO 2021 - The Genetic and Evolutionary Computation Conference, Lille, France, ACM, June 2021, 421-429.

Conferences without proceedings

  • 33 Maxime Gobert, Jan Gmys, Nouredine Melab and Daniel Tuyttens. Space Partitioning with multiple models for Parallel Bayesian Optimization. OLA 2021 - Optimization and Learning, Sicilia / Virtual, Italy, June 2021.

Scientific book chapters

  • 34 Nadia Dahmani, Saoussen Krichen, El-Ghazali Talbi and Sanaa Kaddoura. Solving the Multi-objective 2-Dimensional Vector Packing Problem Using epsilon-constraint Method. In Trends and Applications in Information Systems and Technologies, March 2021, 96-104.

Edition (books, proceedings, special issue of a journal)

  • 35 Farouk Yalaoui, Lionel Amodeo and El-Ghazali Talbi. Heuristics for Optimization and Learning. Studies in Computational Intelligence (SCI), vol. 906, Springer, 2021.

Doctoral dissertations and habilitation theses

  • 36 Ali Hebbal. Deep Gaussian Processes for the Analysis and Optimization of Complex Systems - Application to Aerospace System Design. PhD thesis, Université de Lille, January 2021.
  • 37 Geoffrey Pruvost. Contributions to multiobjective optimization based on decomposition. PhD thesis, Université de Lille, December 2021.

Reports & preprints

11.3 Cited publications

  • 40 Mathieu Balesdent, Loïc Brevault, Nathaniel B. Price, Sébastien Defoort, Rodolphe Le Riche, Nam-Ho Kim, Raphael T. Haftka and Nicolas Bérend. Advanced Space Vehicle Design Taking into Account Multidisciplinary Couplings and Mixed Epistemic/Aleatory Uncertainties. In Giorgio Fasano and János D. Pintér (eds.), Space Engineering: Modeling and Optimization with Case Studies, Springer International Publishing, 2016, 1--48. URL: http://dx.doi.org/10.1007/978-3-319-41508-6_1
  • 41 Bilel Derbel, Dimo Brockhoff, Arnaud Liefooghe and Sébastien Verel. On the Impact of Multiobjective Scalarizing Functions. Parallel Problem Solving from Nature - PPSN XIII - 13th International Conference, Ljubljana, Slovenia, September 13-17, 2014, Proceedings, 2014, 548--558.
  • 42 Bilel Derbel, Arnaud Liefooghe, Gauvain Marquet and El-Ghazali Talbi. A fine-grained message passing MOEA/D. IEEE Congress on Evolutionary Computation, CEC 2015, Sendai, Japan, May 25-28, 2015, 2015, 1837--1844.
  • 43 R.T. Haftka, D. Villanueva and A. Chaudhuri. Parallel surrogate-assisted global optimization with expensive functions – a survey. Structural and Multidisciplinary Optimization, 54(1), 2016, 3--13.
  • 44 D.R. Jones, M. Schonlau and W.J. Welch. Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization, 13(4), 1998, 455--492.
  • 45 Julien Pelamatti, Loïc Brevault, Mathieu Balesdent, El-Ghazali Talbi and Yannick Guerin. How to deal with mixed-variable optimization problems: An overview of algorithms and formulations. Advances in Structural and Multidisciplinary Optimization, Proc. of the 12th World Congress of Structural and Multidisciplinary Optimization (WCSMO12), Springer, 2018, 64--82. URL: http://dx.doi.org/10.1007/978-3-319-67988-4_5
  • 46 F. Shahzad, J. Thies, M. Kreutzer, T. Zeiser, G. Hager and G. Wellein. CRAFT: A library for easier application-level Checkpoint/Restart and Automatic Fault Tolerance. CoRR, abs/1708.02030, 2017. URL: http://arxiv.org/abs/1708.02030
  • 47 Nir Shavit. Data Structures in the Multicore Age. Communications of the ACM, 54(3), 2011, 76--84.
  • 48 Marc Snir et al. Addressing Failures in Exascale Computing. Int. J. High Perform. Comput. Appl., 28(2), May 2014, 129--173.
  • 49 El-Ghazali Talbi. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Annals OR, 240(1), 2016, 171--215.
  • 50 Trong-Tuan Vu and Bilel Derbel. Parallel Branch-and-Bound in multi-core multi-CPU multi-GPU heterogeneous environments. Future Generation Comp. Syst., 56, 2016, 95--109.
  • 51 Xingyi Zhang, Ye Tian, Ran Cheng and Yaochu Jin. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization. IEEE Trans. Evol. Computation, 22(1), 2018, 97--112.
  1. A solution x dominates another solution y if x is better than y for all objectives and there exists at least one objective for which x is strictly better than y.
  2. The Pareto Front is the set of non-dominated solutions.
  3. In the context of Bonus, supercomputers are composed of several massively parallel processing nodes (inter-node parallelism) including multi-core processors and GPUs (intra-node parallelism).
  4. A WS mechanism is mainly defined by two components: a victim selection strategy which selects the processing core to be stolen and a work sharing policy which determines the part and amount of the work unit to be given to the thief upon WS request.