INOCS is a cross-border "France-Belgium" project team in the Applied Mathematics, Computation and Simulation INRIA domain. The main goal of this team is the study of optimization problems involving complex structures. The scientific objectives of INOCS relate to both modeling and methodological concerns. The INOCS team will focus on:

Although CS problems are in general NP-hard due to their complex nature, INOCS will develop exact solution methods and matheuristics (heuristics based on exact optimization methods). The scientific contribution of INOCS will be a toolbox of models and methods to solve challenging real-life problems.

The research program of INOCS will be developed by moving alternately:

Even if these two axes are developed sequentially in a first phase, their interactions will lead us to explore them jointly in the mid-term.

An optimization problem consists in finding a best solution from a set of feasible solutions. Such a problem can be typically modeled as a mathematical program in which decision variables must
(i) satisfy a set of constraints that translate the feasibility of the solution and
(ii) optimize some (or several) objective function(s).
Optimization problems are usually classified into strategic, tactical and operational problems, according to the types of decisions to be made.

We consider that an optimization problem presents a complex structure (CS) when it involves decisions of different types/natures (i.e. strategic, tactical or operational) and/or presents some hierarchical leader-follower structure. The set of constraints can usually be partitioned into global constraints, which link variables associated with the different types/natures of decisions, and constraints involving each type of variable separately. Optimization problems with complex structure are extremely challenging, since a global optimum with respect to the whole set of decision variables and constraints must be determined.

Significant progress has been made in optimization to solve academic problems. Nowadays, large-scale instances of some NP-hard problems are routinely solved to optimality. Our vision within INOCS is to make the same advances while addressing CS optimization problems. To achieve this goal, we aim to develop global solution approaches, in contrast to the current trend. INOCS team members have already proposed successful methods along these research lines to model and solve CS problems (e.g. ANR project RESPET, Brotcorne et al. 51, 52, Gendron et al. 53, 54, 55, and Strack et al. 56). However, these are preliminary attempts, and a number of modeling and methodological challenges remain to be met.

A classical optimization problem can be formulated as follows:
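In generic terms (the notation f, g and X below is ours, standing in for the displayed formulation that is missing from this version of the text), such a problem can be written as:

```latex
\min_{x \in X} \; f(x)
\quad \text{subject to} \quad
g(x) \le 0 ,
```

where x is the vector of decision variables, f the objective function, g the feasibility constraints and X the ground set (e.g. integrality requirements).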

In this problem,

The INOCS team plans to address optimization problems where two types of decisions are taken jointly and are interrelated. More precisely, let us assume that variables

In this model,

The INOCS team plans to model CS optimization problems according to three optimization paradigms: large-scale complex structure optimization, bilevel optimization and robust/stochastic optimization. These paradigms instantiate specific variants of the generic model.

Large-scale complex structure optimization problems can be formulated through the simplest variant of the generic model given above. In this case, it is assumed that

Bilevel programs model situations in which a decision-maker, hereafter the leader, optimizes their objective by explicitly taking into account the response of another decision-maker or set of decision-makers (the follower) to their decisions. Bilevel programs are closely related to Stackelberg (leader-follower) games as well as to the principal-agent paradigm in economics. In other words, bilevel programs can be considered as demand-offer equilibrium models where the demand is the result of another mathematical problem.
Bilevel problems can be formulated through the generic CS model when

In robust/stochastic optimization, it is assumed that the data related to a problem are subject to uncertainty. In stochastic optimization, probability distributions governing the data are known, and the objective function involves mathematical expectation(s). In robust optimization, uncertain data take values within specified sets, and the function to optimize is typically formulated in terms of a min-max objective (the solution must be optimal for the worst-case scenario). A standard modeling of uncertainty on data is obtained by defining a set of possible scenarios that can be described explicitly or implicitly. In stochastic optimization, in addition, a probability of occurrence is associated with each scenario and the expected objective value is optimized.
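Under a scenario-set view (notation ours), the two paradigms can be contrasted as:

```latex
\text{stochastic:} \quad \min_{x \in X} \; \mathbb{E}_{s}\,[\, f(x, s) \,]
\qquad\qquad
\text{robust:} \quad \min_{x \in X} \; \max_{s \in S} \; f(x, s) ,
```

where S is the scenario set and, in the stochastic case, the expectation is taken with respect to the probability of occurrence of each scenario s in S.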

Standard solution methods developed for CS problems solve independent sub-problems associated with each type of variables, without explicitly integrating their interactions, or integrating them only iteratively in a heuristic way. However, these subproblems are intrinsically linked and should be addressed jointly. In mathematical optimization, a classical approach is to approximate the convex hull of the integer solutions of the model by its linear relaxation. The main solution methods are (1) polyhedral methods, which strengthen this linear relaxation by adding valid inequalities, and (2) decomposition methods (Dantzig-Wolfe, Lagrangian relaxation, Benders decomposition), which aim to obtain a better approximation and solve it by generating extreme points/rays. The main challenges are (1) the analysis of the strength of the cuts and their separation for polyhedral methods, (2) the decomposition schemes and (3) the generation of extreme points/rays for decomposition methods.
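As a minimal illustration of the decomposition idea, the sketch below applies Lagrangian relaxation to a toy 0-1 knapsack problem: the capacity (linking) constraint is dualized, the relaxed problem separates by item, and the dual bound is minimized by a projected subgradient method. The instance data are invented for the example; this is not code from the team's toolbox.

```python
# Lagrangian relaxation of the capacity constraint of a 0-1 knapsack,
# with the dual bound minimized by a projected subgradient method.

def lagrangian_bound(values, weights, capacity, iters=200):
    """Upper bound on max{sum v_i x_i : sum w_i x_i <= capacity, x binary}."""
    lam, best = 0.0, float("inf")
    for k in range(1, iters + 1):
        # Relaxed problem separates by item: take item i iff v_i - lam*w_i > 0.
        x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
        load = sum(w * xi for w, xi in zip(weights, x))
        value = sum(v * xi for v, xi in zip(values, x))
        best = min(best, value + lam * (capacity - load))  # dual value L(lam)
        lam = max(0.0, lam - (capacity - load) / k)        # diminishing step
    return best

# Toy instance: optimum is 12 (items of value 7 and 5), LP bound is 12.5.
bound = lagrangian_bound([10, 7, 5], [4, 3, 2], 5)
```

Every dual value is a valid upper bound on the integer optimum, so the method yields bounds of the same quality as the linear relaxation here.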

The main difficulty in solving bilevel problems stems from their nonconvexity and nondifferentiability. Even linear bilevel programs, where all functions involved are affine, are computationally challenging despite their apparent simplicity. Up to now, much research has been devoted to bilevel problems with linear or convex follower problems. In this case, the problem can be reformulated as a single-level program involving complementarity constraints, exemplifying the dual nature, continuous and combinatorial, of bilevel programs.
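In its generic form (notation ours), a bilevel program reads:

```latex
\min_{x, y} \; F(x, y)
\quad \text{s.t.} \quad G(x, y) \le 0 , \qquad
y \in \arg\min_{y'} \; \{\, f(x, y') \; : \; g(x, y') \le 0 \,\} ,
```

where F and f are the leader's and follower's objectives respectively, and the follower reacts optimally to the leader's decision x.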

In energy, the team mainly focuses on pricing models for demand-side management, on bid definition in the energy market, and on the design and pricing of electric car charging stations.

Demand-side management methods are traditionally used to control electricity demand, which has recently become quite irregular, resulting in inefficiencies in supply. We have explored the relationship between energy suppliers and customers connected to a smart grid. The smart grid technology allows customers to track hourly prices and shift their demand accordingly, and allows the provider to observe the actual demand response to its pricing strategy. We tackle pricing problems in energy using bilevel optimization approaches. Some research work in this domain is supported by bilateral grants with EDF.

The increasing number of agents, with different characteristics, interacting on the energy market leads to the definition of new types of bidding processes. We have modeled this problem as a bilevel one, where the lower level is the entity allocating the bids (the ISO).

The proliferation of electric cars in cities has led to the challenging problem of designing and pricing charging stations in order to smooth the demand over time. We model this problem as a bilevel one, where the lower level represents the choice of users in a preference list.

In transportation and logistics, the team mainly addresses integrated problems, which require taking different types of decisions into account simultaneously. Examples are location and routing, inventory management and routing, or staff scheduling and warehouse operations management. Such problems occur from the supply chain design level down to the logistics facility level.

In telecommunications, the team mainly focuses on network design problems and on routing problems. Such problems are optimization problems with complex structure, since the optimization of capacity installation and traffic flow routing have to be addressed simultaneously.

Group testing is a screening strategy that involves dividing a population into several disjoint groups of subjects. In its simplest implementation, each group is tested with a single test in the first phase, while in the second phase only subjects in positive groups, if any, need to be tested again individually.

To contribute to the effort to tackle the COVID-19 sanitary crisis, we developed this software, which creates groups of individuals to test via the group testing technique while minimizing a linear combination of the expected numbers of false negative and false positive classifications.

The test design problem is modeled as a constrained shortest path problem on a specific graph, and we design and implement an ad hoc algorithm to solve it. We validate the algorithm on instances based on Santé Publique France data on COVID-19 screening tests.

This software is a toolbox containing algorithms frequently used to solve optimization problems tackled by the team (but not only).

The objective of the toolbox is to provide a set of code skeletons that allow researchers to integrate adequate data structures and basic algorithms for the different complex structures that appear in the optimization problems we study. The current version of the toolbox contains classical heuristic tools (generic local search) to solve, among others, the vehicle routing problem and its variants. It also contains code to solve, exactly and heuristically, the Shortest Path Problem with Resource Constraints that is usually encountered in the solution of problems with branch-and-price algorithms.
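As an illustration of the kind of subroutine involved, here is a minimal labeling algorithm for a shortest path problem with a single resource constraint. This is a simplified sketch with toy data, not the toolbox code; arc costs are assumed nonnegative.

```python
import heapq

def spprc(arcs, source, sink, capacity):
    """Cheapest source-sink path whose total resource use stays within
    capacity.  arcs: dict node -> list of (successor, cost, resource)."""
    labels = {source: [(0, 0)]}           # non-dominated (cost, resource) per node
    heap = [(0, 0, source, (source,))]    # best-first on cost
    while heap:
        cost, res, node, path = heapq.heappop(heap)
        if node == sink:
            return cost, path             # first sink pop is cost-optimal
        for succ, c, r in arcs.get(node, []):
            nc, nr = cost + c, res + r
            if nr > capacity:
                continue                  # resource-infeasible extension
            # dominance: drop the new label if an old one is better in both
            if any(c2 <= nc and r2 <= nr for c2, r2 in labels.get(succ, [])):
                continue
            labels.setdefault(succ, []).append((nc, nr))
            heapq.heappush(heap, (nc, nr, succ, path + (succ,)))
    return None                           # no resource-feasible path

# Toy network: the cheap path s-a-t is too resource-hungry at capacity 6.
arcs = {'s': [('a', 1, 5), ('b', 2, 1)], 'a': [('t', 1, 5)], 'b': [('t', 2, 1)]}
```

The dominance test is what distinguishes this from plain Dijkstra: a label can only be discarded when another label at the same node is at least as good in both cost and resource use.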

The future objective is to include automatic reformulation tools for bi-level optimization problems and state-of-the-art codes for the development of decomposition methods.

Order picking is the process of retrieving products from inventory. It is mostly done manually by dedicated employees called pickers and is considered the most expensive of warehouse operations. To reduce the picking cost, customer orders can be grouped into batches that are then collected by traveling the shortest possible distance. We propose an exponential linear programming formulation to tackle the joint order batching and picker routing problem. Variables, or columns, correspond to picking routes in the warehouse. Computing such routes is in general an intractable routing problem related to the well-known traveling salesman problem (TSP). Nonetheless, the rectangular layout of warehouses can be exploited to solve the corresponding TSP efficiently, and can be taken into account in the development of an efficient subroutine, called an oracle. We therefore investigate whether such an oracle allows for an effective exponential formulation. Tested on a publicly available benchmark, the algorithm proves to be very effective. It improves many of the best known solutions and provides very strong lower bounds. Finally, this approach is also applied to the HappyChic industrial case to demonstrate its interest for this field of application 16.

In order to avoid trucks entering city centers, two-stage delivery systems are used in city logistics. First, goods are delivered by trucks to depots/hubs located on the outskirts of the city. Then, eco-friendly vehicles bring the merchandise to final customers located in the city center. On this subject we conducted two research projects. In the first project, we specifically address the case where a (single) city hub is located in the city center. We investigate the synchronization between the two echelons of two-echelon urban distribution systems. The two echelons are synchronized in time but also with regard to the capacity of the city hub. As far as we know, this is the first study considering the latter issue in the context of two-echelon distribution. To handle the synchronization while optimizing the distribution, we propose a three-phase heuristic solution approach. First, the approach optimizes the distribution for the second echelon. Then, it manages the synchronization. Finally, it optimizes the distribution for the first echelon. Population-based metaheuristics and integer programs are used. Results show the effectiveness of the method and make it possible to derive managerial insights on the distribution. Experiments are based on instances generated from the Wien network 25. Although several city authorities have promoted different measures to foster the implementation of small urban consolidation centers in a two-tier system, only a few authors have addressed the joint problem of operating these facilities and providing services to customers. In the second project, we show how the problem can be modeled as a new variant of bin packing, for which we provide a mixed-integer programming formulation and two heuristics that are shown to be quite effective in solving the problem efficiently and to near optimality.
The application of our approach to real data from the city of Turin highlights the superiority of the consolidation approach, including the bundling of goods from different providers, stockholding and other value-added logistics services, over the classical single-tier approach. In addition, we conduct a thorough analysis of some emerging aspects of the on-demand economy, such as the consideration of customers' preferences and the integration of multiple delivery options 26.

Planning transportation operations within a supply chain is a difficult task that, in practice, is often outsourced to logistics providers. At the tactical level, the problem of distributing products through a multi-echelon network is defined in the literature as the Logistics Service Network Design Problem (LSNDP). We study an LSNDP variant inspired by the management of restaurant supply chains. In this problem, a third-party carrier seeks to cost-effectively source and fulfill customer demands for products through a tri-echelon supply chain composed of suppliers, warehouses, and customers. We propose an exact solution method based on partial Benders decompositions, where the master problem is strengthened by the addition of aggregated information derived from the subproblem. More specifically, we introduce a high-level dynamic Benders approach in which the aggregated information used to strengthen the master is refined iteratively. In an extensive computational study, we demonstrate that our dynamic Benders strategy produces provably high-quality solutions, and we validate the interest of refining the master problem in the course of a partial Benders decomposition-based scheme 14. However, since realistic instances are too large to be solved in acceptable run-times, we develop a network reduction heuristic inspired by the recent Dynamic Discretization Discovery algorithm. Through an extensive series of experiments carried out on instances based on the operations of an industrial partner, we demonstrate the efficiency of the proposed approach. We also investigate the impact of the distribution strategy used in practice to determine the transportation plan, and how this strategy can be modified to reduce the overall logistics cost 13.
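Schematically, a Benders decomposition with integer master variables x and a continuous subproblem takes the following generic textbook form (our notation, not the exact LSNDP model). The master

```latex
\min_{x \in X, \; \eta} \; c^{\top} x + \eta
\quad \text{s.t.} \quad
\eta \ge \bar{\lambda}^{\top} (b - B x) \quad \forall \, \bar{\lambda} \in \Lambda_{\mathrm{opt}} ,
\qquad
\bar{\mu}^{\top} (b - B x) \le 0 \quad \forall \, \bar{\mu} \in \Lambda_{\mathrm{feas}} ,
```

is solved over a growing set of optimality cuts (dual extreme points) and feasibility cuts (dual extreme rays) generated by the subproblem; a partial Benders decomposition additionally keeps aggregated subproblem information in the master from the start.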

Infrastructure network design constitutes a major step in the planning of a transportation network whose purpose is to improve the mobility of the inhabitants of a city or metropolitan area. Since it is generally too expensive to connect all the existing facilities, one must determine a subnetwork that serves the traffic demand at best. Depending on the application, different optimality measures can be considered. In the area of passenger transportation, the aim is to bring the infrastructure close to potential customers. In this framework, the goal may be to minimize the maximum routing cost for an origin-destination pair when using the new network. Alternatively, the traffic between an origin and a destination may be considered as captured if the cost or travel time when using the network is not larger than the cost or travel time of the best alternative solution (not using the new network). In this case, one might select a (sub)network from an underlying network with the aim of capturing or covering as much traffic as possible for a reasonable construction cost. 41 is devoted to this problem, called the Maximum Covering Network Design Problem (MC), as well as to the closely related Partial Covering Network Design Problem (PC), in which one minimizes the network design cost for building the network under the constraint that a minimum percentage of the total traffic demand is covered. After presenting models for problems (MC) and (PC), we propose exact methods based on Benders decomposition. Our computational experiments show that our Benders implementations are competitive with exact and non-exact methods in the literature.

We study routing problems that arise in the context of last mile delivery when multiple delivery options are proposed to the customers. The most common option to deliver packages is home/workplace delivery. Besides, the delivery can be made to pick-up points such as dedicated lockers or stores. In recent years, a new concept called trunk/in-car delivery has been proposed. Here, customers' packages can be delivered to the trunks of cars. Our goal is to model and develop efficient solution approaches for routing problems in this context, in which each customer can have multiple shipping locations. First, we study the single-vehicle case in the considered context, which is modeled as a Generalized Traveling Salesman Problem with Time Windows (GTSPTW). Four mixed integer linear programming formulations 27, 29 and an efficient branch-and-cut algorithm are proposed 28. Then, we study the multi-vehicle case which is denoted Generalized Vehicle Routing Problem with Time Windows (GVRPTW). An efficient column generation based heuristic is proposed to solve it 36.

To date, the research on agriculture vehicles in general, and Agriculture Mobile Robots (AMRs) in particular, has focused on a single vehicle (robot) and its agriculture-specific capabilities. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate over huge land areas without human operators to achieve precision farming. In this project, we consider the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem, applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. This problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state of the art and categorize solution approaches as centralised, distributed or decentralised, based on the underlying decision-making context, and we discuss open challenges in applying distributed and decentralised coordination approaches to this problem 24, 30.

Group testing is a screening strategy that involves dividing a population into several disjoint groups of subjects. In its simplest implementation, each group is tested with a single test in the first phase, while in the second phase only subjects in positive groups, if any, need to be tested again individually. In this project, we address the problem of group testing design, which aims to determine a partition into groups of a finite population in such a way that cardinality constraints on the size of each group and a constraint on the expected total number of tests are satisfied, while minimizing a linear combination of the expected numbers of false negative and false positive classifications. First, we show that the properties and model introduced by Aprahamian et al. 49 can be extended to the group test design problem, which is then modeled as a constrained shortest path problem on a specific graph. We design and implement an ad hoc algorithm to solve this problem. On instances based on Santé Publique France data on COVID-19 screening tests, the results of the computational experiments are very promising 37.

Home chemotherapy is a rising trend in many countries. Home chemotherapy services aim to help cancer patients remain safe and comfortable in their own homes while continuing to receive their treatment, avoiding hospitalization or admission to outpatient chemotherapy facilities. Home chemotherapy also contributes to the employability of patients, enabling them to remain active for longer periods and in better health. Besides increasing the comfort of the patients, home chemotherapy may help relieve congestion in outpatient chemotherapy services. At the operational level, a complex scheduling problem underlies the daily home chemotherapy planning process. It calls for the determination of an integrated drug production and administration schedule. Indeed, injectable preparations for cancer treatment have a short stability time, i.e. they may expire within a few hours after their production start time. Consequently, they may not be produced ahead of time and then stored. The resulting absence of inventories implies that the production of drugs has to be carefully scheduled jointly with their administration. To address this integrated problem, we use a variable neighborhood search that iteratively creates new drug production schedules meant to relax as much as possible the constraints imposed on the administration operations by production start times. A local search component mainly concentrates on optimizing the administration sequences, while a linear program is used to find optimal start times for drug production and administration each time a promising set of sequences is found 32.

The discrete ordered median problem consists in locating

The segmented isotonic regression problem consists in fitting a curve to a cloud of data points under the conditions that the fitted curve must be non-increasing (or non-decreasing) and piecewise constant (or, equivalently, stepwise), with a predefined limited number of pieces (also referred to as steps or blocks in what follows). This problem is inspired by the bidding rules that large consumers or a pool of small consumers must comply with when participating in an electricity market. Their bids for purchasing electricity in these markets must often be submitted in the form of a non-increasing stepwise price-consumption curve, for which the maximum number of bid blocks is also constrained. These curves reflect how consumers value electricity and therefore their sensitivity to its price (which is referred to as consumers' elasticity). With the advent of Information and Communications Technologies and the roll-out of the so-called smart grids, small consumers of electricity are being provided with the means to actively adjust their consumption in response to the electricity price. However, their consumption patterns are still uncertain, dynamic and affected by factors other than the electricity price. The result is that estimating a bidding curve that properly reflects consumers' sensitivity to the electricity price is a statistical challenge.

In 42, we provide an algorithm that efficiently computes such a curve from a set of price-consumption observations. To ease the computational burden of the proposed algorithm, we develop various strategies to efficiently calculate upper and lower bounds that substantially reduce the number of paths to be explored. Numerical results reveal that our algorithm is able to provide the globally optimal monotone stepwise curve for samples with thousands of data points in less than a few hours.
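The flavor of the problem can be conveyed by a brute-force sketch (ours, not the bound-based path algorithm of 42): fit a non-increasing stepwise curve with at most a given number of blocks to ordered observations, minimizing the squared error, restricted here to partitions whose block means are already non-increasing.

```python
from itertools import combinations

def segmented_isotonic(y, max_blocks):
    """Brute-force non-increasing stepwise fit of the ordered sample y with
    at most max_blocks blocks, among partitions with monotone block means."""
    m, best = len(y), (float("inf"), None)
    for b in range(1, max_blocks + 1):
        for cuts in combinations(range(1, m), b - 1):
            bounds = (0,) + cuts + (m,)
            blocks = [y[i:j] for i, j in zip(bounds, bounds[1:])]
            means = [sum(blk) / len(blk) for blk in blocks]
            if any(means[k] < means[k + 1] for k in range(len(means) - 1)):
                continue  # keep only non-increasing step levels
            sse = sum((v - mu) ** 2 for blk, mu in zip(blocks, means) for v in blk)
            if sse < best[0]:
                best = (sse, means)
    return best

# Tiny made-up sample with an exact 3-block non-increasing structure.
sse, levels = segmented_isotonic([5, 5, 3, 3, 1, 1], 3)
```

The enumeration is exponential in the number of cuts, which is precisely why 42 resorts to a shortest-path view with pruning bounds for realistic sample sizes.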

Given a graph, its line graph is another graph whose vertices are the edges of the original one. Furthermore, an edge links two nodes of the line graph if and only if the corresponding edges of the original graph share a node. A graph is said to be line-invertible if it is isomorphic to the line graph of some other graph, called the root. Although obtaining the line graph of a given graph is straightforward, the reverse is not a trivial task.
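The forward construction is indeed straightforward; a small sketch on a toy input:

```python
def line_graph(edges):
    """Line graph of the graph given by an edge list: vertices of the line
    graph are indices into edges; two are adjacent iff the edges share a node."""
    edges = [frozenset(e) for e in edges]
    adj = set()
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if edges[i] & edges[j]:      # the two edges of G share an endpoint
                adj.add((i, j))
    return adj

# The line graph of a triangle is again a triangle: 3 vertices, 3 edges.
triangle = [(0, 1), (1, 2), (0, 2)]
```

The reverse direction is harder because distinct graphs can have isomorphic line graphs, and some graphs are not line graphs of anything at all.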

In genetics, haplotypes codify certain regions of the genome that show a statistically significant variability within a population. It has been observed that such variability plays an important role in human variation and genetic diseases. Haplotype phasing consists of estimating the haplotypes that produced a current population of genotypes, and is a primary problem in the analysis of genetic data. In this context, consistency relations between genotypes that could have been originated from a common ancestor are codified by a graph. Root graph reconstruction is useful here to estimate the original population size, that is, the number of generating haplotypes. However, if all the consistency relations are considered, sometimes reconstruction from the graph is not possible. In other words, the graph encoding consistency relations is not line-invertible. In these cases, one needs to disregard some of these relations, that is, to delete some of the edges of the consistency graph. A combinatorial problem then arises, namely which edges to remove so that the graph is disrupted as little as possible. In 22, we study the problem of identifying a set of edges of minimum cardinality that have to be deleted from a graph so that it becomes line-invertible. We propose different integer linear programming models as well as valid inequalities to strengthen them. Our computational experiments allow empirical comparison between the different models and ultimately demonstrate their utility.

Power systems face higher flexibility requirements from generation to consumption due to the increasing penetration of non-controllable distributed renewable energy. In this context, demand-side management aims at reducing excessive load fluctuations and matching the price of energy to its real cost for the grid. Pricing models for demand-side management are traditionally used to control electricity demand. First, we proposed bilevel pricing models to explore the relationship between energy suppliers and customers connected to a smart grid. The smart grid technology allows customers to track hourly prices and shift their demand accordingly, and allows the provider to observe the actual demand response to its pricing strategy. Moreover, we assumed that the smart grid optimizes the usage of a renewable energy generation source and a storage capacity. Results over a rolling horizon were obtained (Léonard Von Niederhausern PhD thesis 57). Next, we considered four types of actors: furnishers sell electricity; local agents trade and consume energy; aggregators trade energy and provide it to end-users, who consume it. This gives rise to three levels of optimization. The interaction between aggregators and their end-users is modeled with a bilevel program, and so is the interaction between furnishers on one side and local agents and aggregators on the other. Since solving bilevel programs is difficult in itself, solving trilevel programs requires particular care. We proposed three possible approaches, two of them relying on a characterization of the intermediate optimization level 12, 57. Finally, Time-and-Level-of-Use is a recently proposed energy pricing scheme, designed for the residential sector, that provides suppliers with robust guarantees on the consumption. We formulate the supplier decision as a bilevel, bi-objective problem optimizing both financial loss and guarantee. A decomposition method related to the optimal value transformation is proposed.
It allows the computation of an exact solution by finding possible Pareto optimal candidate solutions and then eliminating dominated ones. Numerical results on experimental residential power consumption data show that the method effectively finds the optimal candidate solutions while optimizing costs only or incorporating risk aversion at the lower level 15.

One of the most frequently used approaches to solve linear bilevel optimization problems consists in replacing the lower-level problem with its Karush-Kuhn-Tucker (KKT) conditions and reformulating the KKT complementarity conditions using techniques from mixed-integer linear optimization. The latter step requires determining big-M constants, a choice that is known to be delicate.
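For a lower level of the generic form min over y of d'y subject to Cy <= b - Ax and y >= 0 (notation ours), the standard linearization of the KKT complementarity conditions introduces binaries z_i:

```latex
\lambda_i \le M z_i , \qquad
(b - A x - C y)_i \le M (1 - z_i) , \qquad
z_i \in \{0, 1\} ,
```

forcing each lower-level constraint to be either active or to carry a zero multiplier; the difficulty lies in choosing a provably valid constant M.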

One of the main concerns in management and economic planning is to sell the right product to the right customer for the right price. Companies in retail and manufacturing employ pricing strategies to maximize their revenues.

In the Rank Pricing Problem (RPP), a firm intends to maximize its profit through the pricing of a set of products to sell. Customers are interested in purchasing at most one product among a subset of products. To do so, they are endowed with a ranked list of preferences and a budget. Their choice rule consists in purchasing the highest-ranked product in their list whose price is below their budget. In 44, we consider an extension of the RPP, the Rank Pricing Problem with Ties (RPPT), in which we allow for indifference between products in the customers' preference lists. We propose different mixed-integer programming formulations for the problem and valid inequalities to strengthen them. Computational experiments assess the performance of the proposed approaches.
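The customers' choice rule can be made concrete with a small simulation (toy data, not an instance from 44): each customer buys the highest-ranked affordable product in her list, or nothing.

```python
def revenue(prices, customers):
    """Firm revenue under the RPP choice rule.  prices: dict product -> price;
    customers: list of (prefs, budget) with prefs ordered most to least
    preferred."""
    total = 0
    for prefs, budget in customers:
        for product in prefs:             # scan in preference order
            if prices[product] <= budget:
                total += prices[product]  # highest-ranked affordable product
                break                     # at most one purchase per customer
    return total

# Made-up example: two products, three customers with different budgets.
prices = {'a': 8, 'b': 5}
customers = [(['a', 'b'], 10), (['a', 'b'], 6), (['b'], 4)]
```

Evaluating a fixed price vector is easy; the hard (bilevel) part of the RPP is choosing the prices, since each price change reshuffles which product every customer buys.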

Next, in 18, we analyze a product pricing problem with single-minded customers, each interested in buying a bundle of products. The objective is to maximize the total revenue, and we assume that supply is unlimited for all products. We contribute to a missing piece of the literature by giving mathematical formulations for this single-minded bundle pricing problem. We first present a mixed-integer nonlinear program with bilinear terms in the objective function and the constraints. By applying classical linearization techniques, we obtain two different mixed-integer linear programs. We then study the polyhedral structure of the linear formulations and obtain valid inequalities based on an RLT-like framework. We develop a Benders decomposition to project strong cuts from the tightest model onto the lighter models. We conclude this work with extensive numerical experiments to assess the quality of the mixed-integer linear formulations, as well as the performance of the cutting plane algorithms and the impact of the preprocessing on computation times.

Consider a graph

First, we addressed a multi-product location problem in which a retail firm has several malls at known locations. A particular product comes in p types. Each mall has a limited capacity for products sold at that location, so the firm has to choose which products to sell at which mall. Furthermore, the firm can apply discrete levels of discount to the products. The objective of the firm is to decide which products to sell at which mall, and with what level of discount, so that its profit is maximized. Consumers are located at points of the region. Each consumer has a different set of acceptable products and will purchase one of these, or none if it is not convenient for them. Consumers maximize their utility. The agents (firm and consumers) play a Stackelberg game, in which the firm is the leader and the customers are the followers. Once the firm decides the products to sell at each mall and the possible discounts, consumers purchase (or not) one of their acceptable products wherever their utility is maximized. We model the problem using bilevel formulations, which are compared on known instances from the literature 43.

We address the problem of locating, designing and pricing electric car charging stations by explicitly integrating the preferences of the users into the decision-making process. More precisely, we consider a strategic-operational decision-making process whose goal is to generate revenue and smooth out the demand over time. The latter objective is particularly important in the current context of uncertain renewable energy production. The problem is modeled as a bilevel one. The users' behaviour is represented by a choice in a preference list defined according to threshold values on prices and distances. The integer variables of the lower-level problem can be relaxed. The bilevel model is reformulated as a single-level MIP. Numerical results are provided on randomly generated instances.
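The threshold-based user behaviour can be sketched as follows. The `choose_station` helper, the station data and the tie-breaking order (cheaper first, then closer) are all assumptions made for illustration; the paper defines the preference list from threshold values on prices and distances.

```python
def choose_station(stations, price_threshold, dist_threshold):
    """Pick a charging station from a threshold-based preference list:
    keep the stations within both thresholds, then take the most preferred
    one (assumed ordering: cheaper first, then closer); None = no charge."""
    admissible = [s for s in stations
                  if s["price"] <= price_threshold and s["dist"] <= dist_threshold]
    if not admissible:
        return None
    return min(admissible, key=lambda s: (s["price"], s["dist"]))["id"]

# Hypothetical stations: price per charge and distance to the user.
stations = [{"id": "A", "price": 5.0, "dist": 2.0},
            {"id": "B", "price": 3.0, "dist": 4.0},
            {"id": "C", "price": 3.0, "dist": 1.0}]
```

For a user with thresholds (4.0, 4.0), station A is ruled out by price and C is preferred over B by distance.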

We formulate a Stackelberg security game that coordinates resources in a border patrol problem. In this security domain, resources from different precincts have to be paired to conduct patrols along the border due to logistic constraints. Given this structure, the set of pure defender strategies is of exponential size. We describe the set of mixed strategies using a polynomial number of variables but exponentially many constraints, which come from the matching polytope. We then embed this description in a mixed-integer formulation to compute the Strong Stackelberg Equilibrium efficiently with a branch-and-cut scheme. Since the optimal patrol solution is a probability distribution over this exponential-size set, we also introduce an efficient sampling method that can be used to deploy the security resources every shift. Our computational results evaluate the efficiency of the branch-and-cut scheme and the accuracy of the sampling method. We show the applicability of the methodology by solving a real-world border patrol problem 17.
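The per-shift sampling step can be illustrated on a tiny explicit distribution. In the paper the support of the mixed strategy is exponential and is never enumerated; the `sample_patrol` helper and the three resource pairings below are hypothetical.

```python
import random

def sample_patrol(pure_strategies, probs, rng):
    """One shift's deployment: draw a pure defender strategy (a pairing
    of precinct resources into border patrols) from the mixed strategy."""
    return rng.choices(pure_strategies, weights=probs, k=1)[0]

# Hypothetical equilibrium mixed strategy over three resource pairings.
strategies = [("r1-r2", "r3-r4"), ("r1-r3", "r2-r4"), ("r1-r4", "r2-r3")]
probs = [0.5, 0.3, 0.2]

rng = random.Random(0)
counts = {s: 0 for s in strategies}
for _ in range(10_000):
    counts[sample_patrol(strategies, probs, rng)] += 1
# Empirical deployment frequencies approach the equilibrium probabilities.
```

Repeating the draw shift after shift reproduces the equilibrium frequencies, which is what makes the randomized patrols unpredictable yet optimal in expectation.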

The scientific interest in computational bilevel optimization has increased considerably over the last decade and is still growing. Independent of whether the bilevel problem itself contains integer variables or not, many state-of-the-art solution approaches for bilevel optimization make use of techniques that originate from mixed-integer programming. These techniques include branch-and-bound methods, cutting planes and, thus, branch-and-cut approaches, or problem-specific decomposition methods. In 46, we review bilevel-tailored approaches that exploit these mixed-integer programming techniques to solve bilevel optimization problems.

During the last decades, the European gas market has undergone an ongoing liberalization, resulting in the so-called entry-exit market system. The main goal of this market reorganization is the decoupling of trading and actual gas transport. To achieve this goal within the European entry-exit market, gas traders interact with transport system operators (TSOs) via bookings and nominations. A booking is a capacity-right contract in which a trader reserves a maximum injection or withdrawal capacity at an entry or exit node of the TSO's network. On a day-ahead basis, these traders are then allowed to nominate an actual load flow up to the booked capacity. To this end, the traders specify the actual amount of gas to be injected to or withdrawn from the network such that the total injection and withdrawal quantities are balanced. On the other hand, the TSO is responsible for the transport of the nominated amounts of gas. By having signed the booking contract, the TSO guarantees that the nominated amounts can actually be transported through the network. More precisely, the TSO needs to be able to transport every set of nominations that complies with the signed booking contracts. Thus, an infinite number of possible nominations must be anticipated and checked for feasibility when the TSO accepts bookings. As a consequence, the entry-exit market decouples trading and transport. However, it also introduces many new challenges, e.g., checking the feasibility of bookings or computing the bookable capacities on the network.

Deciding the feasibility of a booking can be seen as an adjustable robust feasibility problem, where the set of booking-compliant nominations is the uncertainty set. The feasibility of a booking, if the underlying network is a tree, can be decided in polynomial time. In 47, we extend the knowledge on the frontier of hardness by showing that deciding the feasibility of a booking on single-cycle networks is in
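Why single nominations are easy to check on trees can be sketched as follows: a balanced nomination induces a unique flow on every edge, which only needs to be compared against capacities. The `tree_flow_feasible` helper below is a deliberate simplification that ignores gas physics (pressure bounds along the pipes), which the actual models include, and checks one nomination rather than all booking-compliant ones.

```python
from collections import defaultdict

def tree_flow_feasible(n, edges, nomination):
    """On a tree, a balanced nomination (injections > 0, withdrawals < 0)
    induces a unique flow on every edge: the net surplus of the subtree
    hanging below it. We compute that flow by a DFS from node 0 and
    check it against the edge capacities."""
    assert abs(sum(nomination)) < 1e-9, "nomination must be balanced"
    adj = defaultdict(list)
    for idx, (u, v, cap) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    flow = [0.0] * len(edges)
    visited = [False] * n

    def dfs(u):
        visited[u] = True
        net = nomination[u]
        for v, idx in adj[u]:
            if not visited[v]:
                f = dfs(v)      # net surplus pushed from v's subtree to u
                flow[idx] = f
                net += f
        return net

    dfs(0)
    return all(abs(f) <= cap for f, (_, _, cap) in zip(flow, edges))
```

On the path 0-1-2 with capacity 5 on both pipes, injecting 4 units at node 0 and withdrawing them at node 2 is feasible; with capacity 3 it is not. Deciding a whole booking means quantifying over all such balanced, booking-compliant nominations at once, which is where the adjustable robust structure comes from.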

Bilevel optimization problems embed the optimality conditions of a sub-problem into the constraints of
a decision-making process. A general question of bilevel optimization occurs where the lower-level is
solved (only) to near-optimality. Solving bilevel problems under limited deviations of the lower-level
variables was introduced under the term “
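A minimal sketch of this near-optimal setting, on a hypothetical two-by-two pricing toy: the leader guards pessimistically against any follower response within `delta` of the follower's optimum, and the best leader decision can change as `delta` grows. The enumeration below is illustrative only.

```python
def pessimistic_value(leader_options, follower_options, F, f, delta):
    """Bilevel toy with a near-optimal follower: for each leader decision
    x, the follower may return ANY response within `delta` of her optimum,
    and the leader guards against the worst such response. Both players
    maximize; returns (best leader decision, guaranteed leader value)."""
    best = None
    for x in leader_options:
        f_star = max(f(x, y) for y in follower_options)
        near_opt = [y for y in follower_options if f(x, y) >= f_star - delta]
        worst = min(F(x, y) for y in near_opt)  # pessimistic leader value
        if best is None or worst > best[1]:
            best = (x, worst)
    return best

# Hypothetical toy: leader sets price x, follower (valuation 3) buys
# (y = 1) or not (y = 0); F is revenue, f is consumer surplus.
F = lambda x, y: x * y
f = lambda x, y: (3 - x) * y
```

With `delta = 0` the leader safely prices at 2 (value 2); allowing a deviation of `delta = 1`, a near-optimal follower might walk away at that price, and the robust choice drops to price 1 (value 1).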

Utocat (2018-2020): study of optimization problems arising in the blockchain.

In collaboration with Ecole des Mines de Saint-Etienne (Gardanne), IFSTTAR (Champs-sur-Marne), HappyChic (Tourcoing).
This project addresses human resources management in warehouses which supply either sales points (B2B) or final consumers (B2C). Nowadays, such warehouses are under pressure, mainly due to the no-inventory policy at the sales points and to the constant growth of e-commerce sales in France and Europe. In terms of logistics, this translates into an increasing number of parcels to prepare and ship to satisfy an order, which is typically known only a few hours in advance. Moreover, the total number of products to be packed varies very significantly from day to day, by a factor of at least 3 (https://

The novelty of the project is twofold: (1) the human factor is explicitly taken into account: it is integrated into the mathematical models and algorithms developed for the project, with the aim of improving the quality of employees' work while ensuring the efficiency of the logistic system; (2) problems at different decision levels are integrated and tackled jointly. At the tactical level, the main issues are workload smoothing and the management of the storage zone. At the operational level, the major issues concern the rearrangement of the picking zone, the picking tours, and the dynamic reorganization of activities to manage uncertainties.

Bilevel optimization is a branch of mathematical optimization that deals with problems whose constraints embed an auxiliary optimization problem. The F.R.S.-FNRS research project “bilevel optimization” (2018 – 2020) will study such bilevel problems with bilinear objectives and simple second level problems. Each follower chooses one strategy in a given fixed set of limited size. Two classes of such problems will be studied: Pricing Problems and Stackelberg Security Games.

In pricing problems, prices for products must be determined to maximize the revenue of a leader given specific behaviors of customers (followers). More precisely, we will consider the single-minded pricing problem and the rank pricing problem.

In Stackelberg games, mixed strategies to cover targets must be determined in order to maximize the defender's expected payoff, given that attackers (followers) attack targets that maximize their own payoffs.

A hybrid approach is a method that combines, as effectively as possible, the components of several (at least two) solution approaches from different communities: exact methods, heuristics, metaheuristics. In this project, we are interested in hybrid methods known as matheuristics, which are based on mathematical programming and on techniques from metaheuristics. Our work focuses on theoretical and methodological aspects to allow the emergence of new and efficient methods to tackle challenging and (very) large optimization problems. Such methods are applied to the solution of transportation problems that arise particularly in city logistics.

Annals of Operations Research, Applied Computing and Informatics, Central European Journal of Operations Research, Computers & Operations Research, Computational Optimization and Applications, Discrete Applied Mathematics, EURO Journal on Transportation and Logistics, European Journal of Operational Research, IISE Transactions, INFORMS Journal on Computing, International Journal of Management Science and Engineering Management, Mathematical Programming Computation, Networks, Omega, Operations Research, Optimization and Engineering, RAIRO - Operations Research, Transportation Science, IEEE Transactions on Power Systems, IEEE Transactions on Smart Grids, IEEE Power Engineering Letters: Luce Brotcorne, Diego Cattaruzza, Bernard Fortz, Martine Labbé, Maxime Ogier, Frédéric Semet.

Innovation research network days (Journées du réseau de recherche sur l'innovation; RRI) Lille, November 2020, “De l'entrepôt au client final: exemples d'optimisation de la chaîne logistique”, Luce Brotcorne