INOCS is a cross-border France-Belgium project-team within the Applied Mathematics, Computation and Simulation Inria research domain. The main goal of this team is the study of optimization problems involving complex structures. The scientific objectives of INOCS relate to both modeling and methodological concerns. The INOCS team will focus on:

Even if CS problems are in general NP-hard due to their complex nature, exact solution methods or matheuristics (heuristics based on exact optimization methods) will be developed by INOCS. The scientific contributions of INOCS will result in a toolbox of models and methods to solve challenging real-life problems.

The research program of INOCS is developed by moving alternately between two axes:

Even if these two axes are developed sequentially in a first phase, their interactions will lead us to explore them jointly in the mid-term.

An optimization problem consists in finding a best solution from a set of feasible solutions. Such a problem can be typically modeled as a mathematical program in which decision variables must
(i) satisfy a set of constraints that translate the feasibility of the solution and
(ii) optimize some (or several) objective function(s).
Optimization problems are usually classified into strategic, tactical and operational problems, according to the types of decisions to be made.
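As a minimal illustration of this setting (an illustrative sketch with made-up data, not INOCS software), the following pure-Python snippet solves a tiny linear program by enumerating the vertices of its feasible polygon:

```python
# Tiny linear program solved by brute-force vertex enumeration (illustrative
# sketch only): maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
from itertools import combinations

# each constraint written as: a*x + b*y <= c
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    # intersection point of the two constraint boundaries (Cramer's rule)
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries, no vertex
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: p[0] + 2 * p[1])
print(best, best[0] + 2 * best[1])
```

An optimal solution of a linear program is always attained at a vertex, which is what makes this brute-force sketch valid for such a tiny instance.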

We consider that an optimization problem presents a complex structure (CS) when it involves decisions of different types/natures (i.e., strategic, tactical or operational) and/or presents some hierarchical leader-follower structure. The set of constraints can usually be partitioned into global constraints, linking variables associated with the different types/natures of decisions, and constraints involving each type of variable separately. Optimization problems with complex structure lead to extremely challenging problems, since a global optimum with respect to the whole set of decision variables and constraints must be determined.

Significant progress has been made in optimization to solve academic problems; nowadays, large-scale instances of some classical problems can be solved efficiently. Our vision within INOCS is to make the same advances while addressing CS optimization problems. To achieve this goal, we aim to develop global solution approaches, at the opposite of the current trend. INOCS team members have already proposed successful methods along these research lines to model and solve CS problems (e.g. ANR project RESPET, Brotcorne et al. 48, 49, Gendron et al. 50, 51, 52, and Strack et al. 55). However, these are preliminary attempts, and a number of challenges regarding modeling and methodological issues still have to be met.

A classical optimization problem can be formulated as follows:

  min g(x)
  s.t. x ∈ X.

In this problem, X is the set of feasible solutions; typically, in mathematical programming, X is defined by a set of constraints on the decision variables x.

The INOCS team plans to address optimization problems where two types of decisions are addressed jointly and are interrelated. More precisely, let us assume that variables x and y are associated with these two types of decisions. A generic model for CS problems is then:

  min g(x, y)
  s.t. x ∈ X,
       (x, y) ∈ XY,
       y ∈ Y(x).

In this model, X is the set of feasible values for x alone, XY is the set of jointly feasible values for (x, y), which translates the interrelations between the two types of decisions, and Y(x) is the set of feasible values for y for a given x.

The INOCS team plans to model CS optimization problems according to three optimization paradigms: large-scale complex structure optimization, bilevel optimization and robust/stochastic optimization. These paradigms instantiate specific variants of the generic model.

Large-scale complex structure optimization problems can be formulated through the simplest variant of the generic model given above. In this case, it is assumed that

Bilevel programs allow the modeling of situations in which a decision-maker, hereafter the leader, optimizes his/her objective by explicitly taking into account the response of another decision-maker or set of decision-makers (the follower) to his/her decisions. Bilevel programs are closely related to Stackelberg (leader-follower) games as well as to the principal-agent paradigm in economics. In other words, bilevel programs can be considered as supply-demand equilibrium models where the demand is itself the result of another mathematical problem.
Bilevel problems can be formulated through the generic CS model when
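The leader-follower logic can be illustrated with a toy pricing game, solved here by plain enumeration (an illustrative sketch with made-up utilities, not a method from the team's papers):

```python
# Toy bilevel (Stackelberg) pricing example solved by brute force: the leader
# sets a unit price t, the follower then chooses a quantity q maximizing its
# own utility, and the leader anticipates this reaction (optimistic rule in
# case of follower ties). Utilities and ranges are hypothetical.
def follower_best_responses(t, quantities=range(11)):
    # follower utility for quantity q under price t (toy concave utility)
    utility = {q: 10 * q - q * q - t * q for q in quantities}
    best = max(utility.values())
    return [q for q in quantities if utility[q] == best]

best_t, best_revenue = None, float("-inf")
for t in range(1, 11):  # leader's candidate prices
    revenue = max(t * q for q in follower_best_responses(t))
    if revenue > best_revenue:
        best_t, best_revenue = t, revenue

print(best_t, best_revenue)
```

Note that the leader cannot simply optimize over (t, q) jointly: q is constrained to be optimal for the follower's own objective, which is exactly the hierarchical structure described above.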

In robust/stochastic optimization, it is assumed that the data of a problem are subject to uncertainty. In stochastic optimization, the probability distributions governing the data are known, and the objective function involves mathematical expectation(s). In robust optimization, uncertain data take values within specified sets, and the function to optimize is typically formulated in terms of a min-max objective (the solution must be optimal for the worst-case scenario). A standard way to model uncertainty on the data is to define a set of possible scenarios, described explicitly or implicitly. In stochastic optimization, in addition, a probability of occurrence is associated with each scenario and the expected objective value is optimized.
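The difference between the two paradigms can be seen on a toy two-scenario instance (hypothetical costs and probabilities, for illustration only):

```python
# Scenario-based illustration of robust vs stochastic optimization.
# Two candidate decisions, two data scenarios: robust optimization minimizes
# the worst-case cost, stochastic optimization minimizes the expected cost.
costs = {"a": [0.0, 10.0], "b": [4.0, 5.0]}  # cost of each decision per scenario
probs = [0.9, 0.1]                           # scenario probabilities (stochastic case)

robust_choice = min(costs, key=lambda d: max(costs[d]))
stochastic_choice = min(costs, key=lambda d: sum(p * c for p, c in zip(probs, costs[d])))
print(robust_choice, stochastic_choice)
```

Here the two criteria disagree: the robust criterion hedges against the bad scenario of decision "a", while the stochastic criterion exploits its low expected cost.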

Standard solution methods developed for CS problems solve independent subproblems associated with each type of variables, without explicitly integrating their interactions or integrating them only iteratively in a heuristic way. However, these subproblems are intrinsically linked and should be addressed jointly. In mathematical optimization, a classical approach is to approximate the convex hull of the integer solutions of the model by its linear relaxation. The main solution methods are (1) polyhedral methods, which strengthen this linear relaxation by adding valid inequalities, and (2) decomposition methods (Dantzig-Wolfe, Lagrangian relaxation, Benders decomposition), which aim to obtain a better approximation and solve it by generating extreme points/rays. The main challenges are (1) the analysis of the strength of the cuts and their separation for polyhedral methods, (2) the decomposition schemes and (3) the generation of extreme points/rays for decomposition methods.
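As a small illustration of the decomposition idea, the sketch below applies Lagrangian relaxation to a tiny 0-1 knapsack instance (made-up data, not a problem from the team's work); relaxing the capacity constraint with a multiplier yields an easily computable bound:

```python
# Minimal Lagrangian relaxation sketch on a tiny 0-1 knapsack instance.
# Relaxing the capacity constraint with multiplier lam >= 0 decomposes the
# problem item by item; the relaxed value is an upper bound on the optimum.
from itertools import product

values, weights, capacity = [10, 7, 4], [5, 4, 3], 7

# exact optimum by brute force (3 items -> 8 candidate subsets)
opt = max(
    sum(v * x for v, x in zip(values, xs))
    for xs in product([0, 1], repeat=3)
    if sum(w * x for w, x in zip(weights, xs)) <= capacity
)

def lagrangian_bound(lam):
    # relaxed problem: pick item i iff its adjusted profit v_i - lam*w_i > 0
    return sum(max(0.0, v - lam * w) for v, w in zip(values, weights)) + lam * capacity

# crude "dual search": evaluate the bound on a grid of multipliers
best_bound = min(lagrangian_bound(0.25 * k) for k in range(13))
print(opt, best_bound)  # the bound is always >= the optimum
```

On this instance the best grid bound (13.5) coincides with the linear relaxation value, as expected when the relaxed subproblem has the integrality property.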

The main difficulty in solving bilevel problems stems from their nonconvexity and nondifferentiability. Even linear bilevel programs, where all functions involved are affine, are computationally challenging despite their apparent simplicity. Up to now, much research has been devoted to bilevel problems with linear or convex follower problems. In this case, the problem can be reformulated as a single-level program involving complementarity constraints, exemplifying the dual nature, continuous and combinatorial, of bilevel programs.

In energy, the team mainly focuses on pricing models for demand-side management, on bid definition in the energy market, and on the design and pricing of electric car charging stations.

Demand-side management methods are traditionally used to control electricity demand, which has recently become quite irregular and has resulted in supply inefficiencies. We have explored the relationship between energy suppliers and customers connected to a smart grid. The smart grid technology allows customers to keep track of hourly prices and shift their demand accordingly, and allows the provider to observe the actual demand response to its pricing strategy. We tackle energy pricing problems through bilevel optimization approaches. Some research works in this domain are supported by bilateral grants with EDF.

The increasing number of agents, with different characteristics, interacting on the energy market leads to the definition of new types of bidding processes. We have modeled this problem as a bilevel one where the lower level is the entity allocating the bids (the independent system operator, ISO).

The proliferation of electric cars in cities has led to the challenging problem of designing and pricing charging stations in order to smooth the demand over time. We are modeling this problem as a bilevel one where the lower level represents the choice of users within a preference list.

In transportation and logistics, the team mainly addresses integrated problems, which require taking different types of decisions into account simultaneously. Examples are location and routing, inventory management and routing, or staff scheduling and warehouse operations management. Such problems arise from the supply chain design level down to the logistics facility level.

In telecommunications, the team mainly focuses on network design problems and on routing problems. Such problems are optimization problems with complex structure, since the optimization of capacity installation and traffic flow routing have to be addressed simultaneously.

Group testing is a screening strategy that involves dividing a population into several disjoint groups of subjects. In its simplest implementation, each group is tested with a single test in the first phase; in the second phase, only subjects in positive groups, if any, need to be tested again individually.
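Under this classical two-stage (Dorfman) scheme, the expected number of tests per person for prevalence p and group size g is 1/g + 1 - (1-p)^g. The sketch below evaluates this formula for an assumed prevalence (illustrative only; it uses the expected number of tests as criterion, not the misclassification-based criterion of the software described here):

```python
# Back-of-the-envelope model of two-stage (Dorfman) group testing.
# A group of size g tests positive with probability 1 - (1 - p)**g, in which
# case all g members are retested individually.
def expected_tests_per_person(p, g):
    return 1.0 / g + 1.0 - (1.0 - p) ** g

p = 0.01  # assumed prevalence (hypothetical value, for illustration)
best_g = min(range(2, 51), key=lambda g: expected_tests_per_person(p, g))
print(best_g, expected_tests_per_person(p, best_g))
```

At a 1% prevalence, pooling cuts the expected number of tests to roughly a fifth of individual testing, which is what makes the strategy attractive for large-scale screening.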

To contribute to the effort to tackle the COVID-19 sanitary crisis, we developed this software, which creates groups of individuals to test via the group testing technique while minimizing a linear combination of the expected numbers of false negative and false positive classifications.

The test design problem is modeled as a constrained shortest path problem on a specific graph, and we designed and implemented an ad hoc algorithm to solve it. We validated the algorithm on instances based on Santé Publique France data on COVID-19 screening tests.

This software is a toolbox containing algorithms that are frequently used to solve optimization problems tackled by the team (but not only).

The objective of the toolbox is to provide a set of code skeletons that allow researchers to integrate adequate data structures and basic algorithms for the different complex structures that appear in the optimization problems we study. The current version of the toolbox contains classical heuristic tools (generic local search) to solve, among others, the vehicle routing problem and its variants. It also contains code to solve, exactly and heuristically, the Shortest Path Problem with Resource Constraints that is usually encountered in the solution of problems with Branch-and-Price algorithms.

The future objective is to include automatic reformulation tools for bilevel optimization problems and state-of-the-art codes for the development of decomposition methods.

We reviewed the Operations Research models and methods to plan and manage City Logistics systems, in particular their supply components, in 42. We considered the main planning issues and challenges and reviewed the proposed methodologies. The article concludes with a discussion of perspectives for City Logistics and decision-support methodological developments.

In 32, we focus on two-tier city logistics systems, which play a very important role nowadays in the management of urban freight activities. Although several city authorities have promoted different measures to foster the implementation of small urban consolidation centers in a two-tier system, only a few authors have addressed the joint problem of operating these facilities and providing services to customers. We show how the problem can be modeled as a new variant of the bin packing problem, for which we provide a mixed integer programming formulation and two heuristics that are shown to be quite effective, solving the problem efficiently and to near optimality. The application of our approach to real data from the city of Turin highlights the superiority of the consolidation approach, which includes the bundling of goods from different providers, stockholding and other value-added logistics services, over the classical single-tier approach. In addition, the paper provides a thorough analysis of some emerging aspects of the on-demand economy, such as the consideration of customers' preferences and the integration of multiple delivery options. The managerial insights from this work will be part of the new Logistics and Mobility Plan to be activated in 2022 in the Piedmont region, Italy.

Planning transportation operations within a supply chain is a difficult task that is often outsourced to logistics providers in practice. At the tactical level, the problem of distributing products through a multi-echelon network is defined in the literature as the Logistics Service Network Design Problem (LSNDP). In the LSNDP, a logistics service provider seeks to cost-effectively source and fulfill customer demands of products within a multi-echelon distribution network. However, many industrial settings yield instances of the LSNDP that are too large to be solved in reasonable run-times by off-the-shelf optimization solvers. We introduce an exact Benders decomposition algorithm based on partial decompositions that strengthens the master problem with information derived from aggregated subproblem data. More specifically, the proposed Meta Partial Benders Decomposition intelligently switches from one master problem to another by changing both the amount of subproblem information to include in the master and how it is aggregated. Through an extensive computational study, we show that the approach outperforms existing benchmark methods and demonstrate the benefits of dynamically refining the master problem in the course of a partial Benders decomposition-based scheme 39. However, since realistic instances are still too large to be solved in acceptable run-times, we developed a network reduction heuristic inspired by the recent Dynamic Discretization Discovery algorithm. Through an extensive series of experiments carried out on instances based on the operations of an industrial partner, we demonstrate the efficiency of the proposed approach. We also investigate the impact of the distribution strategy used in practice to determine the transportation plan, and how this strategy can be modified to reduce the overall logistics cost 12.

Infrastructure network design constitutes a major step in the planning of a transportation network whose purpose is to improve the mobility of the inhabitants of a city or metropolitan area. Since it is generally too expensive to connect all the existing facilities, one must determine a subnetwork that serves the traffic demand at best. Depending on the application, different optimality measures can be considered. In the area of passenger transportation, the aim is to get the infrastructure close to potential customers. In this framework, the goal may be to minimize the maximum routing cost for an origin-destination pair when using the new network. Alternatively, the traffic between an origin and a destination may be considered as captured if the cost or travel time when using the network is not larger than the cost or travel time of the best alternative solution (not using the new network). In this case, one might select a (sub)network from an underlying network with the aim of capturing or covering as much traffic as possible for a reasonable construction cost. Our contribution in 16 is devoted to this problem, called the Maximum Covering Network Design Problem (MC), as well as to the closely related Partial Covering Network Design Problem (PC), in which one minimizes the network design cost of building the network under the constraint that a minimum percentage of the total traffic demand is covered. After presenting models for problems (MC) and (PC), we propose exact methods based on Benders decomposition. Our computational experiments show that our Benders implementations are competitive with exact and non-exact methods in the literature.

We address a multi-commodity two-echelon distribution problem in which three sets of stakeholders are involved: suppliers, distribution centers, and customers. Multiple commodities have to be sent from suppliers to customers, using multiple distribution centers for consolidation purposes. Commodities are collected from the suppliers and delivered to the distribution centers with direct trips, while a fleet of homogeneous vehicles distributes commodities to customers. Commodities are compatible; that is, any vehicle can transport any set of commodities as long as its capacity is not exceeded. The goal is to minimize the total transportation cost from suppliers to customers. We present two sequential schemes based on the solution, in a different order, of a collection and a delivery subproblem. In both cases, the solution of the first subproblem determines the quantity of each commodity at each distribution center. The second subproblem takes this information as input. We also propose different strategies to guide the solution of the first subproblem in order to take into account the impact of its solution on the second subproblem. The proposed sequential heuristics are evaluated and compared both on randomly generated instances and on a case study related to a short and local fresh food supply chain. The results show the impact of problem characteristics on the solution strategies 22.

We study routing problems that arise in the context of last mile delivery when multiple delivery options are proposed to the customers. The most common option to deliver packages is home/workplace delivery. Besides, the delivery can be made to pick-up points such as dedicated lockers or stores. In recent years, a new concept called trunk/in-car delivery has been proposed, in which customers' packages can be delivered to the trunks of cars. Our goal is to model and develop efficient solution approaches for routing problems in this context, in which each customer can have multiple shipping locations. First, we study the single-vehicle case, which is modeled as a Generalized Traveling Salesman Problem with Time Windows (GTSPTW); four mixed integer linear programming formulations are proposed 33. Then, we study the multi-vehicle case, denoted Generalized Vehicle Routing Problem with Time Windows (GVRPTW). We present a set covering formulation for the GVRPTW, which is used to provide a column-generation-based heuristic to solve it. The proposed solution method combines several components, including a construction heuristic, a route optimization procedure, local search operators and the generation of negative reduced cost routes. Experimental results on benchmark instances show that the proposed algorithm is efficient and that high-quality solutions for instances with up to 120 clusters are obtained within short computation times 34.

To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralized version of this problem applicable in environments involving multiple agriculture machinery and farm owners, where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralized, distributed and decentralized, based on the underlying decision-making context. Finally, we present open challenges in applying distributed and decentralized coordination approaches to this problem 30.

Delegated portfolio management has been at the core of financial debates, leading to a growing research effort to provide modeling and solution approaches. This class of problems focuses on investors relying upon decentralized affiliates for the specialized selection of investment options. In 31, we propose a novel optimization framework for multi-market portfolio management, where a central headquarter delegates the market-wise portfolio selection to specialized affiliates. Being averse to risk, the headquarter endogenously sets the maximum expected loss (in the form of conditional value at risk) for the affiliates, who respond by designing portfolios and retaining portions of the expected investment returns as management fees. We show that the problem is NP-hard and propose a decomposition procedure and strong valid inequalities capable of boosting the efficiency of the computational solution when instances become large. Along the same lines, optimality bounds exploiting overlooked properties of the conditional value at risk are derived, providing almost exact solutions within a few seconds of computation. Building on this theoretical development, we conduct computational tests using comprehensive firm-level data from 1999 to 2014 on 7256 U.S. listed enterprises. These tests support the effectiveness of the decomposition procedure, as well as that of the strong valid inequalities.

The discrete ordered median problem (DOMP) consists in locating

The segmented isotonic regression problem consists in fitting a curve to a cloud of data points under the conditions that the fitted curve must be non-increasing (or non-decreasing) and piecewise constant (or, equivalently, stepwise), with a predefined limited number of pieces (also referred to as steps or blocks in what follows). This problem is inspired by the bidding rules that large consumers or a pool of small consumers must comply with when participating in an electricity market. Their bids for purchasing electricity in these markets must often be submitted in the form of a non-increasing stepwise price-consumption curve, for which the maximum number of bid blocks is also constrained. These curves reflect how consumers value electricity and, therefore, their sensitivity to its price (referred to as consumers' elasticity). With the advent of Information and Communications Technologies and the roll-out of the so-called smart grids, small consumers of electricity are being provided with the means to actively adjust their consumption in response to the electricity price. However, their consumption patterns are still uncertain, dynamic and affected by factors other than the electricity price. The result is that estimating a bidding curve that properly reflects consumers' sensitivity to the electricity price is a statistical challenge.

In 17, we provide an algorithm that efficiently computes that curve from a set of price-consumption observations. To ease the computational burden of the proposed algorithm, we develop various strategies to efficiently calculate upper and lower bounds that substantially reduce the number of paths to be explored. Numerical results reveal that our algorithm is able to provide the globally optimal monotone stepwise curve for samples with thousands of data points within a few hours.
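For context, the unconstrained version of the monotone fitting problem (with no cap on the number of steps) is classically solved by the pool-adjacent-violators algorithm; the sketch below implements it for a non-increasing fit (illustrative only, not the algorithm of 17):

```python
# Classical pool-adjacent-violators (PAV) sketch: least-squares fit of a
# non-increasing stepwise curve to a sequence of observations. The paper's
# problem additionally limits the number of steps; plain PAV does not.
def pav_nonincreasing(y):
    blocks = []  # each block: [sum of values, count]
    for v in y:
        blocks.append([v, 1])
        # merge adjacent blocks while the non-increasing order of means is violated
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # expand each block to its constant level
    return fit

print(pav_nonincreasing([3.0, 5.0, 2.0, 1.0]))
```

Each merged block becomes one constant "step" of the fitted curve, which is why the output is exactly the stepwise shape required by the bidding rules described above.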

The weighted region problem (WRP) is a generalization of the shortest path problem considered in a geometric domain where the travel distance is region-dependent. More precisely, given a subdivision of the plane in polyhedra with different associated weights, the WRP asks for the Euclidean shortest path between two points but taking into account that the distance traversed along a polyhedron has to be multiplied by its associated weight.
Besides its mathematical interest, the WRP is motivated by its application to design the route of robots through zones with different terrains that are traversed at different speeds. Other practical applications of the WRP have been proposed for instance in geographical information systems (GIS) and in Seismology.
In 28 we consider the Weighted Region Problem with different

Given a graph, its line graph is another graph whose vertices are the edges of the original one. Further, an edge links two nodes of the line graph if and only if the corresponding edges of the original graph share a node. A graph is said to be line-invertible if it is isomorphic to the line graph of some other graph, called its root. Although obtaining the line graph of a given graph is straightforward, doing the reverse is not a trivial task.
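The forward direction can be sketched in a few lines of Python (an illustrative helper on a toy graph, not project code):

```python
# Constructing the line graph of a small undirected graph: the vertices of the
# line graph are the edges of G, and two of them are adjacent iff the
# corresponding edges of G share an endpoint.
from itertools import combinations

def line_graph(edges):
    edges = [frozenset(e) for e in edges]
    lg_edges = {
        (tuple(sorted(e1)), tuple(sorted(e2)))
        for e1, e2 in combinations(edges, 2)
        if e1 & e2  # the two original edges share a node
    }
    return len(edges), lg_edges

# path a-b-c-d: its line graph is a path on 3 vertices with 2 edges
n_vertices, lg = line_graph([("a", "b"), ("b", "c"), ("c", "d")])
print(n_vertices, len(lg))
```

Recovering a root graph from such an output is the hard inverse direction discussed above.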

In genetics, haplotypes codify certain regions of the genome that show a statistically significant variability within a population. It has been observed that such variability plays an important role in human variation and genetic diseases. Haplotype phasing consists of estimating the haplotypes that produced a current population of genotypes, and is a primary problem in the analysis of genetic data. In this context, consistency relations between genotypes that could have been originated from a common ancestor are codified by a graph. Root graph reconstruction is useful here to estimate the original population size, that is, the number of generating haplotypes. However, if all the consistency relations are considered, sometimes reconstruction from the graph is not possible. In other words, the graph encoding consistency relations is not line-invertible. In these cases, one needs to disregard some of these relations, that is, to delete some of the edges of the consistency graph. A combinatorial problem then arises, namely which edges to remove so that the graph is disrupted as little as possible. In 26, we study the problem of identifying a set of edges of minimum cardinality that have to be deleted from a graph so that it becomes line-invertible. We propose different integer linear programming models as well as valid inequalities to strengthen them. Our computational experiments allow empirical comparison between the different models and ultimately demonstrate their utility.

In 46, a transmission-distribution systems flexibility market is introduced, in which system operators (SOs) jointly procure flexibility from different systems to meet their needs (balancing and congestion management) using a common market. This common market is then formulated as a cooperative game aiming at identifying a stable and efficient split of the costs of the jointly procured flexibility among the participating SOs, to incentivize their cooperation. The non-emptiness of the core of this game is mathematically proven, implying the stability of the game and the naturally arising incentive for cooperation among the SOs. Several cost allocation mechanisms are then introduced, and their mathematical properties are characterized. Numerical results focusing on an interconnected system (composed of the IEEE 14-bus transmission system and the Matpower 18-bus, 69-bus, and 141-bus distribution systems) showcase the cooperation-induced reduction in system-wide flexibility procurement costs, and identify the varying costs borne by different SOs under various cost allocation methods.

Power systems face higher flexibility requirements from generation to consumption due to the increasing penetration of non-controllable distributed renewable energy. In this context, demand-side management aims at reducing excessive load fluctuations and matching the price of energy to its real cost for the grid. Pricing models for demand-side management are traditionally used to control electricity demand. First, we proposed bilevel pricing models to explore the relationship between energy suppliers and customers connected to a smart grid. The smart grid technology allows customers to keep track of hourly prices and shift their demand accordingly, and allows the provider to observe the actual demand response to its pricing strategy. Moreover, we assumed that the smart grid optimizes the usage of a renewable energy generation source and a storage capacity. Results over a rolling horizon were obtained (Léonard Von Niederhausern's PhD thesis 56). Time and Level-of-Use is a recently proposed energy pricing scheme, designed for the residential sector and providing suppliers with a robust guarantee on consumption. We formulate the supplier's decision as a bilevel problem minimizing financial loss. Numerical results on experimental residential power consumption data show that the method effectively finds the optimal candidate 11.

In 35, we consider the interaction between the distribution grid (physical level) managed by the distribution system operator (DSO), and a financial market in which prosumers optimize their demand, generation, and bilateral trades in order to minimize their costs subject to local constraints and bilateral trading reciprocity coupling constraints. We model the interaction problem between the physical and financial levels as a noncooperative generalized Nash equilibrium problem. We compare two designs of the financial level prosumer market: a centralized design and a peer-to-peer fully distributed design. We prove the Pareto efficiency of the equilibria under homogeneity of the trading cost preferences. In addition, we prove that the pricing structure of our noncooperative game does not permit free-lunch behavior. Finally, in the numerical section we provide additional insights on the efficiency loss with respect to the different levels of agents' flexibility and amount of renewables in the network. We also quantify the impact of the prosumers' pricing on the noncooperative game social cost.

Broader results are obtained in 36, relying on an extension of duality theory.

One of the most frequently used approaches to solve linear bilevel optimization problems consists in replacing the lower-level problem with its Karush-Kuhn-Tucker (KKT) conditions and reformulating the KKT complementarity conditions using techniques from mixed-integer linear optimization. The latter step requires determining big-M constants that bound the lower-level variables without cutting off any bilevel-optimal solution.

One of the main concerns in management and economic planning is to sell the right product to the right customer for the right price. Companies in retail and manufacturing employ pricing strategies to maximize their revenues.

In the Rank Pricing Problem (RPP), a firm intends to maximize its profit through the pricing of a set of products for sale. Customers are interested in purchasing at most one product among a subset of products. To do so, they are endowed with a ranked list of preferences and a budget. Their choice rule consists in purchasing the highest-ranked product in their list whose price is below their budget. In 20, we consider an extension of the RPP, the Rank Pricing Problem with Ties (RPPT), in which we allow for indifference between products in the customers' lists of preferences. We propose different mixed integer programming formulations for the problem and valid inequalities to strengthen them. Computational experiments assess the performance of the proposed approaches. In 43, we generalize the RPP assuming that each product has a limited number of copies for sale, and we call this extension the Capacitated Rank Pricing Problem (CRPP). We compare the envy-free allocation of the products (a fairness criterion requiring that customers receive their highest-ranked product given the pricing) with the envy version of the problem. Next, we focus on the CRPP with envy. We introduce two integer linear formulations for the CRPP and derive valid inequalities leveraging the structure of the problem. Afterwards, we develop separation procedures for the families of valid inequalities of greater size. The performance of the formulations and of the resolution algorithms developed is tested by means of extensive computational experiments.
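On a toy instance, the RPP choice rule and the firm's pricing decision can be sketched by brute force (hypothetical data; the papers cited above use MIP formulations instead):

```python
# Brute-force sketch of a tiny Rank Pricing Problem instance. Each customer
# buys the highest-ranked product in her preference list whose price does not
# exceed her budget; the firm chooses prices maximizing total revenue.
from itertools import product as cartesian

products = ["A", "B"]
customers = [  # (budget, preference list, most preferred first)
    (5, ["A", "B"]),
    (3, ["B", "A"]),
    (4, ["A"]),
]
candidate_prices = [3, 4, 5]  # optimal prices can be restricted to the budgets

def revenue(price):
    total = 0
    for budget, prefs in customers:
        for prod in prefs:  # highest-ranked affordable product, if any
            if price[prod] <= budget:
                total += price[prod]
                break
    return total

best = max(cartesian(candidate_prices, repeat=len(products)),
           key=lambda ps: revenue(dict(zip(products, ps))))
best_price = dict(zip(products, best))
print(best_price, revenue(best_price))
```

Even on this tiny instance, the best pricing is not the highest one: lowering the price of A captures the third customer and more than compensates the discount granted to the first.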

Consider a graph

We formulate a Stackelberg security game that coordinates resources in a border patrol problem. In this security domain, resources from different precincts have to be paired to conduct patrols at the border due to logistic constraints. Given this structure, the set of pure defender strategies is of exponential size. We describe the set of mixed strategies using a polynomial number of variables but exponentially many constraints that come from the matching polytope. We then include this description in a mixed integer formulation to compute the Strong Stackelberg Equilibrium efficiently with a branch-and-cut scheme. Since the optimal patrol solution is a probability distribution over a set of exponential size, we also introduce an efficient sampling method that can be used to deploy the security resources at every shift. Our computational results evaluate the efficiency of the branch-and-cut scheme developed and the accuracy of the sampling method. We show the applicability of the methodology by solving a real-world border patrol problem 15.

We analyze the scheduling of unpredictable fare inspections in proof-of-payment transit systems, where the transit operator chooses a collection of patrol paths (one for each patrol) every day with some probability in order to avoid any regularity that could be exploited by opportunistic passengers. We use a Stackelberg game approach to represent the hierarchical decision-making process between the transit operator and opportunistic passengers, whose decision on whether to evade the fare depends on the inspection probabilities set by the transit operator. Unlike previous work, we use an exact formulation of the inspection probabilities that allows us to develop new heuristics for the fare inspection scheduling problem, and to assess their solution quality in terms of their optimality gap 14.
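The follower's side of this Stackelberg game can be illustrated by the best response of a risk-neutral passenger, who evades whenever the expected fine falls below the fare. Parameter names are illustrative, and real passenger behavior is of course richer:

```python
# Minimal sketch of the passenger's (follower's) best response, assuming a
# risk-neutral evader: evade iff the expected fine is below the fare.
def evades(inspection_prob, fare, fine):
    """True if an opportunistic passenger prefers to evade the fare."""
    return inspection_prob * fine < fare

print(evades(0.05, fare=2.0, fine=60.0))  # expected fine 3.0 >= fare 2.0 -> False
```

The operator's problem is then to choose patrol paths whose induced inspection probabilities deter evasion on as much of the network as possible, subject to patrol resource constraints.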

With the emerging deregulated electricity markets, a part of the electricity trading takes place in day-ahead markets where producers and retailers place bids in order to maximize their profit. We present a price-maker model for strategic bidding from the perspective of a producer in Price Coupled Regions (PCR), considering a capacitated transmission network between local day-ahead markets. The aim for the bidder is to establish a production plan and set its bids taking into consideration the reaction of the market. We consider the problem as deterministic, that is, the bids of the competitors are known in advance. We face a bilevel optimization problem where the first level is a Unit Commitment problem, modeled as a Mixed Integer Linear Program (MILP), and the second level models a market equilibrium problem through a Linear Program. The problem is first reformulated as a single-level problem. Properties of the optimal spot prices are studied to obtain an extended formulation that is linearized and tightened using new valid inequalities. Several properties of the spot prices allow us to reduce significantly the number of binary variables. Two novel heuristics are proposed 18.
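The second-level market clearing can be illustrated, in a heavily simplified single-zone form, by a merit-order dispatch in which the spot price is set by the marginal accepted offer. The function and data below are illustrative only; the paper's second level is a richer network-coupled equilibrium:

```python
# Stylized single-zone day-ahead clearing by merit order (illustrative).
def clear_market(bids, demand):
    """bids: list of (price, quantity) supply offers.
    Returns (spot_price, accepted offers as (price, quantity))."""
    accepted, served, spot = [], 0.0, 0.0
    for price, qty in sorted(bids):        # cheapest offers first
        if served >= demand:
            break
        take = min(qty, demand - served)   # accept up to remaining demand
        accepted.append((price, take))
        served += take
        spot = price                       # price of the marginal offer
    return spot, accepted

spot, acc = clear_market([(10, 50), (30, 50), (20, 50)], demand=80)
print(spot)  # the marginal accepted offer is priced at 20
```

In the bilevel model, the strategic producer anticipates how its own bids shift this clearing, and hence the spot prices it is paid.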

The scientific interest in computational bilevel optimization has grown considerably over the last decade and is still growing. Independent of whether the bilevel problem itself contains integer variables or not, many state-of-the-art solution approaches for bilevel optimization make use of techniques that originate from mixed-integer programming. These techniques include branch-and-bound methods, cutting planes and, thus, branch-and-cut approaches, or problem-specific decomposition methods. In 24, we review bilevel-tailored approaches that exploit these mixed-integer programming techniques to solve bilevel optimization problems.

In the context of network design, the bilevel paradigm is especially relevant when the designer of a network does not have direct control over user flows, which are assigned according to the users' own logic. In the chapter 38, we illustrate, through four distinct applications, the modelling and algorithmic issues that characterize bilevel network design problems. Throughout, we assign a broad sense to the term 'network design', meaning any program that involves the determination of variables that impact the structure of a graph or a network.

Poisoning attacks are among the attack types commonly studied in the field of adversarial machine learning. The adversary generating a poisoning attack is assumed to have access to the training process of a machine learning algorithm, and aims to prevent the algorithm from functioning properly by injecting manipulative data while the algorithm is being trained. In 47, our focus is on poisoning attacks against linear regression models, which aim to weaken the prediction power of the attacked regression model. We propose a bilevel optimization problem to model this adversarial process between the attacker generating poisoning attacks and the learner, which tries to learn the best predictive regression model. We derive an alternative single-level optimization problem by exploiting the optimality conditions of the learner's problem. A commercial solver is used to solve the resulting single-level problem, in which the whole set of poisoning attack samples is generated at once. In addition, we introduce an iterative approach that determines only a portion of the poisoning attack samples at each iteration. The proposed attack strategies are shown to be superior to a benchmark algorithm from the literature through extensive experiments on two realistic datasets.
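The phenomenon a poisoning attack exploits can be illustrated on a toy one-dimensional regression, fitted in closed form via the normal equations. This is only an illustration of the effect of injected points on the learned model, not the bilevel attack model of the paper:

```python
# Illustrative only: 1-D least squares via the closed-form normal equations,
# showing how a few injected (poisoned) samples shift the learned slope.
def fit_slope(xs, ys):
    """Ordinary-least-squares slope of y on x (with intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

clean_x, clean_y = [1, 2, 3, 4], [1, 2, 3, 4]   # true slope 1
poison_x, poison_y = [5, 5], [-5, -5]           # adversarial points
print(fit_slope(clean_x, clean_y))              # 1.0
print(fit_slope(clean_x + poison_x, clean_y + poison_y))  # slope flips negative
```

In the bilevel formulation, the attacker chooses such poison points at the upper level while anticipating that the learner solves exactly this least-squares problem at the lower level.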

Near-optimality robustness extends multilevel optimization with a limited deviation of a lower level from its optimal solution, anticipated by higher levels. We analyze the complexity of near-optimal robust multilevel problems, where near-optimal robustness is modeled through additional adversarial decision makers. Near-optimal robust versions of multilevel problems are shown to remain in the same complexity class as the problem without near-optimality robustness under general conditions 13.

During the last decades, the European gas market has undergone ongoing liberalization, resulting in the so-called entry-exit market system. The main goal of this market reorganization is the decoupling of trading and actual gas transport. To achieve this goal within the European entry-exit market, gas traders interact with transport system operators (TSOs) via bookings and nominations. A booking is a capacity-right contract in which a trader reserves a maximum injection or withdrawal capacity at an entry or exit node of the TSO's network. On a day-ahead basis, these traders are then allowed to nominate an actual load flow up to the booked capacity. To this end, the traders specify the actual amount of gas to be injected to or withdrawn from the network such that the total injection and withdrawal quantities are balanced. On the other hand, the TSO is responsible for the transport of the nominated amounts of gas. By having signed the booking contract, the TSO guarantees that the nominated amounts can actually be transported through the network. More precisely, the TSO needs to be able to transport every set of nominations that complies with the signed booking contracts. Thus, an infinite number of possible nominations must be anticipated and checked for feasibility when the TSO accepts bookings. As a consequence, the entry-exit market decouples trading and transport. However, it also introduces many new challenges, e.g., checking the feasibility of bookings or computing bookable capacities on the network.

Deciding the feasibility of a booking can be seen as an adjustable robust feasibility problem, where the set of booking-compliant nominations is the uncertainty set. The feasibility of a booking, if the underlying network is a tree, can be decided in polynomial time. In 27, we extend the knowledge on the frontier of hardness by showing that deciding the feasibility of a booking on single-cycle networks is in
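A rough sketch of why the tree case is tractable, under strong simplifying assumptions (a rooted tree, purely capacity-based arc feasibility, flows fully determined by the nominations; the actual model involves nonlinear gas physics and pressure bounds): on a tree, each arc must carry exactly the net nomination of the subtree below it, so a single bottom-up pass decides feasibility of a nomination. All names and data are illustrative:

```python
# Illustrative sketch: checking one nomination on a tree-shaped network.
def nomination_feasible(children, nomination, capacity, root):
    """children:   node -> list of child nodes (rooted tree)
    nomination: node -> net injection (positive = entry, negative = exit)
    capacity:   node -> capacity of the arc linking the node to its parent
    Returns True iff every arc flow fits within its capacity."""
    def subtree_net(v):
        net = nomination.get(v, 0.0)
        for c in children.get(v, []):
            flow = subtree_net(c)       # flow on the arc (parent(c), c)
            if abs(flow) > capacity[c]:
                raise ValueError        # arc overloaded
            net += flow
        return net
    try:
        subtree_net(root)
        return True
    except ValueError:
        return False

tree = {"r": ["a", "b"], "a": ["c"]}
nom = {"c": -3.0, "b": 3.0}
cap = {"a": 4.0, "b": 4.0, "c": 2.0}
print(nomination_feasible(tree, nom, cap, "r"))  # arc to c must carry 3 > 2
```

Deciding the feasibility of a *booking* is harder than this sketch suggests even on trees, because all booking-compliant nominations must be feasible simultaneously; the cycle case studied in 27 adds the further difficulty that flows are no longer uniquely determined.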

In 45, we consider networks with linearly modeled active elements such as compressors and control valves that do not lie on cycles of the network. Since these active elements allow the TSO to control the gas flow, the single-level approaches from the literature are no longer applicable. We thus present a bilevel approach to decide the feasibility of bookings in networks with active elements. Besides the classical Karush–Kuhn–Tucker reformulation, we obtain three problem-specific optimal-value-function reformulations, which also lead to novel characterizations of feasible bookings in active networks. We compare the performance of our methods by a case study based on data from the GasLib.

The SEC-OREA project enables local energy communities (LECs) to participate in the decarbonisation of the energy sector by developing advanced, efficient algorithms and analytics technologies.

LECs are an efficient way to manage energy by increasing the use of renewable energy sources (RES) at a local level. We aim to co-create an overarching LEC enabling framework with our stakeholders. Our goal is to create technical tools to empower citizens and place them at the core of the Energy Union. It is important to ensure that the participation of LECs in the energy systems transformation has the desired effect of decarbonising the system, and that this is done in a fair manner without destabilising the power system. We propose to use business analytics to define our stakeholders' research and innovation questions, and to use energy analytics and operational research techniques to gather appropriate data and build statistical, machine learning and mathematical optimization models to find solutions and support decision making by our stakeholders.

The consortium brings together expertise from Business, Climatology, Computational Methods, Secure ICT, and Power Systems. We reach across the EU with researchers, innovators and stakeholders in Belgium, France, Ireland, Latvia and Portugal. Stakeholders include a national meteorological service, an LEC, and a distribution system operator (DSO).

We use climate services to gather energy-relevant pan-European indicators of climate trends and variability. We model energy consumption data to understand and create dynamic scenarios of electricity consumption. This allows us to capture the uncertainty in the availability of RES, and the electricity demand of the LEC. We create ensemble models of climate dependent RES and consumer electricity demand. We create and solve a set of mathematical optimization models to solve the multilateral economic dispatch (MED) of the LEC RES in a fair manner. We evaluate the implications of the LEC activity and net demand on sample grid topologies, and support the DSO to understand the impacts of, and requirements for LECs on the low voltage (LV) distribution network. This understanding supports the LEC and DSO decisions on asset reinforcement, network power flow and congestion management.
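One simple notion of fairness in such a dispatch can be sketched as proportional allocation of a limited renewable supply among community members. This rule and all names are illustrative assumptions, not the project's MED model:

```python
# Illustrative fair-dispatch rule: allocate limited renewable supply among
# LEC members in proportion to their demand.
def proportional_dispatch(demands, supply):
    """demands: member -> demand (kWh); supply: total RES available (kWh).
    Returns member -> allocated energy, never exceeding demand."""
    total = sum(demands.values())
    share = min(1.0, supply / total) if total > 0 else 0.0
    return {m: d * share for m, d in demands.items()}

print(proportional_dispatch({"m1": 4.0, "m2": 6.0}, supply=5.0))
```

The project's multilateral economic dispatch models refine this picture considerably, e.g. by accounting for prices, network constraints and uncertainty in RES availability.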

We provide recommendations for an overarching LEC enabling framework to ensure safe, reliable, efficient and sustainable operation of the LEC and the LV network. Our framework will allow LEC members to take ownership of the energy transition, benefit from the new technologies we develop and so reduce their bills and their carbon footprint. We provide business model analyses, efficient scalable multilateral economic dispatch and energy analytics algorithms, and integrated climate/LEC/LV models to support all stakeholder decision makers.

In collaboration with Ecole des Mines de Saint-Etienne (Gardanne), IFSTTAR (Champs-sur-Marne), HappyChic (Tourcoing).

This project addresses human resources management in warehouses which supply either sale points (B2B) or final consumers (B2C). Nowadays, such warehouses are under pressure, mainly due to the no-inventory policy at the sale points and to the constant growth of e-commerce sales in France and Europe. In terms of logistics, this translates into an increasing number of parcels to prepare and ship to satisfy an order, which is typically known only a few hours in advance. Moreover, the total number of products to be packed varies significantly from day to day, by a factor of at least 3.

The novelty of the project is twofold: (1) the human factor is explicitly taken into account. It is integrated into the mathematical models and algorithms developed for the project. The aim is to improve the quality of employees' work while ensuring the efficiency of the logistics system; (2) problems at different decision levels are integrated and tackled jointly. At the tactical level, the main issues are workload smoothing and the management of the storage zone. At the operational level, the major issues concern the rearrangement of the picking zone, the picking tours, and the dynamic reorganization of activities to manage uncertainties.
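As a toy illustration of tactical workload smoothing, the classic longest-processing-time-first greedy heuristic balances daily tasks across employees. This heuristic is an assumption chosen for illustration, not the method developed in the project, and all names are illustrative:

```python
# Illustrative workload smoothing: assign task durations to workers so that
# the busiest worker's load stays low (LPT greedy).
import heapq

def balance(tasks, n_workers):
    """Assign each task to the currently least-loaded worker, largest tasks
    first. Returns the sorted list of worker loads."""
    heap = [(0.0, w) for w in range(n_workers)]   # (load, worker id)
    heapq.heapify(heap)
    for t in sorted(tasks, reverse=True):         # largest tasks first
        load, w = heapq.heappop(heap)
        heapq.heappush(heap, (load + t, w))
    return sorted(load for load, _ in heap)

print(balance([7, 5, 4, 3, 3], 2))  # two workers, loads close to even
```

The project's models go beyond such load balancing by incorporating the human factor explicitly, e.g. work quality and the dynamic reorganization of activities under uncertainty.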

Annals of Operations Research, Applied Computing and Informatics, Central European Journal of Operations Research, Computers & Operations Research, Computational Optimization and Applications, Discrete Applied Mathematics, EURO Journal on Transportation and Logistics, European Journal of Operational Research, IISE Transactions, INFORMS Journal on Computing, International Journal of Management Science and Engineering Management, Mathematical Programming Computation, Networks, Omega, Operations Research, Optimization and Engineering, RAIRO - Operations Research, Transportation Science, IEEE Transactions on Power Systems, IEEE Transactions on Smart Grids, IEEE Power Engineering Letters, IEEE Transactions on Control of Network Systems: Luce Brotcorne, Diego Cattaruzza, Bernard Fortz, Martine Labbé, Hélène Le Cadre, Maxime Ogier, Frédéric Semet.