
2023 Activity Report: Project-Team EDGE

RNSR: 202124188E
  • Research center: Inria Centre at the University of Bordeaux
  • In partnership with: Université de Bordeaux, CNRS
  • Team name: Extended formulations and Decomposition for Generic optimization problems
  • In collaboration with: Institut de Mathématiques de Bordeaux (IMB)
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Optimization, machine learning and statistical methods

Keywords

Computer Science and Digital Science

  • A6.2.6. Optimization
  • A7.1.3. Graph algorithms
  • A8.1. Discrete mathematics, combinatorics
  • A8.2. Optimization
  • A8.2.1. Operations research
  • A8.7. Graph theory
  • A9.7. AI algorithmics

Other Research Topics and Application Domains

  • B3.1. Sustainable development
  • B3.1.1. Resource management
  • B4.2. Nuclear Energy Production
  • B4.4. Energy delivery
  • B6.5. Information systems
  • B7. Transport and logistics
  • B9.5.2. Mathematics

1 Team members, visitors, external collaborators

Research Scientists

  • Ayse Nur Arslan [INRIA, Researcher]
  • Ruslan Sadykov [INRIA, Researcher, until Aug 2023, HDR]

Faculty Members

  • François Clautiaux [Team leader, UNIV BORDEAUX, Professor Delegation, HDR]
  • Boris Detienne [UNIV BORDEAUX, Associate Professor]
  • Aurélien Froger [UNIV BORDEAUX, Associate Professor]
  • Pierre Pesneau [UNIV BORDEAUX, Associate Professor]

PhD Students

  • Komlanvi Parfait Ametana [UNIV BORDEAUX]
  • Isaac Balster [INRIA]
  • Cecile Dupouy [KEDGE B. S., CIFRE, from Sep 2023]
  • Patxi Flambard [INRIA, from Oct 2023]
  • Mickael Gaury [KEDGE B. S., until Aug 2023]
  • Mellila Kechir [KEDGE B. S., until Aug 2023]
  • Daniiil Khachai [KEDGE B. S., until Aug 2023]
  • Johan Leveque [La Poste]
  • Sylvain Lichau [UNIV BORDEAUX]
  • Luis Lopes Marques [INRIA]
  • John Jairo Quiroga Orozco [ORANGE]
  • Fulin Yan [CATIE, CIFRE, from Mar 2023]

Technical Staff

  • Najib Errami [INRIA, Engineer, from Dec 2023]
  • Najib Errami [INRIA, Engineer, until Oct 2023]
  • Laurent Facq [CNRS, Engineer, from Mar 2023]

Interns and Apprentices

  • Patxi Flambard [INRIA, Intern, from Mar 2023 until Aug 2023]
  • Quentin Monnereau [INRIA, Apprentice, from Dec 2023]
  • Matthieu Varenne [INRIA, Intern, from May 2023 until Jul 2023]

Administrative Assistant

  • Joelle Rodrigues [INRIA]

External Collaborators

  • Artur Alves Pessoa [UFF NITEROI BRAZIL]
  • Walid Klibi [KEDGE B. S.]

2 Overall objectives

Nowadays, integer-programming-based methods can effectively solve a large number of classic combinatorial optimization problems (travelling salesman, knapsack, facility location). Dramatic improvements on benchmarks have been obtained thanks to better linear programming solvers, better (re)formulations, new polyhedral results, new efficient algorithms, fine-tuned implementations, and more powerful computers.

Practitioners are now trying to tackle more advanced problems where the degree of complexity of the systems to optimize has increased substantially. In particular, numerous factors should be taken into account in the decision process: ecological impact, new regulations, presence of commercial partners or aggressive competitors, limitation on previously abundant resources, and so forth. Many problems also come with uncertain parameters (energy production, consumer behavior, weather, disasters, resource breakdowns...), and have to be solved in a dynamic environment, where solutions have to be modified/reoptimized on-the-fly to account for last-minute changes.

In project EDGE, our main objective is to propose new mathematical models, algorithms, and efficient implementations of these algorithms to deal with families of integer programming problems arising from these complex systems, which are out-of-reach for current state-of-the-art optimization methods. Although universal algorithms able to deal with all types of constraints cannot be achieved in practice, we will seek results that are as broadly applicable as possible by designing so-called generic methods, i.e., methods that address abstract mathematical models that can be later specialized for a large number of problems. We will mainly rely on two families of mathematical tools: decomposition techniques and extended formulations.

Decomposition methods are valuable tools to address complex optimization problems. Beyond the obvious advantages of the divide-and-conquer paradigm, they offer a way to produce stronger formulations and effective algorithms applicable to a wide range of problems, and they allow the use of the right mathematical tools for each subsystem (also called subproblem). Several issues have to be addressed when one wants to tackle modern integrated real-life problems by means of decomposition methods. An important limitation of these methods is that they are generally efficient only for problems with a specific structure (typically independent subproblems connected by few common constraints). Many problems we address in this new project either do not have this decomposable structure, or the structure is broken by so-called non-robust cuts, which are applied during the solution process. In this case, new decomposition methods are needed to deal effectively with these connecting constraints. Another issue stems from the large number of different subsystems. A methodological challenge is to find a good trade-off between the quantity of information passed from one subproblem to another and the difficulty of integrating this common information into them.

Extended formulations are also a focus of our team. The past ten years have seen much progress in this field, which aims at effectively reformulating a problem/polyhedron with the help of (exponentially many) additional variables. In particular, network-flow-based reformulations have received increasing interest from the community. A considerable difficulty to overcome when dealing with such a reformulation is handling its size. One of the most promising paths is to study so-called aggregation and disaggregation techniques, where the level of detail of the formulation is modified iteratively. Similar approaches are popular in other fields of applied mathematics, where detailed information is only needed in some specific parts of the model. In integer programming, determining the suitable level of precision needed for a given subsystem is a major challenge, since the combinatorial structure of the subproblem precludes techniques based on derivatives. Although preliminary results indicate that one can achieve considerable improvements for some specific problems, a theoretical framework that would allow these techniques to be developed for a larger class of polyhedral structures is still lacking.

We use our methodological tools to address robust optimization, which is an increasingly popular approach to handle the uncertainty arising in mixed-integer linear optimization problems. This optimization paradigm describes the variability of the uncertain parameters through bounded uncertainty sets (thus replacing the expectation used in stochastic optimization with the worst-case objective realization). In particular, we plan to study robust problems with integer recourse. From a theoretical point of view, these problems are known to be Σ₂ᵖ-complete in general (so simply verifying that a solution is feasible is already an NP-hard problem). We will work on determining the frontier between tractable and intractable problems by studying the structure of specific subsets of robust problems characterized by: their deterministic counterpart; their uncertainty set; and the difficulty of optimizing the recourse subproblem. Our first steps in this direction show that decomposition methods and extended formulations can be instrumental in achieving this objective.
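
As an illustration of the setting we target (a generic sketch with our own notation, not a formulation taken from a specific paper of the team), a two-stage robust problem with recourse can be written as

```latex
\[
\min_{x \in X} \; c^\top x \;+\; \max_{\xi \in \Xi} \; \min_{y \in Y(x,\xi)} \; d^\top y ,
\]
```

where x gathers the here-and-now decisions, Ξ is the bounded uncertainty set, and y gathers the recourse decisions taken after the uncertainty ξ is revealed. The problems we study are those where Y(x, ξ) contains integrality restrictions, which is precisely what makes the inner minimization, and hence even the feasibility check, hard.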

Because state-of-the-art optimization methods are often too hard to use, or too cumbersome to adapt to a new problem, their use is typically limited to a small community of experts. Keeping this in mind, we additionally concentrate on developing high-level modeling tools and solvers with the aim of facilitating the adoption and adaptation of our algorithms by colleagues and O.R. scientists. Doing so will maximize the impact of our research in academia as well as in industry.

3 Research program

3.1 Decomposition methods

Dantzig-Wolfe decomposition  33 is a well-established approach for solving hard and/or large-scale combinatorial optimization problems. Problems that are originally formulated as Mixed-Integer Programming problems (MIPs) are reformulated by defining a (generally exponential) number of variables which correspond to the solutions of subproblem(s) obtained from subsets of variables and constraints of the original MIP formulation. To solve the linear approximation of the Dantzig-Wolfe reformulation, the column generation procedure is employed, which iteratively generates missing variables by solving the subproblem(s). Cut generation is then employed to strengthen the linear approximation, and branching is performed to find an optimal solution to the problem. The overall approach is called the branch-cut-and-price (BCP) method  36. Recent advances, including those developed by our previous project team ReAlOpt, helped significantly improve the efficiency of general-purpose BCP algorithms. To use these algorithms, one just has to specify the master problem and a formulation or an algorithm to solve the column generation subproblem. The improvements include stabilization  59, pre-processing, diving heuristics  66, and strong branching, among others. Generic BCP codes implementing these improvements have made the branch-cut-and-price approach more accessible to users.
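
For reference, the textbook form of the reformulation reads as follows (a generic sketch with our notation):

```latex
% Original MIP:  min c^T x  s.t.  A x >= b,  x in X,  with X = { x integer : D x >= d }.
% Dantzig-Wolfe master over the solutions {x^g}_{g in G} of X:
\[
\min \sum_{g \in G} (c^\top x^g)\,\lambda_g
\quad \text{s.t.} \quad
\sum_{g \in G} (A x^g)\,\lambda_g \ge b, \qquad
\sum_{g \in G} \lambda_g = 1, \qquad \lambda \ge 0.
\]
% Pricing (column generation), with duals (pi, sigma) of the coupling and convexity constraints:
\[
\min_{x \in X} \; (c - A^\top \pi)^\top x \;-\; \sigma ,
\]
% a column x^g is added to the master whenever this reduced cost is negative.
```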

A major limitation of BCP algorithms is their inefficiency when they face constraints that break the decomposable structure of the problem. Prominent examples are synchronisation and precedence constraints  37. Moreover, it became clear recently that many efficient BCP approaches are "non-robust", i.e., column and cut generation are interdependent. Thus, for every subproblem type, dedicated cut-generation approaches should be considered. Few such approaches are known in the literature  34, 57. Their development for different families of cutting planes and different subproblem types is a very promising research direction. Devising BCP algorithms for problems that do not have the decomposability property is a difficult and high-risk task. However, in the case of success, the impact on the practice of Operations Research would be significant.

Our numerical experience shows that generic BCP approaches  42, 43 (in which the subproblems are solved as generic MIPs) are rarely competitive with the traditional branch-and-cut method used in widely available and highly efficient MIP solvers. On the other hand, very efficient BCP approaches specialized for particular problems exist in the scientific literature. In such approaches, subproblems are of a certain type and are solved by fast specialized algorithms. These ad-hoc BCP approaches are however rarely used in practice due to the complexity of their implementation  57. An important goal of our project is to propose innovative methods that have the efficiency of the most advanced specialized algorithms in the literature, yet can be applied to many practical cases. This can be done by developing BCP algorithms which are not completely generic (i.e., they cannot be applied to any model obtained by Dantzig-Wolfe reformulation) but can be used for large classes of combinatorial optimization problems. There is a clear need for efficient and reusable semi-generic approaches that can be applied to a large number of problems inside a specific class. We have identified several types of subproblems that are often encountered in practical applications as substructures: resource-constrained paths in graphs  48 and hyper-graphs  31, constrained network flows, and decision diagrams  23. Developing efficient algorithms that can be applied to the broadest possible class of such subproblems is a hard task. We will seek the right trade-off between generality and efficiency.

Recently, we have developed the first "semi-generic" BCP solver, VRPSolver, which relies on solving resource constrained shortest path subproblems  58. We have shown that this solver can be successfully applied to many vehicle routing variants and some packing problems. An open question is whether there are other classes of problems which can be efficiently solved by "semi-specialized" BCP solvers. Scheduling and network design problems are good candidates for such classes. Capabilities of current BCP solvers however need to be extended to efficiently handle these problems. One of the drawbacks of our solver is the absence of good embedded heuristics to quickly find feasible solutions. Diving heuristics  66 can in principle be used, but their speed and performance are generally not satisfactory.

Another challenge is to design a modelling paradigm that can produce formulations for "semi-specialized" BCP solvers. In this context, defining a MIP and applying the standard Dantzig-Wolfe reformulation technique is cumbersome, as the subproblems cannot be easily defined as MIPs. New innovative modelling paradigms should be developed to simplify the problem definition and thus extend the usage of BCP approaches. They should be simple enough to attract non-specialists in the domain, and expressive enough to pass the structural information to the BCP solver. This is mandatory to achieve state-of-the-art performance. One can find inspiration in the notion of global constraints designed in the field of Constraint Programming.

Benders' decomposition  22 is also becoming increasingly effective for several benchmark problems. Recent methods based on this reformulation approach involve solving many similar subproblems repeatedly, and make use of sophisticated stabilization techniques. We plan to build on the knowledge developed for Dantzig-Wolfe decomposition to design techniques that improve Benders' decomposition without relying on any assumption about the specific structure of the subproblem. A considerable challenge is to find effective methods for handling integer subproblems in a generic manner.
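
For completeness, a minimal sketch of the decomposition in its standard textbook form (our notation, not tied to a specific contribution of the team):

```latex
% Problem:  min c^T x + d^T y   s.t.  B x + D y >= e,  x in X,  y >= 0  (y continuous).
% Master:   min c^T x + theta   s.t.  x in X  and the cuts generated so far.
% Subproblem for a master candidate \hat{x}:  min { d^T y : D y >= e - B \hat{x},  y >= 0 }.
% An optimal dual solution u of the subproblem yields the optimality cut
\[ \theta \;\ge\; u^\top (e - Bx), \]
% while an extreme ray r of the dual cone yields the feasibility cut
\[ r^\top (e - Bx) \;\le\; 0. \]
```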

Another way to improve Benders' decomposition tools is to use machine learning techniques. Such techniques can be used to parameterize the different parts of the method or to discover information that helps these algorithms converge. This is a promising path for solving stochastic programs with a large number of scenarios. Several scientific questions have to be addressed, including the possible adaptation of stabilization techniques that need complete dual information to prove convergence.

In line with the challenges highlighted above, one of our first objectives is to improve the genericity of our BCP solver for the resource-constrained path structure. For the moment, it supports only models with standard additive resources. We also intend to cover the case where the cost and resource consumption of an arc depend not only on the arc itself, but also on the resource consumption accumulated along the previous arcs of the path. This would make it possible to solve several important classes of problems, such as electric vehicle routing problems and scheduling problems with additive objective functions, much more efficiently. This implies improving both the algorithms used and the modelling capabilities of the solver to support more general resources.
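
As an illustration of the kind of algorithm involved (a minimal sketch in Python, not the solver's actual code; the graph, cost and consumption functions are invented for the example), a forward labeling procedure with dominance, where the cost and consumption of an arc depend on the resource accumulated so far, could look as follows:

```python
# Minimal sketch of a forward labeling algorithm for a resource-constrained
# shortest path on a DAG, where the cost and the resource consumption of an
# arc may depend on the resource already accumulated along the partial path
# (e.g., charging times that depend on the current battery level).
from collections import defaultdict

def dominates(l1, l2):
    """Label l1 = (cost, resource) dominates l2 if it is at least as good on both."""
    return l1[0] <= l2[0] and l1[1] <= l2[1]

def labeling(nodes, arcs, source, sink, capacity):
    """nodes: topologically ordered list; arcs: dict tail -> list of
    (head, cost_fn, cons_fn), where both functions take the accumulated
    resource of the partial path as argument."""
    labels = defaultdict(list)
    labels[source] = [(0.0, 0.0)]          # (cost, accumulated resource)
    for u in nodes:
        for (v, cost_fn, cons_fn) in arcs.get(u, []):
            for (c, r) in labels[u]:
                nr = r + cons_fn(r)        # resource-dependent consumption
                if nr > capacity:
                    continue               # resource-infeasible extension
                new = (c + cost_fn(r), nr)
                # keep the new label only if it is not dominated,
                # and discard labels it dominates
                if any(dominates(l, new) for l in labels[v]):
                    continue
                labels[v] = [l for l in labels[v] if not dominates(new, l)] + [new]
    return min(labels[sink], default=None)

# Toy usage: two parallel arcs from 0 to 1, then one arc to 2 whose cost grows
# with the accumulated load.
arcs = {0: [(1, lambda r: 1.0, lambda r: 2.0), (1, lambda r: 3.0, lambda r: 1.0)],
        1: [(2, lambda r: 1.0 + 0.5 * r, lambda r: 1.0)]}
print(labeling([0, 1, 2], arcs, 0, 2, capacity=4.0))   # -> (3.0, 3.0)
```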

In the longer term, we plan to develop semi-generic BCP approaches and solvers for problems which do not contain the resource constrained path structure in graphs. Alternative structures which can be considered are spanning trees, network-flow problems in hyper-graphs, context-free grammars (both mentioned in Section 3.2), and decision diagrams.

Another interesting idea to explore in this direction is developing decomposition approaches for multi-stage sequential decision-making problems under uncertainty. Such problems can be modeled as multi-agent Markov Decision Processes (MDP). We believe that mathematical programming decomposition algorithms in which sub-problems are single-agent MDPs solved by dynamic programming constitute a very promising alternative to currently used approaches for multi-agent MDPs. Moreover, there are currently no approaches for such problems which provide exact solutions and non-trivial valid bounds on the optimal value.

3.2 Extended formulations

Many successful extended formulations are based on network-flow models. These models have excellent polyhedral properties, since the constraint matrix of a flow problem is totally unimodular, and thus all extreme-point solutions of the linear program related to this subsystem are integer if the right-hand sides of the constraints are integer. Additional constraints generally break this property, but in many cases, the linear relaxation remains of excellent quality.

Network-flow reformulations can be obtained when all or parts of discrete optimization problems can be described by regular or algebraic languages, or by recursive (e.g., dynamic programming) formulations, on top of which additional constraints such as resource constraints or disjunctions are specified. Karp and Held  49 provided a systematic approach to building dynamic programming recurrence equations for a large class of optimization problems by characterizing the representation of discrete decision processes by monotone sequential decision processes. Martin et al.  55 characterized discrete optimization problems that can be solved via dynamic programming using directed hypergraphs  19. The formalism of sequential decision processes makes it possible to build an internal representation of the sequential structure using network flows in state graphs or hypergraphs. Additional constraints can be embedded in this representation by increasing the size of the network, for example by discretizing quantities such as time (scheduling), load (vehicle routing), or width (bin packing)  44. This has several advantages: 1) enforcing the constraints through the graph structure (that is, convexifying these constraints), 2) easing the modeling of complex constraints, 3) facilitating the consideration of resource-dependent parameters  68.
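
A classical example of this kind of reformulation is the arc-flow model for the cutting-stock / bin-packing substructure (a textbook sketch with our notation): the capacity W is discretized into nodes 0, ..., W, each item type i of size w_i generates arcs from u to u + w_i labelled by i, loss arcs model unused capacity, and v counts the bins (paths) routed from node 0 to node W:

```latex
\[
\min \; v
\quad \text{s.t.} \quad
\sum_{a \in \delta^-(u)} x_a \;-\; \sum_{a \in \delta^+(u)} x_a \;=\;
\begin{cases} -v & \text{if } u = 0, \\ \;\;\, v & \text{if } u = W, \\ \;\;\, 0 & \text{otherwise,} \end{cases}
\qquad
\sum_{a \ \text{labelled}\ i} x_a \;\ge\; d_i \quad \forall i,
\qquad x,\, v \ \text{integer} \ge 0.
\]
```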

The main drawback of these formulations is their typically exponential or pseudo-polynomial number of variables and constraints. This is different from models that arise from a Dantzig-Wolfe decomposition, which have an exponential number of variables, but a generally small number of constraints. The size of network-flow models is usually far too large to make them solvable by modern integer programming, SAT or constraint programming solvers.

To overcome this limitation, aggregation and disaggregation techniques, based either on a scaling of the input parameters or on aggregating nodes of the network, are good candidates. As they generally make use of sub-routines, they allow an efficient hybridization of different optimization paradigms. Several methods sharing common ideas have been introduced in different communities. For solving dynamic programs, techniques based on iterative state-space relaxations  29 have proved their efficiency since the 1980s. The most popular methods are the decremental state-space relaxation  25, 64 and the successive sublimation dynamic programming  47. In the field of MIP models, the most promising algorithms for solving exponentially-large network-flow formulations are also based on an increasingly refined series of relaxations (see  30, 26, 63). These methods produce initial models with fewer variables and fewer constraints than the original ones and iteratively refine them after identifying new variables and/or constraints to add. Although these methods have been shown to be efficient for several benchmark problems, only a few generic methods have been proposed (see  27 and 51).

Iterative aggregation/disaggregation techniques have some relationship to other methods. They share similarities with Benders' decomposition or Logic-based Benders' decomposition. Both methods rely initially on a relaxation of the problem. However, in Benders' decomposition constraints are iteratively introduced to discard infeasible solutions, while iterative aggregation/disaggregation-based techniques may also introduce new variables. Similarities with optimization methods using decision diagrams constructed by (iterative) state aggregation procedures can also be highlighted  23. Decision diagrams are graphs storing possible variable assignments satisfying some constraints and are particularly used in CP/SAT solvers.

A conclusion of the above is that similar aggregation and disaggregation techniques are used under different names in different fields (MIP, DP, CP/SAT), leading to a scattered scientific literature. Further, most of the proposed techniques are problem-dependent (with the notable exception of decision diagrams, which offer a more general setting, although we are not aware of any generic implementation). Our goal is to go in the direction of developing a unifying formalism and express the main methods of the literature within this formalism. This generic view aims to bring together methods whose proximity has never really been highlighted. Existing algorithms may benefit from algorithmic components that have proven their effectiveness in other contexts. As an example, existing MIP algorithms could benefit from state-of-the-art approaches developed for decision diagrams.

We will study and design effective methods based on aggregating and disaggregating models. It is necessary to develop a high-level modeling tool to specify these models only implicitly. The use of algebraic languages to produce such higher-level formulations is a direction we plan to explore. We also believe that a primal-dual solution approach is worth exploring. Such an approach will rely on the local refinement of a coarse representation of the global problem when needed. The idea is to derive primal and dual bounds on the objective value of the problem with a lower computational effort compared to working with the original model. We can distinguish two types of aggregation: 1) a conservative aggregation ensures that every solution of the original model is also a solution of the aggregated model (i.e., the aggregated model is a relaxation of the original model) and 2) a heuristic aggregation ensures that every solution of the aggregated model is also a solution of the original model. After projecting the original model onto an aggregated one, we aim to converge to an optimal solution without reaching the size of the original model. To keep the model tractable, one can make use of primal and dual information from the different aggregated models to fix variable values (using Lagrangian filtering, for example). We will focus on finding a systematic way to 1) aggregate an initial model to yield either a relaxation or a restriction and 2) iteratively disaggregate the current model to obtain better dual bounds. A path worth exploring is to dynamically aggregate the models, taking advantage of the information learned in the current phase to build stronger models  46. We will also study an emerging technique consisting in the joint use of several aggregated models with decision synchronization  56, 52, which can be helpful to derive stronger bounds and to keep network-flow models tractable.
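
The toy sketch below illustrates a conservative aggregation on a small state graph (our illustration, assuming nonnegative arc costs; it is not the framework under development): merging states and keeping, between two clusters, the cheapest arc they induce yields a smaller graph whose shortest-path value is a valid lower bound on the original one.

```python
# Conservative node aggregation: states of a DAG are mapped to clusters, each
# original arc induces an arc between clusters with the minimum cost of the
# arcs it represents, and arcs internal to a cluster are relaxed away (valid
# for nonnegative costs). Every original path then maps to a cluster path of
# no larger cost, so the aggregated shortest path is a valid dual (lower) bound.
from collections import defaultdict

def aggregate(arcs, cluster_of):
    """arcs: list of (tail, head, cost); cluster_of: dict state -> cluster id."""
    agg = {}
    for (u, v, c) in arcs:
        key = (cluster_of[u], cluster_of[v])
        if key[0] == key[1]:
            continue                      # arc internal to a cluster: relaxed away
        agg[key] = min(agg.get(key, float("inf")), c)
    return [(a, b, c) for (a, b), c in agg.items()]

def shortest_path(arcs, source, sink):
    """Bellman-Ford, sufficient for small illustrative graphs."""
    dist = defaultdict(lambda: float("inf"))
    dist[source] = 0.0
    for _ in range(len({u for a in arcs for u in a[:2]})):
        for (u, v, c) in arcs:
            dist[v] = min(dist[v], dist[u] + c)
    return dist[sink]

arcs = [(0, 1, 2.0), (0, 2, 3.0), (1, 3, 4.0), (2, 3, 1.0)]
cluster_of = {0: "s", 1: "m", 2: "m", 3: "t"}                # merge states 1 and 2
print(shortest_path(arcs, 0, 3))                             # exact value: 4.0
print(shortest_path(aggregate(arcs, cluster_of), "s", "t"))  # lower bound: 3.0
```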

For designing heuristics, the problems generated by dynamic programs / algebraic languages are suited to so-called structured learning techniques, where the objective is to learn how to approximate a hard combinatorial problem (typically a sequencing problem with resource constraints) by means of a simpler problem (typically a shortest-path problem on a graph). More classical learning techniques can also be used to guess the right parameters (Lagrangian or surrogate multipliers), or to guess the right decisions to make (state-space modification, relaxation of some constraints, etc.).
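
Schematically (a generic sketch with our notation, following the usual structured-learning viewpoint rather than a specific team result), an instance x of the hard problem is encoded into a graph D(x) with arc features φ(a, x), a parameter vector θ defines surrogate arc costs, and the heuristic returns the decoded solution of a shortest-path problem,

```latex
\[
y(\theta, x) \;\in\; \arg\min_{y \in \mathcal{P}(D(x))} \; \sum_{a \in D(x)} \big(\theta^\top \varphi(a, x)\big)\, y_a ,
\]
```

where P(D(x)) is the set of (indicator vectors of) feasible paths; θ is then learned by minimizing, over a training set, a loss comparing y(θ, x) with reference solutions of the hard problem.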

In line with the challenges highlighted above, we started to address automatic heuristics for dynamic programs with side constraints based on structured-learning techniques. This work had already begun as a collaboration between F. Clautiaux, A. Froger and A. Parmentier (CERMICS). The objective of the project is to determine the best way to apply machine-learning techniques to obtain heuristics via projection, pruning, etc. To work on this challenge, we collaborate with the Centre Aquitain des Technologies de l'Information et Électroniques (CATIE). Fulin Yan, a student from INSA Rennes, did an internship with our team on this subject. We are now waiting for a funding decision in order to offer him a CIFRE PhD thesis.

We additionally started to lay the scientific foundations for a generic framework that embeds various techniques from dynamic programming based on regular languages, resource-constrained shortest paths, and decision diagrams. We are comparing the different elements coming from the different fields (SAT / MIP / heuristics), determining the common parts and analyzing the main differences between them. Our objective is to develop a common formalism and express the most important methods of the literature using this formalism. This is a work in progress between F. Clautiaux, B. Detienne and A. Froger. We hired Luis Lopes Marques as a PhD student starting from October 2022 to work on this subject. This thesis is financed by our ANR project AD-LIB.

In the near future, we will design exact convergent methods based on aggregation/disaggregation techniques (e.g., use of problem-dependent or primal/dual information) relying on the generic framework cited above. We will focus on finding a systematic way 1) to aggregate an initial model to yield either a relaxation or a restriction and 2) to iteratively disaggregate the current model for obtaining better dual bounds. This is part of our ANR project AD-LIB.

We will also study matheuristics for problems based on time-indexed models. This work will focus on the ability of the method to provide the best possible solution within a given time limit. This is a challenge when solving operational problems where the decision-making time is limited. The key area of research will be (i) how to disaggregate in a way that a good feasible solution can be provided as fast as possible (search strategy) and (ii) how to modify infeasible solutions provided by an aggregated model in order to make them feasible without impairing their cost. This is also a part of the ANR project AD-LIB.

Upon successful completion of the above objectives, we may explore many extensions and generalizations of our framework. Among them, we may cite new mechanisms to integrate several different state-space relaxations in the same solution process, and extensions to non-linear constraints and objectives, or to dynamic programs based on semi-rings other than (max,+). In the longer term, we wish to explore the links with polyhedral theory, and develop extensions of the framework from regular languages to algebraic languages (and to extend our algorithms from graph-based algorithms to hypergraph-based algorithms). Polyhedral theory can be used to improve the performance of our algorithms for solving dynamic programs with side constraints, and to provide the equivalent of the branch-and-cut algorithm for dynamic programs.

3.3 Structure analysis and problem specific studies

Polyhedral analysis  15 remains a part of our project. When integer linear programming methods are involved, information on the structure of the polyhedron representing the feasible region is key for solving hard problems effectively. The most spectacular success of this kind of method can be found in Concorde  14, a software package dedicated to the traveling salesman problem. Network design has also been the subject of many contributions (see e.g.  38). Since we aim to propose methods that can apply to broad classes of problems, we will focus on families of constraints/problems that occur in a large number of practical problems (including capacity, disjunction, precedence or set-covering constraints). There are two promising directions that we can follow in order to study these structures.

When the linear relaxation of an integer programming formulation is of excellent quality but requires a large computational time, being able to compute good dual solutions rapidly provides useful dual bounds for combinatorial optimization problems. This technique was used for the classical cutting-stock problem under the name of dual-feasible functions (see  17). These functions lead to bounds of linear complexity that have an excellent behaviour on average. This concept has already been extended to several packing and location problems (see  50, 60). We plan to explore further extensions to common problem/constraint types.
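
As a concrete example (our own illustration, in the spirit of the Fekete and Schepers family of functions, not a result of the team), the following linear-time computation gives a valid lower bound for bin packing that can beat the plain volume bound:

```python
# A classical dual-feasible function: for a parameter eps in (0, 1/2], map each
# scaled item size x in [0,1] to 1 if x > 1 - eps, to x if eps <= x <= 1 - eps,
# and to 0 if x < eps. Since the images of the items of any single bin sum to at
# most 1, summing the images of all items gives a valid lower bound on the
# number of bins, computable in linear time.
import math

def dff(x, eps):
    if x > 1.0 - eps:
        return 1.0
    if x < eps:
        return 0.0
    return x

def bin_packing_lower_bound(sizes, capacity, eps):
    scaled = [s / capacity for s in sizes]
    return math.ceil(sum(dff(x, eps) for x in scaled))

# Items of sizes 8, 8, 3, 1 in bins of capacity 10: the volume bound gives
# ceil(20/10) = 2, while the DFF bound gives 3 (which is the optimum here).
print(math.ceil(sum([8, 8, 3, 1]) / 10))                  # -> 2
print(bin_packing_lower_bound([8, 8, 3, 1], 10, eps=0.3)) # -> 3
```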

Many hard problems on graphs become easy when the graph has a special structure  39, typically perfect graphs such as trees, interval or triangulated graphs (see for example  53, 65). In most problems, the graphs do not belong to an easy class. However, one can compute effective relaxations by exhibiting subgraphs of suitable structure in these general graphs. Unfortunately, for many classes, the problem of finding a subgraph belonging to the class that maximizes a given criterion is NP-hard. We plan to study methods for finding efficient models to compute such relaxations.

In line with the challenges highlighted above, our first objective is to work on polyhedral analysis and computational studies of formulations for the "best" interval subgraph of a general graph. We have identified several families of formulations, among them strong formulations based on an exponential number of variables and/or constraints. Pricing algorithms and separation problems have to be studied in order to efficiently solve these formulations. The same work can be undertaken with chordal graphs, which offer a stronger relaxation (since they are a superclass of interval graphs), for a higher computational cost.

In the longer term, we plan to work on the extension of our work on so-called dual-feasible functions to more complex cases. The theory of superadditive and dual-feasible functions has mainly been studied in the case of a single knapsack constraint. Extending it to the intersection of the knapsack polytope with polytopes related to highly structured constraints (such as precedences or disjunctions) is already a hard challenge.

More generally, we plan to work on several aspects related to polyhedral theory: studying extended formulations generated by algebraic grammars (a possible objective is to use implicit formulations expressed in different state spaces to derive efficient cuts for a base formulation); determining whether exploiting the structure of the problems is a stronger tool to derive dual-feasible functions than learning to guess the best dual values from the input parameters (for instance, using structured learning to compute a smaller dual polytope from an initial one); and studying decomposition schemes based on graph decomposition techniques, such as path- or tree-decompositions.

4 Application domains

Although we aim to develop generic methodologies, the types of problems solved will play an important role in the choice of paradigms employed, and in the methods used to address these problems. We will mainly focus on two classes of problems: those that possess several levels of decisions, and those that have some uncertain parameters.

These types of problems are prevalent in certain application fields, including energy and supply chains. In both fields, our objectives are in line with the aspirations of modern societies (reducing pollution, improving the sustainability of human activities, and designing better and more robust supply chains). Our optimization tools will be useful in accompanying the profound shifts that are needed in these sectors, by taking into account the interconnection between the different problems faced by the companies.

In energy, the main challenges we face are related to the uncertainty of both production and consumption. The uncertainty in production grows with the development of renewable energy production, while uncertainty in consumption remains driven by weather. This leads to large-scale robust and/or stochastic optimization problems, which push our methodologies to their limits. A typical problem is to ensure the technical feasibility of energy transition scenarios in which the share of nuclear power in the energy mix decreases, leading to an increasing level of uncertainty.

Supply chain and logistics are also a fertile playground for our research. In particular, successful applications in routing, production planning, inventory control, warehouse optimization, or network design are numerous. Besides, new technologies such as the Internet of Things or the physical internet bring new core optimization problems and the need for new relevant mathematical models and solution approaches. The main challenges are to deal with integrated problems, including location decisions, inventory, routing, packing, and employee timetabling, among others. Current methods tend to take tactical and operational decisions independently despite the fact that they are interrelated problems. Supply chains also have to be robust to uncertain parameters (from regular variations of the system parameters to disaster management).

4.1 Integrated problems

Most practical optimization problems are complex, i.e., they involve different types of decisions to be made. Such problems are commonly called integrated. They often involve different time scales related to strategic, tactical and operational decisions. A classical approach to solve such problems is their disintegration into independent problems or stages, where the solution of one stage is the input of the next one. We prefer the term "disintegration" here instead of "decomposition" to avoid confusion with the decomposition approaches presented above. The disintegration approach usually results in highly sub-optimal solutions. There is a large potential for improving the quality of solutions if two or more decision stages are considered together.

Recently, there has been an increase in interest in solving integrated optimization problems. Such problems may involve, for example, the integration of production and outbound distribution (production-distribution problem  41), facility location and vehicle routing (location-routing problem  67), inventory management and vehicle routing (inventory routing problem  32), and different levels of distribution (two-echelon routing  54 and cross-docking based distribution).

As we point out above, integrated problems are common in practice but difficult to solve, since they involve synchronization between stages, and different time scales in different stages. Moreover, different classes of problems are usually encountered in different stages. This means that different exact approaches are usually applied to solve such classes of problems. It is often not known how one can efficiently combine these approaches to solve an integrated problem. Thus, heuristic algorithms are used in a vast majority of cases. However, we believe that advances in efficiency and ease of use of exact algorithms and decomposition algorithms in particular allow us to think of applying them here.

Exact approaches are usually limited to small instances of integrated problems, while real-life instances are tackled by heuristic approaches. Nevertheless, for estimating the quality of these heuristics we need approaches that obtain lower bounds of a good quality, even for large scale instances. We think that BCP algorithms are good candidates to obtain such bounds. The column generation approach has however several drawbacks when applied to large-scale integrated problems. When solving pure academic problems, the bottleneck of BCP algorithms is usually the solution of the pricing problem. On the contrary, when dealing with integrated real-life problems, the size of the master problem may become too large and the solution of the restricted master LP becomes a bottleneck. One possible approach to tackle this problem is a dynamic aggregation of constraints in the master LP  28. Another approach could be to use machine learning techniques to choose “good” columns from a large pool generated by the pricing problem. The goal here is to help the column generation algorithm converge faster in the case of a “heavy” master problem or to find “compatible” columns to obtain better primal solutions.

Column generation-based algorithms may also prove useful for obtaining good feasible solutions for integrated problems. A potential direction is to develop matheuristics, i.e., heuristic methods that make use of sophisticated algorithms initially designed for exact methods. The goal is to benefit from both worlds: exact methods to exploit the special structures of some subproblems, and heuristics to produce solutions in a small amount of time. Column generation-based matheuristics have already shown good results for some integrated problems  16. A common approach is to use heuristics and/or column generation to obtain interesting columns and then solve the restricted master problem as a MIP with a time limit. Then the set of columns can possibly be updated and the restricted master solved again. However, generic frameworks for such approaches are absent, although they would be highly welcome for a wide class of problems.
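
Concretely, the core step of such a matheuristic amounts to solving, under a time limit, the restricted master over the current column pool G' with integrality restored (a generic sketch with our notation; the covering constraints and binary domains are illustrative):

```latex
\[
\min \sum_{g \in G'} c_g\, \lambda_g
\quad \text{s.t.} \quad
\sum_{g \in G'} a_g\, \lambda_g \ge b, \qquad \lambda_g \in \{0,1\} \quad \forall g \in G',
\]
```

after which the pool G' may be enriched with new columns and the MIP solved again.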

Our initial efforts will be concentrated on one or two well-chosen problems of this type. The inventory routing problem (IRP) is probably the most "academic" and simple-to-state example of an integrated problem that remains very hard to solve for both exact and heuristic methods 35. In this problem, the decision maker in charge of the delivery is also in charge of the stock level of his/her customers. The IRP is widely encountered in practice. The integration of electric vehicles in transportation is also a timely topic. Planning the charging operations of electric vehicles is necessary to account for a wide variety of costs (e.g., time-dependent energy costs), charging infrastructure constraints (e.g., grid restrictions, limited number of chargers), battery degradation, and the potential integration of Vehicle-to-Grid technologies. Taking this variety of constraints into account introduces complex coupling decisions between charging and routing, which leads to integrated problems.

We plan to design a comprehensive modelling and solution framework for planning the charging activities of a fleet of electric vehicles considering time-dependent costs and the potential integration of Vehicle-to-Grid technologies. Particular attention is directed towards decreasing the maximum power required in the planning interval. This is a joint work with O. Jabali (Politecnico di Milano, Italy) that has already begun with a master student from Politecnico di Milano visiting our research team for 4 months last year and studying the case of a fleet of electric buses. Our objectives are to design advanced mathematical methods to produce high-quality solutions in a time-efficient manner, as well as to propose an exact solution method to solve electric vehicle routing problems (E-VRPs) with limited charging infrastructure constraints (based on a previous work  40) using a branch-cut-and-price algorithm. We will investigate the trade-offs that exist between driving time and energy consumption (e.g., existence of alternative paths, speed reduction), either by refining the abstraction of the road network on which E-VRPs are defined or by working directly with the road network.

In the long term, our objective is to be able to cope with an increasing amount of integrated problems, with a focus on supply-chain applications (scheduling, location, inventory, routing, etc.). Our work on this topic will benefit from our findings on new decomposition methods. Once our main objectives are achieved, we will explore methodological tools to take different sources of uncertainty into account (e.g., energy consumption of electric vehicles, time-dependent energy costs) using robust optimization or stochastic programming paradigms.

4.2 Robust optimization

In most decision-making problems, the data used in the mathematical model is subject to some form of uncertainty. This uncertainty can be caused by measurement errors, by variability over the duration of the processes under study, or by a simple lack of access to reliable data. Incomplete information can also come from the presence of competitors or adversarial individuals whose policies are not known to the decision maker. Recent events have also shown that the capability to withstand major disasters is an issue for important organizations and critical infrastructure.

There are two commonly accepted paradigms that are used to incorporate uncertainty in mathematical programming: stochastic optimization (including chance constraints) and robust optimization. In stochastic optimization, one assumes that a probability distribution is known for the uncertain parameters and optimizes a statistical risk measure such as the expected value or the conditional value-at-risk. In this new project, our main tool for dealing with uncertainty will be robust optimization. Unlike stochastic programming, which requires exact knowledge of the probability distributions, robust optimization only describes the variability of the uncertain parameters through bounded uncertainty sets. Further, by replacing the risk measures employed in stochastic optimization with the worst-case realization, robust optimization is better suited to applications where the decision-making process involves high risks or adversarial participants.
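
A standard example of such a bounded uncertainty set (a textbook sketch with our notation) is the budgeted set popularized by Bertsimas and Sim, where each uncertain coefficient deviates from its nominal value by at most its maximal deviation and the total amount of deviation is bounded by a budget Γ:

```latex
\[
\Xi \;=\; \Big\{ \xi \;:\; \xi_i = \bar{\xi}_i + \hat{\xi}_i\, \delta_i, \;\; |\delta_i| \le 1 \ \forall i, \;\; \sum_i |\delta_i| \le \Gamma \Big\}.
\]
```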

In the last 20 years, the robust optimization field has seen significant progress. Static robust optimization problems, which assume that all decisions are here-and-now, have received considerable attention. They have been formulated and studied with polyhedral, conic and ellipsoidal uncertainty sets. From a complexity viewpoint, it has been shown that these problems are barely harder than their deterministic counterparts in most practical cases. From the numerical viewpoint, mixed-integer linear (or second-order conic) reformulations have been proposed and can handle large-scale problems.

Despite these rather encouraging results, static models suffer from over-conservatism. Indeed, in these models, no recourse action can be taken to change or adapt the solution once the uncertain values have been revealed. This substantially limits the applicability of the robust optimization paradigm since in most applications that involve temporal decision-making, it is possible to adapt the solution to (at least partially) mitigate the effects of uncertainty.

Acknowledging the importance of adjustable models, the scientific community has started to address solution methods for these problems. While it is theoretically well known that even two-stage adjustable robust linear programs are NP-hard, approximate (affine decision rules)  21 or exact (row-and-column generation algorithms)  20, 69 solution methods for problems featuring continuous recourse variables have been proposed in the literature. These two sets of algorithmic tools have made it possible to solve exactly or approximately a large number of adjustable robust optimization problems with continuous recourse. Several studies have also sought to understand the quality of the bounds provided by affine decision rules, as well as proposed extensions to piecewise affine functions.

The problems we wish to investigate in this project are adjustable robust optimization problems where recourse decisions are integer. The situation is significantly more complex for this setting than for the case of continuous recourse. The two aforementioned approaches do not apply: affine decision rules provide continuous recourse actions by construction, while the separation problem in the row-and-column generation algorithm is based on linear programming duality.

Some numerical algorithms, which come with no theoretical guarantees regarding their computational complexity, aim to solve adjustable robust optimization problems with integer recourse approximately. They are all based on simplifying the type of feasible recourse actions, either by partitioning the uncertainty set or by imposing that the recourse decisions be chosen among a predetermined finite set of solutions (finite/K-adaptability). Unfortunately, these methods hardly scale up, and the recent results  24, 45 are limited to small problem instances, which can be partially explained by the fact that they rely on big-M reformulations, known to yield poor continuous relaxations.
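
For reference, the K-adaptability scheme restricts the recourse to K candidate policies computed here-and-now (a generic sketch with our notation):

```latex
\[
\min_{x \in X,\; y^1, \dots, y^K \in Y} \;\; \max_{\xi \in \Xi} \;\;
\min_{\substack{k \in \{1,\dots,K\}:\\ (x,\, y^k)\ \text{feasible for } \xi}} \; c^\top x + d(\xi)^\top y^k ,
\]
```

so that, once ξ is revealed, the best feasible candidate among y^1, ..., y^K is applied; the big-M reformulations mentioned above typically arise when linearizing the feasibility conditions inside the inner minimization.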

In the following years, we plan to develop exact and approximate solution methods for adjustable robust optimization problems with integer recourse. On the one hand, we will study relatively general row-and-column generation algorithms that are based on stronger continuous relaxations than what was done in the literature (aforementioned network formulations and decision diagram-based approaches seem to be promising leads in achieving this goal). On the other hand, we will focus on classes of specific robust models (special cases), which will be studied both from a theoretical and a numerical perspective. Some of our efforts will be spent on improving the efficiency of existing methods such as K-adaptability since these approaches could benefit from the considerable numerical experience of the team.

We developed a first approach to adjustable robust optimization problems with binary recourse decisions in 18. The main idea of this work is to convexify the recourse feasible region using a Dantzig-Wolfe reformulation. Promising results were presented for the special case where the coupling constraints are deterministic and satisfy additional technical assumptions. In this case, the inner maximization and minimization problems can be interchanged, resulting in a large-scale deterministic equivalent model. We plan to extend this technique to the more general case and to understand how to relax the aforementioned technical assumptions.

In the near term, we plan to carry out several projects to respond to the challenges we outline above. We plan to study solution approaches for complex problems arising in future 5G networks. This topic is the subject of a CIFRE thesis supervised by Boris Detienne and Pierre Pesneau in collaboration with Orange. We will also address complex network-design and location problems (topic of ANR project DE-SIDE, in collaboration with Sobolev Institute and Kedge Business School). We will determine easy and hard cases for robust multi-stage network-design problems, depending on the set of constraints, the definition of the uncertainty and the possible second-stage actions, with a focus on integer recourse.

From a methodological perspective, we will address primal and dual bounds for two- and multi-stage adjustable robust optimization problems (topic of the ANR-JCJC project DROI, which was granted to Ayse N. Arslan). Building upon the results of 18, we propose to deal with constraint uncertainty in adjustable robust optimization using Lagrange and Fenchel duality, which will provide dual bounds. Several issues need to be resolved in investigating this approach, such as the optimization of the (infinite-size) dual problems and how to close the duality gap. We additionally propose to study improved decision rules for these problems.

In the long term, our objectives in relation with this challenge are as follows. First, we want to design efficient approximate approaches to solve multi-stage adjustable robust optimization problems. Although exact methods seem to be out of reach for the current state of the art in the general case, we will also strive to identify special cases which can be solved to exact optimality with numerically viable algorithms. Second, we seek to bridge the gap between multi-stage integer robust optimization and combinatorial games. Some similar concepts are used in both types of problems (typically, min-max-min problems have to be solved). This may allow us to disseminate our techniques to other fields, as well as to adapt results obtained for those fields to the context of adjustable robust optimization.

5 Social and environmental responsibility

5.1 Impact of research results

Our work 3 on a stochastic generation and transmission expansion planning problem studied at RTE will help the company in their future strategic studies (e.g., studies similar to 62, 61). Specifically, we show how to incorporate into the mathematical formulation of the expansion planning problem the French legislation that imposes that the expected number of hours with energy not served should be less than or equal to three per year. This work was part of the PhD thesis of Xavier Blanchot.

6 Highlights of the year

Our two highlights for 2023 are related to optimization and energy.

Our paper 3 was selected as one of the editors' choice articles of the European Journal of Operational Research for the first semester of 2023. This paper describes a method for improving the solution of stochastic linear programming models with a large number of scenarios, which is used by RTE to solve generation and transmission expansion planning problems.

We are also part of two new projects related to the optimization of energy systems: project PowDev, funded by the PEPR "Technologies Avancées pour les Systèmes Electriques" (TASE), and the Inria/EDF challenge "Gérer les systèmes électriques de demain".

7 New software, platforms, open data

7.1 New software

7.1.1 BaPCod

  • Name:
    A generic Branch-And-Price-And-Cut Code
  • Keywords:
    Column Generation, Branch-and-Price, Branch-and-Cut, Mixed Integer Programming, Mathematical Optimization, Benders Decomposition, Dantzig-Wolfe Decomposition, Extended Formulation
  • Functional Description:
    BaPCod is a prototype code that solves Mixed Integer Programs (MIPs) by applying reformulation and decomposition techniques. The reformulated problem is solved using branch-and-price-and-cut (column generation) algorithms, Benders approaches, network flow and dynamic programming algorithms. These methods can be combined in several hybrid algorithms to produce exact or approximate solutions (primal solutions with a bound on the deviation from the optimum).
  • Release Contributions:
    Bug fixes and enhancements. Better support for compact MIP models. Debug solution support. More cutting planes statistics. Apple M1 support. Experimental CLP solver support. Compiled RCSP library is now included.
  • News of the Year:
    First public release.
  • URL:
  • Publication:
  • Contact:
    Ruslan Sadykov
  • Participants:
    Artur Alves Pessoa, Boris Detienne, Eduardo Uchoa Barboza, Franck Labat, François Clautiaux, François Vanderbeck, Halil Sen, Issam Tahiri, Michael Poss, Pierre Pesneau, Romain Leguay, Ruslan Sadykov
  • Partners:
    Université de Bordeaux, CNRS, IPB, Universidade Federal Fluminense

7.1.2 VRPSolver

  • Name:
    VRPSolver
  • Keywords:
    Column Generation, Vehicle routing, Numerical solver
  • Scientific Description:
    Major advances were recently obtained in the exact solution of Vehicle Routing Problems (VRPs). Sophisticated Branch-Cut-and-Price (BCP) algorithms for some of the most classical VRP variants now solve many instances with up to a few hundred customers. However, adapting and reimplementing those successful algorithms for other variants can be a very demanding task. This work proposes a BCP solver for a generic model that encompasses a wide class of VRPs. It incorporates the key elements found in the best recent VRP algorithms: ng-path relaxation, rank-1 cuts with limited memory, and route enumeration, all generalized through the new concept of "packing set". This concept is also used to derive a new branching rule based on accumulated resource consumption and to generalize the Ryan and Foster branching rule. Extensive experiments on several variants show that the generic solver has an excellent overall performance, in many problems being better than the best existing specific algorithms. Even some non-VRPs, like bin packing, vector packing and generalized assignment, can be modeled and effectively solved.
  • Functional Description:
    This solver allows one to model and solve to optimality many combinatorial optimization problems belonging to the class of vehicle routing, scheduling, packing and network design problems. The problem is formulated using variables, a linear objective function, linear and integrality constraints, the definition of graphs, resources, and a mapping between graph arcs and variables. A complex Branch-Cut-and-Price algorithm is used to solve the model. A new concept of elementarity and packing sets is used to pass additional information to the solver, so that several state-of-the-art Branch-Cut-and-Price components can be used to radically improve the efficiency of the solver. The interface of the solver is implemented in Julia using the JuMP package. To simplify the installation and usage, the solver is distributed as a Docker image. The solver can be used only for academic purposes.
  • Release Contributions:
    Version 0.4.1a allows users to continue using the software; it removes the current date check.
  • URL:
  • Publication:
  • Contact:
    Ruslan Sadykov
  • Participants:
    Ruslan Sadykov, Eduardo Uchoa Barboza, Artur Alves Pessoa, Eduardo Queiroga, Teobaldo Bulhões, Laurent Facq
  • Partners:
    Universidade Federal Fluminense, Universidade Federal da Paraiba

7.1.3 Benders by batch

  • Keywords:
    Linear optimization, Stochastic optimization, Large scale, Benders Decomposition, Algorithm
  • Functional Description:

    A C++ implementation of the Benders by batch algorithm described in the article: Xavier Blanchot, François Clautiaux, Boris Detienne, Aurélien Froger, Manuel Ruiz. (2023). The Benders by batch algorithm: design and stabilization of an enhanced algorithm to solve multicut Benders reformulation of two-stage stochastic programs. European Journal of Operational Research. 10.1016/j.ejor.2023.01.004

    Specifically, the code contains the following optimization algorithms for solving stochastic two-stage linear programs (IBM ILOG CPLEX is required):

      • the Benders by batch algorithm (with or without stabilization),
      • a classic Benders decomposition algorithm (monocut, multicut, or cut aggregation by batches of subproblems; with or without in-out stabilization),
      • a level bundle algorithm (monocut),
      • the IBM ILOG CPLEX barrier algorithm.

  • Authors:
    Xavier Blanchot, Aurélien Froger, Manuel Ruiz
  • Contact:
    François Clautiaux
  • Partner:
    RTE

8 New results

8.1 Decomposition methods and extended formulations

Our team made several important contributions around the "decomposition methods" axis and its objectives in the past year.

In 3, a new finitely-convergent exact algorithm to solve two-stage stochastic linear programs is proposed. Based on the multicut Benders reformulation of such problems, with one subproblem for each scenario, this method relies on a partition of the subproblems into batches. The key idea is to solve only a small proportion of the subproblems at most iterations, by detecting as early as possible that a first-stage candidate solution cannot be proven optimal. Additionally, a general framework to stabilize the algorithm is developed.

In 12, we study the problem of designing a cabinet made up of a set of shelves that contain compartments whose contents slide forward on opening. Considering a set of items candidate to be stored in the cabinet over a given time horizon, the problem is to design a set of shelves, a set of compartments in each shelf and to select the items to be inserted into the compartments. The objective is to maximize the sum of the profits of the selected items. We call our problem the Storage Cabinet Physical Design (SCPD) problem. The SCPD problem combines a two-dimensional guillotine cutting problem for the design of the shelves and compartments with a set of temporal knapsack problems for the selection and assignment of items to compartments. We formalize the SCPD problem and formulate it as a maximum cost flow problem in a decision hypergraph with additional linear constraints. To reduce the size of this model, we break symmetries, generalize graph compression techniques and exploit dominance rules for precomputing subproblem solutions. We also present a set of valid inequalities to improve the linear relaxation of the model. We empirically show that solving the arc flow model with all our enhancements outperforms solving a compact mixed integer linear programming formulation of the SCPD problem.

In 7, we study how a regulatory constraint limiting a measure of unserved demand, called Loss Of Load Expectation (LOLE), can be incorporated into a strategic version of a stochastic generation and transmission expansion planning problem. This problem is tackled by the French Transmission System Operator RTE for producing prospective reports on the evolution of the electricity network. We show that a direct inclusion of the constraint into the extensive form of the two-stage stochastic problem leads to a formulation that violates the time-consistency principle. To obtain a valid model, we use bilevel programming and introduce a formulation of the problem in which the leader and follower have the same objective function. To solve this formulation, we propose a matheuristic that embeds a Benders decomposition algorithm in a binary search on the total investment cost. We performed computational experiments to study the practical difficulty of the problem and validate the proposed solution method. Our experiments show that solving the single-level reformulation of the problem obtained using the KKT complementarity conditions is intractable in practice, even for small-size instances, and that a simple heuristic procedure is not sufficient to compute feasible solutions for all test cases. This is not the case for our matheuristic, which finds a feasible solution for all instances of our test bed.
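
One plausible reading of the matheuristic's outer loop is sketched below, assuming that feasibility with respect to the reliability constraint is monotone in the allowed investment budget; the oracle is a placeholder for a Benders-based solve of the budget-constrained problem, not the implementation used in 7.

    # Schematic binary search on the total investment cost (illustrative assumption only).
    def binary_search_on_budget(lo, hi, benders_oracle, eps=1e-3):
        """Return a solution for the smallest budget in [lo, hi] accepted by the oracle."""
        best_solution = None
        while hi - lo > eps:
            mid = 0.5 * (lo + hi)
            solution = benders_oracle(budget=mid)   # Benders solve of the budgeted problem
            if solution is not None:                # feasible w.r.t. the LOLE-type constraint
                best_solution, hi = solution, mid
            else:
                lo = mid
        return best_solution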

In 1, we study an efficient implementation of the branch-and-price algorithm for the kidney exchange problem. The kidney exchange problem (KEP) is an increasingly important healthcare management problem in most European and North American countries which consists of matching incompatible patient-donor pairs in a centralized system. Despite the significant progress in the exact solution of KEP instances in recent years, larger instances still pose a challenge especially when non-directed donors are taken into account. We present a branch-and-price algorithm for the exact solution of KEP in the presence of non-directed donors. This algorithm is based on a disaggregated cycle and chains formulation where subproblems are managed through graph copies. We formalize and analyze the complexity of the resulting pricing problems and identify the conditions under which they can be solved using polynomial-time algorithms. We propose several algorithmic improvements for the branch-and-price algorithm as well as for the pricing problems. We extensively test all of our implementations using a benchmark made up of different types of instances. Our numerical results show that the proposed algorithm can be up to two orders of magnitude faster compared to the state-of-the-art. All models and algorithms developed in this study are additionally gathered in an open-access Julia package, KidneyExchange.jl, a first in the kidney exchange literature.
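
As background (our notation), such disaggregated models build on the classical cycle-and-chain formulation of the KEP, in which $\mathcal{C}$ denotes the set of feasible exchange cycles and chains of bounded length, $w_c$ the weight of $c$, and the columns $x_c$ are generated dynamically by the pricing problems:

    \[
      \max \sum_{c \in \mathcal{C}} w_c\, x_c
      \quad\text{s.t.}\quad
      \sum_{c \in \mathcal{C}:\ v \in c} x_c \le 1 \quad \forall v \in V,
      \qquad x_c \in \{0,1\} \quad \forall c \in \mathcal{C}.
    \]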

8.2 Robust optimization

We also made some methodological and applied progress in the field of robust optimization.

In 5, we consider the scheduling problem that consists of minimizing the weighted number of tardy jobs on one machine. It is a classical and intensively studied scheduling problem. In this paper, we develop a two-stage robust approach, where exact weights are known after accepting to perform the jobs, and before sequencing them on the machine. This assumption allows diverse recourse decisions to be taken in order to better adapt one's mid-term plan. The contribution of this paper is twofold: first, we introduce a new scheduling problem and model it as a min-max-min optimization problem with mixed-integer recourse by extending existing models proposed for the deterministic case. Second, we take advantage of the special structure of the problem to propose two solution approaches based on results from the recent robust optimization literature: namely the finite adaptability (Bertsimas and Caramanis, 2010) and a convexification-based approach (Arslan and Detienne, 2022). We also study the additional cost of the solutions if the sequence of jobs has to be decided before the uncertainty is revealed. Computational experiments are reported to analyze the effectiveness of our approaches.
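
Schematically (our notation, rejection costs and scheduling feasibility details omitted), the problem has the min-max-min form

    \[
      \min_{x \in X}\; \max_{w \in \mathcal{W}}\; \min_{y \in \mathcal{Y}(x)}\; \sum_{j} w_j\, y_j,
    \]

where $x$ encodes the accepted jobs, $\mathcal{W}$ is the uncertainty set of the weights revealed after acceptance, and $y_j \in \{0,1\}$ indicates whether accepted job $j$ ends up tardy in the recourse schedule.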

In 4, we study optimization problems where some cost parameters are not known at decision time and the decision flow is modeled as a two-stage process within a robust optimization setting. We address general problems in which all constraints (including those linking the first and the second stages) are defined by convex functions and involve mixed-integer variables, thus extending the existing literature to a much wider class of problems. We show how these problems can be reformulated using Fenchel duality, allowing us to derive an enumerative exact algorithm, for which we prove asymptotic convergence in the general case, and finite convergence for cases where the first-stage variables are all integer. An implementation of the resulting algorithm, embedding a column generation scheme, is then computationally evaluated on a variant of the Capacitated Facility Location Problem with uncertain transportation costs, using instances that are derived from the existing literature. To the best of our knowledge, this is the first approach providing results on the practical solution of this class of problems.

In 6, we study a generalization of static robust optimization problems that allows for interactions of the decision-maker with the uncertain parameters. Specifically, we consider the possibility of proactive actions by the decision-maker so as to reduce the impact of uncertain parameters, known as uncertainty reduction. We first show that when the uncertainty reduction decisions are constrained, the resulting optimization problem is NP-hard, echoing an earlier result from the literature. We further show that relaxing these constraints leads to solving a linear number of deterministic problems in certain special cases, leading to a polynomial-time algorithm. We provide insights into possible MILP reformulations and illustrate the practical relevance of our theoretical results on shortest path instances from the literature.
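
As an illustration of the setting only (our notation, not a formulation from 6), uncertainty reduction can be pictured as letting the decision-maker choose, alongside $x$, a vector $w$ of proactive actions that shrinks the uncertainty set:

    \[
      \min_{x \in X,\ w \in W}\; c^\top x + d^\top w + \max_{\xi \in \Xi(w)} f(x, \xi),
      \qquad
      \Xi(w) = \{\, \xi \;:\; |\xi_i| \le \bar{\xi}_i\,(1 - \rho_i w_i)\ \ \forall i \,\},
    \]

where constraints on $w$ (for instance a budget on the number of reduction actions) correspond to the constrained case discussed above.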

In 9, we design primal and dual bounding methods for multistage adjustable robust optimization (MSARO) problems motivated by two decision rules rooted in the stochastic programming literature. From the primal perspective, this is achieved by applying decision rules that restrict the functional forms of only a certain subset of decision variables, resulting in an approximation of MSARO as a two-stage adaptive robust optimization problem. We leverage the two-stage robust optimization literature in the solution of this approximation. From the dual side, decision rules are applied to the Lagrangian multipliers of a Lagrangian dual of MSARO, resulting in a two-stage stochastic optimization problem. We argue that the quality of the resulting dual bound is dependent on the distribution chosen when developing the dual formulation. We therefore define a distributionally robust problem with the aim of optimizing the obtained bound and develop solution methods depending on the nature of the recourse variables. Our framework is general-purpose and does not require strong assumptions such as a stage-wise independent uncertainty set, and can consider integer recourse variables. Computational experiments on newsvendor, location-transportation, and capital budgeting problems show that our bounds yield considerably smaller optimality gaps compared to the existing methods.
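
As a classical example of a decision rule in the sense used above (not necessarily the one adopted in 9), an affine rule restricts a recourse variable $y_t$ to be an affine function of the uncertainty observed up to stage $t$,

    \[
      y_t(\xi_{[t]}) = y_t^0 + \sum_{s \le t} Y_{t,s}\, \xi_s,
    \]

so that optimizing over the coefficients $y_t^0$ and $Y_{t,s}$ yields a tractable restriction of the original problem; in 9, such restrictions are applied to a subset of the primal variables, and to Lagrangian multipliers on the dual side.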

8.3 Routing

In 2, we introduce, study and analyse several classes of compact formulations for the symmetric Hamiltonian p-Median Problem (HpMP). Given a positive integer p and a weighted complete undirected graph G=(V,E) with weights on the edges, the HpMP on G is to find a minimum weight set of p elementary cycles partitioning the vertices of G. The advantage of developing compact formulations is that they can be readily used in combination with off-the-shelf optimization software, unlike other types of formulations possibly involving the use of exponentially sized sets of variables or constraints. The main part of the paper focuses on compact formulations for eliminating solutions with fewer than p cycles. Such formulations are less well known and studied than formulations which prevent solutions with more than p cycles. The proposed formulations are based on a common motivation, that is, the formulations contain variables that assign labels to nodes, and prevent fewer than p cycles by stating that different depots must have different labels and that nodes in the same cycle must have the same label. We introduce and study aggregated formulations (which consider integer variables that represent the label of the node) and disaggregated formulations (which consider binary variables that assign each node to a given label). The aggregated models are new. The disaggregated formulations are not, although in all of them new enhancements have been included to make them more competitive with the aggregated models. The two main conclusions of this study are: i) in the context of compact formulations, it is worth looking at the models with integer node variables, which have a smaller size. Despite their weaker LP relaxation bounds, the fewer variables and constraints lead to faster integer resolution, especially when solving instances with more than 50 nodes; ii) the best of our compact models exhibit a performance that, overall, is comparable to that of the best methods known for the HpMP (including branch-and-cut algorithms), solving to optimality instances with up to 226 nodes within 1 hour. This corroborates our message that inequalities for preventing fewer than p cycles are much less well understood.
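
To illustrate the common motivation (a schematic fragment in our notation, assuming for simplicity that one designated depot per cycle is fixed, whereas depot selection is part of the actual formulations): with binary edge variables $x_{uv}$ and an integer label $\ell_v \in \{1,\dots,p\}$ per node, fixing distinct labels on the depots $d_1,\dots,d_p$ and forcing the endpoints of every used edge to share a label prevents solutions with fewer than $p$ cycles:

    \[
      \ell_{d_k} = k \quad (k = 1,\dots,p),
      \qquad
      -(p-1)(1 - x_{uv}) \;\le\; \ell_u - \ell_v \;\le\; (p-1)(1 - x_{uv}) \quad \forall \{u,v\} \in E.
    \]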

In 13, we propose a heuristic algorithm capable of handling multiple variants of the vehicle routing problem with drones (VRPD). Assuming that the drone may be launched from a node and recovered at another, these variants are characterized according to three axes: 1) minimizing the transportation cost or minimizing the makespan, 2) whether the drone may land and wait or not, and 3) single or multiple trucks, each equipped with a single drone. One of our main algorithmic contributions relates to a subproblem of the VRPD, which we refer to as the fixed route drone dispatch problem (FRDDP). Given a sequence of customers being visited by a truck, the FRDDP determines a subset of customers to be visited by the drone. Solving the FRDDP exactly with dynamic programming entails a computational complexity of O(n³), where n is the number of customers contained in the route. Considering that the FRDDP is very frequently solved in local search algorithms, we introduce a heuristic dynamic program (HDP) with a computational complexity of O(n²) for each of the two FRDDP objectives. We embed HDPs in a hybrid variable neighborhood search algorithm, which we reinforce by developing filtering strategies based on the HDP. We evaluate the performance of our algorithm on nine benchmark sets pertaining to four VRPD variants, resulting in 932 instances. Our algorithm computes 651 of 680 optimal solutions and identifies 189 new best-known solutions.
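
To make the structure concrete, here is a minimal illustrative O(n³) dynamic program for a simplified fixed-route dispatch setting (makespan objective, a single drone customer per sortie, waiting allowed); it conveys the flavour of the exact DP discussed above and is not the authors' algorithm. The travel-time matrices t (truck) and d (drone) are assumptions of the sketch.

    # Illustrative O(n^3) DP: route is the fixed truck sequence (node indices into t and d).
    def fixed_route_drone_dispatch(route, t, d):
        n = len(route)
        # prefix[j] = truck time to drive route[0..j] with no sortie
        prefix = [0.0] * n
        for j in range(1, n):
            prefix[j] = prefix[j - 1] + t[route[j - 1]][route[j]]

        INF = float("inf")
        V = [INF] * n   # V[j] = best completion time at position j with the drone on board
        V[0] = 0.0
        for j in range(1, n):
            # Option 1: truck drives the leg (route[j-1], route[j]) with the drone on board.
            V[j] = V[j - 1] + t[route[j - 1]][route[j]]
            # Option 2: a sortie launched at position i, serving position k, rejoining at j.
            for i in range(j - 1):
                for k in range(i + 1, j):
                    truck_leg = (prefix[j] - prefix[i]
                                 - t[route[k - 1]][route[k]] - t[route[k]][route[k + 1]]
                                 + t[route[k - 1]][route[k + 1]])       # truck skips route[k]
                    drone_leg = d[route[i]][route[k]] + d[route[k]][route[j]]
                    V[j] = min(V[j], V[i] + max(truck_leg, drone_leg))  # one waits for the other
        return V[n - 1]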

In 11, we study the problem of determining the optimal partitioning of an urban logistics area served by electrical vehicles (EV). Establishing the size of an EV fleet is a vital decision for logistics operators. In urban settings, this issue is often dealt with by partitioning the geographical area around a depot into service zones, each served by a single vehicle. Such zones ultimately guide daily routing decisions. We cast this problem in a Continuous Approximation (CA) framework. Considering a ring radial region with a depot at its center, we introduce the electric vehicle fleet sizing problem (EVFSP). As the current range of EVs is generally sufficient to perform service in urban areas, we assume that the EV fleet is exclusively charged at the depot, i.e., en-route charging is not allowed. In the EVFSP we account for EV features such as limited range, and non-linear charging and energy pricing functions stemming from Time-of-use (ToU) tariffs. Specifically, we combine non-linear charging functions with pricing functions into charging cost functions, establishing the cost of charging an EV for a target charge level. We propose a polynomial time algorithm for determining this function. The resulting function is non-linear with respect to the route length. Therefore, we propose a Mixed Integer Non-linear Program (MINLP) for the EVFSP, which optimizes both dimensions of each zone in the partition. We strengthen our formulation with symmetry breaking constraints. Furthermore, considering convex charging cost functions, we show that zones belonging to the same ring are equally shaped. We propose a tailored MINLP formulation for this case. Finally, we derive upper and lower bounds for the case of non-convex charging cost functions. We perform a series of computational experiments. Our results demonstrate the effectiveness of our algorithm in computing charging cost functions. We show that it is not uncommon that these functions are non-convex. Furthermore, we observe that our tailored formulation for convex charging cost functions improves the results compared to our general formulation. Finally, contrary to the results obtained in the CA literature for combustion engine vehicles, we empirically observe that the majority of EVFSP optimal solutions consist of a single inner ring.

The optimization community has made significant progress in solving Vehicle Routing Problems (VRPs) to optimality using sophisticated Branch-Cut-and-Price (BCP) algorithms. VRPSolver is a BCP algorithm with excellent performance in many VRP variants. However, its complex underlying mathematical model makes it hardly accessible to routing practitioners. In 10, we introduce VRPSolverEasy, a Python interface to VRPSolver, that does not require any knowledge of Mixed Integer Programming modeling. Instead, routing problems are defined in terms of familiar elements such as depots, customers, links, and vehicle types. VRPSolverEasy can handle several popular VRP variants and arbitrary combinations of them.
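
A hypothetical usage sketch in the declarative style described above is given below; class, method, and parameter names are our assumptions and may differ from the released VRPSolverEasy interface, so the package documentation should be consulted for the actual API.

    # Hypothetical sketch of a VRPSolverEasy-style model (names are assumptions, not the
    # verified API): a single depot, three customers, and one capacitated vehicle type.
    import VRPSolverEasy as vrpse

    model = vrpse.Model()
    model.add_vehicle_type(id=1, start_point_id=0, end_point_id=0,
                           capacity=100, max_number=3, var_cost_dist=1.0)
    model.add_depot(id=0)
    for cust_id, demand in [(1, 20), (2, 35), (3, 15)]:
        model.add_customer(id=cust_id, demand=demand)
    # links carry the distances used by the branch-cut-and-price engine
    for a, b, dist in [(0, 1, 12.0), (0, 2, 10.0), (0, 3, 14.0),
                       (1, 2, 7.5), (1, 3, 11.0), (2, 3, 9.0)]:
        model.add_link(start_point_id=a, end_point_id=b, distance=dist)
    model.solve()
    print(model.solution)   # assumed attribute holding the routes found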

9 Bilateral contracts and grants with industry

9.1 Bilateral contracts with industry

We have a collaboration with Orange, around the PhD thesis of John Jairo Quiroga Orozco which started in October 2022. The project focuses on models and algorithms for the design of robust 5G networks.

Participants: Boris Detienne, Pierre Pesneau, John Jairo Quiroga Orozco.

We have a collaboration with CATIE (the contract is still being negotiated between Orange and University of Bordeaux), around the PhD thesis of Fulin Yan which started in February 2023. The project focuses on machine learning and optimization.

Participants: François Clautiaux, Aurélien Froger, Fulin Yan.

Together with Inria team INOCS, we are part of the Inria/EDF challenge "Gérer les systèmes électriques de demain" (managing tomorrow's electrical systems). The work package "Algorithmes de décomposition pour les problèmes d'investissement long-terme" (decomposition algorithms for long-term investment problems) focuses on decomposition methods for multi-stage integer stochastic programs, with application to capacity planning problems.

Participants: Ayse N. Arslan, Boris Detienne, Aurélien Froger.

10 Partnerships and cooperations

10.1 National initiatives

Dr. Arslan has obtained funding for her ANR-JCJC project DROI, which concentrates on primal and dual bounds for adjustable robust optimization problems. Being a "young researcher" grant, this project will help Dr. Arslan build collaborations at the national and international levels. The team of experts that will help Dr. Arslan conduct this project includes team member Boris Detienne, Michael Poss (CR CNRS, LIRMM), Merve Bodur (Assistant Professor, U. Edinburgh) and Jérémy Omer (Maître de conférences, INSA Rennes).

ANR AD-LIB is coordinated by François Clautiaux. This project is conducted with the LAAS laboratory in Toulouse and Toulouse Business School. We consider general aggregation/disaggregation techniques to address optimization problems that are expressed with the help of sequential decision processes. Our main goals are threefold: a generic formalism that encompasses the aforementioned techniques; more efficient algorithms to control the aggregation procedures; open-source codes that leverage and integrate these algorithms to efficiently solve hard combinatorial problems in different application fields. We will jointly study two types of approaches, MIP and SAT, to reach our goals. MIP-based methods are useful to obtain proven optimal solutions and to produce theoretical guarantees, whereas SAT solvers are strong at detecting infeasible solutions and learning clauses to exclude these solutions. Their combination with CP through lazy clause generation is one of the best tools to solve highly combinatorial and non-linear problems. Aggregation/disaggregation techniques generally make use of many subroutines, which allows an efficient hybridization of the different optimization paradigms. We also expect a deeper cross-fertilization between these different sets of techniques and the different communities.

Strategic Power Systems Development for the Future (PowerDev), funded by PEPR TASE, studies optimization methods and reliability/resilience engineering applied to large-scale electrical power systems. The project is led by CentraleSupélec at the University of Paris Saclay and is composed of a consortium of higher education institutions across France (CentraleSupélec, UVSQ, Université Grenoble Alpes) as well as research organizations (Inria, CNRS). Modern power systems are expected to become increasingly complex to design and operate due to the growing number of renewable energy sources (RES). Renewable energy generation is, by nature, intermittent and introduces an amount of uncertainty that severely affects the physical responses of the power system, particularly in terms of voltage control and frequency regulation [1]. Moreover, RES integration within the power system requires the introduction of many new power electronic devices, which add to the system's complexity and increase its possible failure modes [2,3]. Combined with unexpected initiating events, these two main features can lead to cascading failure risks, triggering disastrous consequences to the power grid and, most notably, large-scale blackouts [4-7]. The economic and societal consequences for the impacted regions are usually massive, with economic loss measured in the tens of billions of dollars [8]. The main objective of this project is to evaluate and optimize the resilience of power systems in the context of a massive insertion of renewable energies. The project aims to develop a comprehensive and integrated set of decision support tools by considering extreme events in present and future climates, the complexity of the power grid, and socio-economic scenarios.

11 Dissemination

11.1 Promoting scientific activities

11.1.1 Scientific events: organisation

General chair, scientific chair

F. Clautiaux is co-chair of the workshop "Pricing algorithms" and of the conference Dataquitaine.

11.1.2 Scientific events: selection

Member of the conference program committees

F. Clautiaux was a member of the scientific committee of ROADEF 2023.

Reviewer

B. Detienne has been a member of the jury for the ROADEF best student paper award.

11.1.3 Journal

Member of the editorial boards

F. Clautiaux is a member of the editorial board of the Open Journal of Mathematical Optimization.

Reviewer - reviewing activities

A. Froger has been a reviewer for Computers & Operations Research, European Journal of Operational Research, INFORMS Journal on Computing, and Transportation Research Part C - Emerging Technologies.

F. Clautiaux has been a reviewer for Computers & Operations Research, European Journal of Operational Research, and Transportation Science.

B. Detienne has been a reviewer for Annals of Operations Research, Journal of Heuristics, INFORMS Journal on Computing, and International Transactions in Operational Research.

P. Pesneau has been a reviewer for Annals of Operations Research, Discrete Applied Mathematics, and Networks.

A. Arslan has been a reviewer for the Conference on Integer Programming and Combinatorial Optimization (IPCO 2023) and for Mathematical Programming.

11.1.4 Leadership within the scientific community

A. Arslan and F. Clautiaux are members of the scientific committee of GDR-ROD.

F. Clautiaux is a member of the coordination team of the European Working Group on Cutting and Packing (ESICUP).

A. Arslan is responsible for actions targeted at young researchers within GDR-ROD.

11.1.5 Scientific expertise

F. Clautiaux has been an expert for HCERES.

F. Clautiaux has been an expert for Région Nouvelle Aquitaine.

B. Detienne has been an expert for ECOS-Sud.

11.1.6 Selection committees (comités de sélection)

F. Clautiaux has been part of recruiting committees at Paris Dauphine (Professor), Université de Lille, and Université Sorbonne Paris Nord (Maître de conférences).

A. Arslan has been part of a recruiting committee at Université de Montpellier (Maître de conférences).

B. Detienne has been part of a recruiting committee at Université Sorbonne Paris Nord (Maître de conférences).

11.1.7 Research administration

B. Detienne is the head of the OptimAl team at the Institut de Mathématiques de Bordeaux.

B. Detienne is a member of the Conseil de département SIN of Université de Bordeaux and of the Commission consultative 26 of IMB, and was a member of the Conseil scientifique of IMB until October 2023.

A. Arslan is co-responsible for the scientific seminars of the OptimAl team at the Institut de Mathématiques de Bordeaux.

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

F. Clautiaux is a member of the board of the Mathematics department at Université de Bordeaux (Unité de Formation "Mathématiques et Interactions").

F. Clautiaux is the head of the local initiative "Cap IA".

F. Clautiaux has been the head of the master program in Applied Mathematics (2 years, 5 master programs, around 200 students in total).

B. Detienne has been the head of the master program in operations research at the University of Bordeaux (2 years, 35 students), and is responsible for the apprenticeship training of this program.

P. Pesneau is the head of the Master of Engineering in Mathematical Optimization (CMI OPTIM) of the University of Bordeaux (5 years, 25 students).

A. Froger: Optimisation (L2, Université de Bordeaux), Challenges algorithmiques (L3, Université de Bordeaux), Graphes et algorithmes, Rémédiation Programmation linéaire, Travaux d'étude et de recherche (M1, Université de Bordeaux), Gestion des opérations et planification de la production, Responsable des stages de fin d'études (M2, Université de Bordeaux), Numerics (Ecole doctorale Mathématiques et informatique).

F. Clautiaux: Integer programming (M2, Univ. Bordeaux), industrial project in optimization (M2, Univ. Bordeaux), optimization project (L1, Univ. Bordeaux), algorithms for combinatorial optimization (M1, Univ. Bordeaux).

B. Detienne: Optimisation continue (M1, Univ. Bordeaux), Optimisation continue sous contraintes (M1, Université de Bordeaux), Etudes et veille bibliographique (M1, Univ. Bordeaux), Recherche Opérationnelle (Année 1, ENSEIRB), Integer Programming (M2, Univ. Bordeaux), Optimisation dans l'incertain (M2, Université de Bordeaux).

P. Pesneau: Programmation pour le calcul scientifique (L2, Université de Bordeaux), Groupe de travail applicatif (L3, Université de Bordeaux), Projets tutorés (L3, Université de Bordeaux), Recherche opérationnelle (L3, INP Bordeaux), Algorithmes pour l'optimisation en nombres entiers, Introduction à la programmation en variables entières, Remédiation algorithmique et programmation, Graphes et algorithmes, Travaux d'étude et de recherche (M1, Université de Bordeaux), Integer Programming (M2, Université de Bordeaux).

A. Arslan: Optimisation dans l'incertain (M2, Université de Bordeaux), Travaux d'étude et de recherche (M1, Université de Bordeaux), Groupe de travail applicatif (L3, Université de Bordeaux).

11.2.2 Supervision

F. Clautiaux and A. Froger supervise two PhD students: Luis Lopes Marques (from October 2022), Fulin Yan (from March 2023).

F. Clautiaux supervises the PhD thesis of Cécile Dupouy with Walid Klibi and Olivier Labarthe (Kedge Business School).

F. Clautiaux supervises the PhD thesis of Isaac Balster.

F. Clautiaux and B. Detienne supervise the PhD thesis of Komlanvi Parfait Ametana with Olga Battaia (Kedge Business School).

B. Detienne supervises the PhD thesis of Mickaël Gaury with Gautier Stauffer (HEC Lausanne).

B. Detienne and P. Pesneau supervise the thesis of John Jairo Quiroga Orozco.

A. Arslan and B. Detienne supervise the thesis of Patxi Flambard (from October 2023).

11.2.3 Juries

A. Froger: Hector Gatt (PhD, IMT Atlantique Bretagne Pays de la Loire, examinateur).

F. Clautiaux: Emiliano Traversi (HDR, Paris, rapporteur), Jérémy Omer (HDR, Rennes, rapporteur), Mathieu Lerouge (PhD, École Centrale, rapporteur), Guillaume Joubert (PhD, Université de Technologie de Compiègne, rapporteur), Liding Xu (PhD, École Polytechnique, examinateur).

B. Detienne: Carla Juvin (PhD, Toulouse, rapporteur).

11.3 Popularization

11.3.1 Education

A. Froger: Lectures at the AI4Industry workshop to introduce the field of Operations Research to engineering students of the Nouvelle-Aquitaine region.

A. Arslan gave a talk titled "Recherche Opérationnelle : mathématiques et algorithmique pour résoudre des problèmes réels" within the seminar series Midi Maths. This seminar series aims to popularise mathematics among undergraduate students of University of Bordeaux.

A. Arslan gives a masterclass titled "Operations research in an uncertain world: Applications in logistics and energy networks" through Inria Academy.

11.3.2 Interventions

F. Clautiaux has given a presentation in a high school class (opération Chiche!).

12 Scientific production

12.1 Publications of the year

International journals

Reports & preprints

12.2 Cited publications
