
2024 Activity Report: Project-Team EDGE

RNSR: 202124188E
  • Research center Inria Centre at the University of Bordeaux
  • In partnership with: Université de Bordeaux, CNRS
  • Team name: Extended formulations and Decomposition for Generic optimization problems
  • In collaboration with: Institut de Mathématiques de Bordeaux (IMB)
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Optimization, machine learning and statistical methods

Keywords

Computer Science and Digital Science

  • A6.2.6. Optimization
  • A7.1.3. Graph algorithms
  • A8.1. Discrete mathematics, combinatorics
  • A8.2. Optimization
  • A8.2.1. Operations research
  • A8.7. Graph theory
  • A9.7. AI algorithmics

Other Research Topics and Application Domains

  • B3.1. Sustainable development
  • B3.1.1. Resource management
  • B4.2. Nuclear Energy Production
  • B4.4. Energy delivery
  • B6.5. Information systems
  • B7. Transport and logistics
  • B9.5.2. Mathematics

1 Team members, visitors, external collaborators

Research Scientists

  • Ayse Nur Arslan [INRIA, Researcher]
  • Eduardo Uchoa Barboza [INRIA, Senior Researcher, from Aug 2024 until Nov 2024]

Faculty Members

  • François Clautiaux [Team leader, UNIV BORDEAUX, Professor Delegation]
  • Boris Detienne [UNIV BORDEAUX, Associate Professor]
  • Aurélien Froger [UNIV BORDEAUX, Associate Professor Delegation, from Sep 2024]
  • Aurélien Froger [UNIV BORDEAUX, Associate Professor, until Aug 2024]
  • Pierre Pesneau [UNIV BORDEAUX, Associate Professor]

Post-Doctoral Fellow

  • Mariam Sangare [INRIA, Post-Doctoral Fellow, from Oct 2024]

PhD Students

  • Komlanvi Parfait Ametana [UNIV BORDEAUX]
  • Cecile Dupouy [KEDGE B. S., CIFRE]
  • Patxi Flambard [INRIA]
  • Mickael Gaury [KEDGE B. S., until Sep 2024]
  • Sylvain Lichau [UNIV BORDEAUX]
  • Luis Lopes Marques [INRIA]
  • John Jairo Quiroga Orozco [ORANGE]
  • Fulin Yan [CATIE, CIFRE]

Technical Staff

  • Najib Errami [INRIA, Engineer]
  • Laurent Facq [CNRS, Engineer]

Interns and Apprentices

  • Kea Horvath [INRIA, Intern, from May 2024 until Aug 2024]
  • Arthur Leonard [INRIA, Intern, from Sep 2024]
  • Quentin Monnereau [INRIA, Apprentice]
  • Yanyu Zhou [INRIA, Intern, from Apr 2024 until Sep 2024]

Administrative Assistant

  • Ellie Correa Da Costa De Castro Pinto [INRIA]

External Collaborators

  • Artur Alves Pessoa [UFF NITEROI BRAZIL, until Nov 2024]
  • Walid Klibi [KEDGE B. S., until Nov 2024]

2 Overall objectives

Nowadays, integer-programming-based methods can effectively solve a large number of classic combinatorial optimization problems (travelling salesman, knapsack, facility location). Dramatic improvements on benchmarks have been obtained thanks to better linear programming solvers, better (re)formulations, new polyhedral results, new efficient algorithms, fine-tuned implementations, and more powerful computers.

Practitioners are now trying to tackle more advanced problems where the degree of complexity of the systems to optimize has increased substantially. In particular, numerous factors should be taken into account in the decision process: ecological impact, new regulations, presence of commercial partners or aggressive competitors, limitation on previously abundant resources, and so forth. Many problems also come with uncertain parameters (energy production, consumer behavior, weather, disasters, resource breakdowns...), and have to be solved in a dynamic environment, where solutions have to be modified/reoptimized on-the-fly to account for last-minute changes.

In project EDGE, our main objective is to propose new mathematical models, algorithms, and efficient implementations of these algorithms to deal with families of integer programming problems arising from these complex systems, which are out-of-reach for current state-of-the-art optimization methods. Although universal algorithms able to deal with all types of constraints cannot be achieved in practice, we will seek results that are as broadly applicable as possible by designing so-called generic methods, i.e., methods that address abstract mathematical models that can be later specialized for a large number of problems. We will mainly rely on two families of mathematical tools: decomposition techniques and extended formulations.

Decomposition methods are valuable tools to address complex optimization problems. Beyond the obvious advantages of the divide-and-conquer paradigm, they offer a way to produce stronger formulations and effective algorithms applicable to a wide range of problems, and they allow the right mathematical tools to be used for each subsystem (also called subproblem). Several issues have to be addressed when one wants to tackle modern integrated real-life problems by means of decomposition methods. An important limitation of these methods is that they are generally efficient only for problems with a specific structure (typically independent subproblems connected by few common constraints). Many problems we address in this new project either do not have this decomposable structure, or the structure is broken by so-called non-robust cuts, which are applied during the solution process. In this case, new decomposition methods are needed to deal effectively with these connecting constraints. Another issue stems from the large number of different subsystems. A methodological challenge is to find a good trade-off between the quantity of information passed from one subproblem to another and the difficulty of integrating the common information into them.

Extended formulations are also a focus of our team. The past ten years have seen much progress in this field, which aims at reformulating a problem/polyhedron effectively with the help of (exponentially many) additional variables. In particular, network-flow-based reformulations have received increasing interest from the community. A considerable difficulty to overcome when dealing with such a reformulation is handling its size. One of the most promising paths is to study so-called aggregation and disaggregation techniques, where the level of detail of the formulation is modified iteratively. Similar approaches are popular in other fields of applied mathematics, where detailed information is only needed in some specific parts of the model. In integer programming, determining the suitable level of precision needed for a given subsystem is a major challenge, since the combinatorial structure of the subproblem precludes techniques based on derivatives. Although preliminary results indicate that one can achieve considerable improvements for some specific problems, a theoretical framework that would make it possible to develop these techniques for a larger class of polyhedral structures is still lacking.

We use our methodological tools to address robust optimization, an increasingly popular approach to handle the uncertainty arising in mixed-integer linear optimization problems. This optimization paradigm describes the variability of the uncertain parameters through bounded uncertainty sets (thus replacing the expectation used in stochastic optimization with the worst-case objective realization). In particular, we plan to study robust problems with integer recourse. From a theoretical point of view, these problems are known to be Σ₂ᵖ-complete in general (so that simply verifying that a solution is feasible is already an NP-hard problem). We will work on determining the frontier between tractable and intractable problems by studying the structure of specific subsets of robust problems characterized by their deterministic counterpart, their uncertainty set, and the difficulty of optimizing the recourse subproblem. Our first steps in this direction show that decomposition methods and extended formulations can be instrumental in achieving this objective.

Because state-of-the-art optimization methods are often too hard to use, or too cumbersome to adapt to a new problem, their use is typically limited to a small community of experts. Keeping this in mind, we additionally concentrate on developing high-level modeling tools and solvers, with the aim of facilitating the adoption of our algorithms by colleagues and O.R. scientists. Doing so will maximize the impact of our research in academia as well as in industry.

3 Research program

3.1 Decomposition methods

Dantzig-Wolfe decomposition 36 is a well-established approach for solving hard and/or large-scale combinatorial optimization problems. Problems that are originally formulated as Mixed-Integer Programming problems (MIPs) are reformulated by defining a (generally exponential) number of variables which correspond to the solutions of subproblem(s) obtained from subsets of variables and constraints of the original MIP formulation. To solve the linear approximation of the Dantzig-Wolfe reformulation, the column generation procedure is employed, which iteratively generates missing variables by solving the subproblem(s). Cut generation is then employed to strengthen the linear approximation, and branching is performed to find an optimal solution to the problem. The overall approach is called the branch-cut-and-price (BCP) method 39. Recent advances, including those developed by our previous project team ReAlOpt, have significantly improved the efficiency of general-purpose BCP algorithms. To use these algorithms, one only has to specify the master problem and a formulation or an algorithm to solve the column generation subproblem. The improvements include stabilization 62, pre-processing, diving heuristics 70, and strong branching, among others. Generic BCP codes implementing these improvements have made the branch-cut-and-price approach more accessible to users.
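
To make these mechanics concrete, the short sketch below runs a bare-bones column generation loop on the classical cutting-stock problem. It is a didactic illustration on hypothetical data, not the team's BaPCod implementation: the restricted master LP is solved with SciPy's HiGHS interface, the pricing subproblem (an unbounded knapsack) by dynamic programming, and stabilization, cutting planes and branching are all omitted.

# Minimal column generation for cutting stock: restricted master LP + knapsack pricing.
import numpy as np
from scipy.optimize import linprog

W = 10                      # roll width
w = [3, 4, 5, 6]            # item widths
d = [30, 20, 20, 10]        # item demands
n = len(w)

# Start with trivial patterns: one item type per roll.
patterns = [np.eye(n, dtype=int)[i] for i in range(n)]

def solve_master(patterns):
    """Solve the restricted master LP; return (objective, duals pi >= 0)."""
    A = np.array(patterns).T                 # rows = items, columns = patterns
    c = np.ones(A.shape[1])
    # Covering constraints A x >= d are passed to linprog as -A x <= -d.
    res = linprog(c, A_ub=-A, b_ub=-np.array(d), bounds=(0, None), method="highs")
    return res.fun, -res.ineqlin.marginals   # duals of the >= constraints

def price(pi):
    """Unbounded knapsack DP: most valuable feasible pattern w.r.t. the duals pi."""
    best, choice = np.zeros(W + 1), [-1] * (W + 1)
    for cap in range(1, W + 1):
        best[cap] = best[cap - 1]
        for i in range(n):
            if w[i] <= cap and best[cap - w[i]] + pi[i] > best[cap]:
                best[cap], choice[cap] = best[cap - w[i]] + pi[i], i
    pattern, cap = np.zeros(n, dtype=int), W   # rebuild the pattern from the DP choices
    while cap > 0:
        if choice[cap] == -1:
            cap -= 1
        else:
            pattern[choice[cap]] += 1
            cap -= w[choice[cap]]
    return best[W], pattern

while True:
    obj, pi = solve_master(patterns)
    value, pattern = price(pi)
    if value <= 1 + 1e-9:                    # reduced cost 1 - value >= 0: LP optimum
        break
    patterns.append(pattern)

print(f"LP bound: {obj:.2f} rolls with {len(patterns)} generated patterns")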

A major limitation of BCP algorithms is their inefficiency when they face constraints that break the decomposable structure of the problem. Prominent examples are synchronisation and precedence constraints 40. Moreover, it became clear recently that many efficient BCP approaches are "non-robust", i.e., column and cut generation are interdependent. Thus, for every subproblem type, dedicated cut-generation approaches should be considered. Few such approaches are known in the literature 37,60. Their development for different families of cutting planes and different subproblem types is a very promising research direction. Devising BCP algorithms for problems that do not have the decomposability property is a difficult and high-risk task. However, in the case of success, the impact on the practice of Operations Research would be significant.

Our numerical experience shows that generic BCP approaches 45,46 (in which the subproblems are solved as generic MIPs) are rarely competitive with the traditional branch-and-cut method used in widely available and highly efficient MIP solvers. On the other hand, very efficient BCP approaches specialized for particular problems exist in the scientific literature. In such approaches, subproblems are of a certain type and are solved by fast specialized algorithms. These ad-hoc BCP approaches are however rarely used in practice due to the complexity of their implementation 60, 68. An important goal of our project is to propose innovative methods that have the efficiency of the most advanced specialized algorithms in the literature, yet can be applied to many practical cases. This can be done by developing BCP algorithms which are not completely generic (i.e., they cannot be applied to any model obtained by Dantzig-Wolfe reformulation) but can be used for large classes of combinatorial optimization problems. There is a clear need for efficient and reusable semi-generic approaches that can be applied to a large number of problems inside a specific class. We have identified several types of subproblems that are often encountered in practical applications as substructures: resource-constrained paths in graphs 51 and hyper-graphs 34, constrained network flows, and decision diagrams 25. Developing efficient algorithms that can be applied to the broadest possible class of such subproblems is a hard task. We will seek the right trade-off between generality and efficiency.
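
As an illustration of the resource-constrained path substructure mentioned above, the sketch below solves a toy instance with a single additive resource by the standard labeling (dynamic programming) algorithm with dominance. The graph and data are hypothetical; state-of-the-art pricers add ng-path relaxations, completion bounds and bidirectional search on top of this basic principle.

# Labeling algorithm for a resource-constrained shortest path on a small DAG.
from collections import defaultdict

# arcs[(u, v)] = (cost, resource consumption); node 0 is the source, 3 the sink.
arcs = {(0, 1): (2.0, 3), (0, 2): (1.0, 5), (1, 2): (1.0, 1),
        (1, 3): (4.0, 2), (2, 3): (2.0, 4)}
capacity = 8                  # upper bound on total resource consumption
order = [0, 1, 2, 3]          # topological order of the DAG

labels = defaultdict(list)    # node -> non-dominated labels (cost, resource)
labels[0] = [(0.0, 0)]

def dominated(lab, pool):
    """A label is dominated if another one is at least as good on cost and resource."""
    return any(c <= lab[0] and r <= lab[1] for (c, r) in pool)

for u in order:
    for (a, b), (cost, res) in arcs.items():
        if a != u:
            continue
        for (c, r) in labels[u]:
            new = (c + cost, r + res)
            if new[1] <= capacity and not dominated(new, labels[b]):
                # keep the new label and drop the labels it dominates
                labels[b] = [l for l in labels[b]
                             if not (new[0] <= l[0] and new[1] <= l[1])] + [new]

print("best resource-feasible cost at the sink:", min(c for (c, r) in labels[3]))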

Recently, we have developed the first "semi-generic" BCP solver, VRPSolver, which relies on solving resource constrained shortest path subproblems 61. We have shown that this solver can be successfully applied to many vehicle routing variants and some packing problems. An open question is whether there are other classes of problems which can be efficiently solved by "semi-specialized" BCP solvers. Scheduling and network design problems are good candidates for such classes. Capabilities of current BCP solvers however need to be extended to efficiently handle these problems. One of the drawbacks of our solver is the absence of good embedded heuristics to quickly find feasible solutions. Diving heuristics 70 can in principle be used, but their speed and performance are generally not satisfactory.

Another challenge is to design a modelling paradigm that can produce formulations for "semi-specialized" BCP solvers. In this context, defining a MIP and applying the standard Dantzig-Wolfe reformulation technique is cumbersome as the subproblems cannot be easily defined as MIPs. New innovative modelling paradigms should be developed to simplify the problem definition and thus extend the usage of BCP approaches. They should be simple enough to attract non-specialists in the domain, and expressive enough to pass the structural information to the BCP solver. This is mandatory to achieve state-of-the-art performance. One can find inspiration in the notion of global constraints designed in the field of Constraint Programming.

Benders' decomposition 24 is also becoming increasingly effective for several benchmark problems. Recent methods based on this reformulation approach involve solving many similar subproblems repeatedly, and make use of sophisticated stabilization techniques. We plan to build on the knowledge developed on Dantzig-Wolfe decomposition to develop techniques to improve Benders' decomposition, which do not use any assumption on the specific structure of the subproblem. A considerable challenge is to find effective methods for handling integer subproblems in a generic manner.
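
The toy sketch below illustrates the basic Benders loop on a tiny two-stage linear program with complete recourse (hypothetical data): subproblem duals yield optimality cuts that are added to the master until the lower and upper bounds meet. Feasibility cuts, integer variables and the stabilization techniques mentioned above are deliberately left out.

# Minimal Benders decomposition: min c'x + q'y  s.t.  W y >= h - T x,  x, y >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0])            # first-stage cost (capacity x, bounded by 10)
q = np.array([2.0])            # second-stage cost (recourse y)
T = np.array([[1.0]])          # technology matrix
Wm = np.array([[1.0]])         # recourse matrix
h = np.array([5.0])            # demand to cover: y >= 5 - x

cut_rows, cut_rhs = [], []     # optimality cuts: theta >= pi'(h - T x)
x, upper = np.zeros(len(c)), np.inf

for it in range(50):
    # Subproblem for fixed x (always feasible here since y can cover any demand).
    sub = linprog(q, A_ub=-Wm, b_ub=-(h - T @ x), bounds=(0, None), method="highs")
    pi = -sub.ineqlin.marginals                 # non-negative duals of W y >= h - T x
    upper = min(upper, c @ x + sub.fun)

    # Cut theta >= pi'h - (pi'T) x, written as -(pi'T) x - theta <= -pi'h.
    cut_rows.append(np.concatenate([-(pi @ T), [-1.0]]))
    cut_rhs.append(-(pi @ h))

    # Master over (x, theta); theta >= 0 is a valid initial bound because q >= 0.
    mas = linprog(np.concatenate([c, [1.0]]),
                  A_ub=np.array(cut_rows), b_ub=np.array(cut_rhs),
                  bounds=[(0, 10)] * len(c) + [(0, None)], method="highs")
    x, lower = mas.x[:len(c)], mas.fun
    if upper - lower <= 1e-6:
        break

print(f"converged after {it + 1} iterations: x = {x[0]:.1f}, total cost = {upper:.1f}")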

Another way to improve Benders' decomposition tools is to use machine learning techniques. Such algorithms can be used to parameterize the different parts of the method or to discover information that helps the convergence of these algorithms. This is a promising path for solving stochastic programs with a large number of scenarios. Several scientific questions have to be addressed, including the possible adaptation of stabilization techniques that need complete dual information to prove convergence.

In line with the challenges highlighted above, one of our first objectives is to improve the genericity of our BCP solver for the resource-constrained path structure. For the moment, it supports only models with standard additive resources. We also intend to cover the case where the cost and resource consumption of the current arc in the path depend not only on the arc itself, but also on the accumulated resource consumption of the previous arcs in the path. This would make it possible to solve several important classes of problems, such as electric vehicle routing problems and scheduling problems with additive objective functions, much more efficiently. This implies improving both the algorithms used and the modelling capabilities of the solver to support more general resources.

In the longer term, we plan to develop semi-generic BCP approaches and solvers for problems which do not contain the resource constrained path structure in graphs. Alternative structures which can be considered are spanning trees, network-flow problems in hyper-graphs, context-free grammars (both mentioned in Section 3.2), and decision diagrams.

Another interesting idea to explore in this direction is developing decomposition approaches for multi-stage sequential decision-making problems under uncertainty. Such problems can be modeled as multi-agent Markov Decision Processes (MDP). We believe that mathematical programming decomposition algorithms in which sub-problems are single-agent MDPs solved by dynamic programming constitute a very promising alternative to currently used approaches for multi-agent MDPs. Moreover, there are currently no approaches for such problems which provide exact solutions and non-trivial valid bounds on the optimal value.

3.2 Extended formulations

Many successful extended formulations are based on network-flow models. These models have excellent polyhedral properties, since the constraint matrix of a flow problem is totally unimodular, and thus all extreme-point solutions of the linear program related to this subsystem are integer if the right-hand sides of the constraints are integer. Additional constraints generally break this property, but in many cases, the linear relaxation remains of excellent quality.

Network-flow reformulations can be obtained when all or part of a discrete optimization problem can be described by regular or algebraic languages, or by recursive (e.g., dynamic programming) formulations, on top of which additional constraints such as resource constraints or disjunctions are specified. Karp and Held 52 provided a systematic approach to building dynamic programming recurrence equations for a large class of optimization problems by characterizing the representation of discrete decision processes by monotone sequential decision processes. Martin et al. 58 characterized discrete optimization problems that can be solved via dynamic programming using directed hypergraphs 21. The formalism of sequential decision processes allows one to build an internal representation of the sequential structure using network flows in state graphs or hypergraphs. Additional constraints can be embedded in this representation by increasing the size of the network, for example by discretizing quantities such as time (scheduling), load (vehicle routing), or width (bin packing) 47. This has several advantages: 1) enforcing the constraints through the graph structure (that is, convexifying these constraints), 2) easing the modeling of complex constraints, 3) facilitating the consideration of resource-dependent parameters 72.
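
As a miniature example of such a reformulation, the sketch below builds the state graph of an arc-flow model for a tiny bin-packing instance (hypothetical data): nodes are partial bin loads, each item arc consumes capacity, and loss arcs absorb unused capacity; a flow of value k from load 0 to load W, with each item width covered as many times as demanded, encodes a packing into k bins.

# Building the state graph of an arc-flow (network-flow) model for bin packing.
W = 7                           # bin capacity
widths = [2, 3, 5]              # item widths (demands would appear in the MIP)

nodes = range(W + 1)
item_arcs = [(u, u + w, w) for u in nodes for w in set(widths) if u + w <= W]
loss_arcs = [(u, u + 1, 0) for u in range(W)]   # unused capacity flows towards node W

print(len(item_arcs), "item arcs and", len(loss_arcs), "loss arcs")
# The MIP minimizes the flow from node 0 to node W subject to flow conservation and,
# for each width w, a demand constraint on the total flow carried by arcs labeled w.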

The main drawback of these formulations is their typically exponential or pseudo-polynomial number of variables and constraints. This is different from models that arise from a Dantzig-Wolfe decomposition, which have an exponential number of variables, but a generally small number of constraints. The size of network-flow models is usually far too large to make them solvable by modern integer programming, SAT or constraint programming solvers.

To overcome this limitation, aggregation and disaggregation techniques, based either on a scaling of the input parameters or on aggregating nodes of the network, are good candidates. As they generally make use of sub-routines, they allow an efficient hybridization of different optimization paradigms. Several methods sharing common ideas have been introduced in different communities. For solving dynamic programs, techniques based on iterative state-space relaxations 32 have proved their efficiency since the 1980s. The most popular methods are decremental state-space relaxation 28, 67 and successive sublimation dynamic programming 50. In the field of MIP models, the most promising algorithms for solving exponentially-large network-flow formulations are also based on an increasingly refined series of relaxations (see 33, 29, 66). These methods produce initial models with fewer variables and fewer constraints than the original ones and iteratively refine them after identifying new variables and/or constraints to add. Although these methods have been shown to be efficient for several benchmark problems, only a few generic methods have been proposed (see 30 and 54).

Iterative aggregation/disaggregation techniques have some relationship to other methods. They share similarities with Benders' decomposition or Logic-based Benders' decomposition. Both methods rely initially on a relaxation of the problem. However, in Benders' decomposition constraints are iteratively introduced to discard infeasible solutions, while iterative aggregation/disaggregation-based techniques may also introduce new variables. Similarities with optimization methods using decision diagrams constructed by (iterative) state aggregation procedures can also be highlighted 25. Decision diagrams are graphs storing possible variable assignments satisfying some constraints and are particularly used in CP/SAT solvers.

A conclusion of the above is that similar aggregation and disaggregation techniques are used under different names in different fields (MIP, DP, CP/SAT), leading to a scattered scientific literature. Further, most of the proposed techniques are problem-dependent (with the notable exception of decision diagrams, which offer a more general setting, although we are not aware of any generic implementation). Our goal is to develop a unifying formalism and to express the main methods of the literature within this formalism. This generic view aims to bring together methods whose proximity has never really been highlighted. Existing algorithms may benefit from algorithmic components that have proven their effectiveness in other contexts. As an example, existing MIP algorithms could benefit from state-of-the-art approaches developed for decision diagrams.

We will study and design effective methods based on aggregating and disaggregating models. It is necessary to develop a high-level modeling tool to specify these models only implicitly. The use of algebraic languages to produce such higher-level formulations is a direction we plan to explore. We also believe that a primal-dual solution approach is worth exploring. Such an approach will rely on the local refinement of a coarse representation of the global problem when needed. The idea is to derive primal and dual bounds on the objective value of the problem with a lower computational effort compared to working with the original model. We can distinguish two types of aggregation: 1) a conservative aggregation ensures that every solution of the original model is also a solution of the aggregated model (i.e., the aggregated model is a relaxation of the original model), and 2) a heuristic aggregation ensures that every solution of the aggregated model is also a solution of the original model. After projecting the original model onto an aggregated one, we aim to converge to an optimal solution without reaching the size of the original model. To keep the model tractable, one can make use of primal and dual information from the different aggregated models to fix variable values (using Lagrangian filtering for example). We will focus on finding a systematic way to 1) aggregate an initial model to yield either a relaxation or a restriction and 2) iteratively disaggregate the current model to obtain better dual bounds. A path worth exploring is to dynamically aggregate the models, taking advantage of the information learned in the current phase to build stronger models 49. We will also study an emerging technique consisting in the joint use of several aggregated models with decision synchronization 59, 55, which can be helpful to derive stronger bounds and to keep the network-flow models tractable.
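
To make the notion of conservative aggregation concrete, the toy sketch below aggregates the bin-packing state graph of Section 3.2 by keeping only a subset of load levels and rounding every arc head down to the nearest kept level; every packing that is feasible in the exact graph remains feasible in the aggregated one, which is therefore a relaxation. The data and the kept levels are hypothetical; deciding which levels to re-insert during disaggregation is precisely the kind of question studied here.

# Conservative aggregation of the arc-flow state graph: round arc heads down.
import bisect

W = 7
widths = [2, 3, 5]
kept = [0, 2, 4, 5, 7]            # coarse grid of load levels (must contain 0 and W)

def round_down(v):
    return kept[bisect.bisect_right(kept, v) - 1]

exact_arcs = {(u, u + w, w) for u in range(W + 1) for w in set(widths) if u + w <= W}
aggregated_arcs = {(u, round_down(u + w), w) for (u, _, w) in exact_arcs if u in kept}

print(len(exact_arcs), "exact arcs vs", len(aggregated_arcs), "aggregated arcs")
# Rounding heads down means items consume no more capacity than in the exact graph,
# so the aggregated flow model can only be less constrained (it yields a valid dual bound).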

For designing heuristics, the problems generated by dynamic programs / algebraic languages are well suited to so-called structured learning techniques, where the objective is to learn how to approximate a hard combinatorial problem (typically a sequencing problem with resource constraints) by means of a simpler problem (typically a shortest-path problem on a graph). More classical learning techniques can also be used to guess the right parameters (Lagrangian or surrogate multipliers), or to guess the right decisions to make (state-space modification, relaxation of some constraints, etc.).

In line with the challenges highlighted above, we started to address automatic heuristics for dynamic programs with side constraints based on structured-learning techniques. This work had already begun as a collaboration between F. Clautiaux, A. Froger and A. Parmentier (CERMICS). The objective of the project is to determine the best way to apply machine-learning techniques to obtain heuristics via projection, pruning, etc. To work on this challenge, we collaborate with the Centre Aquitain des Technologies de l'Information et Électroniques (CATIE). Fulin Yan, a student from INSA Rennes, did an internship with our team on this subject. We are now awaiting a funding decision to offer him a CIFRE PhD thesis.

We additionally started to lay the scientific foundation for a generic framework that embeds various techniques from dynamic programming based on regular languages, resource-constrained shortest paths, and decision diagrams. We are comparing the different elements coming from the different fields (SAT / MIP / heuristics), determining the common parts and analyzing the main differences between them. Our objective is to develop a common formalism and to express the most important methods of the literature using this formalism. This is a work in progress between F. Clautiaux, B. Detienne and A. Froger. We hired Luis Lopes Marques as a PhD student starting from October 2022 to work on this subject. This thesis is financed by our ANR project AD-LIB.

In the near future, we will design exact convergent methods based on aggregation/disaggregation techniques (e.g., use of problem-dependent or primal/dual information) relying on the generic framework cited above. We will focus on finding a systematic way 1) to aggregate an initial model to yield either a relaxation or a restriction and 2) to iteratively disaggregate the current model for obtaining better dual bounds. This is part of our ANR project AD-LIB.

We will also study matheuristics for problems based on time-indexed models. This work will focus on the ability of the method to provide the best possible solution within a given time limit. This is a challenge when solving operational problems where the decision-making time is limited. The key area of research will be (i) how to disaggregate in a way that a good feasible solution can be provided as fast as possible (search strategy) and (ii) how to modify infeasible solutions provided by an aggregated model in order to make them feasible without impairing their cost. This is also a part of the ANR project AD-LIB.

Upon successful completion of the above objectives, we may explore many extensions and generalizations of our framework. Among them, we may cite new mechanisms to integrate several different state-space relaxations in the same solution process, and extensions to non-linear constraints and objectives / dynamic programs based on semi-rings other than (max, +). In the longer term, we wish to explore the links with polyhedral theory, and to extend the framework from regular languages to algebraic languages (and our algorithms from graph-based to hypergraph-based algorithms). Polyhedral theory can be used to improve the performance of our algorithms for solving dynamic programs with side constraints, and to provide the equivalent of the branch-and-cut algorithm for dynamic programs.

3.3 Structure analysis and problem specific studies

Polyhedral analysis 17 remains a part of our project. When integer linear programming methods are involved, information on the structure of the polyhedron representing the feasible region is key for solving hard problems effectively. The most spectacular success of this kind of method can be found in Concorde 16, a software package dedicated to the traveling salesman problem. Network design has also been the subject of many contributions (see e.g. 41). Since we aim to propose methods that can apply to broad classes of problems, we will focus on families of constraints/problems that occur in a large number of practical problems (including capacity, disjunction, precedence or set-covering constraints). There are two promising directions that we can follow in order to study these structures.

When the linear relaxation of an integer programming formulation is of excellent quality but requires a large computational time, being able to compute good dual solutions rapidly makes it possible to provide useful dual bounds for combinatorial optimization problems. This technique was used for the classical cutting-stock problem under the name of dual-feasible functions (see 19). These functions lead to bounds of linear complexity that behave very well on average. This concept has already been extended to several packing and location problems (see 53, 63). We plan to explore further extensions to common problem/constraint types.
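
As a small illustration of this principle, the sketch below computes bin-packing lower bounds from two simple dual-feasible functions on hypothetical data: the identity (the usual area bound) and a threshold function that counts items larger than half a bin as a full bin. Any function f from [0,1] to [0,1] such that the sum of f(x_i) is at most 1 whenever the sum of x_i is at most 1 yields a valid lower bound on the number of bins; the best bound is the maximum over the family of functions considered.

# Bin-packing lower bounds from two elementary dual-feasible functions (DFFs).
import math

W = 10
widths = [6, 6, 4, 4, 3, 3, 2]

def identity(x):                 # trivial DFF: gives the classical area bound
    return x

def threshold(x):                # items larger than half a bin count as a full bin
    if x > 0.5:
        return 1.0
    return 0.5 if x == 0.5 else 0.0

for f in (identity, threshold):
    bound = math.ceil(sum(f(w / W) for w in widths) - 1e-9)
    print(f.__name__, "lower bound:", bound)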

Many hard problems on graphs become easy when the graph has a special structure 42, typically perfect graphs such as trees, interval or triangulated graphs (see for example 56, 69). In most problems, the graphs do not belong to an easy class. However, one can compute effective relaxations by exhibiting subgraphs with a suitable structure in these general graphs. Unfortunately, for many classes, the problem of finding a subgraph belonging to the class that maximizes a certain criterion is NP-hard. We plan to study methods for finding efficient models to compute such relaxations.

In line with the challenges highlighted above, our first objective is to work on polyhedral analysis and computational studies of formulations for the "best" interval subgraph of a general graph. We have identified several families of formulations, among them strong formulations based on an exponential number of variables and/or constraints. Pricing algorithms and separation problems have to be studied in order to efficiently solve these formulations. The same work can be undertaken with chordal graphs, which offer a stronger relaxation (since they are a superclass of interval graphs), for a higher computational cost.

In the longer term, we plan to work on the extension of our work on so-called dual-feasible functions to more complex cases. The theory of superadditive and dual-feasible functions has mainly been studied in the case of a single knapsack constraint. Extending it to the intersection of the knapsack polytope with polytopes related to highly structured constraints (such as precedences or disjunctions) is already a hard challenge.

More generally, we plan to work on several aspects related to polyhedral theory: studying extended formulations generated by algebraic grammars, where a possible objective is to use implicit formulations expressed in different state spaces to derive efficient cuts for a base formulation; determining whether exploiting the structure of the problems is a stronger tool to derive dual-feasible functions than learning to guess the best dual values from the input parameters (for instance, using structured learning to compute a smaller dual polytope from an initial one); and studying decomposition schemes based on graph decomposition techniques, such as path- or tree-decompositions.

4 Application domains

Although we aim to develop generic methodologies, the types of problems solved will play an important role in the choice of paradigms employed, and in the methods used to address these problems. We will mainly focus on two classes of problems: those that possess several levels of decisions, and those that have some uncertain parameters.

These types of problems are prevalent in certain application fields, including energy and supply chains. In both fields, our objectives are in line with the aspirations of modern societies (reducing pollution, improving the sustainability of human activities, and designing better and more robust supply chains). Our optimization tools will be useful in accompanying the profound shifts that are needed in these sectors, by taking into account the interconnection between the different problems faced by the companies.

In energy, the main challenges we face are related to the uncertainty of both production and consumption. The uncertainty in production grows with the development of renewable energy production, while uncertainty in consumption remains driven by the weather. This leads to large-scale robust and/or stochastic optimization problems, which push our methodologies to their limits. A typical problem is to ensure the technical feasibility of energy transition scenarios in which the share of nuclear power in the energy mix decreases, leading to an increasing level of uncertainty.

Supply chain and logistics are also fertile playgrounds for our research. In particular, successful applications in routing, production planning, inventory control, warehouse optimization, or network design are numerous. Besides, new technologies such as the internet of things or the physical internet bring new core optimization problems and the need for new relevant mathematical models and solution approaches. The main challenges are to deal with integrated problems, including location decisions, inventory, routing, packing, and employee timetabling, among others. Current methods tend to take tactical and operational decisions independently despite the fact that they are interrelated. Supply chains also have to be robust to uncertain parameters (from regular variations of the system parameters to disaster management).

4.1 Integrated problems

Most practical optimization problems are complex, i.e., they involve different types of decisions to be made. Such problems are commonly called integrated. They often involve different time scales related to strategic, tactical and operational decisions. A classical approach to solving such problems is their disintegration into independent problems or stages, where the solution of a stage is the input for the next one. We prefer the term "disintegration" here instead of "decomposition" to avoid confusion with the decomposition approaches presented above. The disintegration approach usually results in highly sub-optimal solutions. There is a large potential for improving the quality of solutions if two or more decision stages are considered together.

Recently, there has been an increase in interest in solving integrated optimization problems. Such problems may involve, for example, the integration of production and outbound distribution (production-distribution problem 44), facility location and vehicle routing (location-routing problem 71), inventory management and vehicle routing (inventory routing problem 35), and different levels of distribution (two-echelon routing 57 and cross-docking based distribution).

As we point out above, integrated problems are common in practice but difficult to solve, since they involve synchronization between stages and different time scales in different stages. Moreover, different classes of problems are usually encountered in different stages. This means that different exact approaches are usually applied to solve such classes of problems. It is often not known how one can efficiently combine these approaches to solve an integrated problem. Thus, heuristic algorithms are used in the vast majority of cases. However, we believe that advances in the efficiency and ease of use of exact algorithms, and of decomposition algorithms in particular, allow us to think of applying them here.

Exact approaches are usually limited to small instances of integrated problems, while real-life instances are tackled by heuristic approaches. Nevertheless, for estimating the quality of these heuristics we need approaches that obtain lower bounds of a good quality, even for large scale instances. We think that BCP algorithms are good candidates to obtain such bounds. The column generation approach has however several drawbacks when applied to large-scale integrated problems. When solving pure academic problems, the bottleneck of BCP algorithms is usually the solution of the pricing problem. On the contrary, when dealing with integrated real-life problems, the size of the master problem may become too large and the solution of the restricted master LP becomes a bottleneck. One possible approach to tackle this problem is a dynamic aggregation of constraints in the master LP 31. Another approach could be to use machine learning techniques to choose “good” columns from a large pool generated by the pricing problem. The goal here is to help the column generation algorithm converge faster in the case of a “heavy” master problem or to find “compatible” columns to obtain better primal solutions.

Column generation-based algorithms may also prove useful for obtaining good feasible solutions for integrated problems. A potential direction is to develop matheuristics, i.e., heuristic methods that make use of sophisticated algorithms initially designed for exact methods. The goal is to benefit from the two worlds: exact methods to exploit the special structures of some subproblems, and heuristics to produce solutions in a small amount of time. Column generation-based matheuristics have already shown good results for some integrated problems 18. A common approach is to use heuristics and/or column generation to obtain interesting columns and then solve the restricted master problem as a MIP with a time limit. The set of columns can then possibly be updated and the restricted master re-solved. However, generic frameworks for such approaches are absent, although they would be highly welcome for a wide class of problems.

Our initial efforts will be concentrated on one or two well-chosen problems of this type. The inventory routing problem (IRP) is probably the most "academic" and simple-to-state example of an integrated problem that remains very hard to solve for both exact and heuristic methods 38. In this problem, the decision maker in charge of the delivery is also in charge of the stock level of their customers. The IRP is widely encountered in practice. The integration of electric vehicles in transportation is also a timely topic. Planning the charging operations of electric vehicles is necessary to account for a wide variety of costs (e.g., time-dependent energy costs), charging infrastructure constraints (e.g., grid restrictions, limited number of chargers), battery degradation, and the potential integration of Vehicle-to-Grid technologies. Taking this variety of constraints into account introduces complex coupling decisions between charging and routing, which leads to integrated problems.

We plan to design a comprehensive modelling and solution framework for planning the charging activities of a fleet of electric vehicles considering time-dependent costs and the potential integration of Vehicle-to-Grid technologies. Particular attention is directed towards decreasing the maximum power required over the planning interval. This will be a joint work with O. Jabali (Politecnico di Milano, Italy) that has already begun with a master student from Politecnico di Milano visiting our research team for 4 months last year and studying the case of a fleet of electric buses. Our objectives are to design advanced mathematical methods to produce high-quality solutions in a time-efficient manner, as well as to propose an exact solution method to solve electric vehicle routing problems (E-VRPs) with limited charging infrastructure constraints (based on a previous work 43) using a branch-cut-and-price algorithm. We will investigate the trade-offs that exist between driving time and energy consumption (e.g., existence of alternative paths, speed reduction) either by refining the abstraction of the road network on which E-VRPs are defined or by working directly with the road network.

In the long term, our objective is to be able to cope with an increasing amount of integrated problems, with a focus on supply-chain applications (scheduling, location, inventory, routing, etc.). Our work on this topic will benefit from our findings on new decomposition methods. Once our main objectives are achieved, we will explore methodological tools to take different sources of uncertainty into account (e.g., energy consumption of electric vehicles, time-dependent energy costs) using robust optimization or stochastic programming paradigms.

4.2 Robust optimization

In most decision-making problems, the data used in the mathematical model is subject to some form of uncertainty. This uncertainty can be caused by measurement errors, by the variability over time of the processes under study, or by a simple lack of access to reliable data. Incomplete information can also come from the presence of competitors or adversarial individuals whose policies are not known to the decision maker. Recent events have also shown that the capability to withstand major disasters is an issue for important organizations and critical infrastructure.

There are two commonly accepted paradigms that are used to incorporate uncertainty in mathematical programming: stochastic optimization (including chance constraints) and robust optimization. In stochastic optimization, one assumes that a probability distribution is known for the uncertain parameters and optimizes a statistical risk measure such as the expected value or the conditional value-at-risk. In this project, our main tool for dealing with uncertainty will be robust optimization. Unlike stochastic programming, which requires exact knowledge of the probability distributions, robust optimization only describes the variability of the uncertain parameters through bounded uncertainty sets. Further, by replacing the risk measures employed in stochastic optimization with the worst-case realization, robust optimization is better adapted to applications where the decision-making process involves high risks or adversarial participants.

In the last 20 years, the robust optimization field has seen significant progress. Static robust optimization problems, which assume that all decisions are here-and-now, have received considerable attention. They have been formulated and studied with polyhedral, conic and ellipsoidal uncertainty sets. From a complexity viewpoint, it has been shown that these problems are barely harder than their deterministic counterparts in most practical cases. From the numerical viewpoint, mixed-integer linear (or second-order conic) reformulations have been proposed and can handle large-scale problems.
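
As an illustration of the kind of reformulation involved, consider a single constraint with nonnegative variables under the classical budgeted uncertainty set of Bertsimas and Sim (a standard result, recalled here only to fix ideas). Robust feasibility of

\[ \sum_j (a_j + \xi_j \hat a_j)\, x_j \le b \quad \text{for all } \xi \in \Xi = \Big\{\xi : 0 \le \xi \le 1,\ \sum_j \xi_j \le \Gamma\Big\} \]

is, by LP duality applied to the inner maximization, equivalent to the existence of nonnegative auxiliary variables z and p_j such that

\[ \sum_j a_j x_j + \Gamma z + \sum_j p_j \le b, \qquad z + p_j \ge \hat a_j x_j \ \text{ for all } j, \]

so the robust counterpart only needs linearly many additional variables and constraints, which is the sense in which static robust problems remain barely harder than their deterministic versions.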

Despite these rather encouraging results, static models suffer from over-conservatism. Indeed, in these models, no recourse action can be taken to change or adapt the solution once the uncertain values have been revealed. This substantially limits the applicability of the robust optimization paradigm since in most applications that involve temporal decision-making, it is possible to adapt the solution to (at least partially) mitigate the effects of uncertainty.

Acknowledging the importance of adjustable models, the scientific community has started to address solution methods for these problems. While it is theoretically well known that even two-stage robust linear programs are NP-hard, approximate (affine decision rules) 23 or exact (row-and-column generation algorithms) 22, 73 solution methods for problems featuring continuous recourse variables have been proposed in the literature. These two sets of algorithmic tools have made it possible to solve, exactly or approximately, a large number of adjustable robust optimization problems with continuous recourse. Several studies have also sought to understand the quality of the bounds provided by affine decision rules, as well as proposed extensions to piece-wise affine functions.
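
To fix notation (the exact forms vary across the cited works), a two-stage adjustable robust problem with continuous recourse can be written as

\[ \min_{x \in X} \; c^\top x + \max_{\xi \in \Xi} \; \min_{y \ge 0} \big\{\, q^\top y \;:\; T(\xi)\, x + W y \ge h(\xi) \,\big\}. \]

Affine decision rules restrict the recourse to y(\xi) = y^0 + Y \xi, which turns the problem into a static robust program in (x, y^0, Y), while row-and-column generation keeps the exact recourse but iteratively adds worst-case scenarios \xi^1, \xi^2, \ldots to a master problem, each with its own copy of the recourse variables.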

The problems we wish to investigate in this project are adjustable robust optimization problems where recourse decisions are integer. The situation is significantly more complex for this setting than for the case of continuous recourse. The two aforementioned approaches do not apply: affine decision rules provide continuous recourse actions by construction, while the separation problem in the row-and-column generation algorithm is based on linear programming duality.

Although they come with no theoretical guarantees regarding their computational complexity, some numerical algorithms aim to solve adjustable robust optimization problems with integer recourse approximately. They are all based on simplifying the type of feasible recourse actions, either by partitioning the uncertainty set or by imposing that the recourse decisions should be chosen among a predetermined finite set of solutions (finite/K-adaptability). Unfortunately, these methods hardly scale up and the recent results 26, 48 are limited to small problem instances, which can be partially explained by the fact that they rely on big-M reformulations, known to yield poor continuous relaxations.

In the following years, we plan to develop exact and approximate solution methods for adjustable robust optimization problems with integer recourse. On the one hand, we will study relatively general row-and-column generation algorithms that are based on stronger continuous relaxations than what was done in the literature (aforementioned network formulations and decision diagram-based approaches seem to be promising leads in achieving this goal). On the other hand, we will focus on classes of specific robust models (special cases), which will be studied both from a theoretical and a numerical perspective. Some of our efforts will be spent on improving the efficiency of existing methods such as K-adaptability since these approaches could benefit from the considerable numerical experience of the team.

We developed a first approach to adjustable robust optimization problems with binary recourse decisions in 20. The main idea of this work is to convexify the recourse feasible region using a Dantzig-Wolfe reformulation. Promising results were presented for the special case where the coupling constraints are deterministic and satisfy additional technical assumptions. In this case, the inner maximization and minimization problems can be interchanged, resulting in a large-scale deterministic equivalent model. We plan to extend this technique to the more general case and to understand how to relax the aforementioned technical assumptions.

In the near term, we plan to carry out several projects to respond to the challenges we outline above. We plan to study solution approaches for complex problems arising in future 5G networks. This topic is the subject of a CIFRE thesis supervised by Boris Detienne and Pierre Pesneau in collaboration with Orange. We will also address complex network-design and location problems (topic of ANR project DE-SIDE, in collaboration with Sobolev Institute and Kedge Business School). We will determine easy and hard cases for robust multi-stage network-design problems, depending on the set of constraints, the definition of the uncertainty and the possible second-stage actions, with a focus on integer recourse.

From a methodological perspective, we will address primal and dual bounds for two- and multi-stage adjustable robust optimization problems (the topic of the ANR-JCJC project DROI granted to Ayse N. Arslan). Building upon the results of 20, we propose to deal with constraint uncertainty in adjustable robust optimization using Lagrange and Fenchel duality, which will provide dual bounds. Several issues need to be resolved in investigating this approach, such as the optimization of the (infinite-size) dual problems and how to close the duality gap. We additionally propose to study improved decision rules for these problems.

In the long term, our objectives in relation with this challenge are as follows. First, we want to design efficient approximate approaches to solve multi-stage adjustable robust optimization problems. Although exact methods seem to be out of reach for the current state of the art in the general case, we will also strive to identify special cases which can be solved to optimality with numerically viable algorithms. Second, we seek to bridge the gap between multi-stage integer robust optimization and combinatorial games. Some similar concepts are used in both types of problems (typically, min-max-min problems have to be solved). This may allow us to disseminate our techniques to other fields as well as to adapt results obtained for those fields to the context of adjustable robust optimization.

5 Social and environmental responsibility

5.1 Impact of research results

Our work 27 on a stochastic generation and transmission expansion planning problem studied at RTE will help the company in its future strategic studies (e.g., studies similar to 65, 64). Specifically, we show how to incorporate into the mathematical formulation of the expansion planning problem the French legislation that requires the expected number of hours with energy not served to be at most three per year. This work was part of the PhD thesis of Xavier Blanchot.

We are also involved in the PowDev project. The main objective of this project is to evaluate and optimize the resilience of power systems in the context of a massive insertion of renewable energies. The project aims to elaborate a comprehensive and integrated set of decision support tools by considering extreme events in present and future climates, the complexity of the power grid, and socio-economic scenarios.

6 Highlights of the year

In our original objectives, we identified several application domains, with a highlight on energy. Our team has obtained several grants to develop new methodologies applied to the field of energy:

  • the PEPR TASE-funded project PowDev, in which we optimize the resilience of power grids
  • the Défi Inria-EDF, which funded a postdoc in our team
  • an industrial PhD thesis (CIFRE) with EDF, which will begin shortly.

This validates our strategy, and shows that the impact of our optimization algorithms is recognized by both companies and research program committees.

Together with the methodological projects funded by the ANR (DE-SIDE, AD-LIB, DDROI, DROP), these grants will allow the team to reach the objectives set at its creation.

7 New software, platforms, open data

7.1 New software

7.1.1 BaPCod

  • Name:
    A generic Branch-And-Price-And-Cut Code
  • Keywords:
    Column Generation, Branch-and-Price, Branch-and-Cut, Mixed Integer Programming, Mathematical Optimization, Benders Decomposition, Dantzig-Wolfe Decomposition, Extended Formulation
  • Functional Description:
BaPCod is a prototype code that solves Mixed Integer Programs (MIPs) by applying reformulation and decomposition techniques. The reformulated problem is solved using branch-and-price-and-cut (column generation) algorithms, Benders approaches, network flow and dynamic programming algorithms. These methods can be combined in several hybrid algorithms to produce exact or approximate solutions (primal solutions with a bound on the deviation from the optimum).
  • Release Contributions:
    Bug fixes and enhancements. Better support for compact MIP models. Debug solution support. More cutting planes statistics. Apple M1 support. Experimental CLP solver support. Compiled RCSP library is now included.
  • News of the Year:
    First public release.
  • URL:
  • Publication:
  • Contact:
    Ruslan Sadykov
  • Participants:
    Artur Alves Pessoa, Boris Detienne, Eduardo Uchoa Barboza, Franck Labat, François Clautiaux, François Vanderbeck, Halil Sen, Issam Tahiri, Michael Poss, Pierre Pesneau, Romain Leguay, Ruslan Sadykov
  • Partners:
    Université de Bordeaux, CNRS, IPB, Universidade Federal Fluminense

7.1.2 VRPSolver

  • Name:
    VRPSolver
  • Keywords:
    Column Generation, Vehicle routing, Numerical solver
  • Scientific Description:
Major advances were recently obtained in the exact solution of Vehicle Routing Problems (VRPs). Sophisticated Branch-Cut-and-Price (BCP) algorithms for some of the most classical VRP variants now solve many instances with up to a few hundred customers. However, adapting and reimplementing those successful algorithms for other variants can be a very demanding task. This work proposes a BCP solver for a generic model that encompasses a wide class of VRPs. It incorporates the key elements found in the best recent VRP algorithms: ng-path relaxation, rank-1 cuts with limited memory, and route enumeration, all generalized through the new concept of "packing set". This concept is also used to derive a new branching rule based on accumulated resource consumption and to generalize the Ryan and Foster branching rule. Extensive experiments on several variants show that the generic solver has an excellent overall performance, in many problems being better than the best existing specific algorithms. Even some non-VRPs, like bin packing, vector packing and generalized assignment, can be modeled and effectively solved.
  • Functional Description:
This solver allows one to model and solve to optimality many combinatorial optimization problems belonging to the class of vehicle routing, scheduling, packing and network design problems. The problem is formulated using variables, a linear objective function, linear and integrality constraints, definitions of graphs and resources, and a mapping between graph arcs and variables. A complex Branch-Cut-and-Price algorithm is used to solve the model. The concepts of elementarity and packing sets are used to pass additional information to the solver, so that several state-of-the-art Branch-Cut-and-Price components can be used to radically improve the efficiency of the solver. The interface of the solver is implemented in Julia using the JuMP package. To simplify installation and usage, the solver is distributed as a docker image. The solver can be used only for academic purposes.
  • Release Contributions:
Version 0.4.1a allows users to continue using the software; it removes the current date check.
  • URL:
  • Publication:
  • Contact:
    Ruslan Sadykov
  • Participants:
    Ruslan Sadykov, Eduardo Uchoa Barboza, Artur Alves Pessoa, Eduardo Queiroga, Teobaldo Bulhões, Laurent Facq
  • Partners:
    Universidade Federal Fluminense, Universidade Federal da Paraiba

7.1.3 Benders by batch

  • Keywords:
    Linear optimization, Stochastic optimization, Large scale, Benders Decomposition, Algorithm
  • Functional Description:

    A C++ implementation of the Benders by batch algorithm described in the article: Xavier Blanchot, François Clautiaux, Boris Detienne, Aurélien Froger, Manuel Ruiz. (2023). The Benders by batch algorithm: design and stabilization of an enhanced algorithm to solve multicut Benders reformulation of two-stage stochastic programs. European Journal of Operational Research. 10.1016/j.ejor.2023.01.004

    Specifically, the code contains the following optimization algorithms for solving stochastic two-stage linear programs (IBM ILOG CPLEX is required):

  • the Benders by batch algorithm (with or without stabilization)
  • a classic Benders decomposition algorithm (monocut, multicut, or cut aggregation by batch of subproblems, with or without in-out stabilization)
  • a level bundle algorithm (monocut)
  • the IBM ILOG CPLEX barrier algorithm

  • URL:
  • Publication:
  • Contact:
    François Clautiaux
  • Partner:
    RTE

8 New results

8.1 A Continuous Approximation Model for the Electric Vehicle Fleet Sizing Problem

Keywords:

Continuous Approximation, Electric Vehicles, Fleet Sizing, Region Partitioning

Establishing the size of an EV fleet is a vital decision for logistics operators. In urban settings, this issue is often dealt with by partitioning the geographical area around a depot into service zones, each served by a single vehicle. Such zones ultimately guide daily routing decisions. We study the problem of determining the optimal partitioning of an urban logistics area served by EVs. We cast this problem in a Continuous Approximation (CA) framework.

Considering a ring-radial region with a depot at its center, we introduce the electric vehicle fleet sizing problem (EVFSP). As the current range of EVs is generally sufficient to perform service in urban areas, we assume that the EV fleet is exclusively charged at the depot, i.e., en-route charging is not allowed. In the EVFSP, we account for EV features such as limited range and non-linear charging and energy pricing functions stemming from Time-of-use (ToU) tariffs. Specifically, we combine non-linear charging functions with pricing functions into charging cost functions, establishing the cost of charging an EV to a target charge level. We propose a polynomial-time algorithm for approximating this function and prove that the resulting approximation is exact under certain conditions. The resulting function is non-linear with respect to the route length. Therefore, we propose a Mixed Integer Non-linear Program (MINLP) for the EVFSP, which optimizes both dimensions of each zone in the partition. We strengthen our formulation with symmetry-breaking constraints. Furthermore, considering convex charging cost functions, we show that zones belonging to the same ring are equally shaped. We propose a tailored MINLP formulation for this case. Finally, we derive upper and lower bounds for the case of non-convex charging cost functions.

We perform a series of computational experiments. Our results demonstrate the effectiveness of our algorithm in computing charging cost functions. We show that it is not uncommon that these functions are non-convex. Furthermore, we observe that our tailored formulation for convex charging cost functions improves the results compared to our general formulation. Finally, contrary to the results obtained in the CA literature for combustion engine vehicles, we empirically observe that the majority of EVFSP optimal solutions consist of a single inner ring.

Participants: Aurélien Froger.

8.2 KidneyExchange.jl: A Julia package for solving the kidney exchange problem with branch-and-price

Keywords:

Kidney exchange problem, Column-generation and Branch-and-Price, Decomposition algorithms, Julia Language

The kidney exchange problem (KEP) is an increasingly important healthcare management problem in most European and North American countries, which consists of matching incompatible patient-donor pairs in a centralized system. Despite significant progress in the exact solution of KEP instances in recent years, larger instances still pose a challenge, especially when altruistic donors are taken into account. In this article, we present a branch-and-price algorithm for the exact solution of the KEP in the presence of altruistic donors. This algorithm is based on a disaggregated cycle and chain formulation in which subproblems are managed through graph copies. We additionally present a branch-and-price algorithm based on the position-indexed chain-edge formulation, as well as two compact formulations. We formalize and analyze the complexity of the resulting pricing problems and identify the conditions under which they can be solved using polynomial-time algorithms. We propose several algorithmic improvements for the branch-and-price algorithms as well as for the pricing problems. We extensively test all of our implementations on a benchmark made up of different types of instances, the largest of which have 10000 pairs and 1000 altruists. Our numerical results show that the proposed algorithm can be up to two orders of magnitude faster than the state of the art. All models and algorithms presented in the paper are gathered in an open-access Julia package, KidneyExchange.jl.
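
For background, the classical exponential-size cycle-and-chain formulation of the KEP, on which branch-and-price approaches are built, reads as follows (a generic textbook form, not the exact disaggregated model of the paper):

    \max \sum_{c \in \mathcal{C} \cup \mathcal{H}} w_c x_c \quad \text{s.t.} \quad \sum_{c \ni v} x_c \le 1 \;\; \forall v \in V, \qquad x_c \in \{0,1\} \;\; \forall c \in \mathcal{C} \cup \mathcal{H},

where \mathcal{C} and \mathcal{H} are the sets of feasible exchange cycles (of bounded length) and altruist-initiated chains, w_c is the number of transplants in c, and each pair or altruist v participates in at most one selected cycle or chain. Column generation prices out improving cycles and chains, and branching restores integrality.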

Participants: Ayse N. Arslan.

8.3 A survey on last fifty years of integer linear programming, with a focus on recent practical advances

Keywords:

Combinatorial Optimization, Mixed-Integer Linear Programming, Branch-and-Cut, Dantzig-Wolfe Decomposition, Benders Decomposition

Mixed-integer linear programming (MILP) has become a cornerstone of operations research. This is driven by the enhanced efficiency of modern solvers, which can today find globally optimal solutions within seconds for problems that were out of reach a decade ago. The versatility of these solvers has enabled successful applications in many areas, such as transportation, logistics, supply chain management, revenue management, finance, telecommunications, and manufacturing. Despite the impressive success already obtained, many challenges remain, and MILP is still a very active field.

This article provides an overview of the most significant results achieved in advancing MILP solution methods. Given the immense literature on this topic, we made deliberate choices to focus on computational aspects and recent practical performance improvements, emphasizing research that reports computational experiments. We organize our survey into three main parts, dedicated to branch-and-cut methods, Dantzig-Wolfe decomposition, and Benders decomposition. The paper concludes by highlighting ongoing challenges and future opportunities in MILP research.

Participants: François Clautiaux.

8.4 Uncertainty reduction in static robust optimization

Keywords:

Combinatorial optimization, Robust optimization, NP-hardness, Reformulation

Uncertainty reduction has recently been introduced in the robust optimization literature as a relevant special case of decision-dependent uncertainty. Herein, we identify two relevant situations in which the problem is polynomially solvable. We provide insights into possible MILP reformulations and illustrate the practical relevance of our theoretical results on shortest path instances from the literature.

Participants: Ayse N. Arslan.

8.5 A branch-cut-and-price approach for the two-echelon vehicle routing problem with drones

Keywords:

Vehicle routing, Drones, Branch-cut-and-price

We present a new set-partitioning model for the two-echelon vehicle routing problem with drones (2E-VRP-D). In this model, partial routes corresponding to drone movements are enumerated by an efficient dynamic program. A state-of-the-art exact branch-cut-and-price algorithm solves the model using a labeling algorithm for the pricing problems. We further propose an adaptation of the well-known rounded capacity cuts, specifically tailored to this problem, as well as pre-processing methods designed to enhance the algorithm's efficiency. In addition, we propose an effective heuristic branch-cut-and-price algorithm based on the exact one. Computational experiments demonstrate that the exact algorithm solves all instances from the 2E-VRP-D literature and significantly increases the size of optimally solved clustered instances. Finally, a sensitivity analysis is conducted to assess the impact of the proposed enhancements.

Participants: Sylvain Lichau.

8.6 A Competitive Heuristic Algorithm for Vehicle Routing Problems with Drones

Keywords:

Drones, Dynamic programming, Variable neighborhood search, Routing, Heuristics

We propose a heuristic algorithm capable of handling multiple variants of the vehicle routing problem with drones (VRPD). Assuming that the drone may be launched from one node and recovered at another, these variants are characterized along three axes: (1) minimizing the transportation cost or minimizing the makespan, (2) whether the drone is allowed to land while awaiting recovery, and (3) a single truck or multiple trucks, each equipped with a drone. In our algorithm, we represent a VRPD solution as a set of customer sequences and evaluate it via local search procedures solving, for each sequence, a problem that we refer to as the fixed route drone dispatch problem (FRDDP). Given a sequence of customers to be served by a single truck and its drone, the FRDDP selects a subset of customers to be served by the drone and determines drone launch and recovery nodes, while ensuring that each such customer is positioned between two nodes of the initial sequence. We introduce a heuristic dynamic program (HDP) to solve the FRDDP with reduced computational complexity compared to an exact solution algorithm for the problem. We reinforce our algorithm by developing filtering strategies based on the HDP. We benchmark the performance of our algorithm on nine benchmark sets pertaining to four VRPD variants, comprising 932 instances. Our algorithm computes 651 of the 680 known optimal solutions and identifies 189 new best-known solutions.

Participants: Aurélien Froger.

8.7 Two-stage and Lagrangian Dual Decision Rules for Multistage Adaptive Robust Optimization

Keywords:

Optimization under uncertainty, Robust optimization, Decision rules

In this work, we design primal and dual bounding methods for multistage adaptive robust optimization (MSARO) problems, motivated by two decision rules rooted in the stochastic programming literature. From the primal perspective, this is achieved by applying decision rules that restrict the functional forms of only a certain subset of decision variables, resulting in an approximation of MSARO as a two-stage adjustable robust optimization problem. We leverage the two-stage robust optimization literature in the solution of this approximation. From the dual perspective, decision rules are applied to the Lagrangian multipliers of a Lagrangian dual of MSARO, resulting in a two-stage stochastic optimization problem. As the quality of the resulting dual bound depends on the distribution chosen when developing the dual formulation, we define a distribution optimization problem with the aim of improving the obtained bound and develop solution methods tailored to the nature of the recourse variables. Our framework is general-purpose, does not require strong assumptions such as a stage-wise independent uncertainty set, and can accommodate integer recourse variables. Computational experiments on newsvendor, location-transportation, and capital budgeting problems show that our bounds yield considerably smaller optimality gaps compared to the existing methods.
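
To fix notation, a generic linear MSARO problem can be written in nested form (a simplified sketch, not the paper's exact setting):

    \min_{x_1 \in X_1} c_1^\top x_1 + \max_{\xi_2 \in \Xi_2} \min_{x_2 \in X_2(x_1, \xi_2)} c_2^\top x_2 + \cdots + \max_{\xi_T \in \Xi_T} \min_{x_T \in X_T(x_1, \dots, x_{T-1}, \xi_2, \dots, \xi_T)} c_T^\top x_T.

A primal decision rule restricts part of the recourse variables to prescribed functional forms of the revealed uncertainty, for instance affine rules x_t(\xi) = u_t + V_t \xi_{[t]}, where \xi_{[t]} collects the parameters observed up to stage t; since this shrinks the policy space of a minimization problem, it yields an upper (primal) bound, while applying rules to the multipliers of a Lagrangian dual yields lower (dual) bounds.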

Participants: Ayse N. Arslan.

8.8 Adjustable robust optimization with objective uncertainty

Keywords:

Uncertainty modelling, Two-stage robust optimization, Reformulation, Fenchel duality, Branch-and-price

In this work, we study optimization problems where some cost parameters are not known at decision time and the decision flow is modeled as a two-stage process within a robust optimization setting. We address general problems in which all constraints (including those linking the first and the second stages) are defined by convex functions and involve mixed-integer variables, thus extending the existing literature to a much wider class of problems. We show how these problems can be reformulated using Fenchel duality, allowing us to derive an enumerative exact algorithm, for which we prove asymptotic convergence in the general case and finite convergence when the first-stage variables are all integer. An implementation of the resulting algorithm, embedding a column generation scheme, is then computationally evaluated on a variant of the capacitated facility location problem with uncertain transportation costs, using instances derived from the existing literature. To the best of our knowledge, this is the first approach providing results on the practical solution of this class of problems.

Participants: Boris Detienne.

9 Bilateral contracts and grants with industry

9.1 Bilateral contracts with industry

We have a collaboration with CATIE (the contract is still being negotiated between Orange and the University of Bordeaux) around the PhD thesis of Fulin Yan, which started in February 2023. The project focuses on machine learning and optimization.

Participants: François Clautiaux, Aurélien Froger, Fulin Yan.

Together with the Inria team INOCS, we are part of the Inria/EDF challenge "Gérer les systèmes électriques de demain" (managing tomorrow's power systems). The work package "Algorithmes de décomposition pour les problèmes d'investissement long-terme" (decomposition algorithms for long-term investment problems) focuses on decomposition methods for multi-stage integer stochastic programs, with applications to capacity planning problems.

Participants: Ayse N. Arslan, Boris Detienne, Aurélien Froger.

Saint-Gobain Research Paris is an industrial research and development centre dedicated to light and sustainable construction within the Saint-Gobain Group, the world leader in this field. The collaboration is centered around the PhD thesis of Pierre Pinet. We are working on stochastic vehicle routing problems.

Participants: François Clautiaux, Ayse N. Arslan, Pierre Pinet.

10 Partnerships and cooperations

10.1 International research visitors

10.1.1 Visits of international scientists

Artur Pessoa (Universidade Federal Fluminense) spent three weeks in our team in January. He worked with Aurélien Froger on an extension to VRPSolver.

Inria International Chair

Participants: Eduardo Uchoa.

10.1.2 Visits to international teams

François Clautiaux made a short visit to Loyola University Chicago for joint work with Mike Hewitt (Loyola University) and Simon Belières (Toulouse Business School).

10.2 National initiatives

ANR-JCJC project DROI.

A. Arslan has obtained funding for this project, which concentrates on primal and dual bounds for adjustable robust optimization problems. Being a "young researcher" grant, this project will help Dr. Arslan build collaborations at the national and international levels. The project includes team member Boris Detienne, as well as Michael Poss (CNRS researcher, LIRMM), Merve Bodur (Assistant Professor, University of Edinburgh) and Jérémy Omer (Associate Professor, INSA Rennes).

Participants: Ayse N. Arslan, Boris Detienne, Patxi Flambard.

ANR AD-LIB

is coordinated by François Clautiaux. This project is conducted with the LAAS laboratory in Toulouse and Toulouse Business School. We consider general aggregation/disaggregation techniques to address optimization problems that are expressed with the help of sequential decision processes. Our main goals are threefold: a generic formalism that encompasses the aforementioned techniques; more efficient algorithms to control the aggregation procedures; and open-source codes that leverage and integrate these algorithms to efficiently solve hard combinatorial problems in different application fields. We will jointly study two types of approaches, MIP and SAT, to reach our goals. MIP-based methods are useful to obtain provably optimal solutions and to produce theoretical guarantees, whereas SAT solvers are strong at detecting infeasible solutions and learning clauses to exclude these solutions. Their combination with CP through lazy clause generation is one of the best tools to solve highly combinatorial and non-linear problems. Aggregation/disaggregation techniques generally make use of many subroutines, which allows an efficient hybridization of the different optimization paradigms. We also expect a deeper cross-fertilization between these different sets of techniques and the different communities.

Participants: François Clautiaux, Aurélien Froger, Boris Detienne, Pierre Pesneau, Luis Lopes Marques.

Strategic Power Systems Development for the Future (PowerDev) funded by PEPR TASE

studies optimization methods and reliability/resilience engineering applied to large-scale electrical power systems. The project is led by CentraleSupélec (University of Paris-Saclay) and is composed of a consortium of higher education institutions across France (CentraleSupélec, UVSQ, Université Grenoble Alpes) as well as research organizations (Inria, CNRS). Modern power systems are expected to become increasingly complex to design and operate due to the growing number of renewable energy sources (RES). Renewable energy generation is, by nature, intermittent and introduces an amount of uncertainty that severely affects the physical responses of the power system, particularly in terms of voltage control and frequency regulation [1]. Moreover, RES integration within the power system requires the introduction of many new power electronic devices, which add to the system's complexity and increase its possible failure modes [2,3]. Combined with unexpected initiating events, these two main features can lead to cascading failure risks, triggering disastrous consequences for the power grid and, most notably, large-scale blackouts [4-7]. The economic and societal consequences for the impacted regions are usually massive, with economic losses measured in the tens of billions of dollars [8]. The main objective of this project is to evaluate and optimize the resilience of power systems in the context of a massive insertion of renewable energies. The project aims to elaborate a comprehensive and integrated set of decision support tools by considering extreme events in present and future climates, the complexity of the power grid, and socio-economic scenarios.

Participants: François Clautiaux, Boris Detienne.

Decision Dependent Robust OPtimization (DDROP)

is an ANR-PRC project coordinated by Boris Detienne. The principal objective of this project is to advance the theoretical understanding and numerical solution of robust optimization problems with decision-dependent uncertainty sets. Many real-life decision-making problems under uncertainty include some form of interaction between the actions of the decision-maker and the realization of uncertain parameters. For instance, in medical appointment scheduling, no-shows of the patients are typically related to the schedules themselves, whereas in long-term medical treatments, the state of a patient evolves depending on past medication (decision-dependent uncertainty). On the other hand, in planning organ transplants, the uncertainty related to the compatibility of donors and patients needs to be investigated via expensive medical tests (information discovery) under a given budget constraint. It is therefore essential to incorporate the interactions of the decision-maker with the uncertain parameters within optimization under uncertainty models. This project proposes the study and solution of robust optimization models formalizing such interactions under a single mathematical framework with decision-dependent uncertainty sets. These problems have mostly been neglected in the literature despite their theoretical and applied interest, leaving many research questions open. This project aims to respond to some of these questions and fill some of the outlined gaps in the literature by studying the complexity of and solution algorithms for these problems, exploring their connections with classical paradigms such as bilevel programming and adjustable robust optimization, and developing solution algorithms based on reformulations, decomposition algorithms, and machine learning. Any progress made in this sense should interest both researchers and practitioners from multiple communities that do not always interact with each other. Consortium: EDGE, ESSEC Business School, LIRMM. People involved at EDGE: Ayse Nur Arslan, Boris Detienne.

Participants: Boris Detienne, Ayse N. Arslan.

Grip4all.

The aim of this project is to make industry more competitive by developing a new palletising cell adapted to the severe constraints imposed on the logistics flow when handling mixed products (of varying dimensions and weights) and arranging them on a pallet, without having to sort them manually upstream. This new type of palletising meets a strong demand from a number of sectors, notably mass retail and the food industry, for handling heterogeneous products without imposing constraints on their packaging, which significantly improves productivity and eliminates tedious manual tasks. No similar solution currently exists on the market. The flexible robotics issues addressed will be transferable to other logistics processes in the factory of the future. Our objective is to develop algorithms for the optimised design of palletising plans. Two main levers are considered: the order of arrival of items and the design of pallet loading plans. Given a list of items with physical properties, each assigned to an order and a location, and a list of pallets, we aim to compute an order of arrival for each item and to place the items on the pallets considering two criteria: the number of pallets used and the pallet balance score. Several technical issues need to be considered. Firstly, the efficiency of the system will be highly dependent on the order of arrival of the items. This order is constrained by the general organisation of the warehouse: several configurations will be evaluated. Secondly, the pallet construction algorithms will be adapted to the gripping system chosen. We will implement an approach based on the team's expertise in several combinatorial optimisation tools: mathematical programming (MIP), heuristics, constraint programming (CP), and the combination of optimisation algorithms with machine learning tools. Consortium: Inria EDGE, Inria Auctus, PPRIME, FIVES, KMSD.

Participants: François Clautiaux, Pierre Pesneau.

11 Dissemination

11.1 Promoting scientific activities

11.1.1 Scientific events: selection

Chair of conference program committees

F. Clautiaux is one of the scientific chairs of Dataquitaine, the regional conference on AI, data science, and optimization.

Member of the conference program committees

Conference program committee of ROADEF

11.1.2 Journal

Member of the editorial boards

 

F. Clautiaux is a member of the editorial board of OJMO, the Open Journal of Mathematical Optimization.

Reviewer - reviewing activities

 

  • B. Detienne has been a reviewer for International Journal on Computing and International Transactions in Operational Research.
  • A. Froger has been a reviewer for Computers & Operations Research, European Journal of Operational Research, International Transactions in Operational Research, and Transportation Science.
  • P. Pesneau has been a reviewer for Networks.
  • F. Clautiaux has been a reviewer for 4OR, Computers & Operations Research, European Journal of Operational Research, INFORMS Journal on Computing, and Operations Research and Decisions.

11.1.3 Invited talks

11.1.4 Leadership within the scientific community

A. Arslan and F. Clautiaux are members of the scientific committee of the GDR Recherche Opérationnelle (CNRS).

11.1.5 Scientific expertise

  • F. Clautiaux is an expert for the Région Nouvelle-Aquitaine.
  • B. Detienne has been an expert for the European Science Foundation.

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

  • F. Clautiaux is the head of the master's program in Operations Research at the University of Bordeaux (2 years, 35 students).
  • B. Detienne is in charge of apprenticeship training in the master's program in Operations Research.
  • A. Froger is in charge of internship management for the master's program in Operations Research, and is responsible for the outgoing international mobility of the Master of Engineering in Mathematical Optimization (CMI OPTIM) of the University of Bordeaux.
  • P. Pesneau is the head of the CMI OPTIM of the University of Bordeaux (5 years, 25 students).
  • A. Arslan and B. Detienne each taught one day at the international Robust Optimization School organized in Montpellier.

Below is a list of courses taught by members of the team:

  • A. Froger: Optimisation (L2, Université de Bordeaux), Projet d'optimisation, Rémédiation Programmation linéaire, Travaux d'étude et de recherche (M1, Université de Bordeaux), Gestion des opérations et planification de la production, Responsable des stages de fin d'études (M2, Université de Bordeaux), Numerics (Ecole doctorale Mathématiques et informatique).
  • F. Clautiaux: Integer programming (M2, Univ. Bordeaux), industrial project in optimization (M2, Univ. Bordeaux), optimization project (L1, Univ. Bordeaux), Complexity and algorithms (M1, Univ. Bordeaux).
  • B. Detienne: Optimisation continue (M1, Univ. Bordeaux), Optimisation continue sous contraintes (M1, Université de Bordeaux), Etudes et veille bibliographique (M1, Univ. Bordeaux), Recherche Opérationnelle (Année 1, ENSEIRB), Integer Programming (M2, Univ. Bordeaux), Optimisation dans l'incertain (M2, Université de Bordeaux), Introduction à l'optimisation non linéaire (L2-L3, Université de Bordeaux).
  • P. Pesneau: Programmation pour le calcul scientifique (L2, Université de Bordeaux), Groupe de travail applicatif, Projets tutorés (L3, Université de Bordeaux), Recherche opérationnelle (L3, INP Bordeaux), Algorithmes pour l'optimisation en nombres entiers, Introduction à la programmation en variables entières, Graphes et algorithmes, Etudes de cas industriels, Travaux d'étude et de recherche (M1, Université de Bordeaux), Integer Programming (M2, Université de Bordeaux).
  • A. Arslan: Optimisation dans l'incertain (M2, Université de Bordeaux), Travaux d'étude et de recherche (M1, Université de Bordeaux), Groupe de travail applicatif (L3, Université de Bordeaux).

11.2.2 Supervision

  • F. Clautiaux and A. Froger supervise two PhD students: Luis Lopes Marques (from October 2022), Fulin Yan (from March 2023).
  • F. Clautiaux supervises the PhD thesis of Cécile Dupouy with Walid Klibi and Olivier Labarthe (Kedge Business School).
  • F. Clautiaux and B. Detienne supervise the PhD thesis of Komlanvi Parfait Ametana with Olga Battaia (Kedge Business School).
  • A. Arslan and F. Clautiaux supervise the PhD thesis of Pierre Pinet (CIFRE), from November 2024.
  • B. Detienne supervises the PhD thesis of Mickaël Gaury with Gautier Stauffer (HEC Lausanne).
  • A. Arslan and B. Detienne supervise the thesis of Patxi Flambard (from October 2023).

11.2.3 Juries

B. Detienne was part of the jury of the PGMO PhD award and of the ROADEF best student paper award.

11.3 Popularization

11.3.1 Specific official responsibilities in science outreach structures

  • F. Clautiaux is the head of the "Cap IA" project of the Compétences et Métiers d'Avenir initiative at the Université de Bordeaux.
  • B. Detienne is the head of work package 6 of the Cap IA project, dedicated to outreach and undergraduate student training.

11.3.2 Productions (articles, videos, podcasts, serious games, ...)

B. Detienne developed the application The intergalactic traveling salesperson to popularize optimization algorithms.

11.3.3 Participation in Live events

  • In collaboration with the University of Bordeaux, the Cap IA project, and the ACCESS program, B. Detienne and P. Pesneau organized an immersion day in university classes for high-school teachers.
  • P. Pesneau took part in the "Teens'up révélateur de talents" forum organized by the Région Nouvelle-Aquitaine, within the Cap IA project.
  • Within the Cap IA project, K.P. Ametana, B. Detienne, P. Flambard, and L. Lopes ran a two-day stand about The intergalactic traveling salesperson at the Data Party Festival in Bordeaux. 180 schoolchildren took part in the activity, along with about 40 visitors with various profiles.

12 Scientific production

12.1 Publications of the year

International journals

Conferences without proceedings

Reports & preprints

12.2 Cited publications
