Quantitative modeling is routinely used in both industry and administration to design and operate transportation, distribution, and production systems. Optimization concerns every stage of the decision-making process: long-term investment budgeting and activity planning, tactical management of scarce resources, and the control of day-to-day operations. In many optimization problems that arise in decision-support applications, the most important decisions (control variables) are discrete in nature: on/off decisions to buy, to invest, to hire, to send a vehicle, to allocate resources, to decide on precedence in operation planning, or to install a connection in network design. Such combinatorial optimization problems can be modeled as linear or nonlinear programs with integer decision variables and extra variables to deal with continuous adjustments. The most widely used modeling tool consists in defining the feasible decision set using linear inequalities with a mix of integer and continuous variables, the so-called Mixed Integer Programs (MIP), which already allow a fair description of reality and are well suited for global optimization. The solution of such models relies essentially on enumeration techniques and is notoriously difficult given the huge size of the solution space.
Commercial solvers have made significant progress but are quickly overwhelmed beyond a certain problem size. A key to further progress is the development of better problem formulations that provide strong continuous approximations and hence help to prune the enumerative solution scheme. Effective solution schemes are a complex blend of techniques: cutting planes to better approximate the convex hull of feasible (integer) solutions, extended reformulations (combinatorial relations can be formulated better with extra variables), constraint programming to actively reduce the solution domain through logical implications, Lagrangian and Benders decomposition methods to produce powerful relaxations, multi-level programming to model a hierarchy of decision levels or recourse decisions in the case of data adjustment, heuristics and meta-heuristics (greedy, local improvement, or randomized partial search procedures) to produce good candidates at all stages of the solution process, and branch-and-bound or dynamic programming enumeration schemes to find a global optimum. The real challenge is to integrate the most efficient methods into one global system so as to prune what is essentially an enumeration-based solution technique. Progress is measured by the scale of input data that can now be handled, by the integration of many decision levels into planning models, and, not least, by the account taken of random data by way of modeling expectation (stochastic approaches) or worst-case behavior (robust approaches).
Building on complementary expertise, our team's overall goals are threefold:
To design tight formulations for specific problems and generic models, relying on delayed cut and column generation, decomposition, extended formulations and projection tools for linear and nonlinear mixed integer programming models. More broadly, to contribute to theoretical and methodological developments of exact approaches in combinatorial optimization, while extending the scope of applications.
To demonstrate the strength of cooperation between complementary exact mathematical optimization techniques, dynamic programming, robust and stochastic optimization, constraint programming, combinatorial algorithms and graph theory, by developing “efficient” algorithms for specific mathematical models. To tackle large-scale real-life applications, providing provably good approximate solutions by combining exact methods and heuristics.
To provide prototypes of specific model solvers and generic software tools that build on our research developments, writing proof-of-concept code, while transferring our research findings to internal and external users.
Combinatorial optimization is the field of discrete
optimization problems. In many applications, the most important
decisions (control variables) are binary (on/off decisions) or
integer (indivisible quantities). Extra variables can represent
continuous adjustments or amounts. This results in models known as
mixed integer programs (MIP), where the relationships between
variables and input parameters are expressed as linear constraints
and the goal is defined as a linear objective function. MIPs are
notoriously difficult to solve: good quality estimations of the
optimal value (bounds) are required to prune enumeration-based
global-optimization algorithms whose complexity is exponential. The standard approach to solving an MIP is the so-called branch-and-bound algorithm: the search space is split recursively, and subproblems whose relaxation bound cannot improve on the incumbent solution are pruned.
Progress can be expected from the development of tighter formulations. Central to our field are the characterization of polyhedra defining or approximating the solution set and combinatorial algorithms to identify “efficiently” a minimum cost solution or to separate an infeasible point. With properly chosen formulations, exact optimization tools can be competitive with other methods (such as meta-heuristics) in constructing good approximate solutions within limited computational time, and they have the important advantage of providing a performance guarantee through the relaxation bounds. Decomposition techniques implicitly lead to better problem formulations as well, while constraint propagation is a tool from artificial intelligence to further improve formulations through intensive preprocessing. A new trend, where recent progress has been made, is robust optimization: the aim is to produce optimized solutions that remain of good quality even if the problem data is subject to stochastic variations. In all cases, the study of specific models and challenging industrial applications is quite relevant, because developments made in a specific context can become generic tools over time and find their way into commercial software.
Our project brings together researchers with expertise in mathematical programming (polyhedral approaches, Dantzig-Wolfe decomposition, mixed integer programming, robust and stochastic programming, and dynamic programming), graph theory (characterization of graph properties, combinatorial algorithms) and constraint programming, with the aim of producing better quality formulations and developing new methods to exploit these formulations. These new results are then applied to find high-quality solutions for practical combinatorial problems such as routing, network design, planning, scheduling, and cutting and packing problems.
Adding valid inequalities to the polyhedral description of an MIP allows one to improve the resulting LP bound and hence to better prune the enumeration tree. In a cutting plane procedure, one attempts to identify valid inequalities that are violated by the LP solution of the current formulation and adds them to the formulation. This can be done at each node of the branch-and-bound tree, giving rise to a so-called branch-and-cut algorithm. The goal is to reduce the solution of an integer program to that of a linear program by deriving a linear description of the convex hull of the feasible solutions. Polyhedral theory tells us that if such a complete description were at hand, optimizing a linear objective over it would directly yield an integer optimum; in practice, one settles for a partial description given by the most useful classes of valid inequalities.
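As a hedged illustration (a textbook example on hypothetical data, not taken from the works discussed here), one classical family of cuts for a knapsack constraint are cover inequalities; a greedy separation heuristic can be sketched as follows:

```python
def separate_cover_cut(a, b, xstar):
    """Heuristic separation of a violated cover inequality for the
    knapsack constraint sum_i a[i]*x[i] <= b, given a fractional LP
    point xstar.

    A cover C (a set of items whose total weight exceeds b) yields the
    valid inequality sum_{i in C} x[i] <= |C| - 1; it is violated by
    xstar iff sum_{i in C} (1 - xstar[i]) < 1."""
    # Greedy: take items with largest xstar first until the weights
    # exceed the capacity b.
    order = sorted(range(len(a)), key=lambda i: xstar[i], reverse=True)
    cover, weight = [], 0
    for i in order:
        cover.append(i)
        weight += a[i]
        if weight > b:
            break
    if weight <= b:
        return None  # no cover exists among all items
    violation = sum(1 - xstar[i] for i in cover)
    return cover if violation < 1 else None
```

For a = [3, 3, 3], b = 5, and the fractional point x* = (0.9, 0.9, 0.0), the heuristic returns the cover {0, 1}, i.e. the cut x_0 + x_1 <= 1, which cuts off x*; on x* = (0.5, 0.5, 0.5) no violated cover inequality is found.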
A hierarchical approach to tackling complex combinatorial problems consists in considering separately different substructures (subproblems). If one is able to implement relatively efficient optimization on the substructures, this can be exploited to reformulate the global problem as a selection of specific subproblem solutions that together form a global solution. If the subproblems correspond to subsets of constraints in the MIP formulation, this leads to Dantzig-Wolfe decomposition. If they correspond to isolating a subset of decision variables, this leads to Benders decomposition. Both lead to extended formulations of the problem with either a huge number of variables or of constraints. The Dantzig-Wolfe approach requires specific algorithmic techniques to generate subproblem solutions and the associated global decision variables dynamically in the course of the optimization. This procedure is known as column generation, while its combination with branch-and-bound enumeration is called branch-and-price. Alternatively, in the Benders approach, when dealing with exponentially many constraints in the reformulation, the cutting plane procedures defined in the previous section are well-suited tools. When optimization on a substructure is (relatively) easy, there often exists a tight reformulation of this substructure, typically in an extended variable space. This gives rise to a powerful reformulation of the global problem, although it might be impractical given its size (typically pseudo-polynomial). It can be possible to project (part of) the extended formulation onto a smaller-dimensional space, if not the original variable space, to bring polyhedral insight (cuts derived through polyhedral studies can often be recovered through such projections).
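To make the column generation idea concrete, here is a minimal sketch (illustrative only, on hypothetical data) of the pricing step for the classical cutting-stock application: given the dual prices of the restricted master LP, a knapsack solved by dynamic programming either returns a cutting pattern with negative reduced cost or proves that no improving column exists:

```python
def price_cutting_stock_column(sizes, duals, roll_length):
    """Pricing step of Dantzig-Wolfe column generation for cutting stock.

    The master LP gives a dual price duals[i] per item type; a new
    cutting pattern (column) a is attractive iff its reduced cost
    1 - sum_i duals[i]*a[i] is negative, i.e. the knapsack below has
    value > 1.  Solved by DP over the roll length (unbounded knapsack)."""
    # best[l] = max dual value packable in length l;
    # choice[l] = the corresponding pattern (copies per item type).
    best = [0.0] * (roll_length + 1)
    choice = [[0] * len(sizes) for _ in range(roll_length + 1)]
    for l in range(1, roll_length + 1):
        best[l] = best[l - 1]
        choice[l] = choice[l - 1][:]
        for i, s in enumerate(sizes):
            if s <= l and best[l - s] + duals[i] > best[l]:
                best[l] = best[l - s] + duals[i]
                choice[l] = choice[l - s][:]
                choice[l][i] += 1
    if best[roll_length] > 1.0 + 1e-9:
        return choice[roll_length]  # pattern with negative reduced cost
    return None  # no improving column: the current master LP is optimal
```

With item sizes [3, 5], dual prices [0.4, 0.7], and roll length 10, the pricing problem returns the pattern with two items of size 5 (dual value 1.4 > 1), which would be added to the master as a new column.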
When one deals with combinatorial problems with a large number of integer variables, or tightly constrained problems, mixed integer programming (MIP) alone may not be able to find solutions in a reasonable amount of time. In this case, techniques from artificial intelligence can be used to improve these methods. In particular, we use primal heuristics and constraint programming.
Primal heuristics are useful to find feasible solutions in a small amount of time. We focus on heuristics that are either based on integer programming (rounding, diving, relaxation induced neighborhood search, feasibility pump), or that are used inside our exact methods (heuristics for the separation or pricing subproblems, heuristic constraint propagation, ...).
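As a minimal illustration of the diving idea (our own sketch on the 0/1 knapsack, not one of the implementations cited here): solve the LP relaxation, fix a fractional variable by rounding, and re-solve until the LP solution is integral:

```python
def diving_heuristic(values, weights, capacity):
    """Diving-style primal heuristic sketch for the 0/1 knapsack:
    repeatedly solve the LP relaxation (greedy fractional knapsack),
    fix the unique fractional variable by rounding it down, and
    re-solve on the remaining variables."""
    free = set(range(len(values)))
    while True:
        # LP relaxation of the residual problem: greedy by ratio; the
        # first item that does not fit is the only fractional variable.
        order = sorted(free, key=lambda i: values[i] / weights[i],
                       reverse=True)
        cap, taken, frac = capacity, [], None
        for i in order:
            if weights[i] <= cap:
                cap -= weights[i]
                taken.append(i)
            else:
                frac = i
                break
        if frac is None:  # LP solution is integral: done
            return sum(values[i] for i in taken)
        free.remove(frac)  # dive: fix the fractional variable to 0
```

On values [60, 100, 120], weights [10, 20, 30], and capacity 50, this heuristic returns 160 while the optimum is 220, which illustrates why such heuristics are combined with exact bounding schemes inside a global solution process.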
Constraint Programming (CP) focuses on iteratively reducing the variable domains (sets of feasible values) by applying logical and problem-specific operators. These operators propagate to selected variables the restrictions implied by the other variables' domains through the relations between variables defined by the constraints of the problem. Combined with enumeration, this gives rise to exact optimization algorithms. A CP approach is particularly effective for tightly constrained problems, feasibility problems, and min-max problems. Mixed Integer Programming (MIP), on the other hand, is known to be effective for loosely constrained problems and for problems with an objective function defined as a weighted sum of variables. Many problems belong to the intersection of these two classes. For such problems, it is reasonable to use algorithms that exploit the complementary strengths of Constraint Programming and Mixed Integer Programming.
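A basic filtering operator of the kind CP applies can be sketched as follows (our own toy example, not a cited propagator): bounds propagation on a linear constraint with positive coefficients tightens each variable's upper bound using the slack left when all other variables sit at their minima:

```python
def propagate_upper_bounds(a, b, domains):
    """One pass of bounds propagation (a basic CP filtering step) on
    sum_i a[i]*x[i] <= b with integer domains [lo, hi] and a[i] > 0.

    Each x[i] can exceed its minimum only by the slack left when all
    the other variables sit at their minima."""
    min_total = sum(ai * lo for ai, (lo, hi) in zip(a, domains))
    if min_total > b:
        return None  # infeasible: even the domain minima violate b
    new = []
    for ai, (lo, hi) in zip(a, domains):
        slack = b - (min_total - ai * lo)  # budget available for x_i
        new.append((lo, min(hi, slack // ai)))
    return new
```

For instance, on 2*x0 + 3*x1 <= 10 with x0 in [0, 5] and x1 in [1, 4], one pass reduces the domains to [0, 3] and [1, 3]; in a full CP solver such operators are applied to a fixed point across all constraints.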
Decision makers usually face several sources of uncertainty, such as variability over time or estimation errors. A simplistic way to handle these uncertainties is to overestimate the unknown parameters. However, this results in over-conservatism and a significant waste of resources. A better approach is to account for the uncertainty directly in the decision-aid model by considering mixed integer programs that involve uncertain parameters. Stochastic optimization accounts for the expected realization of random data and optimizes an expected value representing the average situation. Robust optimization, on the other hand, entails protecting against the worst-case behavior of unknown data. There is an analogy with game theory, where one considers an oblivious adversary choosing the realization that harms the solution the most. A full worst-case protection against uncertainty is too conservative and induces a very high over-cost. Instead, the realizations of the random data are assumed to belong to a restricted feasibility set, the so-called uncertainty set. Stochastic and robust optimization rely on very large scale programs where probabilistic scenarios are enumerated. There is hope of a tractable solution for realistic-size problems, provided one develops very efficient ad-hoc algorithms. The techniques for dynamically handling variables and constraints (column-and-row generation and Benders projection tools) that are at the core of our team's methodological work are especially well suited to this context.
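As a small illustration of the adversary view (a sketch using the well-known budgeted uncertainty set of Bertsimas and Sim, not necessarily the uncertainty sets used in our own works): for a fixed 0/1 solution, the worst case is obtained by letting at most Γ cost coefficients deviate upward:

```python
def worst_case_cost(x, c, d, gamma):
    """Worst-case cost of a fixed 0/1 solution x under the budgeted
    uncertainty set of Bertsimas and Sim: at most `gamma` cost
    coefficients may deviate from their nominal value c[i] by up to d[i].

    For a fixed x, the adversary simply picks the `gamma` largest
    active deviations."""
    nominal = sum(ci * xi for ci, xi in zip(c, x))
    deviations = sorted((di * xi for di, xi in zip(d, x)), reverse=True)
    return nominal + sum(deviations[:gamma])
```

Setting gamma = 0 recovers the nominal cost, while gamma = n recovers the fully conservative worst case; intermediate budgets trade off protection against over-cost, which is the point made above.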
Many fundamental combinatorial optimization problems can be modeled as the search for a specific structure in a graph. For example, ensuring connectivity in a network amounts to building a tree that spans all the nodes. Inquiring about its resistance to failure amounts to searching for a minimum cardinality cut that partitions the graph. Selecting disjoint pairs of objects is represented by a so-called matching. Disjunctive choices can be modeled by edges in a so-called conflict graph, where one searches for stable sets – sets of nodes that are pairwise non-adjacent. Polyhedral combinatorics is the study of combinatorial algorithms involving polyhedral considerations. Not only does it lead to efficient algorithms, but, conversely, efficient algorithms often imply polyhedral characterizations and related min-max relations. Developing the polyhedral properties of a fundamental problem typically provides us with inequalities well suited for branch-and-cut algorithms applied to more general problems. Furthermore, one can use the fundamental problems as building blocks to decompose the more general problem at hand. For problems that lend themselves to a graph formulation, graph theory, and in particular graph decomposition theorems, can help.
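The min-max relations mentioned above can be illustrated with the classical max-flow/min-cut theorem; the sketch below (standard Edmonds-Karp on hypothetical data, not code from the cited works) computes the minimum cut capacity of an undirected graph as a maximum flow value:

```python
from collections import deque, defaultdict

def min_cut_value(n, edges, s, t):
    """Minimum s-t cut via max-flow (Edmonds-Karp), illustrating the
    min-max relation: the maximum flow value equals the minimum cut
    capacity.  `edges` lists (u, v, capacity) undirected edges on
    nodes 0..n-1."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[u, v] += c
        cap[v, u] += c  # undirected: capacity in both directions
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path: flow value = min cut
        # Find the bottleneck and push flow along the path.
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v], v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            cap[parent[v], v] -= bottleneck
            cap[v, parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

On a 4-node example with edges (0,1,3), (1,3,2), (0,2,1), (2,3,4), the minimum cut separating node 0 from node 3 has capacity 3, matching the maximum flow.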
Our group has tackled applications in logistics, transportation and routing, in production planning and inventory control, in network design and traffic routing, in cutting and placement problems, and in scheduling.
We are actively working on problems arising in network topology design, implementing a survivability condition of the form “at least two paths link each pair of terminals”. We have extended polyhedral approaches to problem variants with bounded length requirements and re-routing restrictions. Associated with network design is the question of traffic routing in the network: one needs to check that the network capacity suffices to carry the demand for traffic. The assignment of traffic also implies the installation of specific hardware at transient or terminal nodes.
To accommodate the increase of traffic in telecommunication networks, today's optical networks use grooming and wavelength division multiplexing technologies. Packing multiple requests together in the same optical stream requires converting the signal into the electrical domain at each aggregation or disaggregation of traffic at an origin, a destination, or a bifurcation node. Traffic grooming and routing decisions, along with wavelength assignments, must be optimized to reduce opto-electronic system installation cost. We developed and compared several decomposition approaches to deal with backbone optical networks with relatively few nodes (around 20) but thousands of requests, for which traditional multi-commodity network flow approaches are completely overwhelmed. We also studied the impact of imposing a restriction on the number of optical hops in any request route. In addition, we developed a branch-and-cut approach to a problem that consists in placing sensors on the links of a network at minimum cost.
We studied several time-dependent formulations for the unit-demand vehicle routing problem. We gave new bounding flow inequalities for a single-commodity flow formulation of the problem. We described their impact by projecting them onto other sets of variables, such as the variables of the Picard and Queyranne formulation or the natural set of design variables. Some inequalities obtained by projection are facet-defining for the polytope associated with the problem. We are now running more numerical experiments in order to validate the practical efficiency of our theoretical results.
We also worked on the p-median problem, applying matching theory to develop an efficient algorithm in Y-free graphs and to provide a simple polyhedral characterization of the problem, and therefore a simple linear formulation, simplifying results of Baiou and Barahona.
We considered the multi-commodity transportation problem. Applications arise, for example, in rail freight service design and "less-than-truckload" trucking, where goods must be delivered between different locations of a transportation network using various kinds of vehicles of large capacity. A particularity here is that, to be profitable, the transportation of goods must be consolidated: goods are not delivered directly from origin to destination, but are transferred from one vehicle to another at intermediate locations. We proposed an original Mixed Integer Programming formulation for this problem that is suitable for solution by a branch-and-price algorithm, together with intelligent primal heuristics based on it.
For the problem of routing freight railcars, we proposed two algorithms based on the column generation approach. These algorithms have been tested on a set of real-life instances coming from a Russian freight transportation company, and proved faster on these instances than the solution approach currently used by the company.
The Realopt team has strong experience with exact methods for cutting and packing problems. These problems occur in logistics (loading trucks), industry (wood or steel cutting), and computer science (parallel processor scheduling).
We developed a branch-and-price algorithm for the Bin Packing Problem with Conflicts which improves on the other approaches available in the literature. The algorithm uses our methodological advances, such as the generic branching rule for branch-and-price and the column-based heuristic. One ingredient that contributes to the success of our method is the fast algorithms we developed for solving the subproblem, which is the Knapsack Problem with Conflicts. Two variants of the subproblem have been considered: with interval and with arbitrary conflict graphs.
We also developed a branch-and-price algorithm for a variant of the bin packing problem where the items are fragile. We studied empirically different branching schemes and different algorithms for solving the subproblems.
We studied a variant of the knapsack problem encountered in the inventory routing problem: we faced a multiple-class integer knapsack problem with setups (items are partitioned into classes whose use implies a setup cost and an associated capacity consumption). We showed the extent to which classical results for the knapsack problem can be generalized to this variant with setups, and we developed a specialized branch-and-bound algorithm.
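A dynamic program for a 0/1 simplification of this variant can be sketched as follows (hypothetical data layout, and a DP sketch rather than the specialized branch-and-bound of the cited work): each class must pay its setup cost and setup capacity before any of its items can be selected:

```python
def knapsack_with_setups(classes, capacity):
    """DP sketch for a 0/1 knapsack with class setups: `classes` is a
    list of (setup_cost, setup_weight, items) where items are
    (value, weight) pairs usable only if the class setup is paid.

    State: best[c] = max profit using capacity c, built class by class."""
    best = [0] * (capacity + 1)
    for setup_cost, setup_weight, items in classes:
        # opened[c] = best profit with this class open, starting by
        # paying the setup out of the previous DP layer.
        opened = [float('-inf')] * (capacity + 1)
        for c in range(setup_weight, capacity + 1):
            opened[c] = best[c - setup_weight] - setup_cost
        for value, weight in items:  # 0/1 items within the open class
            for c in range(capacity, weight - 1, -1):
                opened[c] = max(opened[c], opened[c - weight] + value)
        # Each class may be opened or skipped.
        best = [max(b, o) for b, o in zip(best, opened)]
    return max(best)
```

For example, with a class of setup cost 5 and weight 1 containing items (10, 3) and (8, 2), plus a class of setup cost 1 and weight 1 containing item (4, 1), capacity 5 yields profit 6 by opening both classes and taking one item from each.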
We studied the orthogonal knapsack problem with the help of graph theory. Fekete and Schepers proposed to model multi-dimensional orthogonal placement problems by using an efficient representation of all geometrically symmetric solutions by a so-called packing class involving one interval graph for each dimension. Though Fekete and Schepers' framework is very efficient, we have identified several weaknesses in their algorithms: the most obvious one is that they do not take advantage of the different possibilities to represent interval graphs. We propose to represent these graphs by matrices with consecutive ones on each row. We proposed a branch-and-bound algorithm for the 2D knapsack problem that uses our 2D packing feasibility check. We are currently developing exact optimization tools for glass-cutting problems in a collaboration with Saint-Gobain. These 2D three-stage guillotine-cut problems are very hard to solve given the scale of the instances we have to deal with. Moreover, one has to issue cutting patterns that avoid the defects present in the glass sheets used as raw material. There are extra sequencing constraints on the production that make the problem even more complex.
We have also organized a European challenge on packing in cooperation with the company Renault: see http://
Inventory routing problems combine the optimization of product deliveries (or pickups) with inventory control at customer sites. We considered an industrial application where one must construct the planning of single-product pickups over time; each site accumulates stock at a deterministic rate, and the stock is emptied on each visit. We have developed a branch-and-price algorithm where periodic plans are generated for vehicles by solving a multiple-choice knapsack subproblem, and the global planning of customer visits is coordinated by the master program. We previously developed approximate solutions to a related problem combining vehicle routing and planning over a fixed time horizon (solving instances involving up to 6000 pickups and deliveries to plan over a twenty-day time horizon, with specific requirements on the frequency of visits to customers).
Together with our partner company GAPSO from the associate team SAMBA, we worked on the equipment routing and task scheduling problem arising during port operations. In this problem, a set of tasks needs to be performed using equipment of different types, with the objective of maximizing the weighted sum of performed tasks.
We participated in a project on airborne radar scheduling. For this problem, we developed fast heuristics and exact algorithms. Substantial research has also been done on machine scheduling problems. A new compact MIP formulation was proposed for a large class of these problems. An exact decomposition algorithm was developed for the NP-hard problem of maximizing the weighted number of late jobs on a single machine. A dominant class of schedules was discovered for the NP-hard problem of scheduling malleable parallel jobs to minimize the total weighted completion time. We also proved that a special case of the scheduling problem at cross-docking terminals, minimizing the storage cost, is polynomially solvable.
Another application area in which we have successfully developed MIP approaches is tactical production and supply chain planning. We proposed a simple heuristic for challenging multi-echelon problems that makes effective use of a standard MIP solver. We also carried out a detailed investigation of what makes solving the MIP formulations of such problems challenging; this work provides a survey of the known methods for strengthening formulations for these applications, and it pinpoints the specific substructure that seems to cause the bottleneck in solving these models. Finally, our results provide demonstrably stronger formulations for some problem classes than any previously proposed. We are now working on planning phytosanitary treatments in vineyards.
We have been developing robust optimization models and methods to deal with a number of applications, like those above, in which uncertainty is involved. We analyzed fundamental MIP models that incorporate uncertainty, and we exploited the structure of the stochastic formulation of the problems in order to derive algorithms and strong formulations for these and related problems. These results appear to be the first of their kind for structured stochastic MIP models. In addition, we have engaged in successful research to apply such concepts to health care logistics. We also considered train timetabling problems and their re-optimization after a perturbation in the network. Here the question of formulation is central. Models from the literature are not satisfactory: continuous-time formulations have poor quality due to the presence of discrete decisions (re-sequencing or re-routing), while arc-flow formulations in a time-space graph blow up in size (they can only handle single-line timetabling problems). We have developed a discrete-time formulation that strikes a compromise between these two models. Based on various time and network aggregation strategies, we developed a two-stage approach, solving the continuous-time model after fixing the precedences based on a solution to the discrete-time model.
Currently, we are conducting investigations on a real-world planning problem in the domain of energy production, in the context of a collaboration with EDF. The problem consists in scheduling maintenance periods of nuclear power plants, as well as production levels of both nuclear and conventional power plants, in order to meet a power demand at minimum total production cost. For this application, we used a Dantzig-Wolfe reformulation, which allows us to solve realistic instances of the deterministic version of the problem. In practice, the input data comprises a number of uncertain parameters. We deal with a scenario-based stochastic demand with the help of a Benders decomposition method. We are working on multistage robust optimization approaches to take into account other uncertain parameters, like the duration of each maintenance period, in a dynamic optimization framework. The main challenge addressed in this work is the joint management of the different reformulations and solution techniques coming from the deterministic (Dantzig-Wolfe decomposition, due to the large-scale nature of the problem), stochastic (Benders decomposition, due to the number of demand scenarios), and robust (reformulations based on duality and/or column and row generation, due to maintenance extension scenarios) components of the problem.
The Inria Innovation Lab with Ertus-consulting has reached the stage of producing a strategic planner for phytosanitary treatments in viticulture, showing significant potential saving margins. The prototype was presented to the press and the wine-making industry in September 2016. This event has been followed by articles in the specialized press (such as “Réussir Vigne”) and in more generalist outlets (such as “Les Echos”). Industrial partnerships are being pursued with EDF (on nuclear maintenance planning) and Saint-Gobain (on glass cutting optimization), and a new project has been launched with SNCF.
François Clautiaux published a book about dual-feasible functions: their use in improving the solution of several combinatorial optimization problems involving knapsack inequalities, such as cutting and packing, scheduling, and vehicle routing problems, and their strong links with column generation models and the underlying Dantzig-Wolfe decomposition. The book explores the general properties that identify the best dual-feasible functions and describes the general approaches that can be followed to derive new non-dominated functions, which for several problems lead to the best results reported in the literature.
Our research on decomposition-based math-heuristics has led to new benchmarks, highlighting the performance of our generic procedures: for instance, we have managed to improve the best known solutions for several open Generalized Assignment Problem (GAP) instances of the literature. Similarly, our algorithms based on aggregation and disaggregation techniques allowed us to outperform previous approaches for the cutting-stock problem, a classical benchmark problem. On the most difficult instances to date, we were able to solve 240 instances out of 250 to optimality, whereas previous algorithms could only solve 29. In a more practical setting, we have developed algorithms to compute team schedules for a roster of employees, and these algorithms are now embedded in a professional employee scheduling software of the Asys company. We have also obtained strong results for scheduling problems in a high performance computing context, which allowed us to significantly improve the performance of linear algebra routines on high-end heterogeneous systems.
Keywords: Column Generation - Branch-and-Price - Branch-and-Cut - Mixed Integer Programming - Mathematical Optimization - Benders Decomposition - Dantzig-Wolfe Decomposition - Extended Formulation
Functional Description
BaPCod is a prototype code that solves Mixed Integer Programs (MIP) by application of reformulation and decomposition techniques. The reformulated problem is solved using a branch-and-price-and-cut (column generation) algorithm, Benders approaches, or network flow algorithms.
Participants: Francois Vanderbeck, Ruslan Sadykov, Issam Tahiri, Romain Leguay, Artur Alves Pessoa, Boris Detienne, Franck Labat, François Clautiaux, Pierre Pesneau, Eduardo Uchoa Barboza, Michael Poss and Halil Sen
Partners: CNRS - IPB - Universidade Federal Fluminense - Université de Bordeaux
Contact: François Vanderbeck
URL: https://
We have made progress on stabilization techniques and math-heuristics that are essential components for generic Branch-and-Price methods.
The convergence of a column generation algorithm can be improved in practice by using stabilization techniques. Smoothing and proximal methods based on penalizing the deviation from the incumbent dual solution have become standards of the domain. Interpreting column generation as a cutting plane strategy in the dual problem, we have analyzed the mechanisms on which stabilization relies. In particular, we established the link between smoothing and in-out separation strategies to derive generic convergence properties. For penalty function methods as well as for smoothing, we proposed parameter self-adjusting schemes. Such schemes make initial parameter tuning less of an issue, as corrections are made dynamically; they also allow adapting the parameters to the phase of the algorithm. Extensive test reports validate our self-adjusting parameter schemes and highlight their performance. Our results also show that using smoothing in combination with a penalty function yields a cumulative effect on convergence speed-ups.
Math-heuristics have become an essential component in mixed integer programming (MIP) solvers. Extending MIP-based heuristics, we have studied generic procedures to build primal solutions in the context of a branch-and-price approach. As the Dantzig-Wolfe reformulation of a problem is typically tighter than the original compact formulation, heuristics based on rounding its linear programming (LP) solution can be more competitive. We focus on the so-called diving methods, which use re-optimization after each LP rounding. We explore their combination with diversification-intensification paradigms such as Limited Discrepancy Search, sub-MIPing, relaxation induced neighborhood search, local branching, and strong branching. The dynamic generation of variables inherent to a column generation approach requires specific adaptations of these heuristic paradigms; we managed to get around these technical issues with simple strategies. Our numerical results on generalized assignment, cutting stock, and vertex coloring problems set new benchmarks, highlighting the performance of diving heuristics as generic procedures in a column generation context and producing better solutions than state-of-the-art specialized heuristics in some cases.
We have developed a general solution framework based on aggregation techniques to solve NP-hard problems that can be formulated as a circulation model with specific side constraints. The size of the extended Mixed Integer Linear Programming formulation is generally pseudo-polynomial. To solve these large-scale models exactly and efficiently, we propose a new iterative aggregation and disaggregation algorithm. At each iteration, it projects the original model onto an aggregated one, producing an approximate model. The process iterates, refining the current aggregated model, until optimality is proved.
Computational experiments on two hard optimization problems (a variant of the vehicle routing problem and the cutting-stock problem) show that a generic implementation of the proposed framework outperforms previously known methods.
We have applied this aggregation method to reduce the size of column generation (CG) models for covering problems in which the feasible subsets depend on a resource constraint. The aggregation relies on a correlation between the resource consumption of the elements and the corresponding optimal dual values. The resulting aggregated dual model is a restriction of the original one, and it can be rapidly optimized to obtain a feasible dual solution. A primal bound can also be obtained by restricting the set of columns to those saturated by the dual feasible solution obtained by aggregation. Convergence is achieved by iterative disaggregation until the gap between the bounds is closed. Computational results show the usefulness of our method for different cutting-stock problems. An important advantage is that it can produce high-quality dual bounds much faster than the traditional Lagrangian bound used in stabilized column generation.
In the Benders decomposition approach to mixed integer programs, the optimization is carried out in two stages: key first-stage decision variables are optimized using a polyhedral approximation of the projection of the full-blown problem; then a separation problem, expressed in the second-stage variables, is solved to check whether the current first-stage solution is truly feasible and, otherwise, to produce a violated inequality. Such cutting-plane algorithms suffer from several drawbacks and may have very bad convergence rates. We have reviewed the battery of approaches proposed in the literature to address these drawbacks and to speed up the algorithm. Our contribution consists in explaining these techniques in simple terms and unified notation, showing that, in several cases, different proposals of the literature boil down to the same key ideas. We classify the methods into specific initialization modes, stabilization techniques, strategies to select the separation point, and cut generation strategies. Where available, we highlight numerical benchmarks that have resulted from such enhancements.
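The basic cutting-plane loop can be caricatured on a one-variable capacity problem (a toy of our own, not taken from the survey): the master is solved here by enumeration where a real one would be a MIP, and the separation step compares the master's cost estimate θ with the true second-stage cost, adding an optimality cut when the estimate is too optimistic.

```python
def benders_toy(demand=7, build_cost=3.0, shortage_cost=10.0, y_max=20):
    """Benders on: min build_cost*y + shortage_cost*max(0, demand - y), y integer.
    Cuts have the form theta >= a - b*y, with b read off the second-stage dual."""
    cuts = [(0.0, 0.0)]                       # theta >= 0
    while True:
        # master: first-stage cost + cut approximation of the second stage
        # (enumeration stands in for a MIP solver on this one-variable toy)
        master_val, y = min(
            (build_cost * y + max(a - b * y for a, b in cuts), y)
            for y in range(y_max + 1)
        )
        theta = master_val - build_cost * y
        true_cost = shortage_cost * max(0, demand - y)   # separation problem
        if true_cost <= theta + 1e-9:
            return y, master_val                         # no violated cut: optimal
        cuts.append((shortage_cost * demand, shortage_cost))  # optimality cut
```

The drawbacks discussed above (many iterations, unstable dual information) appear as soon as the second stage is richer than this caricature, which is precisely what the surveyed enhancements address.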
Given a directed graph
The Dial-a-Ride Problem is a variant of the pickup and delivery problem with time windows in which user inconvenience must be taken into account. In , ride times and customer waiting times are modeled through both constraints and an associated penalty in the objective function. We develop a column generation approach, dynamically generating feasible vehicle routes. Handling ride time constraints explicitly in the pricing problem solver requires specific developments. Our dynamic programming approach for the pricing problem makes use of a heuristic dominance rule and a heuristic enumeration procedure, which in turn implies that our overall branch-and-price procedure is a heuristic. However, in practice our heuristic solutions are experimentally very close to exact solutions, and our approach is numerically competitive in terms of computation times.
In , , we consider the problem of covering an urban area with sectors under additional constraints. We adapt the aggregation method to our column generation algorithm and focus on the problem of disaggregating the dual solution returned by the aggregated master problem.
We have considered the flowshop problem on two machines with sequence-independent setup times to minimize total completion time. Large-scale network flow formulations of the problem are suggested, together with strong Lagrangian bounds based on these formulations. To cope with their size, filtering procedures are developed. To solve the problem to optimality, we embed the Lagrangian bounds into two branch-and-bound algorithms. The best algorithm is able to solve all 100-job instances of our testbed with setup times and all 140-job instances without setup times, thus significantly outperforming the best algorithms in the literature.
In , we address a multi-activity tour scheduling problem with time-varying demand. The objective is to compute a team schedule for a fixed roster of employees that minimizes the over-coverage and under-coverage of different parallel activity demands along a planning horizon of one week. Numerous complicating constraints are present in our problem: all employees are different and can perform several different activities during the same day-shift, lunch breaks and pauses are flexible, and demand is given for 15-minute periods. Each employee's schedule must satisfy feasibility and legality rules, but the objective function does not account for any quality measure associated with an individual's schedule. More precisely, the problem simultaneously mixes days-off scheduling, shift scheduling, shift assignment, activity assignment, and pause and lunch break assignment.
To solve this problem, we developed four methods: a compact Mixed Integer Linear Programming model, a branch-and-price-like approach with a nested dynamic program to solve the subproblems heuristically, a diving heuristic, and a greedy heuristic based on our subproblem solver. The computational results, based on both real cases and instances derived from real cases, demonstrate that our methods are able to provide good quality solutions in a short computing time. Our algorithms are now embedded in a commercial software product, which is already in use in a mini-mart company.
With the growing complexity of HPC node architectures (multicores, non-uniform memory access, GPUs and accelerators), a recent trend in application development is to express computations explicitly as a task graph and to rely on a specialized middleware stack to make scheduling decisions and implement them. Traditional algorithms in this community are dynamic heuristics, which cope with the unpredictability of execution times. In , we analyze the performance of static and hybrid strategies, obtained by adding more static (resp. dynamic) features into dynamic (resp. static) strategies. Our conclusions are somewhat unexpected, in the sense that we show that static-based strategies are very efficient, even in a context where performance estimations are not very good. We also present and generalize HeteroPrio, a semi-static resource-centric strategy based on the acceleration factors of tasks. In , we generalize this strategy to platforms with more than two types of resources. This makes it possible to exploit intra-task parallelism by grouping several CPU cores together. In , we prove tight approximation ratios for HeteroPrio in the context of independent tasks, providing theoretical insight into its good practical performance.
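The core rule of HeteroPrio — rank tasks by acceleration factor and let each resource class consume tasks from its favourable end of the ranking — can be sketched for two resource types as follows. The simulation details, task set, and function names are illustrative; the full strategy also handles task dependencies and spoliation, which this sketch omits.

```python
def heteroprio_assign(tasks, n_cpu, n_gpu):
    """Sketch of HeteroPrio's core rule for two resource types.
    tasks: dict name -> (cpu_time, gpu_time); assumes n_cpu, n_gpu >= 1."""
    # rank tasks by acceleration factor cpu_time / gpu_time
    order = sorted(tasks, key=lambda t: tasks[t][0] / tasks[t][1])
    cpu_free = [0.0] * n_cpu   # next idle time of each worker
    gpu_free = [0.0] * n_gpu
    schedule = {}
    lo, hi = 0, len(order) - 1
    while lo <= hi:
        if min(cpu_free) <= min(gpu_free):    # a CPU becomes idle first
            w = cpu_free.index(min(cpu_free))
            t = order[lo]                     # CPU takes the least-accelerated task
            lo += 1
            schedule[t] = ("cpu", w, cpu_free[w])
            cpu_free[w] += tasks[t][0]
        else:                                 # a GPU becomes idle first
            w = gpu_free.index(min(gpu_free))
            t = order[hi]                     # GPU takes the most-accelerated task
            hi -= 1
            schedule[t] = ("gpu", w, gpu_free[w])
            gpu_free[w] += tasks[t][1]
    return schedule, max(cpu_free + gpu_free)
```

Serving each resource class from its own end of the acceleration ranking is what avoids wasting GPUs on tasks that CPUs run almost as fast.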
Another study focuses on the memory-constrained case, where tasks may produce large data. A task can only be executed if all its input and output data fit into memory, and a piece of data can only be removed from memory after the completion of the task that uses it as input. A polynomial-time algorithm is known to minimize the peak memory used on one machine when the input graph is a rooted tree. In , we generalize it to the variant where the input graph is a directed series-parallel graph, and propose a polynomial-time algorithm. This makes it possible to solve this practical problem for two important classes of applications.
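For the tree case mentioned above, the classical child-ordering rule (visit subtrees in decreasing peak-minus-output order, in the spirit of Liu's algorithm for trees) can be sketched as follows, under a simplified memory model of our own in which a task needs the outputs of all its children plus its own output in memory simultaneously.

```python
def peak_memory(tree, out, root):
    """Minimum peak memory to evaluate a rooted tree bottom-up.
    tree: node -> list of children; out: node -> output size.
    Children are visited in decreasing (subtree peak - output) order,
    the classical optimal rule for trees."""
    sub = [(peak_memory(tree, out, c), out[c]) for c in tree.get(root, [])]
    sub.sort(key=lambda pc: pc[0] - pc[1], reverse=True)
    held = peak = 0
    for p, o in sub:
        peak = max(peak, held + p)   # child's own peak on top of stored outputs
        held += o                    # child's output stays until root executes
    return max(peak, held + out[root])
```

Such a local ordering rule is what makes the tree case polynomial; for general DAGs no analogous rule exists, which is why extending the result beyond series-parallel graphs is hard.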
In , we consider the static problem of data placement for matrix multiplication on heterogeneous machines, so as to optimize both load balancing and communication volume. This is modeled as a partitioning of a square into a set of zones of prescribed areas, while minimizing the overall size of their projections onto the horizontal and vertical axes. We combine two ideas from the literature (recursive partitioning, and the optimal solution structure for a small number of processors) to obtain a non-rectangular recursive partitioning (NRRP), whose approximation ratio is
The delivery of freight from manufacturing platforms to demand zones is often managed through one or more intermediate locations where storing, merging, transshipment and consolidation activities are performed. In , we design a Two-Echelon Distribution Network that helps synchronize the different product flows. Under demand uncertainty, our model integrates decisions on the locations and sizes of second-echelon facilities, decisions on the assignment of flows between the echelons, and decisions on the delivery routes serving the demand zones.
In , we study the
In , we consider planning phytosanitary treatments in a vineyard. We are given a set of diseases (or requests) that must be treated for each site. Product mixtures are defined by their composition of active components and the duration of their protective power for each request. Machines are available to spread the mixtures on the sites. The time horizon is divided into time periods, and the sites are partitioned into sectors. The objective of the problem is to minimize the machine leasing costs, the travel costs to the sectors, and the costs related to product use. To solve this problem, we use a column generation approach where the machine policy and the product order policy are pure master decisions, while treatment planning decisions are made in individual pricing subproblems associated with each site. We developed a dedicated dynamic program to solve the pricing subproblems.
The two-dimensional knapsack problem consists in packing a set of small rectangular items into a given large rectangle while maximizing the total reward associated with the selected items. In , we restrict our attention to packings that emanate from a k-stage guillotine-cut process. We introduce a generic model where a knapsack solution is represented by a flow in a directed acyclic hypergraph. This hypergraph model derives from a forward labeling dynamic programming recursion that enumerates all non-dominated feasible cutting patterns. To reduce the hypergraph size, we make use of further dominance rules and a filtering procedure based on Lagrangian reduced-cost fixing of hyperarcs. Our hypergraph model is (incrementally) extended to account for explicit bounds on the number of copies of each item. Our exact forward labeling algorithm is numerically compared to solving the max-cost flow model in the base hypergraph with side constraints modeling the production bounds. Benchmarks are reported on instances from the literature and on datasets derived from a real-world application.
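The dynamic programming recursion underlying such models can be illustrated, in a much simplified form, by a 2-stage variant: first cut the plate into horizontal strips, then cut each strip vertically. The sketch below assumes unbounded copies of each item and integer dimensions (simplifications of ours, not assumptions of the paper).

```python
def two_stage_guillotine(W, H, items):
    """DP sketch for a 2-stage guillotine knapsack with unbounded item copies.
    items: list of (width, height, profit); plate is W x H."""
    # best profit of a full-width strip of a given height: unbounded knapsack
    # on item widths, restricted to items fitting the strip height
    # (recomputed per call for clarity; memoize in practice)
    def strip_value(height):
        best = [0] * (W + 1)
        for w in range(1, W + 1):
            for iw, ih, p in items:
                if ih <= height and iw <= w:
                    best[w] = max(best[w], best[w - iw] + p)
        return best[W]
    # second stage: unbounded knapsack on strip heights
    best = [0] * (H + 1)
    for h in range(1, H + 1):
        for sh in range(1, h + 1):
            v = strip_value(sh)
            if v:
                best[h] = max(best[h], best[h - sh] + v)
    return best[H]
```

Each DP state transition corresponds to a cut; in the hypergraph view, states become nodes and transitions become hyperarcs, which is what makes reduced-cost filtering of hyperarcs possible.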
MapReduce is a well-known framework for distributing data-processing computations on parallel clusters. In MapReduce, a large computation is broken into small tasks that run in parallel on multiple machines, and it scales easily to very large clusters of inexpensive commodity computers. Before the Map phase, the original dataset is first split into chunks, which are replicated (a constant number of times, usually 3) and distributed onto the computing nodes. During the Map phase, nodes request tasks and are first allocated tasks associated with local chunks (if any). Communications take place when requesting nodes no longer hold any local chunk. In , we provide the first complete theoretical data locality analysis of the Map phase of MapReduce and, more generally, of bag-of-tasks applications that behave like MapReduce. We show that if tasks are homogeneous (in terms of processing time), once the chunks have been replicated randomly on the resources with a replication factor larger than 2, it is possible to find a priority mechanism for tasks that achieves a quasi-perfect number of communications using a sophisticated matching algorithm. In the more realistic case of heterogeneous processing times, we show, using an actual trace of a MapReduce server, that this priority mechanism enables the Map phase to complete with significantly fewer communications, even on realistic distributions of task durations.
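The locality mechanism can be mimicked in a toy simulation (entirely illustrative: a greedy scarcest-replica-first rule stands in for the matching-based priority of the paper, and requests arrive round-robin, as with homogeneous tasks).

```python
def map_phase_communications(placement, nodes):
    """Toy Map-phase simulation: each requesting node runs a local unprocessed
    chunk if one exists, preferring its scarcest chunk (fewest replicas) --
    a greedy stand-in for the matching-based priority mechanism.
    placement: chunk -> set of nodes holding a replica."""
    remaining = set(placement)
    comms = turn = 0
    while remaining:
        node = nodes[turn % len(nodes)]   # homogeneous tasks: round-robin requests
        local = [c for c in remaining if node in placement[c]]
        if local:
            remaining.remove(min(local, key=lambda c: len(placement[c])))
        else:
            comms += 1                    # remote fetch of some unprocessed chunk
            remaining.remove(next(iter(remaining)))
        turn += 1
    return comms
```

A communication is counted only when the requesting node holds no unprocessed local chunk, which is exactly when MapReduce generates a data transfer.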
In a joint work with C. Bachoc, T. Bellitto and P. Moustrou , we consider the maximum density of sets avoiding distance 1 in
Our project with EDF concerns the optimization of long-term energy production planning, allowing for nuclear power plant maintenance. The challenges are to handle the large-scale instances of a five-year planning horizon and the stochastic aspects of the problem: the stochastic variation of the electricity demand, of the production capacity, and of the duration of maintenance periods. The key decisions to be optimized are the dates of outages (for maintenance) and the refuelling levels that determine the production of the year to come. We previously developed a column generation approach based on an extended formulation, which solves a deterministic instance of the problem within a few minutes, i.e., within the time frame of the operational tools currently used by EDF. We now investigate stochastic and robust versions of the problem, where the duration of maintenance operations and the power demand are uncertain. Our approaches will be evaluated on real-life instances within a rolling horizon framework.
In planning winery operations (most importantly phytosanitary treatments on the vines) under weather forecast uncertainty, one searches for solutions that remain feasible and “cheap” in case of perturbations in the data. We consider the planning and scheduling of the operations that arise over a one-year horizon. More precisely, the operations to be scheduled include tasks related to soil care or vine care: cutting, line building, thinning out leaves,
Through the PhD of Quentin Viaud, we study a hard glass-cutting problem. The objective is to minimize the quantity of trim loss when rectangular pieces are cut from large rectangles. This first study has shown that our methodologies are able to cope with this problem on medium-size instances. Solving large instances is a scientific challenge that we will address in a follow-up contract.
Through a research internship, we have studied a hard one-dimensional industrial cutting problem with many practical constraints. We have designed a non-standard diving heuristic , where some complicating constraints are handled through branching. Our heuristic was able to improve the solutions found by the industrial partner on several hard instances.
Our project with SNCF concerns the optimization of timetable and rolling stock rotation planning. The railway production planning process combines heterogeneous resources and is usually decomposed into sequential sub-problems, beginning with line planning, followed by timetabling, rolling stock rotations, and crew scheduling. Our goal is to solve the timetabling and rolling stock problems in an integrated manner. Given a line plan and service requirement constraints, the problem is to produce a timetable for a set of trains so as to minimize the cost of the railcars used. An originality of our approach is to deal with railcars composed of multiple units, which can be coupled or decoupled at some stations. The PhD thesis of Mohamed Benkirane is funded by this project.
We have received support from the regional authorities (Région Aquitaine) for a research project on planning under uncertainty. A postdoc, Agnès Leroux, has been recruited on this project. She is currently developing dynamic programming approaches for scheduling problems and their application to the planning of phytosanitary treatments.
This project aims at studying and designing algorithms and parallel programming models for implementing direct methods for the solution of sparse linear systems on emerging computing platforms equipped with accelerators. It proposes an innovative approach that relies on the efficiency and portability of runtime systems, such as the StarPU tool. The focus of RealOpt in this project is on the scheduling aspect. Indeed, executing a heterogeneous workload with complex dependencies on a heterogeneous architecture is a very challenging problem that demands the development of effective scheduling algorithms. These algorithms must cope with possibly limited views of the dependencies among tasks and with multiple, potentially conflicting objectives, such as minimizing the makespan, maximizing the locality of data or, where it applies, minimizing the memory consumption.
See also: http://
The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Any sound study of such systems through simulations relies on the following pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; and simulation campaign management. The contribution of RealOpt in this project revolves around enabling peer-to-peer simulation and providing use cases for Cloud Computing simulations.
See also: http://
Title: Synergies for Ameliorations and Mastering of Branch-and-Price Algorithms
International Partner (Institution - Laboratory - Researcher):
Universidade Federal Fluminense (Brazil) - LIGOS - Eduardo Uchoa
Start year: 2011
SAMBA is a research project between the Inria project-team ReAlOpt (Bordeaux, France), the ADT-Lab at the Pontifícia Universidade Católica do Rio de Janeiro, and the LOGIS at the Universidade Federal Fluminense. The project is supported by Inria under the “associate team” framework for an initial period of three years (2011-2013) and was renewed for another three-year period (2014-2016) with additional partners at the Operations Research and Complex Systems Group, School of Business, Universidad Adolfo Ibanez, Chile, and the LIRMM at the University of Montpellier.
Quantitative models are important tools for strategic, tactical, and operational decision-making. Many underlying optimization problems are discrete in nature. They are modeled as linear programs with integer variables, so-called Mixed Integer Programs (MIP). Their solution is essentially based on enumeration techniques, which is notoriously difficult given the huge size of the solution set. Powerful generic commercial solvers for MIP are available, but despite continuous progress, the existing tools can be overwhelmed when problem complexity or size increases.
Decomposition approaches are primary tools to expand the capabilities of MIP solution techniques. When the application presents a decomposable constraint system, the so-called “Dantzig-Wolfe decomposition” consists in reformulating the problem as the selection of a specific solution for each individual subsystem such that, together, they satisfy the linking constraints. In practice, the individual subsystem solutions are brought into the formulation in the course of the optimization if they can lead to an improvement in the objective value. On the other hand, “Benders' decomposition” applies when the application presents a decomposable system of variables, as is traditional in stochastic two-stage optimization models where main decisions are taken prior to knowing the realization of random data, while second-stage decisions are adjustments that can be made once the true value of the data is revealed. In this context, one solves the first-stage model and checks a posteriori the feasibility of the second stage. In case the second stage is infeasible, a constraint on the first-stage variables is induced that aims to account for the cause of the second-stage infeasibility, and the process iterates.
Both of these decomposition approaches are perceived as requiring an application-specific implementation for tractability when scaling up to real-life applications. Our research aims at developing generic methods and algorithmic enhancements that can yield significant speed-ups in practice and have a sound theoretical basis. Such research includes methodological developments (such as stabilization techniques for improved convergence, preprocessing rules, and dynamic aggregation-and-disaggregation), algorithmic strategies (such as multi-column/cut generation strategies and the pre-evaluation of enumerated subproblem strategies – so-called strong branching), and efficient implementations (code re-engineering of our software platform BaPCod).
Beyond the methodological developments, our motivation is to set new benchmarks on standard combinatorial problems and industrial applications. In particular, we proceed to extend our techniques to the context of dynamic optimization. In a stochastic environment, the aim is to build plans that are robust to perturbations, in the sense that they can be adapted dynamically in reaction to observed changes in the predicted data.
The project builds on the accumulated experience of the Brazilian, Chilean, and French teams, which have done pioneering work in tackling complex applications and deriving generic solution strategies using these decomposition approaches.
LEITE BULHOES Teobaldo, from Universidade Federal Fluminense (Niteroi, Brazil), visited the team from November 2nd to December 9th.
Sadykov Ruslan
Date: Aug 2015 - Jul 2016
Institution: Universidade Federal Fluminense (Brazil)
Thomas Lambert
Date: Feb 8 - Mar 4
Institution: University College of Dublin (Ireland)
Pierre Pesneau has organized the workshop “Polyhedral Approaches for Combinatorial Optimization”, December 8-9 2016, Paris
Arnaud Pêcher has organized the workshop “Bordeaux Graph Workshop”, November 7-10, 2016, Bordeaux.
Lionel Eyraud-Dubois is Chair of the “Cloud Computing and Data Center Management” track of I-SPAN 2017: the 14th International Symposium on Pervasive Systems, Algorithms, and Networks
Olivier Beaumont is Co-Chair of the Algorithms track of ICPP 2016: 2016 International Conference on Parallel Processing
The team members are members of the following program committees:
François Clautiaux: ROADEF 2016: French Operational Research Society Conference.
Lionel Eyraud-Dubois: ICPP 2016: 2016 International Conference on Parallel Processing
Lionel Eyraud-Dubois and Olivier Beaumont: HiPC 2016: 23rd IEEE International Conference on High Performance Computing, Data, and Analytics
Olivier Beaumont: IPDPS 2016, 30th IEEE International Parallel & Distributed Processing Symposium
Olivier Beaumont: Euro-EDUPAR 2016, Parallel and Distributed Computing Education for Undergraduate Students, a EuroPar workshop
Olivier Beaumont: HeteroPar 2016: Algorithms, Models, and Tools for Parallel Computing on Heterogeneous Platforms, a EuroPar Workshop
Olivier Beaumont is editor for IEEE Transactions on Parallel and Distributed Systems (TPDS)
François Vanderbeck is Associate Editor for the EURO Journal on Computational Optimization
François Clautiaux is Associate Editor for Mathematical Programming and Exact Methods in the journal ISTE “Recherche Opérationnelle”
The team members are regular referees for the best journals of the field.
Arnaud Pêcher: On sets avoiding distance 1, 2016 International Conference on Graph Theory, Jinhua, China, 2016
Olivier Beaumont is a member of the INCITE (math-comp track) panel
Olivier Beaumont is an expert for the H2020-FET-OPEN-2016 projects
Olivier Beaumont is the scientific deputy of Inria Bordeaux Sud-Ouest and a member of the Evaluation Committee of Inria.
François Vanderbeck is in charge of the team OptimAl (“Optimisation Mathématique Modèle Aléatoire et Statistique”) at the Mathematics Institute of Bordeaux.
Licence : A. Pêcher, Programmation Impérative, 10h, DUT, Université de Bordeaux, France
Licence : A. Pêcher, Conception Objet, 42h, DUT, Université de Bordeaux, France
Licence : A. Pêcher, Programmation objet en Java, 44h, DUT, Université de Bordeaux, France
Licence : A. Pêcher, Algorithmique Avancée, 32h, DUT, Université de Bordeaux, France
Licence : A. Pêcher, Assembleur, 24h, DUT, Université de Bordeaux, France
Licence : A. Pêcher, Programmation Mobile, 24h, DUT, Université de Bordeaux, France
Master : F. Clautiaux, Gestion des Opérations et Planification de la Production, 20h, M2, Université de Bordeaux, France
Master : F. Clautiaux, Flot et Combinatoire, 10h, M2, Institut Polytechnique de Bordeaux, France
Master : F. Clautiaux, Introduction à la Programmation en Variables Entières, 20h, M1, Université de Bordeaux, France
Master : F. Clautiaux, Projet d'optimisation pour l'insertion professionnelle, M2, Université de Bordeaux, France
Master : L. Eyraud-Dubois, Optimisation en Cloud Computing et Big Data, 15h, M2, Université de Bordeaux, France
Master : L. Eyraud-Dubois, Algorithmique et Programmation, 30h, M1, Université de Bordeaux, France
Master : L. Eyraud-Dubois, Introduction à la Programmation en Variables Entières, 15h, M1, Université de Bordeaux, France
Licence : P. Pesneau, Modèles et Méthodes d'Optimisation, 30h, L2, Université de Bordeaux, France
Licence : P. Pesneau, Système et programmation en Fortran 90, 24h, L2, Université de Bordeaux, France
Licence : P. Pesneau, Recherche Opérationnelle, 24h, DUT, Université de Bordeaux, France
Master : P. Pesneau, Algorithmique et Programmation 1, 60h, M1, Université de Bordeaux, France
Master : P. Pesneau, Algorithmique et Programmation 2, 30h, M1, Université de Bordeaux, France
Master : P. Pesneau, Programmation linéaire, 15h, M1, Université de Bordeaux, France
Master : P. Pesneau, Optimisation dans les graphes (partie flots), 15h, M1, Université de Bordeaux, France
Master : O. Beaumont, Approximation et Big Data, 15h, M2, Université de Bordeaux, France
Master : O. Beaumont, Distributed Computing and Data Mining, 4h, M2, Institut National Polytechnique de Bordeaux, France
Master : B. Detienne, Optimisation continue, 29h, M1, Université de Bordeaux, France
Master : B. Detienne, Recherche Opérationnelle, 16h, M1, Institut National Polytechnique de Bordeaux, France
Master : B. Detienne, Introduction à la Programmation en Variables Entières, 14h, M1, Université de Bordeaux, France
Master : B. Detienne, Gestion des Opérations et Planification de la Production, 28h, M2, Université de Bordeaux, France
Master : B. Detienne, Optimisation dans l'incertain, 58h, M2, Université de Bordeaux, France
Master : B. Detienne, Problèmes combinatoires et routage, 14h, M1, Université de Bordeaux, France
Master : I. Tahiri, Outils et Logiciels pour l'Optimisation, 30h, M1, Université de Bordeaux, France
Master : F. Vanderbeck, Recherche Opérationnelle, 15h, M1, Institut National Polytechnique de Bordeaux, France
Master : F. Vanderbeck, Programmation Entière, 58h, M2, Université de Bordeaux, France
PhD in progress : Jérémy Guillot, Optimisation de problèmes de partitionnement, September 2014, François Clautiaux (dir) and Pierre Pesneau (dir).
PhD in progress : Quentin Viaud, Méthodes de programmation mathématiques pour des problèmes complexes de découpe, January 2015, François Clautiaux (dir), Ruslan Sadykov (dir), and François Vanderbeck (co-dir).
PhD in progress : Martin Bué, Gestion du revenu dans le cadre du voyage professionnel, September 2012, François Clautiaux (dir), Luce Brotcorne (dir).
PhD in progress : Rodolphe Griset, Robust planning in Electricity production, November 2015, Boris Detienne (dir) and François Vanderbeck (dir).
PhD in progress : Imen Ben Mohamed, Location routing problems, October 2015, Walid Klibi (dir) and François Vanderbeck (dir).
PhD in progress : Thomas Bellitto, Infinite graphs, September 2015, Arnaud Pêcher (dir) and Christine Bachoc (dir).
PhD in progress : Philippe Moustrou, Codes, September 2014, Arnaud Pêcher (dir) and Christine Bachoc (dir).
PhD in progress : Thomas Lambert, September 2014, Placement de tâches et réplication de fichiers sur plates-formes parallèles, Olivier Beaumont (dir) and Lionel Eyraud-Dubois (co-dir)
PhD in progress : Suraj Kumar, December 2013, Scheduling of Dense Linear Algebra Kernels on Heterogeneous Resources, Olivier Beaumont (dir) and Lionel Eyraud-Dubois (co-dir)
François Clautiaux: Evaluation (rapporteur) of the PhD thesis of Charly Lersteau (University Bretagne Sud)
Ruslan Sadykov: Evaluation (examinateur) of the PhD thesis of Rian Gabriel Santos Pinheiro (Universidade Federal Fluminense, Niterói, Brazil), March 1st, 2016.
François Clautiaux is a member of the board of AMIES, the French Agency for Interaction in Mathematics with Business and Society. AMIES is a national organization that aims to develop relations between academic research teams in mathematics and business, especially SMEs.