REALOPT is a joint INRIA team with the University of Bordeaux (UB1) and CNRS (IMB, UMR 5251, and LaBRI, UMR 5800).

Decision making today relies increasingly on support from mathematical models. Quantitative modeling is routinely used in both industry and administration to design and operate transportation, distribution, or production systems. Optimization concerns every stage of the decision-making process: investment budgeting, long term planning, the management of scarce resources, or the planning of day-to-day operations. Many optimization problems that arise in decision support applications involve discrete decision variables; the resulting problems can be modeled as linear or non-linear programs with integer variables. The solution of such problems is essentially based on enumeration techniques and can be notoriously difficult given the huge size of the solution space. A key to success is the development of better problem formulations that provide strong approximations and hence help to prune the enumerative solution scheme. One must also avoid the drawback of enumerating what are essentially symmetric solutions. Our project aims to develop tight formulations and algorithms for combinatorial optimization problems exploiting the complementarity between the latest reformulation techniques, such as Lagrangian and polyhedral approaches (the generation of columns and cutting planes), non-linear programming tools (quadratic programming, semi-definite, and other convex relaxations), and graph theoretic tools (for induced properties and implicit representations of solutions). Our focus is on deterministic optimization approaches based on mathematical programming, but our experience extends to stochastic programming, constraint programming, and graph theory. Through industrial partnerships, the team targets large-scale problems such as those arising in network design, logistics (routing problems), scheduling, cutting and packing problems, production planning, and health care logistics.

Our project proposal received very good feedback from the international referees; as a result, the creation of our project was approved by the project committee.

Arnaud Pêcher received an HDR (Habilitation à Diriger des Recherches) for the work entitled “Des multiples facettes des graphes circulants”.

F. Vanderbeck was invited as a plenary session speaker for the annual meeting of the Canadian Operations Research Society (CORS) in May 2008. He presented the principles that made it possible to automate the Dantzig-Wolfe decomposition in his generic branch-and-price solver (BaPCod).

*Combinatorial optimization* is the field of discrete optimization problems. In many applications, the most important decisions (control variables) are discrete in nature. Binary variables model on/off decisions to buy, invest, hire, send a vehicle, or enforce a precedence. Integer variables model indivisible quantities. Extra variables can represent continuous adjustments or amounts. This results in models known as *mixed integer programs* (MIP), where the relationships between variables and input parameters are expressed as linear constraints and the goal is defined as a linear objective function. MIPs are among the most widely used modeling tools. They allow a fair description of reality; they are versatile; they can handle many non-linearities (and even non-convexities) in the cost function and in the constraints; they are also well-suited for global optimization. However useful they may be, these models are notoriously difficult to solve: good quality estimations of the optimal value (bounds) are required to prune enumeration-based global-optimization algorithms whose complexity is exponential. The standard approach to solving an MIP is the so-called *branch-and-bound algorithm*: (i) one solves the linear programming (LP) relaxation using the simplex method; (ii) if the LP solution is not integer, one adds a disjunctive constraint on a fractional component (rounding it up or down) that defines two sub-problems; (iii) one applies this procedure recursively, thus defining a binary enumeration tree that can be pruned by comparing the local LP bound to the best known integer solution. State-of-the-art MIP solvers – such as the commercial solvers CPLEX of Ilog or Dash-Optimization's Xpress-mp – are remarkably effective. But many real-life applications remain beyond their scope, and the scientific community is actively seeking to extend the capabilities of MIP solvers. Developments made in the context of specific applications often become generic tools over time and see their way into commercial software.
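As a small illustration of steps (i)–(iii) above, the following self-contained sketch applies branch-and-bound to a toy 0-1 knapsack problem. The instance data and helper names are ours, chosen for illustration; the knapsack LP relaxation has a closed-form greedy solution, which stands in here for a generic simplex call.

```python
from fractions import Fraction

# Toy 0-1 knapsack instance (made-up data): maximize value subject to
# a single capacity constraint, all variables binary.
values = [10, 13, 7, 8]
weights = [4, 6, 3, 5]
capacity = 10

def lp_bound(fixed):
    """Solve the LP relaxation with some variables fixed to 0/1.
    The knapsack LP has a closed-form solution: fill the remaining
    capacity greedily by value/weight ratio; at most one variable ends
    up fractional. Returns (bound, fractional variable or None)."""
    cap = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    if cap < 0:
        return None  # node is infeasible
    bound = Fraction(sum(values[i] for i, v in fixed.items() if v == 1))
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: Fraction(values[i], weights[i]), reverse=True)
    for i in free:
        if weights[i] <= cap:
            cap -= weights[i]
            bound += values[i]
        else:  # variable i takes the fractional value cap/weights[i]
            return bound + Fraction(values[i] * cap, weights[i]), i
    return bound, None

best_value, best_sol = -1, None
stack = [{}]  # each node is a dict of variables fixed by branching
while stack:
    fixed = stack.pop()
    node = lp_bound(fixed)
    if node is None:
        continue
    bound, frac_var = node
    if bound <= best_value:
        continue  # prune: the LP bound cannot beat the incumbent
    if frac_var is None:
        # LP solution is integral: new incumbent (fixed 1s + all free items)
        best_value = bound
        best_sol = sorted([i for i, v in fixed.items() if v == 1] +
                          [i for i in range(len(values)) if i not in fixed])
    else:
        # disjunctive branching on the fractional component
        stack.append({**fixed, frac_var: 0})
        stack.append({**fixed, frac_var: 1})

print(best_value, best_sol)
```

The LP bounds let the enumeration skip most of the 2^4 assignments; on this instance the optimum selects items 0 and 1 with value 23.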

The most effective solution schemes are a complex blend of techniques: cutting planes to better approximate the convex hull of feasible (integer) solutions and hence provide better LP bounds, Lagrangian decomposition methods to produce alternative powerful relaxations, constraint programming to actively reduce the solution domain through logical implications, heuristics and meta-heuristics (greedy, local improvement, or randomized partial search procedures) to produce good candidate solutions, and specialized branch-and-bound or dynamic programming enumeration schemes to find a global optimum. The real challenge is to integrate the most efficient methods into one global system. Another key to further progress is the development of stronger problem formulations whose relaxations provide approximations that enable enhanced truncation of enumerative solution schemes. Tighter formulations are also much more likely to yield good quality approximate solutions through rounding techniques. With properly chosen formulations, exact optimization tools can be competitive with other methods (such as meta-heuristics) in constructing good approximate solutions within limited computational time, and they of course have the important advantage of being able to provide a performance guarantee through the relaxation bounds.

Our project brings together researchers with expertise in mathematical programming (polyhedral approaches, Dantzig-Wolfe decomposition, quadratic programming, non-linear integer programming, stochastic programming, and dynamic programming), graph theory (characterization of graph properties, combinatorial algorithms), and constraint programming, with the aim of producing better quality formulations and developing new methods to exploit these formulations. These new results are then applied to find high quality solutions for practical combinatorial problems such as routing, network design, planning, scheduling, and cutting and packing problems.

Adding valid inequalities to the polyhedral description of an MIP allows one to improve the resulting LP bound and hence to better prune the enumeration tree. In a cutting plane procedure, one attempts to identify valid inequalities that are violated by the LP solution of the current formulation and adds them to the formulation. This can be done at each node of the branch-and-bound tree, giving rise to a so-called *branch-and-cut algorithm*. Introduced by Edmonds in 1965, the polyhedral approach has turned out to be one of the main sources of progress in solving NP-hard combinatorial optimization problems in the last two decades. A benchmark problem, in this regard, is the traveling salesman problem. In the early 80's, the best algorithm was able to solve instances with around 300 cities. A recent paper reports that branch-and-cut algorithms are able to solve instances with nearly 25,000 cities. Similar significant improvements have been observed, for instance, for network design problems arising in telecommunications (see Kerivin and Mahjoub, 2005) or vehicle routing problems in logistics (see Letchford and Salazar-Gonzalez, 2006).

The goal of these approaches is to reduce the resolution of an integer program to that of a linear program by deriving a linear description of the convex hull of the feasible solutions, conv(X), where X is the discrete set of solutions to the combinatorial problem at hand. A fundamental result in this field is the equivalence of complexity between solving the combinatorial optimization problem and solving the separation problem over the associated polyhedron: given a point x^{*} not in conv(X), find a linear inequality π x ≤ π_{0} satisfied by all points in conv(X) but violated by x^{*}. Hence, for NP-hard problems, one cannot hope to define either a closed-form description of conv(X) or a polynomial time exact separation routine. Nevertheless, one does not need to know such a description to take advantage of the polyhedral approach. Even a subset of the inequalities can already yield a good approximation of the ideal polytope. Moreover, non-exact separation, using heuristic procedures, turns out to be quite efficient for practical purposes.

Polyhedral theory provides ways to automatically derive new inequalities from an initial polyhedral description P of the problem. For instance, it is known that any valid inequality for an IP can be obtained by iteratively taking linear combinations of existing constraints and rounding their coefficients. Such *general purpose cuts* have only recently made their way into practical tools: for instance, Gomory fractional cuts are now generated by default in commercial MIP solvers. Recent work has consisted in numerically testing the strength of the formulation obtained by applying a single round of such general purpose cuts (the so-called first closure): the separation problem is set up as an MIP and solved with a commercial MIP solver. Cornuéjols (2006) provides a comparative review of general purpose cuts such as lift-and-project cuts, Gomory mixed integer cuts, mixed integer rounding cuts, split cuts, and intersection cuts, as well as their practical contributions to dual bound improvements. However, the most promising results have often been obtained with so-called *template cuts*, i.e. families of valid inequalities derived in an application-specific context: the closed-form expression of these additional inequalities is a template from which specific cuts are generated dynamically. To prove validity, one can show that such inequalities can be obtained as a special case of general purpose procedures. If it can be shown that the inequalities define so-called *facets* of conv(X), these inequalities are needed for its description. In practice, one needs to develop efficient procedures (exact or heuristic) to separate these inequalities. Then, numerical evaluation can show the impact of the additional inequalities, not only on the strength of the resulting dual bound but also in yielding solutions more likely to satisfy integrality restrictions (which is good for primal heuristics).
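The "combine and round" procedure can be made concrete on a tiny example: taking each degree constraint of the matching problem on a triangle with multiplier 1/2 and rounding down yields the classical odd-cycle inequality. The instance and function names below are ours, a minimal sketch of one Chvátal-Gomory rounding step:

```python
import math
from fractions import Fraction

# Degree constraints of the matching problem on a triangle, in <= form,
# over nonnegative integer variables ordered as [x12, x13, x23]:
rows = [([1, 1, 0], 1),   # node 1: x12 + x13 <= 1
        ([1, 0, 1], 1),   # node 2: x12 + x23 <= 1
        ([0, 1, 1], 1)]   # node 3: x13 + x23 <= 1

def chvatal_gomory_cut(rows, multipliers):
    """Combine <=-rows with nonnegative multipliers u, then round down:
    floor(uA) x <= floor(ub) remains valid for every nonnegative integer
    point satisfying the original rows (a Chvatal-Gomory cut)."""
    n = len(rows[0][0])
    coefs = [sum(u * row[j] for u, (row, _) in zip(multipliers, rows))
             for j in range(n)]
    rhs = sum(u * b for u, (_, b) in zip(multipliers, rows))
    return [math.floor(c) for c in coefs], math.floor(rhs)

cut, rhs = chvatal_gomory_cut(rows, [Fraction(1, 2)] * 3)
# The resulting inequality x12 + x13 + x23 <= 1 cuts off the fractional
# LP point (1/2, 1/2, 1/2), which satisfies all three original rows.
print(cut, rhs)
```

The rounded inequality is exactly the odd-set inequality for the triangle, a facet of the matching polytope that no single original row implies.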

The connections between polyhedral structure and graph theory are deep. Many facets of various polyhedra are directly related to special classes of graphs. The literature is rich with examples of facet-defining systems described by exhibiting a bijection to a collection of subgraphs of the studied graph: forest polytopes (Edmonds, 1961), the matching polytope (Edmonds, 1965), and many others. There are even results showing that the structure of a specific polyhedron itself is closely related to the structure of a related graph. For instance, Chvátal has shown that the property of adjacency in the stable set polytope of a graph G (i.e., the fact that two solutions satisfy the same facet-defining constraint at equality) is characterized as a connectivity property of G itself. A number of researchers have recently described analogous interpretations of other facets of the stable set polytope. For example, Lipták and Lovász (1999) exhibited such a relationship; we have described another and used it to generate a new set of facets for the stable set polytope of webs.

An alternative to using LP bounds to prune the branch-and-bound tree is to use bounds from *Lagrangian relaxation*. Lagrangian relaxation consists in transforming some of the hard constraints of a problem into soft constraints that can be violated with a given penalty. The idea is to keep as hard constraints those that define a well-structured combinatorial problem that is much easier to solve than the original. In particular, one may want to relax linking constraints without which the problem decomposes into much smaller and simpler problems. The best bounds can then be obtained by properly adjusting the penalty costs. Choosing the best set of penalties is itself an optimization problem: the so-called *Lagrangian dual*. It can be reformulated as a linear program with an exponential number of variables (associated with the generators of the subproblem solution sets): this is the so-called *Dantzig-Wolfe reformulation*. This large LP can be solved using the revised simplex method while generating its variables and associated columns in the course of the optimization: this is the so-called *column generation* procedure. If branch-and-bound enumeration is based on the Dantzig-Wolfe reformulation of the problem, one must use column generation at each node of the branch-and-bound tree, giving rise to the so-called *branch-and-price* algorithm.
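As a minimal illustration (on a made-up instance of ours), one can relax the single knapsack constraint of a small 0-1 problem with a multiplier λ ≥ 0: the relaxed subproblem then splits per variable and is solved by inspection, and a subgradient method adjusts the penalty. On this instance the Lagrangian dual bound coincides with the LP relaxation bound (23.5), since the subproblem has the integrality property:

```python
# Toy instance (made-up data): maximize c.x s.t. a.x <= b, x binary.
c = [10, 13, 7, 8]
a = [4, 6, 3, 5]
b = 10

def lagrangian_bound(lam):
    """Dualize a.x <= b with multiplier lam >= 0. The relaxed problem
    max c.x - lam*(a.x - b) splits per variable: set x_i = 1 exactly
    when its reduced profit c_i - lam*a_i is positive."""
    x = [1 if ci - lam * ai > 0 else 0 for ci, ai in zip(c, a)]
    value = sum(max(0.0, ci - lam * ai) for ci, ai in zip(c, a)) + lam * b
    return value, x

lam, best = 0.0, float("inf")
for k in range(1, 201):
    value, x = lagrangian_bound(lam)
    best = min(best, value)            # every L(lam) is a valid dual bound
    subgrad = sum(ai * xi for ai, xi in zip(a, x)) - b
    lam = max(0.0, lam + subgrad / k)  # diminishing-step subgradient update

print(round(best, 3))  # approaches the LP bound 23.5; the integer optimum is 23
```

Each evaluation of L(λ) is an upper bound on the integer optimum; the subgradient steps steer λ toward the dual optimum, which here closes onto the LP value.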

Branch-and-price algorithms are more recent than branch-and-cut algorithms. Although column generation appeared in the 60's as a technique to handle linear programs with a huge number of variables, its combination with branch-and-bound to solve integer programs was only developed in the 90's (we did pioneering work on this subject). Branch-and-price has proved to be very useful in solving many practical problems that were intractable by other means: crew and fleet assignment problems faced by airline companies, vehicle routing problems in public and freight transport systems, cutting stock problems experienced in the paper, textile, or steel industries, production planning problems, and network design problems are such examples. Branch-and-price has now become the reference method for problems well suited to decomposition and it is making its way into industry: for instance, decision aid software developed by consulting firms like Eurodecision (Paris) or Adopt (Montréal) relies on this approach. The use of this method in the practical context of challenging applications has revealed its limitations. Further developments are required to overcome these difficulties.

Indeed, using column generation in the context of integer programming is not straightforward. The primary challenges revolve around the convergence of the dual bound computations, the enforcement of integrality restrictions, and the combination with dynamic cutting plane generation. A first step toward addressing these difficulties was to clarify the underlying Dantzig-Wolfe decomposition principle. While the standard view was to see Dantzig-Wolfe decomposition as the linear programming formulation of the Lagrangian dual, we presented it as a reformulation technique that gives rise to an integer master program (first for the pure integer case, later extended to the mixed integer case). The integrality restrictions of the original formulation then translate into integrality restrictions in the reformulated problem (this is the *discretization* approach). Our framework, based on the concept of generating sets, facilitates the handling of branching decisions (to enforce integrality) and the addition of cutting planes to the formulation.

Natural applications for the Dantzig-Wolfe approach are problems whose constraint matrix has a block diagonal structure augmented with linking constraints, i.e. a constraint matrix A of the form

    ( L^1  L^2  ...  L^K )
    ( B^1                )
    (      B^2           )
    (            ...     )
    (               B^K  )

with K diagonal blocks B^{k}, k = 1, ..., K, and linking rows L = (L^1, ..., L^K). Dualizing the constraints L then decomposes the problem into K sub-problems of smaller size. Moreover, if the K blocks are identical, the column generation reformulation eliminates the symmetry in k, and only one pricing problem needs to be solved during the column generation algorithm (the same column being optimal for all K subsystems). Beyond this standard application framework, open questions concern the viability of applying the decomposition principle recursively or to multiple independent subsystems. We have developed a first application of a nested decomposition. A decomposition based on multiple independent subsystems (not necessarily disjoint) should provide tighter bounds according to the theory of Lagrangian decomposition. But there is typically a large number of dualized constraints linking the different subsystems and hence many dual prices to adjust (therefore one expects slow convergence). To our knowledge, the literature does not report any column generation approach based on multiple subsystems capturing different combinatorial structures. A recent study however comes close to such a multiple decomposition: it combines variable redefinition applied to one subsystem with column generation applied to another. The dual bounds obtained are shown to be tighter than with alternative approaches, but the computational times are much larger.

The communication between the non-linear programming and the combinatorial optimization communities is limited, although the latter has much to learn from the former. Several of the latest developments in discrete optimization are imports from convex optimization. Quadratic programming (QP), in particular, offers very powerful modeling tools. A *quadratic program* is a formulation in continuous variables whose cost and/or constraint functions are polynomials of degree 2. Several classical constraint types for combinatorial problems are more efficiently modeled with QP: binary constraints (x_{i} ∈ {0, 1} can be equivalently set as x_{i}^{2} = x_{i}); sequencing constraints: for instance in scheduling problems, if job j follows job i, denoted by x_{ij} = 1, then their completion times must satisfy C_{j} ≥ C_{i} + p_{j}, where p_{j} is the processing time of j; this can be modeled as x_{ij} (C_{j} - C_{i} - p_{j}) ≥ 0; transitivity constraints represent requests of the type “if two variables x_{i} and x_{j} are equal to 1, a third one x_{k} should be 1” (their quadratic formulation x_{k} ≥ x_{i} x_{j} is stronger than the linear x_{k} ≥ x_{i} + x_{j} - 1 when one relaxes the integrality restrictions).
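The claim about the transitivity constraint can be checked numerically: every point of [0,1]^3 satisfying the quadratic form x_{k} ≥ x_{i} x_{j} also satisfies the linear form x_{k} ≥ x_{i} + x_{j} - 1 (since x_{i} x_{j} - (x_{i} + x_{j} - 1) = (1 - x_{i})(1 - x_{j}) ≥ 0 on the unit box), while the converse fails at fractional points. A small exact-arithmetic check (our own sketch):

```python
import itertools
from fractions import Fraction

def quad_ok(xi, xj, xk):
    return xk >= xi * xj          # quadratic transitivity constraint

def lin_ok(xi, xj, xk):
    return xk >= xi + xj - 1      # its linearized counterpart

# Sample [0,1]^3 on a grid, in exact rational arithmetic.
grid = [Fraction(i, 10) for i in range(11)]
points = list(itertools.product(grid, repeat=3))

# The quadratic constraint implies the linear one everywhere on the box...
assert all(lin_ok(*p) for p in points if quad_ok(*p))

# ...but not conversely: (1/2, 1/2, 0) satisfies the linear constraint
# yet violates the quadratic one, so the quadratic relaxation is tighter.
counterexample = (Fraction(1, 2), Fraction(1, 2), Fraction(0))
assert lin_ok(*counterexample) and not quad_ok(*counterexample)
print("quadratic formulation strictly dominates the linear one")
```

On binary points the two constraints coincide; the gap only opens in the continuous relaxation, which is exactly where bound quality is decided.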

Although quadratic programming (QP) is NP-hard in general, some special cases or relaxations are polynomially solvable. Minimizing a convex quadratic cost over a feasible region described by linear constraints is easy; this capability is implemented in commercial MIP solvers. Convex QPs with linear constraints and 0-1 variables can then be solved using branch-and-bound, solving the continuous QP relaxation at each node. Also, the *Semi-Definite Programming* (SDP) relaxation of a QP is polynomially solvable. An SDP is an extension of an LP where the variables are the components of a matrix that is constrained to be positive semi-definite. When the objective is not convex, it can be convexified using an augmented relaxation approach: the Hessian is made convex by adding to the objective a weighted sum of the quadratic binary constraints and the squared norms of the equality constraints (optimized weights are obtained by solving an SDP relaxation). When applied to an already convex objective, this approach is still useful to improve the continuous relaxation bound. Solvers are available for SDP but, in their current implementations, they are very sensitive to the conditioning of the matrices.

Even though the numerical solution of QPs and SDPs remains problematic, recent applications of these techniques to some combinatorial problems have led to major improvements. Generally, an SDP relaxation can be found for any integer program. Starting from the QP reformulation of an IP, a Lagrangian relaxation procedure is applied to yield an SDP: after Lagrangian dualization, the (unconstrained) QP has a solution (in the primal variables) iff some matrix of coefficients is positive semi-definite. The associated SDP bound is always better than the classical LP relaxation bound, although the size of the SDP formulation is sometimes problematic. In former studies, we discovered other SDP formulations for Lovász's bound on vertex coloring: a direct quadratization that appears to be intermediate between previously known formulations. The SDP formulation obtained by applying the general scheme is of huge dimension and, because of symmetry, does not bring more than our (compact) SDP formulation. Yet, it can be fruitfully used to compute bounds on generalizations of vertex coloring where the symmetry does not hold (list coloring, some frequency assignment problems).

Many engineering, management, and scientific applications involve not only discrete decisions, but also nonlinear relationships that significantly affect the feasibility and optimality of solutions. Mixed-integer nonlinear programming (MINLP) problems combine the difficulties of MIP with the challenges of handling nonlinear functions. MINLP is one of the most flexible modeling paradigms available. Hence, an expanding body of researchers and practitioners, including engineers, operations managers, economists, statisticians, computer scientists, and mathematical programmers, are now interested in solving large-scale MINLPs.

The wealth of applications that can be accurately modeled by using MINLP is not yet matched by the capability of available optimization solvers. Both MIP and nonlinear programming (NLP) have witnessed tremendous progress over the past 15 years. Some of the factors that have gone into the development of effective MIP algorithms and powerful academic, open source, and commercial MIP solvers are described above; similarly, new paradigms and a better theoretical understanding have created faster and more reliable NLP solvers that work well even under adverse conditions such as failures of constraint qualifications.

The time is right to synthesize these advances and inspire new ideas in order to transform MINLP into an area in which researchers and practitioners can access robust tools and methods capable of solving a wide range of important, commonly occurring decision support problems. While there remains enormous room for progress, initial efforts towards the development of such algorithmic tools are already under way, and our team is involved in the enhancement of existing ideas and the development of new ones towards these ends.

Numerous common application areas involve factors that are inherently random or uncertain (such as varying demands, machine failures, surgery duration times, and cost overruns). When such models also have discrete or binary decision variables, for any of the reasons already discussed, the combination of uncertainty and combinatorial structure makes these problems more difficult than if one were trying to address either the uncertain or the discrete aspects by themselves. Stochastic MIP models have been proposed for applications in resource acquisition planning, internet server capacity expansion, electric power management, and inventory management.

Despite their ubiquity, not much is yet known about how to solve practical MIPs in which uncertainty plays a major role, and in particular problems of the size often encountered in the real world. Our team is involved in research that is advancing our knowledge of how to solve large MIPs in which the data is random or uncertain, in application areas ranging from production planning to health care logistics.

The relationship between graph theory and mathematical programming has led to several famous research advances. Let us cite just a few landmarks. The matching problem (selecting disjoint edges in a graph) is historically the first integer programming problem that could not be solved by linear programming (no compact ideal formulation being known) but for which an efficient (polynomial time) combinatorial algorithm was known (ideal formulations were known only for network flow problems at the time). The combinatorial algorithm led to a polyhedral description of the matching polytope with exponentially many constraints that are separable in polynomial time. Another example is the study of perfect graphs. Perfect graphs not only have nice graph-theoretical properties and behave nicely from an algorithmic point of view; several characterizations of perfect graphs also form an interface between graph theory, integer programming, and semi-definite programming: a graph is perfect iff the convex hull of the incidence vectors of its stable sets is defined by the so-called *clique constraint polytope* (the only constraints needed are those enforcing that at most one node can be selected from each clique, while, for a general graph, this polytope defines a relaxation of the stable set polytope). Perfect graphs are also characterized by the fact that their chromatic number (the minimum number of stable sets needed to cover the graph) can be computed in polynomial time by semi-definite programming.

Thus, graph theory tools make it possible to derive better models/formulations for combinatorial optimization problems (for instance, graph-theoretical characterizations of forbidden subgraphs can sometimes be directly expressed as constraints in a mathematical program). Moreover, combinatorial procedures from graph theory can also serve as subroutines in mathematical programming approaches, e.g., for cut separation or column generation. In particular, on the issue of symmetries, we believe that progress can come from the complementarity between graph theory and mathematical programming. This is illustrated by the work of Fekete and Schepers (2004) on the 2-dimensional placement problem. In our project, we also exploit the reverse complementarity, i.e. we use mathematical programming techniques to make progress in graph theory.

In this area, there is potential for collaboration between our team and the INRIA team MASCOTTE, among others.

A constraint programming (CP) approach is particularly effective for tightly constrained problems, feasibility problems, and min-max problems (minimizing the maximum of several variable values). Mixed Integer Programming (MIP), on the other hand, is effective for loosely constrained problems and for problems with an objective function defined as a weighted sum of variables. Many problems belong to the intersection of these two classes. For example, some scheduling and timetabling problems are tightly constrained and have a sum-type objective. For such problems, it is reasonable to use algorithms that exploit the complementary strengths of Constraint Programming and Mixed Integer Programming.

The integration of MIP and CP methods is currently an important research direction in combinatorial optimization. A wide variety of applications demonstrate the potential of such collaboration. Methods which combine MIP with CP have been successfully applied to scheduling, transportation, network design, and other problems.

A heuristic is an algorithm that attempts to build a “good” primal feasible solution to a combinatorial optimization problem with no a priori guarantee on its maximum deviation from optimality. Exact optimization approaches can also serve to build good approximate solutions for complex combinatorial problems, either (i) by truncating an exact algorithm, (ii) by constructing a solution from the relaxation on which the exact approach relies, or (iii) by using exact solvers as subroutines in building heuristic solutions. Point (i) is common practice, even in commercial MIP solvers: one sets a time limit (or another bound on the number of algorithmic steps) or a threshold deviation from optimality. However, the implementation strategies are typically different if the aim is to quickly get a good primal solution rather than to solve the problem exactly. Point (ii) is key to our project. The chances are that, starting from the solution to a stronger relaxation of a problem (i.e. a better formulation), one gets a better primal solution in the end. The techniques to build the primal solutions range from greedy constructive procedures, to rounding techniques using the relaxed solution as a target, to simply exploiting dual information to price choices. Point (iii) is the idea of “MIPping” questions, i.e. to formulate intermediate questions, such as finding the best solution in the neighborhood of the current solution, as a mathematical program that can be solved with an MIP solver or another combinatorial algorithm. Even though the initial problem might be much too hard for exact methods, the subquestions that arise during its heuristic solution might be within the scope of exact solvers. There too, it is important to have a good formulation of the “MIPped” question. This points to potential collaboration with other INRIA research teams who develop meta-heuristic approaches (DOLPHIN and TAO).

Heuristics based on exact methods have found a new breath in the recent literature, in part due to the progress of exact commercial solvers. The latest developments are *Large Scale Neighborhood Search* (an exponential-size neighborhood can be explored in a local search procedure, provided that an efficient/polynomial algorithm exists to search it – Ahuja et al., 2002), *Relaxation Induced Neighborhood Search* (the components of the LP solution that are close to the best known integer solution are rounded and the residual problem is then solved as an MIP of smaller size – Danna et al., 2005), and the *feasibility pump algorithm* (the rounded LP solution defines a target that might be infeasible; the LP is re-optimized with the objective of minimizing the distance to that target; the process iterates in the hope of finding good integer solutions – Fischetti et al., 2003, 2005).
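The feasibility pump loop described above can be sketched on a toy covering constraint a·x ≥ b over binary variables. Here the "LP re-optimization" step, an L1-projection onto the relaxed feasible set, reduces to a greedy continuous knapsack; the instance data, names, and the simple cycle-breaking rule are choices of ours for illustration, not the authors' implementation:

```python
from fractions import Fraction

# Toy instance (made-up data): find a binary x with a.x >= b.
a = [3, 5, 4, 6]
b = 8

def project(target):
    """Minimize the L1 distance to `target` over {x in [0,1]^n : a.x >= b}.
    With nonnegative coefficients this is a continuous knapsack: raise the
    coordinates with largest a_j first. A real feasibility pump would
    re-solve a generic LP at this step."""
    x = [Fraction(t) for t in target]
    deficit = b - sum(ai * xi for ai, xi in zip(a, x))
    for j in sorted(range(len(a)), key=lambda j: -a[j]):
        if deficit <= 0:
            break
        step = min(1 - x[j], deficit / a[j])
        x[j] += step
        deficit -= a[j] * step
    return x

target, solution = [0, 0, 0, 0], None
for _ in range(20):
    x_lp = project(target)                       # LP point closest to target
    rounded = [int(round(xi)) for xi in x_lp]    # round it to a 0/1 point
    if sum(ai * ri for ai, ri in zip(a, rounded)) >= b:
        solution = rounded                       # rounded point is feasible
        break
    if rounded == target:                        # cycling detected:
        j = max(range(len(a)), key=lambda i: abs(x_lp[i] - rounded[i]))
        rounded[j] = 1 - rounded[j]              # flip most fractional coord
    target = rounded

print(solution)
```

The pump alternates between an LP-feasible fractional point and an integral but possibly infeasible rounding, perturbing on cycles until the two coincide.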

Our group has tackled applications in logistics, transportation and routing, in production planning and inventory control, in network design and traffic routing, in cutting and placement problems, and in scheduling. Building on this experience, we plan to find our motivation for algorithmic developments in the study of complex combinatorial problems of industrial relevance. In particular, we are currently involved in two industrial partnerships. With Exeo Solutions (a consulting firm that has worked for Eco-emballages, Suez, and other mainstream groups), we study planning and routing problems that arise in waste management. With SNCF, we are studying train timetabling problems and their re-optimization after a perturbation in the network.

Managerial problems raised by the planning of operations in transport networks, the distribution of goods, and the associated management of inventories have always been central in Operations Research. The tools of mathematical programming can bring substantial savings, given the share of operations and transportation costs in overall logistics costs. One could think that these major issues should be well resolved by now. This is simply not so. The combinatorial difficulties inherent to these problems require new techniques to increase the size of instances that can be treated. Moreover, new managerial practices and the trend of going from hierarchical to integrated optimization yield new problems.

Our experience in this domain includes several industrial studies:

we are pursuing a project that simultaneously optimizes transportation routes and customer inventories (a problem known as *inventory routing*) with our industrial partner Exeo Solutions;

we have an ongoing project with SNCF, in the context of the PhD thesis of L. Gely, on the operational re-scheduling of a timetable following a perturbation;

for Routing International, Brussels, we have combined vehicle routing and planning over a fixed time horizon;

we are also working on variants of the traveling salesman problem (TSP): the cumulative TSP arises when the cost of an arc depends on its position in the circuit (consider for instance a delivery man who is paid more at the beginning of the journey because of the weight he has to carry).

Network design is a wide research domain arising in the railway, highway, and telecommunication industries. Telecom applications are very much studied at the moment, a revival due to the arrival of new transfer technologies such as optical fiber. The aim is to design cheap and reliable networks subject to specific requirements on the topology (which links will be created) and on the capacity of the links (how much information can be in transit on a link at the same time). Sufficient capacity must be installed to avoid congestion, and several paths may have to link given pairs of nodes to ensure transmission in case of breakdown, all at the cheapest possible cost.

We are actively working on problems arising in network topology design, implementing a survivability condition of the form “at least two paths link each pair of terminals”. We have extended polyhedral approaches to problem variants with specific requirements. In , we deal with the design of so-called SDH/SONET networks, where the links must form cycles of bounded length (see Figure ); bounded length requirements can also come from re-routing restrictions. Associated with network design is the question of traffic routing in the network: one needs to check that the network capacity suffices to carry the demand for traffic. The assignment of traffic also implies the installation of specific hardware at transit or terminal nodes. In previous work, we optimized traffic assignment to minimize such installation costs. We now consider the problem that arises when using new wavelength division multiplexing (WDM) technologies that make it possible to pack more traffic onto optical networks. Several streams can be multiplexed in an optical signal, each of them supported by a different wavelength (Figure illustrates routing configurations that will be assigned to different wavelengths to handle several thousand requests). We are also working on the problem of measuring traffic in a network through the placement of markers at minimal cost.

These problems present themselves throughout the supply chain in a host of industrial applications. Production planning problems can be defined on a network of facilities at an aggregate scale, or at a more detailed level within a single plant. These problems are concerned with the timing, the location, and the quantities of production. Often multiple product categories compete for scarce resources (raw materials, machine capacities, limited labor), and many other factors must also be taken into account. To list just a few examples: there is often external demand for finished products that must be considered; a bill-of-materials that specifies which products require others for their production or assembly must be taken into account; and the restrictions imposed by setup times and/or precedence relations must be rigorously accounted for.

One important class of production planning and scheduling problems is *lot-sizing problems*, in which the most important decisions typically relate to *how much* to produce of each product category, as well as when (in which time period) and where (on which machine, line, or facility) to produce it. In one stream of research, we have been developing rigorous computational methods to automatically strengthen the formulations of these problems, in order to enable optimization software to identify high quality production plans and find strong performance guarantees (lower bounds) for such problems. In another stream, we have been analyzing fundamental MIP models that incorporate uncertainty into lot-sizing problems, deriving results concerning optimality conditions for, and the polyhedral structure of, these models.
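For reference, the single-item uncapacitated lot-sizing model underlying this line of work can be stated as follows (textbook notation, not a formulation specific to our papers): x_t is the production in period t, y_t the setup indicator, s_t the end-of-period stock, d_t the demand, and f_t, p_t, h_t the setup, production, and holding costs:

```latex
\begin{align*}
\min\ & \sum_{t=1}^{T} \bigl( f_t\, y_t + p_t\, x_t + h_t\, s_t \bigr) \\
\text{s.t.}\ & s_{t-1} + x_t = d_t + s_t, && t = 1,\dots,T, \\
& x_t \le \Bigl(\sum_{u=t}^{T} d_u\Bigr) y_t, && t = 1,\dots,T, \\
& s_0 = 0, \quad x_t,\, s_t \ge 0, \quad y_t \in \{0,1\}, && t = 1,\dots,T.
\end{align*}
```

The automatic strengthening mentioned above classically amounts to adding valid inequalities (such as the (l,S) inequalities) or moving to an extended formulation, tightening the weak big-M link between x_t and y_t.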

Another important category of problems within this larger family is *machine scheduling problems*. In contrast to lot-sizing problems, the emphasis is on determining the *sequence* and *location* of a set of jobs whose quantities of production have already been determined. In machine scheduling problems, a set of jobs of different durations must be processed on a set of machines over time. The objective is to minimize a function of the jobs' starting times. Often, jobs can be processed only within availability intervals, or precedence relations between jobs must be respected. Typical applications of machine scheduling can be found in manufacturing and operational planning.

For some machine scheduling problems whose objective is to minimize a weighted sum of the number of late jobs, we have developed branch-and-price and branch-and-cut algorithms. Recently, we have developed heuristic and exact MIP-based algorithms for the problem of scheduling an airborne radar, which can be modeled as a machine scheduling problem.
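The weighted version of this problem is NP-hard; its unweighted special case (minimizing the plain number of late jobs on one machine), however, is solved exactly by the classical Moore-Hodgson rule, which gives the flavor of the problem. A self-contained sketch on toy data:

```python
import heapq

def moore_hodgson(jobs):
    """Minimize the number of late jobs on one machine.

    jobs: list of (processing_time, due_date) pairs.
    Classical rule: scan jobs in earliest-due-date order; whenever the
    current job would finish late, discard the longest job scheduled
    so far (that job becomes late).
    """
    heap, t, late = [], 0, 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # EDD order
        heapq.heappush(heap, -p)                   # max-heap via negation
        t += p
        if t > d:                                  # deadline missed:
            t += heapq.heappop(heap)               # drop the longest job
            late += 1
    return late

# Toy instance: (processing time, due date) pairs.
jobs = [(2, 3), (3, 5), (4, 6), (2, 7)]
print(moore_hodgson(jobs))  # -> 1 (one job must be late)
```

The exact methods above handle the weighted generalization, for which no such simple greedy rule is optimal.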

A family of problems related to the applications just discussed is cutting and packing problems. In cutting stock problems, one has a supply of large pieces of raw material in stock and a set of demands for small “order” pieces. One must satisfy these demands by cutting the required small pieces out of the large pieces from the stock. The primary objective is to minimize the waste, counted as the unused part of the pieces of stock material that are used. A solution is given by a set of feasible cutting patterns, i.e., assortments of order pieces that can be cut out of a given large piece of stock material, such that their accumulated production covers the demands. There are many variants of the cutting stock problem. The main ones concern the number of significant dimensions of the forms (1D, 2D, 3D, or even 1.5D), specific restrictions on the cutting process, the geometrical arrangements of pieces, and the number of cutting stages. There may be secondary objectives related to balancing the workload between different cutting machines, minimizing the number of different cutting patterns used, or, for instance, respecting due dates.

Packing, placement, or loading problems can be stated in similar terms. There, one has a set of resources (vehicles, railway cars, machines) that must be packed with items. The objective is to maximize the value of the resulting packing while respecting the capacity of the resources. A solution is given by a set of feasible individual resource packings which together do not pack the same object more than once. Again, many variants exist, depending mainly on the number of dimensions in which the capacity of the resources is measured and on the specific restrictions of the loading process. Packing problems arise in particular as subproblems in cutting problems, since the question of building “good” cutting patterns typically boils down to packing a resource piece with maximum value items. There are many applications of these cutting and packing problems: for instance, the cutting of paper, steel bars, glass, wood, textile, and plastic film; optimization of newspaper layout; scheduling of parallel machines; line balancing; and, more generally, scheduling problems with limited resources.
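As a simple illustration of how feasible packings are built in the 1D case, here is the classical first-fit-decreasing heuristic (a generic textbook baseline, not one of the exact methods discussed in this section):

```python
def first_fit_decreasing(items, capacity):
    """Heuristically pack item sizes into as few bins of the given
    capacity as possible: place each item, largest first, into the
    first bin where it fits, opening a new bin when none fits."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no open bin can accommodate the item
            bins.append([size])
    return bins

bins = first_fit_decreasing([7, 5, 4, 4, 3, 2, 2], capacity=10)
print(len(bins), bins)  # 3 bins, e.g. [[7, 3], [5, 4], [4, 2, 2]]
```

Exact approaches such as branch-and-price replace this myopic construction by an optimization over all feasible patterns.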

In previous work, we developed specialized algorithms for some variants of the knapsack problem that arise as a subproblem in solving cutting stock problems: problems with class bounds or with setup costs. We also set benchmark results for the 1D cutting stock problem using an exact optimization approach based on branch-and-price. We were the first to introduce an exact algorithm for the 1D problem with setup minimization (a much harder variant). We also applied a nested decomposition approach to a 2D multi-cutting-stage variant and considered ways of incorporating industrial side-constraints in an exact approach for the 1D problem. We are currently working on 2D orthogonal placement problems, combining graph theory and mathematical programming approaches.
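In its simplest 1D form, the pricing subproblem behind such branch-and-price approaches is a knapsack over the stock width: find the cutting pattern of maximum total dual value that fits in a stock piece. A minimal dynamic program for the unbounded variant (toy data; in practice piece demands also bound the counts):

```python
def unbounded_knapsack(widths, values, capacity):
    """Best value achievable within each capacity 0..capacity when every
    piece type may be used any number of times (the pricing problem of
    1D cutting stock: 'values' stand for the master's dual prices)."""
    best = [0.0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(widths, values):
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Toy pricing instance: piece widths, dual prices, stock width.
print(unbounded_knapsack([3, 5, 7], [1.0, 1.9, 2.8], capacity=10))  # -> 3.8
```

A pattern whose value exceeds 1 (the cost of one stock piece) has negative reduced cost and is added as a new column to the master problem.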

The output of our studies on specific applications typically consists of algorithmic developments and codes that implement application specific solvers or, in some cases, routines that allow us to derive theoretical results (e.g., in graph theory). The algorithms produced this year are:

GRWA-BAP: A branch-and-price code for routing traffic in an optical telecommunication network.

contact: Benoit Vignac

GRWA-BAC: A branch-and-cut code for routing traffic in an optical telecommunication network.

contact: Benoit Vignac

DEMIURGE: A mathematical programming based code for the rescheduling of trains at SNCF.

contact: Laurent Gely

SKIMAP: A branch-and-cut code for optimizing the placement of sensors identifying origin-destination traffic in a network.

contact: Pierre Pesneau

BAP-RAD: A branch-and-price code for scheduling an airborne radar.

contact: Ruslan Sadykov

B-CHECK: A branch-and-cut code for the one-machine scheduling problem of minimizing the weighted number of late jobs.

contact: Ruslan Sadykov

2D-KNAP: A branch-and-bound code for the 2D orthogonal packing problem, where feasibility checks are based on graph theory tools.

contact: Cédric Joncours

contact: François Vanderbeck

Beyond the above application specific algorithms, we are involved in the development of a generic platform for the implementation of the branch-and-price method. Contrary to the cutting plane approach, column generation has not yet made its way into commercial solvers. Despite its successes, the use of a Dantzig-Wolfe decomposition approach is currently restricted to experts. The difficulties inherent in implementing the method and in combining it with other techniques have only been overcome in application specific contexts. As a consequence, algorithmic ideas have been developed and tested for specific applications only, without convincing arguments showing their impact across different applications. The challenge is to show that column generation is also a technique that can be automated and can make its way into commercial solvers (as the cutting plane approach did recently).

We are developing the prototype of a generic branch-and-price code, named *BaPCod*, for solving mixed integer programs by column generation. Previous attempts have been limited to offering “*tool-boxes*” to ease the implementation of algorithms combining branch-and-price-and-cut. With these, the user must implement three basic features for their application: the reformulation, the setting-up of the column generation procedure, and the branching scheme. Other available codes that offer more by default were developed for a specific class of applications (such as the vehicle routing problem and its variants). Our prototype is a “*black-box*” implementation that does not require user input and is not application specific. Its features are:

(i) the automation of the Dantzig-Wolfe reformulation process (the user defines a mixed integer programming problem in terms of variables and constraints, identifies subproblems, and can provide the associated solvers if available, but does not need to explicitly define the reformulation, the explicit form of the columns, their reduced cost, or the Lagrangian bounds) – this automation of the decomposition principle is presented in ;

(ii) a default column generation procedure with standard initialization and stabilization (it may offer a selection of solvers for the master) – the issue of stabilization is discussed in ; and

(iii) a default branching scheme – recent progress has been made on the issue of a generic branching scheme in .

Ongoing work consists in developing default primal heuristics and preprocessing techniques.

contact: Andrew Miller

For many applications, it would be interesting to be able to use parallel resources to solve realistic size MIPs that take too long to solve on a single workstation. However, using parallel computing resources to solve MIPs is difficult, as parallelizing the standard branch-and-bound framework presents an array of challenges in terms of ramp-up, interprocessor communication, and load balancing. In , we propose a new framework for MIPs, the Parallel Macro Partitioning (PMaP) framework, which partitions the feasible domain of an MIP by using concepts derived from recently developed primal heuristics. Initial computational results suggest that these ideas enable PMaP to use many processors effectively to solve difficult problems.

Among the MIP heuristics that our team has been working on is a new family of randomized rounding heuristics for MIP problems with general integer variables (i.e., not necessarily binary). We have extensively tested these heuristics within the COIN-OR suite of optimization software, and we intend to incorporate them into this suite, as part of the module CBC, in the near future.

For some network design problems, it happens that directed formulations are tighter than undirected ones. This is the case, for instance, for spanning trees, where the polytope associated with directed graphs involves fewer constraints than the polytope associated with undirected graphs. The former is described only by cut and trivial inequalities, whereas the latter needs partition inequalities in addition.
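For the spanning tree example, once the tree is oriented away from an arbitrarily chosen root r, the directed model is the following cut formulation, where δ⁻(S) denotes the set of arcs entering S (a standard statement of the model, given here for concreteness):

```latex
\begin{align*}
\min\ & \sum_{a \in A} c_a\, x_a \\
\text{s.t.}\ & \sum_{a \in \delta^-(S)} x_a \ \ge\ 1,
  && \emptyset \ne S \subseteq V \setminus \{r\}, \\
& x_a \ge 0, && a \in A.
\end{align*}
```

The undirected counterpart replaces directed cuts by undirected ones and, to describe the polytope, additionally requires the partition inequalities mentioned above.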

Pierre Pesneau is studying, in collaboration with Luis Gouveia of the University of Lisbon and A. Ridha Mahjoub from the University Paris Dauphine, the polytope associated with the problem of finding two edge-disjoint paths whose lengths (measured in number of links) are bounded by an integer K. In , , the authors give integer programming formulations for the undirected version of the problem when K ≤ 4. These formulations can easily be extended to the directed version. Unfortunately, in both the directed and the undirected case, no integral formulation on the design variables alone is known when K ≥ 5. However, they have already observed that inequalities for the directed case imply, when projected onto the undirected case, new inequalities that cut off integral points satisfying all the inequalities of the known undirected formulations.

Currently, they are studying the benefit of using directed formulations for this network design problem and projecting onto the undirected variables.

In collaboration with B. Jaumard from Concordia University in Quebec, B. Vignac and F. Vanderbeck are studying routing and associated design decisions in backbone optical networks.

To accommodate the increase of traffic in telecommunication networks, today's optical networks have huge capacity thanks to grooming and wavelength division multiplexing technologies. Wavelength bandwidth utilization is increased by packing several requests on the same wavelength. Moreover, several streams can be multiplexed on an optical signal, each of them supported by a different wavelength. However, packing multiple requests together in the same optical stream requires converting the signal into the electrical domain at each aggregation or disaggregation of traffic, at an origin, a destination, or a bifurcation node. These conversions require the installation of expensive ports. Hence, traffic grooming and routing decisions, along with wavelength assignments, must be optimized to reduce opto-electronic system installation costs. This optimization problem is known as the *grooming, routing and wavelength assignment* (GRWA) problem.

Given a *physical network*, each link can carry a uniform number of wavelengths, and each wavelength has the same transport capacity. Traffic demands over the network take the form of bandwidth requests defined by their origin, their destination, and a capacity requirement selected from a discrete set of standard granularities, {1, 3, 12, 48}, that are divisors of the wavelength capacity. A request must be routed on a single path. Its route is defined by a sequence of *optical hops*, each of which is a path in the physical network along which the signal remains in the optical domain, with no electrical conversion at intermediate nodes. Traffic routing thus consists in defining, for each request, an *optical path*: a sequence of optical hops.

We deal with backbone optical networks with relatively few nodes (around 20) but thousands of requests. Such instances cannot be handled by a traditional multi-commodity network flow approach. Instead, we develop and compare several decomposition approaches (hierarchical versus nested decomposition): column generation is used to solve the LP relaxation of our models and a rounding procedure provides primal solutions; the LP dual bounds are improved using a cutting plane procedure. In ongoing work, we are developing a direct branch-and-cut approach on a pseudo-polynomial formulation, for comparison.

We also studied the impact of imposing a restriction on the number of optical hops in any request route. Indeed, electrical conversions may cause important end-to-end delays. To limit such delays, we evaluated different bounds on the number of hops in an optical path. Our study shows that limiting paths to a single hop is very restrictive, while restricting the number of hops to 2 has a very limited impact on the design cost; allowing 3 or more hops brings only a marginal further cost decrease.

In collaboration with Teresa Godinho, Luis Gouveia, and José Pires of the University of Lisbon and Thomas L. Magnanti of the School of Engineering at MIT, Pierre Pesneau has studied several time dependent formulations for the unit demand vehicle routing problem. They gave new bounding flow inequalities for a single commodity flow formulation of the problem. They described their impact by projecting them onto other sets of variables, such as the variables of the Picard and Queyranne formulation or the natural set of design variables.

Following up on this work, they proved that some of the new inequalities obtained by projection are facet defining for the polytope associated with the problem.

Philippe Meurdesoif, Pierre Pesneau, and François Vanderbeck are studying a problem that consists in placing sensors on the links of a network at minimum cost. These sensors, which record which commodities use the links, have to be installed in such a way that it is possible to reconstruct the path followed by each commodity in the network.

The original application of this problem arose in the management of a ski resort, where the manager would like to be able to give each skier the path he followed during his skiing day.

They have modelled this problem in two ways: a quadratic formulation and a linear integer programming formulation. Some classes of valid inequalities that can define facets have been introduced, and a branch-and-cut algorithm is under development for this problem. The first numerical results obtained are promising, and a paper should be written in the next few months.

Along with Laurent Alfandari, Sylvie Borne, and Lucas Létocart from the University Paris 13, Pierre Pesneau is starting a project, funded by the CNRS working group on operations research (GDR RO), on integer quadratic and integer linear programming formulations for some variants of the Asymmetric Traveling Salesman Problem (ATSP).

This project aims to compare several new formulations and to develop branch-and-cut and/or branch-and-price algorithms for the solution of these problems. One variant of the ATSP requires that some cities be visited before others (precedence constraints), but with a bounded distance between these cities.

Inventory routing problems combine the optimization of product deliveries (or pickups) with inventory control at customer sites. We considered an application submitted by our industrial partner Exeo Solutions where one must construct the planning of single product pickups over time; each site accumulates stock at a deterministic rate; the stock is emptied on each visit. At the tactical planning stage considered here, our objective is to minimize a surrogate measure of routing cost while achieving some form of regional clustering by partitioning the sites between the vehicles. The fleet size is given but can potentially be reduced. Planning consists in assigning customers to vehicles in each time period, but the routing, i.e., the actual sequence in which vehicles visit customers, is considered as an “operational” decision. The planning is due to be repeated over the time horizon with constrained periodicity.

We have developed a truncated branch-and-price algorithm: periodic plans are generated for vehicles by solving a multiple choice knapsack subproblem; the global planning of customer visits is generated by solving a master program. This exact optimization approach is combined with rounding and local search heuristics to yield both primal solutions and dual bounds that allow us to estimate the deviation from optimality of our solution. We were confronted with the issue of symmetry in time that naturally arises in building a cyclic schedule (cyclic permutations along the time axis define alternative solutions). Central to our approach is a state-space relaxation idea that allows us to avoid this drawback: the symmetry in time is eliminated by modelling an average behavior. Our algorithm provides solutions with reasonable deviation from optimality for large scale problems (260 customer sites, 60 time periods, 10 vehicles) coming from industry.

Our mathematical model can potentially be used in sensitivity analysis, to see whether the given fleet size V can be reduced. Alternatively, one can iterate over different values of V and generate the Pareto optimality curve by solving our model (with a large ) for each feasible V. In an effort to build robust solutions, one might modify our model to minimize the maximum vehicle capacity that is used. Balancing the slack capacity in each vehicle task is another way to hedge against uncertainty. Further research would be (i) to perform a direct comparison with other constructive or meta-heuristics specially developed for our model, and (ii) to develop an operational solver using our tactical solution as a target.

Ruslan Sadykov, in collaboration with Prof. Philippe Baptiste from the Ecole Polytechnique (Paris), has worked on the problem of scheduling an airborne radar. This research has been done in the framework of a joint project between the Ecole Polytechnique and the DGA. Airborne radars are widely used to perform a large variety of tasks in a fighter aircraft. These tasks include, but are not limited to, searching, tracking, and identifying targets. Such tasks play a crucial role for the aircraft, and they are repeated in a “more or less” cyclic fashion. This defines a complex scheduling problem that has a strong impact on the quality of the radar's output and on the overall safety of the aircraft.

For this problem, three different Mixed Integer Programming formulations have been proposed. Two of the formulations are compact and can be solved directly by an MIP solver. The third formulation relies on a branch-and-price algorithm. Theoretical and experimental comparisons of the formulations have been reported. A dedicated solver has been implemented for this problem, and real-life instances of moderate size have been solved to optimality. This work will allow us to estimate the quality of heuristic algorithms for the problem. Only fast heuristic algorithms can be used in practice, due to tight resolution time restrictions. Research in this direction has already started.

Another line of research in progress concerns scheduling parallel jobs, i.e., jobs which can be executed on more than one processor at the same time. With the emergence of new production, communication, and parallel computing systems, the usual scheduling requirement that a job be executed on only one processor has become, in many cases, obsolete and unfounded. Therefore, parallel job scheduling is becoming more and more widespread.

In this work, we consider the NP-hard problem of scheduling a class of parallel jobs to minimize the total weighted completion time, or mean weighted flow time, criterion. For this problem, we have introduced an important dominance rule which can be used to reduce the search space while searching for an optimal solution.

We have continued our analysis of fundamental MIP models that incorporate uncertainty into lot-sizing problems. We exploit the structure of the stochastic formulations of these problems to derive polynomial time algorithms for them. To the best of our knowledge, these may be the first known exact algorithms for structured stochastic MIP models.

Based on these results, we have also defined extended formulations for special cases of these problems that satisfy certain cost conditions. These extended formulations are polynomial in size and (under the given cost conditions) yield integer optimal solutions; they are therefore the tightest compact formulations known for these kinds of problems.

A major result of graph theory is that the chromatic number of a perfect graph is computable in polynomial time (Grötschel, Lovász and Schrijver 1981). The circular chromatic number is a well-studied refinement of the chromatic number of a graph. Xuding Zhu noticed in 2000 that circular cliques are the relevant circular counterpart of cliques with respect to the circular chromatic number, thereby introducing circular-perfect graphs, a super-class of perfect graphs. It is unknown whether the circular chromatic number of a circular-perfect graph is computable in polynomial time. We proved that a characterization of circular-perfect graphs by forbidden induced subgraphs, analogous to the Strong Perfect Graph Theorem, is very unlikely.
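To fix ideas: a (k,d)-coloring assigns colors 0,…,k-1 so that the colors of adjacent vertices differ by at least d and at most k-d, and the circular chromatic number is the smallest ratio k/d over all (k,d)-colorings (it is known to be attained with k at most the number of vertices). A brute-force illustration on the 5-cycle, whose circular chromatic number is 5/2 while its chromatic number is 3:

```python
from itertools import product
from math import ceil

def circular_chromatic(n_vertices, edges):
    """Smallest k/d admitting a (k,d)-coloring, i.e. a coloring where
    adjacent colors satisfy d <= |c(u)-c(v)| <= k-d.  Pure brute force;
    the optimum is known to be attained with k <= n, so only small k
    are tried.  Returns the optimal pair (k, d)."""
    best = None
    for k in range(2, n_vertices + 1):
        for d in range(1, k // 2 + 1):
            if best is not None and k * best[1] >= d * best[0]:
                continue  # ratio k/d not better than current best
            for col in product(range(k), repeat=n_vertices):
                if all(d <= abs(col[u] - col[v]) <= k - d for u, v in edges):
                    best = (k, d)
                    break
    return best

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
k, d = circular_chromatic(5, c5)
print(k, d, k / d, ceil(k / d))  # the chromatic number is ceil(k/d)
```

For the 5-cycle this finds the (5,2)-coloring 0, 2, 4, 1, 3, giving the ratio 5/2 and hence chromatic number 3.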

Since the chromatic number of a graph is the integer ceiling of its circular chromatic number, polynomial-time computability of the circular chromatic number of circular-perfect graphs would imply Grötschel, Lovász and Schrijver's result. In 2005, Coulonges, Pêcher and Wagler introduced the intermediate class of strongly circular-perfect graphs: those circular-perfect graphs whose complements are also circular-perfect. For the triangle-free case, we managed to fully characterize these graphs, and gave a polynomial time algorithm to recognize them. Coulonges, Pêcher and Wagler also introduced a-perfect graphs, another super-class of perfect graphs. We bounded their imperfection ratio by 1.5.

We also exhibited new facets of the stable set polytope of claw-free graphs, and studied the automatic generation of clique family inequalities for stable set polytopes.

This year, we introduced the circular-clique polytope and used it to prove that the weighted circular-clique number, and thus the circular chromatic number, of a strongly circular-perfect graph which is also a-perfect is computable in polynomial time. We also proved that the circular chromatic number of a strongly circular-perfect graph is computable in polynomial time.

Unexpectedly, we also used the circular-clique polytope to prove that the circular stability number, and thus the stability number, of a fuzzy circular-interval graph is computable in polynomial time. These results were communicated at Kolkom'08 (Magdeburg, Germany).

With C. Joncour (PhD student), we also started investigating the orthogonal knapsack problem with the help of graph theory. Fekete and Schepers achieved a recent breakthrough in solving multi-dimensional orthogonal placement problems by using an efficient representation of all geometrically symmetric solutions through a so-called *packing class*, involving one *interval graph* per dimension (whose complement admits a transitive orientation, each such orientation of the edges corresponding to a specific placement of the forms). Though Fekete and Schepers' framework is very efficient, we have identified several weaknesses in their algorithms: the most obvious one is that they do not take advantage of the sophisticated data structures (MPQ-trees) that were introduced to design linear time recognition algorithms for interval graphs; they use instead a very basic characterization of interval graphs. We therefore think that significant improvements on this problem are possible. We have implemented a new graph theoretic characterization; preliminary results on standard benchmarks are satisfactory but have not been communicated yet.

A recent approach to modelling graph coloring consists in representing the color classes of a coloring as a collection of stars in the complementary graph. D. Cornaz and V. Jost showed that the chromatic number χ(G) of a graph could be obtained by computing the independence number α(D(G)) of some auxiliary graph D(G). Following this idea, we investigate the lower bound on χ(G) obtained by approximating α(D(G)) with Lovász' ϑ number.

The procedure for constructing D(G) indeed makes use of an order on the vertices, so as to break symmetries in the representations of color classes. Although the value of α(D(G)) does not depend on the ordering used for the auxiliary graph, different orderings typically yield different auxiliary graphs; we show that, when it comes to approximating α(D(G)), this results in different lower bounds on χ(G).

However, it appears that whatever the order is, this (SDP based) bound outperforms, sometimes dramatically, the classical SDP bound. Sometimes it even dominates the fractional chromatic number, which is known to be a theoretical limit for polynomial-time lower bounding of the chromatic number.

Moreover, we have designed heuristics, based on the dual variables of iteratively computed semidefinite programs, to generate an order on the vertices yielding a “good” bound on χ(G). Yet experience shows that it is often quicker to randomly generate a number of different orders and choose the best one. Furthermore, the choice of the formulation for the ϑ number appears relevant here, since the traditional one yields slightly inaccurate results.

A similar approach can be applied to the alternative coloring problem of minimizing the total weight of the heaviest vertex in each color class. If vertices are ordered by decreasing weights, then the problem is equivalent to finding a maximum weighted stable set in D(G), which can be approximated via a weighted form of Lovász' ϑ number. A column generation bound generalizing the fractional chromatic number was defined with a view to comparison with the SDP bound.

We have pursued our work on developing a branching scheme that is compatible with the column generation procedure and implies no structural modifications to the pricing problem. We have carried out a comprehensive experimental study that demonstrates the efficiency of what could otherwise have been seen as a purely theoretical scheme. Moreover, although the scheme was initially presented for binary integer programs, we have shown (in theory and in our computational experiments) how it extends to general mixed integer programs.

The decomposition principle can be exploited in greedy, local search, or rounding heuristics, even though some of these heuristics are not straightforward to implement in the context of dynamic column generation (variables are only known implicitly and cannot be bounded). The price coordination mechanism of the Dantzig-Wolfe approach brings a global view that may be lacking when local search or constructive heuristics are applied directly to the original problem formulation (these latter approaches are often qualified as myopic). The decomposition-based heuristics in the literature range from the simplest truncated exact approach to the latest meta-heuristic paradigm. We work at classifying these approaches. In summary, there are four types of methods.

(i) The so-called restricted master heuristic: the column generation formulation restricted to a subset of variables is solved as a MIP.

(ii) Greedy heuristics iteratively generate the columns that have the smallest ratio of reduced cost per unit of constraint satisfaction (based on dual price estimates), select some of them for the solution, and reiterate on the residual master problem.

(iii) Rounding heuristics iteratively select one or more columns of the LP solution to the master (among those whose values are close to integer), round their values, and re-optimize the residual master LP (exactly or approximately).

(iv) Local search heuristics are often based on removing a few columns from the current solution and re-optimizing the residual master IP (typically with one of the above heuristics).
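As an illustration of type (iii), the rounding loop can be sketched in a few lines of Python. The toy cutting-stock data, the selection rule (round up the useful column with the largest LP value, a simple proxy for "value close to integer"), and the mocked re-optimization step are our own illustrative assumptions, not the BaPCod implementation.

```python
def rounding_heuristic(columns, lp_values, demand):
    """columns: list of dicts mapping item -> units covered per column;
    lp_values: fractional values of those columns in the master LP;
    demand: item -> residual demand. Returns integer column multiplicities."""
    residual = dict(demand)
    solution = [0] * len(columns)
    while any(r > 0 for r in residual.values()):
        # Columns that still cover some unmet demand.
        useful = [j for j, col in enumerate(columns)
                  if any(residual.get(i, 0) > 0 for i in col)]
        if not useful:
            break  # infeasible residual: hand over to an exact method
        # Selection rule (a simple proxy for "value close to integer"):
        # round up the useful column with the largest LP value.
        j = max(useful, key=lambda j: lp_values[j])
        solution[j] += 1
        for item, qty in columns[j].items():
            residual[item] = max(0, residual.get(item, 0) - qty)
        # A real implementation would re-optimize the residual master LP
        # here, possibly pricing out new columns.
    return solution

# Toy cutting-stock instance: three patterns, demands A=4, B=3.
patterns = [{"A": 2}, {"A": 1, "B": 1}, {"B": 3}]
frac = [1.4, 0.6, 0.8]
print(rounding_heuristic(patterns, frac, {"A": 4, "B": 3}))  # → [2, 0, 1]
```

In a real branch-and-price code the residual master LP would be re-solved (with pricing) after each rounding step; the loop above only mimics that by updating the residual demand.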

Based on our application-specific experience with these techniques, we are extracting generic concepts to develop default primal heuristics within BaPCod.

Partially as a result of the development of this software, Mahdi Namazifar spent a summer internship at IBM's T.J. Watson Research Center in 2008 working on these and other ideas. We expect to integrate PMaP into COIN-OR (maintained by an IBM team at Watson) in the near future.

Most MIP heuristics that have been developed perform best on problems with binary variables. In , we propose methods for problems with *general* (i.e., not necessarily binary) integer variables. Called *randomized rounding* heuristics, these methods do much more than simply round a single fractional LP solution. They attempt to find feasible solutions by randomly walking within a specially constructed polyhedron, and then performing rounding operations on the points traversed. The polyhedron in question is expressed as the convex hull of some of the “interesting” extreme points of the LP relaxation of the original problem, where the extreme points chosen to be of interest have a high probability of lying in the region of the LP relaxation pointed to by the objective function. Preliminary computational results suggest that this heuristic approach may be the most effective primal heuristic known for certain classes of general MIP problems.
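The walk-and-round idea can be sketched as follows. This is a minimal illustration under our own assumptions (toy extreme points, a hand-written feasibility test, and a sum objective), not the implementation from the paper.

```python
import random

def randomized_rounding(extreme_points, feasible, trials=100, seed=0):
    """Sample random points in the convex hull of the given "interesting"
    LP extreme points, round them coordinatewise, and keep the best
    (smallest coordinate sum) rounded point that passes the feasibility test."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        # Random convex combination of the chosen extreme points ...
        w = [rng.random() for _ in extreme_points]
        s = sum(w)
        point = [sum(wi * p[k] for wi, p in zip(w, extreme_points)) / s
                 for k in range(len(extreme_points[0]))]
        # ... then round each (general integer) coordinate.
        cand = [round(x) for x in point]
        if feasible(cand):
            best = cand if best is None else min(best, cand, key=sum)
    return best

# Toy problem: integer x, y >= 0 with x + y >= 3; the "interesting"
# extreme points all lie on the facet x + y = 3.
pts = [[3.0, 0.0], [0.0, 3.0], [1.5, 1.5]]
sol = randomized_rounding(pts, lambda v: v[0] + v[1] >= 3 and min(v) >= 0)
print(sol)
```

Some rounded points fail the feasibility test (rounding can lose a unit on the binding constraint), which is why the method keeps walking and retains the best feasible candidate found.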

While our research on MINLP problems is just beginning, there is already evidence of potential for significant success. For example, we have been considering how to exploit the information generated by solvers as they make branching decisions (in particular, which variable to branch on) during the branch-and-bound process. This has allowed us to define a structured family of disjunctive cuts whose generation requires no additional effort beyond that necessary to perform classical strong branching. Even for MIPs, these ideas seem capable of significantly reducing the size of branch-and-bound trees; moreover, the ideas themselves are directly applicable to MINLPs, and we are currently investigating how best to apply them.

Another factor in defining effective branch-and-bound solvers for MINLPs is defining solvers that are capable of *re*-optimizing NLP problems efficiently. Classical interior point methods have well-known difficulties in performing warm re-starts; for this and other reasons, *active set* methods seem to have more potential for efficient re-optimization. In , we have defined a new active set method for linear programming and generalized it to quadratic programming problems. This algorithm has a number of desirable properties for these problems (including re-optimization capabilities), and we hope to generalize it further to other families of NLP problems.

One area in which there is enormous potential for improved decision support through the application of optimization methods is in the delivery of health care services. Throughout Europe and much of the rest of the world, it is becoming increasingly important to administer health care in such a way that 1) a high quality of care is maintained and ensured; 2) costs are kept as low as reasonably possible and waste is avoided.

One example of such problems is the allocation and scheduling of surgeries to and within operating rooms (ORs). In , we propose MIP formulations for such problems and discuss issues involved in solving them. Our methods were tested on historical data gathered from the renowned Mayo Clinic in Rochester, Minnesota, USA. These problems are difficult, as they involve both combinatorial components (which set of ORs to open, which surgeries to assign to which OR, etc.) and significant uncertainty (e.g., how long each surgery might take). For reasons of both quality and efficiency, it is critical to schedule surgeries in a way that avoids excessive overtime for the surgeons and their staffs; at the same time, it is also critical not to use many more ORs than are really necessary. Models and methods like those proposed in this paper are likely to grow in importance in the near future.

We have an ongoing contract with SNCF, “Innovation et Recherche”, in the context of the PhD thesis of L. Gely (with a CIFRE scholarship). In previous work, we produced timetables with the aim of maximizing the throughput (number of trains) that can be handled by a given network . In this project, we consider the problem of managing perturbations . Network managers must re-optimize train schedules in the event of a significant unforeseen event that translates into new constraints on the availability of resources. The control parameters are the speed of the trains, their routing, and their sequencing. The aim is to re-schedule trains so as to return as quickly as possible to the theoretical timetable and to restrict the consequences of the perturbation to a limited area. The question of formulation is again central to the approach to be developed here. The models of the literature are not satisfactory: continuous time formulations are of poor quality due to the presence of discrete decisions (re-sequencing or re-routing), while other standard models based on arc flows in a time-space graph blow up in size. Formulations in time-space graphs have therefore been limited to tackling single-line timetabling problems. We have developed a discrete time formulation that strikes a compromise between these two previous models. Based on various time and network aggregation strategies, we developed a 2-stage approach, solving the continuous time model with the precedences fixed according to a solution of the discrete time model.
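To make the discrete time viewpoint concrete, the following toy sketch (our own illustration, not the SNCF model) reschedules two trains on a discretized timeline by brute force: each train picks a departure slot no earlier than its release time, no two trains may occupy the same block section in the same time slot, and the objective is the total deviation from the planned timetable.

```python
from itertools import product

def reschedule(trains, horizon):
    """trains: list of (release_time, [block sections traversed, one per slot]).
    Returns the departure slots minimizing total deviation from plan."""
    best, best_dev = None, None
    for starts in product(range(horizon), repeat=len(trains)):
        occupancy = {}
        ok = True
        for s, (release, blocks) in zip(starts, trains):
            if s < release:  # a train cannot depart before its release time
                ok = False
                break
            for offset, block in enumerate(blocks):
                key = (block, s + offset)  # (block section, time slot)
                if key in occupancy:       # conflict: slot already taken
                    ok = False
                    break
                occupancy[key] = True
            if not ok:
                break
        if ok:
            dev = sum(s - r for s, (r, _) in zip(starts, trains))
            if best_dev is None or dev < best_dev:
                best, best_dev = starts, dev
    return best, best_dev

# Train 0 is perturbed (released at t=1); both trains need block "B1" first.
trains = [(1, ["B1", "B2"]), (0, ["B1", "B3"])]
print(reschedule(trains, horizon=4))  # → ((1, 0), 0)
```

The enumeration over `horizon ** len(trains)` slot assignments makes plain why a naive time-expanded model blows up in size, and why aggregation and a 2-stage scheme are needed at realistic scale.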

With Jeffrey T. Linderoth and James Luedtke of the University of Wisconsin-Madison, and Sven Leyffer and Todd R. Munson of Argonne National Laboratory (a research unit of the United States Department of Energy), Andrew Miller was awarded two grants in 2008 from United States government sources for the project “Next Generation Mixed Integer Nonlinear Programming Solvers: Structure, Search and Implementation".

The first grant (Department of Energy grant number DE-PS02-08ER08-13) began on August 15, 2008 and lasts through August 14, 2011. The second grant (grant number CCF 0830153 of the National Science Foundation) will begin on January 1, 2009, and continue through December 31, 2011.

Andrew Miller served on the program committee of MIP 08, an internationally recognized annual workshop in mixed integer programming that was held from August 4-7, 2008, at Columbia University in New York City, USA. Along with the Integer Programming and Combinatorial Optimization conference and the ADONET workshop on Combinatorial Optimization held each winter in Aussois, France, this meeting is one of the three most prestigious annual conferences in the overall field of mixed integer programming. The program committee is responsible for selecting and inviting all of the speakers, as well as for arranging the date and place of the workshop, raising funds, etc.

Pierre Pesneau is an active member of the organizing committee of the working group on Polyhedra and Combinatorial Optimization affiliated with the French operational research society (ROADEF) and the operations research group of the CNRS. The purpose of this working group is to promote polyhedral approaches within combinatorial optimization research. To this end, the group organizes the Polyhedra and Combinatorial Optimization Days every year. These days gather junior and senior researchers who wish to discuss or discover polyhedral optimization, and they are preceded by a doctoral school.

The next edition of this workshop will take place at the IMB (Institut de Mathématiques de Bordeaux) in June 2009 and will be organized by the RealOpt team.

We also regularly organize day-long workshops on more specific individual subjects. Such workshops are scheduled around a tutorial on the subject and several talks; some slots are also dedicated to open questions. For example, in October 2008, Pierre Pesneau organized such a workshop on the subject of approximation algorithms and polyhedra.

A. Raspaud and A. Pêcher are chairs of the upcoming international conference Eurocomb 2009 to be held in September, at the University of Bordeaux 1.

*C. Joncour*, A. Pêcher, P. Pesneau, F. Vanderbeck — “Mathematical programming formulations for the orthogonal 2D knapsack problem” — 9ème Congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision. February 25–27, 2008, Clermont-Ferrand, France.

*Ph. Meurdesoif* — “Une nouvelle relaxation quadratique pour la coloration de graphes” — 9ème Congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision. February 25–27, 2008, Clermont-Ferrand, France.

Ph. Baptiste, *R. Sadykov* — “Formulations PLNE pour l'ordonnancement des chaînes sur une machine” — 9ème Congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision. February 25–27, 2008, Clermont-Ferrand, France.

*S. Michel*, N. Perrot, F. Vanderbeck — “Knapsack Problems with setups” — 7ème Conférence Internationale de Modélisation et Simulation. March 31 – April 2, 2008, Paris, France.

Y. Hendel, *R. Sadykov* — “Timing problem for scheduling an airborne radar” — 11th International Workshop on Project Management and Scheduling. April 28–30, 2008, Istanbul, Turkey.

B. Jaumard, *F. Vanderbeck*, B. Vignac — “A Nested Decomposition Approach to an Optical Network Design Problem” — 9th INFORMS Telecommunications Conference. March 2008, Washington, USA.

*B. Vignac* — “Traffic Grooming in WDM Mesh Optical Networks: Mathematical Formulations” — Workshop on Optimization of Optical Networks OON 2008. May 7–8, 2008, Montreal, Canada.

*F. Vanderbeck* — “Towards a generic branch-and-price solver: progress report” — plenary talks at CORS/Optimization Days, May 2008, Quebec, and at ColGen 2008, June 2008, Aussois.

*S. Michel*, N. Perrot, F. Vanderbeck — “Heuristiques basées sur la génération de colonnes” — JPOC5 (Journées Polyèdres et Optimisation Combinatoire). June 4–6, 2008, Rouen, France.

*A. Pêcher* — “Polytope des cliques circulaires et calcul du nombre d'indépendance des graphes quasi-adjoints” — JPOC5 (Journées Polyèdres et Optimisation Combinatoire). June 4–6, 2008, Rouen, France.

P. Meurdesoif, *P. Pesneau*, F. Vanderbeck — “A Branch-and-Cut algorithm to optimize sensor installation in a network” — Graph and Optimization Meeting (GOM2008). August 24–27, 2008, Saint-Maximin, France.

*L. Gély*, G. Dessagne, P. Pesneau, F. Vanderbeck — “A multi scalable model based on a connexity graph representation” — 11th International Conference on Computer
Design and Operation in the Railway and Other Transit Systems COMPRAIL'08. September 15–17, 2008, Toledo, Spain.

*A. Pêcher* — “Circular-clique polytopes and circular-perfect graphs” — KolKom08 (Kolloquium über Kombinatorik). November 14–15, 2008, Magdeburg, Germany.

*A. Pêcher* — “Sur le polytope des cliques circulaires” — Journée H. Thuillier. 2008, Orléans, France.

A. Miller and F. Vanderbeck were invited to participate in the workshop on Combinatorial Optimization hosted by ADONET at Aussois in January 2008. This workshop included a special celebration of “50 Years of Integer Programming” and was attended by many of the most well-known people in the field.

A. Pêcher was invited to give a plenary talk at the Graph Coloring workshop held on January 26–27, 2008 in Kaohsiung, Taiwan.

A. Miller was invited to give a presentation by the Department of Industrial Engineering at North Carolina State, USA, in February 2008.

F. Vanderbeck was invited as a plenary session speaker for the annual meeting of the Canadian Operational Research Society (CORS) in May 2008.

F. Vanderbeck was invited to give a seminar at “l'Institut de Mathématiques de Toulouse”, in May 2008.

P. Pesneau spent 4 days in Paris in June by invitation of A. Ridha Mahjoub (University Paris Dauphine) within the framework of their collaboration involving also Luis Gouveia (University of Lisbon).

F. Vanderbeck was invited to the workshop on column generation (ColGen 2008), organized by the University of Montreal and held in Aussois in June 2008.

F. Vanderbeck was invited to give a seminar at the Center for Operations Research and Econometrics (CORE), Université Catholique de Louvain (UCL), Belgium, in September 2008.

A. Miller was invited to participate in the Workshop on Mixed Integer Nonlinear Programming in November 2008. This workshop was part of the Hot Topics series hosted by the Institute for Mathematics and its Applications.

Several international and national colleagues visited us (short visits for scientific exchanges and seminar presentations):

Jose Neto, from the University of Clermont, came in April to present his work on approximation methods for quadratic programs.

Leo Liberti, Institut Polytechnique, came in October to discuss with us his automatic software tools for reformulating mathematical programs and ways to automatically generate symmetry-breaking constraints.

Ted Ralphs, from Lehigh University in Bethlehem, Pennsylvania, USA, visited our team from December 15 to 18. During his visit he discussed with us topics of mutual interest such as computational MIP and software codes for decomposition algorithms.

Li-Da Tong, from the Department of Applied Mathematics (Kaohsiung, Taiwan), came in November to pursue our collaboration with Xuding Zhu's team.

Cédric Joncour started in September 2007. His doctoral study is on 2-D packing problems. Advisors: A. Pêcher, P. Pesneau, F. Vanderbeck.

Mahdi Namazifar began his doctoral work in August 2006. His thesis will cover various theoretical and computational aspects of MIP and MINLP. In 2008 he had one paper published in competitive conference proceedings ( ; this paper was one of only 22 short papers accepted, out of 61 submissions), and is currently working on several others. Advisors: A.J. Miller, M.C. Ferris, and J.T. Linderoth.

Benoit Vignac (Advisors: B. Jaumard, G. Laporte, F. Vanderbeck) pursues his doctoral research.

Laurent Gely (Advisors: P. Pesneau, F. Vanderbeck) pursues his doctoral research.

Each member of the team is quite involved in teaching in the thematic specialties of the project, including in the research track of the Masters in applied mathematics or computer science. Moreover, we are heavily involved in the organization of the curriculum:

Arnaud Pêcher was until recently the co-head of studies for the Master of computer science applied to management.

Philippe Meurdesoif is the project organizer for the operations management specialty of the Master of Applied Mathematics, Statistics and Econometrics.

Pierre Pesneau is head of the professional curriculum of the operations management specialty.

Francois Vanderbeck is head of the Master of Applied Mathematics, Statistics and Econometrics.

External collaborators of the group are also quite involved locally. In particular, E. Sopena is head of the doctoral school in Mathematics and Computer Science.

Thanh Tuan Nguyen, a student from IFI (Vietnam), completed his five-month Master's internship under the supervision of Sophie Michel and François Vanderbeck. He worked on the vehicle routing problem with split deliveries and developed the application within BaPCod. He compared the original compact formulation with the basic column generation formulation on small instances.