REALOPT is an INRIA Team joint with University of Bordeaux (UB1) and CNRS (IMB, UMR 5251 and LaBRI, UMR 5800)

Quantitative modeling is routinely used in both industry
and administration to design and operate transportation,
distribution, or production systems. Optimization concerns
every stage of the decision-making process: investment
budgeting, long term planning, the management of scarce
resources, or the planning of day-to-day operations. In many
optimization problems that arise in decision support
applications the most important decisions (control variables)
are discrete in nature: such as on/off decision to buy, to
invest, to hire, to send a vehicle, to allocate resources, to
decide on precedence in operation planning, or to install a
connection in network design. Such
*combinatorial optimization*problems can be modeled as
linear or nonlinear programs with integer decision variables
and extra variables to deal with continuous adjustments. The
most widely used modeling tool consists of linear
inequalities with a mix of integer and continuous variables,
so-called Mixed Integer Programs (MIP), which already allow a
fair description of reality and are also well-suited for
global optimization. The solution of such models is
essentially based on enumeration techniques and is
notoriously difficult given the huge size of the solution
space. Commercial solvers have made significant progress but
remain quickly overwhelmed beyond a certain problem size. A
key to further progress is the development of better problem
formulations that provide strong continuous approximations
and hence help to prune the enumerative solution scheme.
Effective solution schemes are a complex blend of techniques:
cutting planes to better approximate the convex hull of
feasible (integer) solutions, Lagrangian decomposition
methods to produce powerful relaxations, constraint
programming to actively reduce the solution domain through
logical implications, heuristics and meta-heuristics (greedy,
local improvement, or randomized partial search procedures)
to produce good candidate solutions, and branch-and-bound or
dynamic programming enumeration schemes to find a global
optimum. The real challenge is to integrate the most
efficient methods in one global system so as to prune what is
essentially an enumeration based solution technique.

Building on complementary expertise, our team's overall goals are threefold:

To design tight formulations for specific problems and generic models, relying on delayed cut and column generation, decomposition, extended formulations and projection tools for linear and nonlinear mixed integer programming models. More broadly, to contribute to theoretical and methodological developments of exact approaches in combinatorial optimization, while extending the scope of applications (in particular to encompass nonlinear models).

To demonstrate the strength of cooperation between complementary exact mathematical programming techniques, constraint programming, combinatorial algorithms and graph theory. To develop “efficient” algorithms for specific mathematical models and to tackle large-scale real-life applications, providing provably good approximate solutions by hybridization of different exact methods and heuristics.

To provide prototypes of specific model solvers and generic software tools that build on our research developments, writing proof-of-concept code, while making our research findings available to internal and external users.

In 2010, the team's achievements contributed to a better
visibility of our specialty (reformulation and decomposition
techniques for mathematical optimization as well as work at
the interface with graph theory) in the community. We
published benchmark papers on our methodologies in the best
journals of the field (Mathematical Programming, Operations
Research, Discrete Mathematics, Journal of Graph Theory) and
conferences (SODA); we contributed a chapter to the reference
book "50 Years of Integer Programming"; and we co-organized
two international workshops (European Workshop on Mixed
Integer Nonlinear Programming,
https://

Our key results are: advances in reformulations (obtaining the convex hull of multi-linear problems , exploiting principles underlying extended reformulations to develop new polynomial algorithms as in or approximation algorithms as in , and deriving extended formulations for variants of the traveling salesman problem , ); progress in methodologies based on column generation (a generic branching scheme , general purpose heuristics based on exact optimization tools , analysis of the method's extension to simultaneous row-and-column generation ); results in graph theory (a closed formula for the Lovász number ); and new benchmarks in solving combinatorial problems (health care planning , bin packing with conflicts , orthogonal packing , ).

We have also progressed significantly on software development, transfer and collaborations with academic and industrial partners: we completed an ADT on BaPCod, our software platform for branch-and-price; we started a collaboration with Brazil (hoping to create an associated team); and we developed a new collaboration with EURODECISION, a consultancy company in decision support. Our good performance in the qualification phase of the ROADEF/EURO challenge in Operations Research has led to launching a collaboration with EDF (in tandem with DOLPHIN) on maintenance planning for power production plants.

Our accomplishments and scientific momentum have been praised by the scientific committee that evaluated our project this year. Last but not least, three of our doctoral students graduated in 2010 (L. Gely, C. Joncour, and B. Vignac).

*Combinatorial optimization* is the field of discrete
optimization problems. In many applications, the most
important decisions (control variables) are binary (on/off
decisions) or integer (indivisible quantities). Extra
variables can represent continuous adjustments or amounts.
This results in models known as
*mixed integer programs* (MIP), where the relationships
between variables and input parameters are expressed as
linear constraints and the goal is defined as a linear
objective function. MIPs are notoriously difficult to solve:
good quality estimations of the optimal value (bounds) are
required to prune enumeration-based global-optimization
algorithms whose complexity is exponential. The standard
approach to solving an MIP is the so-called
*branch-and-bound algorithm*:
(i) one solves the linear programming (LP) relaxation using
the simplex method;
(ii) if the LP solution is not integer, one adds a
disjunctive constraint on a fractional component (rounding it
up or down) that defines two sub-problems;
(iii) one applies this procedure recursively, thus defining a
binary enumeration tree that can be pruned by comparing the
local LP bound to the best known integer solution.
Commercial MIP solvers (such as IBM Ilog-CPLEX or
FICO/Dash-Optimization's Xpress-mp) are essentially based on
branch-and-bound. They have made tremendous progress over the
last decade (with a speedup by a factor of 60). But extending
their capabilities remains a continuous challenge: given the
combinatorial explosion inherent to enumerative solution
techniques, they remain quickly overwhelmed beyond a certain
problem size or complexity.
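The pruning principle of steps (i)-(iii) can be illustrated on a toy 0/1 knapsack problem, for which the LP relaxation happens to be solvable greedily (Dantzig's fractional bound) rather than by the simplex method. This is a minimal sketch of the idea, not the machinery of a commercial solver:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for the 0/1 knapsack: the LP-relaxation bound is
    computed greedily by filling the residual capacity in decreasing
    value/weight ratio, taking the last item fractionally."""
    order = sorted(range(len(values)),
                   key=lambda j: values[j] / weights[j], reverse=True)
    best = 0  # incumbent: best integer solution value found so far

    def lp_bound(k, cap, val):
        # Dantzig bound on the sub-problem where items order[:k] are fixed
        for j in order[k:]:
            if weights[j] <= cap:
                cap -= weights[j]
                val += values[j]
            else:
                return val + values[j] * cap / weights[j]  # fractional item
        return val

    def branch(k, cap, val):
        nonlocal best
        if k == len(order):          # leaf: all variables fixed
            best = max(best, val)
            return
        if lp_bound(k, cap, val) <= best:
            return                   # prune: LP bound no better than incumbent
        j = order[k]
        if weights[j] <= cap:        # branch x_j = 1
            branch(k + 1, cap - weights[j], val + values[j])
        branch(k + 1, cap, val)      # branch x_j = 0

    branch(0, capacity, 0)
    return best
```

For instance, `knapsack_bb([60, 100, 120], [10, 20, 30], 50)` returns 220 (items 2 and 3); sub-trees whose greedy LP bound does not exceed the incumbent are never explored.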

Progress can be expected from the development of tighter formulations. Central to our field are the characterization of polyhedra defining or approximating the solution set and combinatorial algorithms to identify “efficiently” a minimum cost solution or separate an unfeasible point. With properly chosen formulations, exact optimization tools can be competitive with other methods (such as meta-heuristics) in constructing good approximate solutions within limited computational time, and of course they have the important advantage of being able to provide a performance guarantee through the relaxation bounds. Decomposition techniques implicitly lead to better problem formulations as well, while constraint propagation offers tools from artificial intelligence to further improve formulations through intensive preprocessing. A new trend is the study of nonlinear models (nonlinearities are inherent in some engineering, economic and scientific applications) where solution techniques build on the best MIP approaches while demanding much more than simple extensions. Robust optimization is another area where recent progress has been made: the aim is to produce optimized solutions that remain of good quality even if the problem data has stochastic variations. In all cases, the study of specific models and challenging industrial applications is quite relevant because developments made in a specific context can become generic tools over time and find their way into commercial software.

Our project brings together researchers with expertise in mathematical programming (polyhedral approaches, Dantzig-Wolfe decomposition, non-linear integer programming, stochastic programming, and dynamic programming), graph theory (characterization of graph properties, combinatorial algorithms) and constraint programming, with the aim of producing better quality formulations and developing new methods to exploit these formulations. These new results are then applied to find high quality solutions for practical combinatorial problems such as routing, network design, planning, scheduling, cutting and packing problems.

Adding valid inequalities to the polyhedral description of
an MIP allows one to improve the resulting LP bound and hence
to better prune the enumeration tree. In a cutting plane
procedure, one attempts to identify valid inequalities that
are violated by the LP solution of the current formulation
and adds them to the formulation. This can be done at each
node of the branch-and-bound tree, giving rise to a so-called
*branch-and-cut algorithm*. The goal is to reduce the
resolution of an integer program to that of a linear program
by deriving a linear description of the convex hull of the
feasible solutions. Polyhedral theory tells us that if
X = {x ∈ Z^n × R^p : Ax ≤ b} is the feasible set of a mixed
integer program with constraint matrix A, then
conv(X) is a polyhedron that can be
described in terms of linear constraints, i.e. it writes as
{x : Cx ≤ d} for some matrix C with m' rows, although the
dimension m' is typically quite large. A fundamental result
in this field is the equivalence of complexity between
solving the combinatorial optimization problem
min{cx : x ∈ X} and solving the
*separation problem* over the associated polyhedron
conv(X): given a point x*, find a linear inequality
πx ≤ π_0 satisfied by all points in
conv(X) but violated by x*. Hence, for NP-hard problems, one
cannot hope to get a compact description of
conv(X) nor a polynomial time exact
separation routine. Polyhedral studies focus on identifying
some of the inequalities that are involved in the polyhedral
description of conv(X) and on deriving efficient
*separation procedures* (cutting plane generation). Only
a subset of the inequalities Cx ≤ d can already offer a good
approximation that, combined with a branch-and-bound
enumeration technique, permits one to solve the problem.
Using a *cutting plane algorithm* at each node of the
branch-and-bound tree gives rise to the algorithm called
*branch-and-cut*.
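As a concrete illustration of a separation procedure, the following is a sketch of the textbook greedy heuristic for separating cover inequalities over a single knapsack constraint (a standard class of cuts; this is an illustrative example, not a routine from the team's codes):

```python
def separate_cover(a, b, x_star, eps=1e-9):
    """Heuristic separation of a cover inequality for sum_j a[j]*x[j] <= b.

    A cover C satisfies sum_{j in C} a[j] > b, which yields the valid cut
    sum_{j in C} x[j] <= |C| - 1.  The heuristic greedily adds the items
    with the largest fractional values x_star[j] until the weights exceed
    b, then checks whether the resulting inequality cuts off x_star."""
    order = sorted(range(len(a)), key=lambda j: x_star[j], reverse=True)
    cover, weight = [], 0
    for j in order:
        cover.append(j)
        weight += a[j]
        if weight > b:
            break
    if weight <= b:
        return None  # no cover exists: the constraint cannot be exceeded
    violation = sum(x_star[j] for j in cover) - (len(cover) - 1)
    return (cover, violation) if violation > eps else None
```

For example, with the constraint 3x_1 + 3x_2 + 3x_3 ≤ 5 and the fractional point x* = (0.9, 0.9, 0.1), the heuristic returns the cover {1, 2} and the cut x_1 + x_2 ≤ 1, violated by 0.8.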

A hierarchical approach to tackle complex combinatorial
problems consists in considering separately different
substructures (sub-problems). If one is able to implement
relatively efficient optimization on the substructures, this
can be exploited to reformulate the global problem as a
selection of specific sub-problem solutions that together
form a global solution. If the sub-problems correspond to
subsets of constraints in the MIP formulation, this leads to
Dantzig-Wolfe decomposition. If they correspond to isolating
a subset of decision variables, this leads to Benders'
decomposition. Both lead to extended formulations of the
problem with either a huge number of variables or of
constraints. The Dantzig-Wolfe approach requires specific
algorithmic techniques to generate sub-problem solutions and
associated global decision variables dynamically in the
course of the optimization. This procedure is known as
*column generation*, while its combination with
branch-and-bound enumeration is called
*branch-and-price*. Alternatively, in Benders' approach,
when dealing with exponentially many constraints in the
reformulation, the
*cutting plane procedures* defined in the previous
section prove to be powerful. When optimization on a
substructure is (relatively) easy, there often exists a tight
reformulation of this substructure, typically in an extended
variable space. This gives rise to a powerful reformulation
of the global problem, although it might be impractical given
its size (typically pseudo-polynomial). It can then be
possible to project (part of) the extended formulation onto a
smaller dimensional space, if not the original variable
space, to bring polyhedral insight (cuts derived through
polyhedral studies can often be recovered through such
projections).
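For instance, in Gilmore-Gomory column generation for the cutting stock problem, the pricing sub-problem that generates new columns is a knapsack over the roll length. A minimal dynamic programming sketch, assuming integer widths (illustrative only; the master LP and branching are omitted):

```python
def price_pattern(widths, duals, roll_length):
    """Pricing sub-problem of Gilmore-Gomory column generation for cutting
    stock: find the cutting pattern of maximum total dual value with an
    unbounded-knapsack dynamic program over the roll length."""
    best = [0.0] * (roll_length + 1)  # best[c] = max dual value within capacity c
    take = [-1] * (roll_length + 1)   # last piece cut at capacity c (-1: none)
    for c in range(1, roll_length + 1):
        best[c] = best[c - 1]
        for j, w in enumerate(widths):
            if w <= c and best[c - w] + duals[j] > best[c]:
                best[c] = best[c - w] + duals[j]
                take[c] = j
    # reconstruct the pattern (number of pieces of each width)
    pattern, c = [0] * len(widths), roll_length
    while c > 0:
        if take[c] == -1:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= widths[take[c]]
    return best[roll_length], pattern
```

A generated pattern enters the restricted master program when its total dual value exceeds the cost of a column (here 1, for one roll), i.e. when it has negative reduced cost; otherwise the column generation procedure stops with an optimal master LP.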

Constraint Programming focuses on iteratively reducing the variable domains (sets of feasible values) by applying logical and problem-specific operators. These operators propagate to selected variables the restrictions implied by the domains of the other variables, through the relations between variables defined by the constraints of the problem. Combined with enumeration, this gives rise to exact optimization algorithms. A CP approach is particularly effective for tightly constrained problems, feasibility problems and min-max problems (minimizing the maximum of several variable values). Mixed Integer Programming (MIP), on the other hand, is effective for loosely constrained problems and for problems with an objective function defined as the weighted sum of variables. Many problems belong to the intersection of these two classes. For example, some scheduling and timetabling problems are tightly constrained and have a sum-type objective. For such problems, it is reasonable to use algorithms that exploit the complementary strengths of Constraint Programming and Mixed Integer Programming.
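As a minimal illustration of domain reduction, the following sketch propagates interval bounds through a single constraint x + y = z until a fixed point. Real CP engines handle many constraint types and trigger propagation incrementally; this toy version is illustrative only:

```python
def propagate_sum(dx, dy, dz):
    """Bounds propagation for the constraint x + y = z on interval domains.
    Each domain is a (lo, hi) pair; all three are tightened until a fixed
    point is reached.  Returns the reduced domains, or None if some domain
    becomes empty (the constraint is infeasible)."""
    changed = True
    while changed:
        changed = False
        (xl, xh), (yl, yh), (zl, zh) = dx, dy, dz
        new = [(max(xl, zl - yh), min(xh, zh - yl)),   # x must lie in z - y
               (max(yl, zl - xh), min(yh, zh - xl)),   # y must lie in z - x
               (max(zl, xl + yl), min(zh, xh + yh))]   # z must lie in x + y
        if new != [dx, dy, dz]:
            dx, dy, dz = new
            changed = True
        if dx[0] > dx[1] or dy[0] > dy[1] or dz[0] > dz[1]:
            return None  # empty domain: infeasibility detected
    return dx, dy, dz
```

For example, with x, y ∈ [0, 10] and z ∈ [12, 14], propagation tightens x and y to [2, 10] without any enumeration.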

Many engineering, management, and scientific applications involve not only discrete decisions, but also nonlinear relationships that significantly affect the feasibility and optimality of solutions. MINLP problems combine the difficulties of MIP with the challenges of handling nonlinear functions. MINLP is one of the most flexible modeling paradigms available. However, solving such models is much more challenging: available software is not nearly as effective as standard solvers for linear MIPs. The most powerful algorithms combine sophisticated methods that maintain outer linear programming approximations or convex relaxations with branch-and-bound enumeration; hence, the role of strong convex reformulations is crucial. Results developed for structured submodels are essential building blocks. Preprocessing and bound reduction (domain reduction logic similar to that used in CP) are quite important too. Finally, decomposition methods also make it possible to develop tight outer approximations.

Many fundamental combinatorial optimization problems can
be modeled as the search for a specific structure in a graph.
For example, ensuring connectivity in a network amounts to
building a
*tree* that spans all the nodes. Inquiring about its
resistance to failure amounts to searching for a minimum
cardinality
*cut* that partitions the graph. Selecting disjoint pairs
of objects is represented by a so-called
*matching*. Disjunctive choices can be modeled by edges
in a so-called
*conflict graph* where one searches for
*stable sets* – a set of nodes that are not incident to
one another. Polyhedral combinatorics is the study of
combinatorial algorithms involving polyhedral considerations.
Not only does it lead to efficient algorithms, but also,
conversely, efficient algorithms often imply polyhedral
characterizations and related min-max relations. Developing
the polyhedral properties of a fundamental problem will
typically provide us with more interesting inequalities, well
suited for a branch-and-cut algorithm applied to more general
problems. Furthermore, one can use the fundamental problems
as building blocks to decompose the more general problem
at hand. For problems that lend themselves naturally to a
graph formulation, graph theory and in particular graph
decomposition theorems might help.
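For instance, the spanning tree structure mentioned above can be computed by Kruskal's classical greedy algorithm, sketched here with a union-find data structure (a textbook illustration, not team code):

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree by Kruskal's algorithm: scan edges by
    increasing weight and keep those joining two distinct components,
    tracked with a union-find structure.  Edges are (weight, u, v)
    triples over nodes 0..n-1."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    tree, cost = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:           # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
            cost += w
    return cost, tree
```

The greedy correctness of this algorithm is itself a polyhedral statement: it optimizes exactly over the spanning tree polytope, a classical example of the interplay between combinatorial algorithms and polyhedral characterizations.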

Our group has tackled applications in logistics, transportation and routing , , , , in production planning , and inventory control , , in network design and traffic routing , , , , , , , , , in cutting and placement problems , , , , , , and in scheduling , .

We are actively working on problems arising in network topology design, implementing a survivability condition of the form “at least two paths link each pair of terminals”. We have extended polyhedral approaches to problem variants with bounded length requirements and re-routing restrictions . Associated with network design is the question of traffic routing in the network: one needs to check that the network capacity suffices to carry the demand for traffic. The assignment of traffic also implies the installation of specific hardware at transient or terminal nodes.

To accommodate the increase of traffic in telecommunication networks, today's optical networks use grooming and wavelength division multiplexing technologies. Packing multiple requests together in the same optical stream requires converting the signal to the electrical domain at each aggregation or disaggregation of traffic, at an origin, a destination or a bifurcation node. Traffic grooming and routing decisions, along with wavelength assignments, must be optimized to reduce opto-electronic system installation cost. We developed and compared several decomposition approaches , to deal with backbone optical networks with relatively few nodes (around 20) but thousands of requests, for which traditional multi-commodity network flow approaches are completely overwhelmed. We also studied the impact of imposing a restriction on the number of optical hops in any request route . We also developed a branch-and-cut approach to a problem that consists in placing sensors on the links of a network for a minimum cost .

We studied several time dependent formulations for the unit demand vehicle routing problem , . We gave new bounding flow inequalities for a single commodity flow formulation of the problem. We described their impact by projecting them onto other sets of variables, such as variables issued from the Picard and Queyranne formulation or the natural set of design variables. Some inequalities obtained by projection are facet defining for the polytope associated with the problem. We are now running more numerical experiments in order to validate in practice the efficiency of our theoretical results.

We developed a branch-and-price algorithm for the Bin Packing Problem with Conflicts that improves on other approaches available in the literature . The algorithm uses our methodological advances such as the generic branching rule for branch-and-price and the column based heuristic. One of the ingredients that contributes to the success of our method is the fast algorithms we developed for solving the sub-problem, which is the Knapsack Problem with Conflicts. Two variants of the sub-problem have been considered: with interval and with arbitrary conflict graphs. The paper presenting this work is being finalized.

We have designed a new algorithm for vertex packing
(equivalently stable set) in claw-free graphs
. Previously the best known algorithm for this problem had a
running time of O(n^6) (with n the number of vertices in the
graph) while our new algorithm runs in O(n^3).

We studied a variant of the knapsack problem encountered in inventory routing problems : we faced a multiple-class integer knapsack problem with setups (items are partitioned into classes whose use implies a setup cost and an associated capacity consumption). We showed the extent to which classical results for the knapsack problem can be generalized to this variant with setups, and we developed a specialized branch-and-bound algorithm.

We studied the orthogonal knapsack problem with the help
of graph theory , , . Fekete and Schepers proposed to
model multi-dimensional orthogonal placement problems by
using an efficient representation of all geometrically
symmetric solutions by a so-called
*packing class* involving one
*interval graph* for each dimension. Though Fekete and
Schepers' framework is very efficient, we have
identified several weaknesses in their algorithms: the most
obvious one is that they do not take advantage of the
different possibilities to represent interval graphs. We
proposed to represent these graphs by matrices with
consecutive ones on each row. We proposed a branch-and-bound
algorithm for the 2D knapsack problem that uses our 2D
packing feasibility check.

Inventory routing problems combine the optimization of product deliveries (or pickups) with inventory control at customer sites. We considered an industrial application where one must construct the planning of single product pickups over time; each site accumulates stock at a deterministic rate; the stock is emptied on each visit. We have developed a truncated branch-and-price algorithm: periodic plans are generated for vehicles by solving a multiple choice knapsack sub-problem; the global planning of customer visits is generated by solving a master program. Confronted with the issue of symmetry in time, we used a state-space relaxation idea. Our algorithm provides solutions with reasonable deviation from optimality for large scale problems (260 customer sites, 60 time periods, 10 vehicles) coming from industry . We previously developed approximate solutions to a related problem combining vehicle routing and planning over a fixed time horizon (solving instances involving up to 6000 pick-ups and deliveries to plan over a twenty day time horizon, with specific requirements on the frequency of visits to customers) .

We participated in a project on airborne radar scheduling. For this problem, we developed fast heuristics , methods to obtain dual bounds and exact algorithms . Substantial research has been done on machine scheduling problems. A new compact MIP formulation was proposed for a large class of these problems . Approximation algorithms with an absolute error guarantee were presented for the NP-hard problem of minimizing the maximum lateness on a single machine . An exact decomposition algorithm was developed for the NP-hard problem of maximizing the weighted number of late jobs on a single machine . A dominant class of schedules for malleable parallel jobs was discovered for the NP-hard problem of minimizing the total weighted completion time . We proved that a special case of the scheduling problem at cross-docking terminals, minimizing the storage cost, is polynomially solvable . Finally, we participated in writing an invited survey in French on solution approaches for machine scheduling problems in general .

Another application area in which we have successfully developed MIP approaches is tactical production and supply chain planning. In , we proposed a simple heuristic for challenging multi-echelon problems that makes effective use of a standard MIP solver. contains a detailed investigation of what makes solving the MIP formulations of such problems challenging; it provides a survey of the known methods for strengthening formulations for these applications, and it also pinpoints the specific substructure that seems to cause the bottleneck in solving these models. Finally, the results of provide demonstrably stronger formulations for some problem classes than any previously proposed.

We have been developing
**robust optimization** models and methods to deal with a
number of applications like the above in which uncertainty is
involved. In , , we analyzed fundamental MIP
models that incorporate uncertainty, and we exploited the
structure of the stochastic formulation of the problems in
order to derive algorithms and strong formulations for these
and related problems. These results appear to be the first of
their kind for structured stochastic MIP models. In addition,
we have engaged in successful research to apply such concepts
to health care logistics
. We considered train timetabling
problems and their re-optimization after a perturbation in
the network , . The question of formulation is
central. Models of the literature are not satisfactory:
continuous time formulations are of poor quality due to the
presence of discrete decisions (re-sequencing or re-routing);
arc flow formulations in a time-space graph blow up in size
(they can only handle a single line timetabling problem). We
have developed a discrete time formulation that strikes a
compromise between these two previous models. Based on
various time and network aggregation strategies, we developed
a two-stage approach, solving the continuous time model after
having fixed the precedences based on a solution to the
discrete time model.

We develop the prototype of a generic branch-and-price
code, named
*BaPCod*, for solving mixed integer programs by column
generation. Existing software tools (Minto, Abacus, Symphony,
BCP, G12) are limited to offering
“*tool-boxes*” to ease the implementation of algorithms
combining branch-and-price-and-cut. With these, the user must
implement three basic features for his application: the
reformulation, the setting-up of the column generation
procedure, and the branching scheme. Other available codes
(Gencol, Maestro) that offer more by default were developed
for a specific class of applications (such as the vehicle
routing problem and its variants). Our prototype is a
“*black-box*” implementation that does not require user
input and is not application specific. Its features are:

(i) the automation of the Dantzig-Wolfe reformulation
process (the user defines a mixed integer programming problem
in terms of variables and constraints, identifies
sub-problems, and can provide the associated solvers if
available, but he does not need to explicitly define the
reformulation, the explicit form of the columns, their
reduced cost, or the Lagrangian bounds);

(ii) a default column generation procedure with standard
initialization and stabilization (it may offer a selection of
solvers for the master) – the issue of stabilization is
discussed in ;

(iii) a default branching scheme – recent progress has been
made on the issue of a generic branching scheme in ;

(iv) default primal heuristics specially developed for use
in a decomposition framework .

The prototype software platform represents about 35000 lines of C++ code. It was/is used as the background solver for 4 PhD theses. It also served as the framework for our comparative study in an INRIA collaborative research action . It has been experimented with by two of our industrial partners, Exeo Solutions (Bayonne), on an inventory routing problem, and Orange Lab (France Telecom, Paris), on network design problems, and it is currently being tested by EURODECISION (Versailles). The prototype also enables us to be very responsive in our industrial contacts. It is used in our approach to the power plant planning optimization challenge proposed by EDF.

For many applications, it would be interesting to be able to use parallel resources to solve realistic size MIPs that take too long to solve on a single workstation. However, using parallel computing resources to solve MIPs is difficult, as parallelizing the standard branch-and-bound framework presents an array of challenges in terms of ramp-up, inter-processor communication, and load balancing. In we propose a new framework, the Parallel Macro Partitioning (PMaP) framework, for MIPs, which partitions the feasible domain of an MIP by using concepts derived from recently developed primal heuristics. Initial computational prototypes suggest that these concepts enable PMaP to use many processors effectively to solve difficult problems.

Besides the MIP heuristics developed in a decomposition framework , our team has been working on a new family of randomized rounding heuristics for MIP problems with general integer variables (i.e., not necessarily binary) . We have extensively tested these heuristics within the COIN-OR suite of optimization software, and it is our intent to incorporate them into this suite as part of the CBC module in the near future.
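The specific heuristics of the team are described in the cited work. As a generic illustration only, an unbiased randomized rounding of a fractional LP point with general integer variables can be sketched as follows:

```python
import math
import random

def randomized_rounding(x_frac, rng=None):
    """Unbiased randomized rounding for general integers: each fractional
    value v is rounded up to ceil(v) with probability equal to its
    fractional part, down otherwise, so that E[rounded value] = v."""
    rng = rng or random.Random()
    out = []
    for v in x_frac:
        f = v - math.floor(v)  # fractional part in [0, 1)
        out.append(math.floor(v) + (1 if rng.random() < f else 0))
    return out
```

Note that such a rounding need not be feasible for the MIP constraints; in practice it is embedded in a repair or re-optimization loop, which is where the heuristic design effort lies.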


Our team has made progress in the areas of “Decomposition Approaches for MIP”, “Mixed Integer Nonlinear Programming”, and “Polyhedral Combinatorics and Graph Theory”.

The Dantzig-Wolfe reformulation approach has been shown to be very efficient on applications well suited to decomposition. It is to be placed in the broader context of reformulation techniques. In a recent didactic effort , we reviewed the set of methods in Lagrangian approaches, presented the method as a generic algorithm, and reviewed possible approaches to branching.

We have finalized our work on developing a branching scheme that is compatible with the column generation procedure and that implies no structural modifications to the pricing problem. Our generic branching scheme proceeds by recursively partitioning the sub-problem solution set. Branching constraints are enforced in the pricing problem instead of being dualized in a Lagrangian way. The sub-problem is solved by a limited number of calls to the provided solver. The scheme avoids the enumeration of symmetric solutions. Its computational efficiency was demonstrated, solving problems to integrality without modifying the sub-problem or expanding its variable space, which was a first .

In the past decade, significant progress has been achieved in developing generic primal heuristics that have made their way into commercial mixed integer programming (MIP) solvers. Extensions to the context of a column generation solution approach are not straightforward. The Dantzig-Wolfe decomposition principle can indeed be exploited in greedy, local search, rounding or truncated exact methods. The price coordination mechanism can bring a global view that may be lacking in some “myopic” approaches based on a compact formulation. However, the dynamic generation of variables requires specific adaptations of heuristic paradigms.

Based on our application specific experience with these techniques , , , , and on a review of generic classes of column generation based primal heuristics, in we focus on a so-called “diving” method in which we introduce diversification based on Limited Discrepancy Search. In , we moreover consider its combination with sub-mipping and relaxation induced neighborhood search. These add-ons can be interpreted as local-search or diversification mechanisms. While being a general purpose approach, the implementation of the selected heuristics illustrates the technicalities specific to column generation. The methods are numerically tested on variants of the cutting stock and vehicle routing problems.

Working in an extended variable space allows one to develop tight reformulations for mixed integer programs. However, the size of the extended formulation rapidly grows too large for a direct treatment by a MIP solver. One can then use projection tools to derive valid inequalities for the original formulation and implement a cutting plane approach. Alternatively, one can approximate the reformulation, using techniques such as variable aggregation or by reformulating only a submodel. Such approaches result in an outer approximation of the intended extended formulation.

Our paper reviews applications in the literature of such “column-and-row generation” procedures and analyses the approach's potential benefits compared to a standard column generation approach. Numerical experiments highlight a key observation: lifting pricing problem solutions into the space of the extended formulation permits their recombination into new sub-problem solutions and results in faster convergence.

As a follow-up, we developed the combination of Dantzig-Wolfe and Benders decomposition: the Benders master is solved by column generation.

Our team is involved in the effort to synthesize advances and inspire new ideas in order to transform MINLP into an area in which researchers and practitioners can access robust tools and methods capable of solving a wide range of decision support problems. We exploited the information generated by solvers as they make branching decisions to define a structured family of disjunctive cuts. Even for MIPs, these ideas seem capable of significantly reducing the size of branch-and-bound trees; moreover, the ideas themselves are directly applicable to MINLPs, and we are currently investigating how best to apply them.

We have extended some of these results to products of more than two variables, considering both convex hull descriptions and separation. One important result (first presented during an invited presentation at the 2010 Mixed Integer Programming Workshop, http://) concerns products of n variables, regardless of the size of n.

Although well studied, important questions on the rank of the Chvátal-Gomory operator when restricted to polytopes contained in the n-dimensional 0/1 cube have not been answered yet. In particular, the question of the maximal rank of the Chvátal-Gomory procedure for this class of polytopes is still open. So far, the best-known upper bound is O(n² log n), and the best-known lower bound, based on a randomized construction of a family of polytopes, is (1 + ε)n; both were established by Eisenbrand and Schulz. The main techniques to prove lower bounds were introduced in earlier work. We revisit one of those techniques and develop a simpler method to establish lower bounds. We show the power and applicability of this method on classical examples from the literature, as well as provide new families of polytopes with high rank. Furthermore, we provide a deterministic family of polytopes achieving a Chvátal-Gomory rank of at least (1 + 1/e)n - 1 > n, and we conclude the paper by showing how to obtain a lower bound on the rank solely from examining the integrality gap.
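For readers unfamiliar with the operator, a single Chvátal-Gomory rounding step can be made concrete. The sketch below uses a classical textbook example (the odd-clique cut for a triangle), not anything from the papers discussed above.

```python
from fractions import Fraction
from math import floor

def cg_cut(rows, rhs, multipliers):
    """One Chvátal-Gomory step: take a nonnegative combination
    u^T A x <= u^T b of valid inequalities whose left-hand side is
    integral, then round the right-hand side down -- the resulting cut
    u^T A x <= floor(u^T b) is valid for all integer points."""
    mults = [Fraction(u) for u in multipliers]
    n = len(rows[0])
    coeffs = [sum(u * row[j] for u, row in zip(mults, rows)) for j in range(n)]
    assert all(c.denominator == 1 for c in coeffs), "lhs must be integral"
    b = sum(u * bi for u, bi in zip(mults, rhs))
    return [int(c) for c in coeffs], floor(b)

# Edge inequalities of a triangle: x_i + x_j <= 1 for each edge.
rows = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
rhs = [1, 1, 1]
lhs, b = cg_cut(rows, rhs, ["1/2", "1/2", "1/2"])
print(lhs, b)   # the clique cut x1 + x2 + x3 <= 1, cutting off (1/2, 1/2, 1/2)
```

Iterating this operation until the integer hull is reached is exactly what the Chvátal-Gomory rank counts.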

We built on our previous work on the stable set problem (selecting pairwise non-adjacent nodes) in claw-free graphs (graphs with no vertex having a stable set of size 3 in its neighborhood). This problem is a fundamental generalization of the matching problem and offers a nice playground for further building the theory of polyhedral combinatorics. Indeed, Minty gave the first polynomial time algorithm for solving this problem in 1980, but unfortunately he did not reveal the polyhedral counterpart. Describing the stable set polytope of claw-free graphs was thus ranked as one of the top ten open problems in combinatorial optimization by Grötschel, Lovász and Schrijver in 1986. Our team has made significant contributions to this problem on both the polyhedral and algorithmic sides.

After closing a 25-year-old conjecture for an interesting subclass of claw-free graphs by giving a complete description of the rank facets of the stable set polytope of fuzzy circular interval graphs, we provided new facets for claw-free graphs, a characterization of strongly minimal facets for quasi-line graphs, an extended formulation and polynomial-time separation procedure for the stable set polytope of claw-free graphs, and a full characterization of the polytope for claw-free graphs with stability number at least 4.

We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n³) time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the same paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e., it is coupled with an O(n³)-time procedure that actually produces the decomposition. We also believe that our algorithmic decomposition result is interesting in its own right and might be useful to solve other kinds of problems on claw-free graphs.
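To state plainly what is being computed, here is an exponential brute-force baseline on a tiny example (purely illustrative; the whole point of the O(n³) algorithm is to avoid such enumeration).

```python
from itertools import combinations

def max_weight_stable_set(n, edges, weight):
    """Brute-force maximum weighted stable set: try every vertex subset
    and keep the heaviest one with no edge inside (exponential time)."""
    edge_set = {frozenset(e) for e in edges}
    best, best_set = 0, set()
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(s, 2)):
                w = sum(weight[v] for v in s)
                if w > best:
                    best, best_set = w, set(s)
    return best, best_set

# The 5-cycle is claw-free; with unit weights the optimum has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
w, s = max_weight_stable_set(5, edges, [1, 1, 1, 1, 1])
print(w, s)
```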

Interestingly, the composition operation at the core of this decomposition theorem, besides its algorithmic consequences, also exhibits a nice polyhedral behavior for the stable set polytope that goes far beyond claw-free graphs. Indeed, we show how one can use the structure of the composition to describe the stable set polytope from the matching one and, more importantly, how one can use it to separate over this stable set polytope in polynomial time. We then apply these general results to the stable set problem in claw-free graphs, to show that the stable set polytope can be reduced to understanding the polytope of very basic structures (for most of which it is already known). In particular, for a general claw-free graph G, we give two integral extended formulations for STAB(G) and a procedure to separate over STAB(G) in polynomial time; moreover, we provide a complete characterization of STAB(G) when G is any claw-free graph with stability number at least 4 having neither homogeneous pairs nor 1-joins.

We also focus on the facets of the stable set polytope of quasi-line graphs (a subclass of claw-free graphs). While the *Ben Rebea Theorem* provides a complete linear description of this polytope (see Eisenbrand et al.), no minimal description is available. In this paper, we shed some light on this question. We show that, for any facet of this polytope, the restriction of the inequality to the graph induced by the vertices with maximal coefficient yields a rank facet for this subgraph. We build upon this result and a result of Galluccio and Sassano for rank-minimal facets in claw-free graphs to provide a complete description of the strongly minimal facets for quasi-line graphs. Finally, we show that our result supports two conjectures refining the Ben Rebea Theorem for the stable set polytope of circulant graphs and fuzzy circular interval graphs, due respectively to Pêcher and Wagler and to Oriolo and Stauffer, and that it offers a possible line of attack.

Another central contribution of our team concerns the chromatic number of a graph (the minimum number of stable sets needed to cover its vertices). We investigated the circular-chromatic number, a well-studied refinement of the chromatic number (designed for problems with periodic solutions): the chromatic number of a graph is the integer ceiling of its circular-chromatic number. Xuding Zhu noticed in 2000 that circular cliques are the relevant circular counterpart of cliques with respect to the circular chromatic number, thereby introducing circular-perfect graphs, a super-class of perfect graphs. It is unknown whether the circular-chromatic number of a circular-perfect graph is computable in polynomial time in general.

We design a polynomial time algorithm that computes the circular chromatic number when the circular-perfect graph is claw-free. We also proved that the chromatic number of circular-perfect graphs is computable in polynomial time, thereby extending Grötschel, Lovász and Schrijver's result to the whole family of circular-perfect graphs. Last but not least, we recently managed to give closed formulas for the Lovász Theta number of circular-cliques (previously, closed formulas were known only for circular-cliques with clique number at most 3), which implies that the circular-chromatic number of *dense* circular-perfect graphs is computable in polynomial time.
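The circular-chromatic number can be understood through circular (p,q)-colorings, which are easy to check by brute force on tiny graphs. The sketch below is illustrative only: it uses the standard definition of a (p,q)-coloring to show that the 5-cycle has circular chromatic number 5/2.

```python
from itertools import product

def has_pq_coloring(n, edges, p, q):
    """Brute-force test for a circular (p,q)-coloring: colors in Z_p such
    that q <= (c(u) - c(v)) mod p <= p - q on every edge.  The circular
    chromatic number is the smallest ratio p/q admitting such a coloring."""
    for c in product(range(p), repeat=n):
        if all(q <= (c[u] - c[v]) % p <= p - q for u, v in edges):
            return True
    return False

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]     # the 5-cycle
print(has_pq_coloring(5, edges, 5, 2))   # True: chi_c(C5) = 5/2
print(has_pq_coloring(5, edges, 7, 3))   # False: 7/3 < 5/2
```

A (p,1)-coloring is an ordinary p-coloring, which is why the chromatic number is the integer ceiling of the circular-chromatic number.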

Fuzzy circular interval graphs are a generalization of proper circular arc graphs and were recently introduced by Chudnovsky and Seymour as a fundamental subclass of claw-free graphs. We provide a polynomial-time algorithm for recognizing such graphs and, more importantly, for building a suitable representation.

The models on which we made progress can be partitioned into three areas: “Network Design and Routing”, “Packing and Covering Problems”, and “Planning, Scheduling, and Logistic Problems”.

To accommodate the increase of traffic in telecommunication networks, today's optical networks use grooming and wavelength division multiplexing technologies. Packing multiple requests together in the same optical stream requires converting the signal to the electrical domain at each aggregation or disaggregation of traffic at an origin, destination or bifurcation node. Traffic grooming and routing decisions along with wavelength assignments must be optimized to reduce opto-electronic system installation cost. In collaboration with B. Jaumard from Concordia University in Quebec, we developed and compared several decomposition approaches to deal with backbone optical networks with relatively few nodes (around 20) but thousands of requests, for which traditional multi-commodity network flow approaches are completely overwhelmed. We also studied the impact of imposing a restriction on the number of optical hops in any request route.

In collaboration with Teresa Godinho from the Polytechnic Institute of Beja, Portugal, and Luis Gouveia from the University of Lisbon, Portugal, Pierre Pesneau has studied the time-dependent travelling salesman problem. This problem is a generalization of the common Asymmetric Travelling Salesman Problem in which the cost of a link depends on its position in the tour. Observing that the main feature of the well-known Picard and Queyranne formulation for the problem is the use, as a sub-problem, of the exact description of a circuit on n nodes (that may repeat arcs and nodes), they proposed a new formulation by strengthening this sub-problem. For a given node k, the new sub-problem describes exactly a circuit on n nodes (as in the previous model) but going through node k exactly once. An even stronger formulation is obtained by duplicating this sub-problem for each possible node k. To obtain such formulations, it has been necessary to introduce additional sets of variables, and the new formulations, although compact, are quite large. However, the projections of the linear relaxations of these formulations onto the space of the original Picard and Queyranne variables can be described, which offers potential ways to handle such formulations. Computational experiments on the extended formulation show very good dual bounds from its linear relaxation; the integrality gap is even close to zero for several instances of the literature. These results have led to a paper submitted for publication in Discrete Applied Mathematics.
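The position-dependent cost structure is the defining feature of this problem. The brute-force sketch below (a toy Python illustration with made-up data, not one of the formulations discussed above) shows how the cheapest tour may differ from the static case.

```python
from itertools import permutations

def tdtsp(cost):
    """Brute-force time-dependent TSP: cost[i][j][t] is the cost of arc
    (i, j) when traversed in position t of the tour.  Tours start and end
    at node 0."""
    n = len(cost)
    best, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        c = sum(cost[tour[t]][tour[t + 1]][t] for t in range(n))
        if c < best:
            best, best_tour = c, tour
    return best, best_tour

# Symmetric base distances inflated by tour position: both static tours
# cost 7, but position-dependent costs make one orientation cheaper.
base = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
cost = [[[base[i][j] * (t + 1) for t in range(3)] for j in range(3)]
        for i in range(3)]
c, tour = tdtsp(cost)
print(c, tour)
```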

Along with Laurent Alfandari, Sylvie Borne and Lucas Létocart from the University Paris 13, Pierre Pesneau is studying integer quadratic and integer linear programming formulations for some variants of the Asymmetric Travelling Salesman Problem. They study the case where there is a lower bound on the distance (in number of links) between cities of some subset. Such a case appears, for instance, in the design of the shortest circuit for a travelling salesman who must visit a minimum number of clients before taking a break (or a night stop). A close application can be found in the literature, where the authors consider lower bounds on the capacity for a vehicle routing problem. In previous work, they made several proposals of reformulations and solution approaches based on branch-and-price-and-cut. They have now experimented with one of these reformulations, developing a branch-and-price-and-cut algorithm under our software platform BaPCod. (In the process, BaPCod was developed further to encompass delayed generation of the constraints needed for the validity of the formulation.) The preliminary results are encouraging, and it is planned to develop this application in more detail and to further complete the cut generation framework within BaPCod.

Pierre Pesneau has started a new collaboration with Pierre Fouilhoux from the University Paris 6, France, on a variant of the maximum multi-cut problem. Given a graph and an integer k, a multi-cut is a set of edges whose deletion disconnects the graph into k components. The maximum multi-cut problem seeks such a cut of maximum weight. Note that when k equals 2, the problem is simply the well-known maximum cut problem. In the considered variant, the search is restricted to cuts whose deletion leaves exactly k **connected** components. This problem has various applications. We can cite for instance an application coming from the Cemagref (French research institute in environmental sciences and technologies), where the goal is to decompose a region into several connected areas while maximizing the dissimilarities between these areas. Other applications can be found in domain decomposition for numerical computation. This study has already led to several reformulations of the problem, in particular a quadratic formulation that has been linearized and a formulation suited for a branch-and-price solution. The perspectives are to study the polyhedron associated with this problem and to develop solution algorithms. To this aim, concerning the branch-and-price based formulation, it will be necessary to study the sub-problem induced by the formulation, which consists in finding an induced connected sub-graph of minimum weight, where the weights are unrestricted and are carried by both the edges and the nodes. This sub-problem seems to be a challenge in itself.

The bin-packing problem asks for the minimum number of bins of fixed size needed to pack a set of items of different sizes. We studied a generalization of this problem where items can be in conflict and thus cannot be put together in the same bin. We show that the instances of the literature with 120 to 1000 items can be solved to optimality with a generic branch-and-price algorithm, such as our prototype BaPCod, within competitive computing time. Moreover, we solved to optimality all 37 open instances. The approach involves generic primal heuristics and generic branching, but a specific pricing procedure.
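As a contrast with the exact branch-and-price approach, a conflict-aware first-fit heuristic for this problem is easy to state. The sketch below is a generic illustration (made-up data), not the algorithm of the paper.

```python
def first_fit_conflicts(sizes, capacity, conflicts):
    """First-fit heuristic for bin packing with conflicts: an item may
    join an open bin only if it fits and conflicts with none of the
    bin's items; otherwise a new bin is opened."""
    conflict = {frozenset(c) for c in conflicts}
    bins = []
    for i, s in enumerate(sizes):
        for b in bins:
            if (sum(sizes[j] for j in b) + s <= capacity
                    and all(frozenset((i, j)) not in conflict for j in b)):
                b.append(i)
                break
        else:
            bins.append([i])
    return bins

# Items 0 and 1 would fit together, but a conflict keeps them apart.
bins = first_fit_conflicts([4, 3, 3, 2], capacity=7, conflicts=[(0, 1)])
print(bins)
```

Such heuristics provide incumbent solutions; proving optimality on the literature instances is what requires the branch-and-price machinery.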

The knapsack variant encountered in our bin packing work considers conflicts between items. This problem is quite difficult compared to the usual knapsack problem: the latter is already NP-hard, but can usually be solved efficiently by dynamic programming. We have shown that when the conflict graph (the graph defining the conflicts between the items) is an interval graph, this generalization of the knapsack problem can also be solved quite efficiently by dynamic programming, with the same complexity as for the common knapsack problem. For the case where the conflict graph is arbitrary, we proposed a very efficient enumeration algorithm which outperforms the approaches used in the literature.
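The interval-graph case can be sketched with a knapsack-like recursion: when conflicts are overlaps of (half-open) intervals, any conflict-free selection is a set of pairwise disjoint intervals, and sorting by right endpoint gives an O(nC) dynamic program, the same order as the plain knapsack DP. This is an illustrative simplification under those assumptions, not the algorithm of the paper.

```python
from bisect import bisect_right

def knapsack_interval_conflicts(items, capacity):
    """Knapsack with an interval conflict graph.  `items` are half-open
    intervals (start, end, weight, value); two items conflict iff their
    intervals overlap.  Sorting by right endpoint yields an O(n*C)
    recursion as in weighted interval scheduling."""
    items = sorted(items, key=lambda it: it[1])
    ends = [it[1] for it in items]
    # p[i]: number of intervals ending at or before interval i starts
    p = [bisect_right(ends, it[0]) for it in items]
    n = len(items)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        start, end, w, v = items[i - 1]
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i
            if w <= c:                                   # or take it
                dp[i][c] = max(dp[i][c], dp[p[i - 1]][c - w] + v)
    return dp[n][capacity]

# Three intervals; the middle one overlaps the first.
items = [(0, 3, 2, 4), (2, 5, 2, 5), (5, 8, 3, 6)]
print(knapsack_interval_conflicts(items, 4))
print(knapsack_interval_conflicts(items, 5))
```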

Another research topic concerns scheduling parallel jobs, i.e., jobs that can be executed on more than one processor at the same time. With the emergence of new production, communication and parallel computing systems, the usual requirement that a job be executed on only one processor has become, in many cases, obsolete and unfounded. Therefore, parallel job scheduling is becoming more and more widespread. In this work, we consider the NP-hard problem of scheduling malleable jobs to minimize the total weighted completion time (or mean weighted flow time). For this problem, we introduce the class of “ascending” schedules, in which, for each job, the number of machines assigned to it cannot decrease over time while the job is being processed. We prove that, under a natural assumption on the processing time functions of jobs, the set of ascending schedules is dominant for the problem. This result can be used to reduce the search space while looking for an optimal solution.

We have also studied a scheduling problem that arises at cross docking terminals. In such places, products from incoming trucks are sorted according to their destinations and transferred to outgoing trucks using temporary storage. Such terminals allow companies to reduce storage and transportation costs in the supply chain. We focus on the operational activities at cross docking terminals, and consider the truck scheduling problem with the objective of minimising storage usage during the product transfer. We show that a simplification of this NP-hard problem, in which the arrival sequences of incoming and outgoing trucks are fixed and outgoing trucks can take products of only one type, is polynomially solvable, by proposing a dynamic programming algorithm for it. This result has since been extended to the case in which outgoing trucks can take products of several types. This work also presents the results of numerical tests of the algorithm on randomly generated instances.

We developed rigorous computational methods to find high quality production plans for big bucket lot-sizing problems of realistic size. By *big bucket* we mean problems in which multiple product categories compete for the same capacities (of machines, labor, etc.). We have compared various methods for finding performance guarantees (lower bounds) for realistically sized instances of such problems. These methods include both those previously proposed in the literature and those we have developed ourselves. Our methods of comparison are both theoretical and computational; one of the primary contributions of this research is to identify and highlight those aspects of these problems that prevent us from solving them more effectively. This identification could be crucial in improving our ability to solve such models.

We are currently working on a project aiming to plan the energy production and maintenance breaks for a set of power plants generating electricity. This problem has two levels of decisions. The first consists in determining, over a certain time horizon, when the different power plants will have to stop in order to refuel, and in deciding the amount of this refueling. Given a set of scenarios defining variable levels of energy consumption, the second decision level determines the quantity of power each plant will have to produce.

As the number of periods in the time horizon and the number of scenarios are quite large, the size of any MIP formulation for such a problem forbids an exact resolution in acceptable time. However, our objective is to show that exact methods can be used on a simplified problem (implementing a hierarchical optimization) and combined to design heuristics for the solution of the large scale problem.

The allocation of surgeries to operating rooms (ORs) is a challenging combinatorial optimization problem. There is moreover significant uncertainty in the duration of surgical procedures, which further complicates assignment decisions. We present stochastic optimization models for the assignment of surgeries to ORs on a given day of surgery. The objective includes a fixed cost of opening ORs and a variable cost of overtime relative to a fixed length-of-day. We describe two types of models. The first is a two-stage stochastic linear program with binary decisions in the first stage and simple recourse in the second stage. The second is its robust counterpart, in which the objective is to minimize the maximum cost associated with an uncertainty set for surgery durations. We describe the mathematical models, bounds on the optimal solution, and solution methodologies, including an easy-to-implement heuristic. Numerical experiments based on real data from a large health care provider are used to contrast the results for the two models and illustrate the potential for impact in practice. Based on our numerical experimentation, we find that a fast and easy-to-implement heuristic works fairly well on average across many instances. We also find that the robust method performs approximately as well as the heuristic, is much faster than solving the stochastic recourse model, and has the benefit of limiting the worst-case outcome of the recourse problem.
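Under simple recourse, the objective of such a model can be evaluated directly for any fixed first-stage assignment. The sketch below uses hypothetical data and parameter names (not the paper's model or instances) and averages overtime over equiprobable duration scenarios.

```python
def expected_cost(assignment, scenarios, fixed_cost, overtime_cost, day_length):
    """Cost of a first-stage surgery-to-OR assignment under simple
    recourse: a fixed cost per opened OR plus the expected overtime
    beyond the fixed length-of-day, averaged over duration scenarios."""
    rooms = set(assignment)
    cost = float(fixed_cost * len(rooms))
    for durations in scenarios:          # one duration vector per scenario
        for r in rooms:
            load = sum(d for s, d in enumerate(durations) if assignment[s] == r)
            cost += max(0, load - day_length) * overtime_cost / len(scenarios)
    return cost

# Three surgeries, two equiprobable duration scenarios, day length 8.
scenarios = [[5, 4, 3], [6, 5, 3]]
c1 = expected_cost([0, 0, 1], scenarios, fixed_cost=10, overtime_cost=2, day_length=8)
c2 = expected_cost([0, 1, 0], scenarios, fixed_cost=10, overtime_cost=2, day_length=8)
print(c1, c2)   # splitting the two long surgeries cuts expected overtime
```

The robust counterpart would replace the scenario average with the maximum over the uncertainty set.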

With C. Joncour (PhD student) and P. Valicov (PhD student), we investigate the orthogonal knapsack problem with the help of graph theory. Fekete and Schepers achieved a recent breakthrough in solving multi-dimensional orthogonal placement problems by using an efficient representation of all geometrically symmetric solutions by a so-called *packing class* involving one *interval graph* per dimension (whose complement admits a transitive orientation, each such orientation of the edges corresponding to a specific placement of the forms). Though Fekete and Schepers' framework is very efficient, we have identified several weaknesses in their algorithms: the most obvious one is that they do not take advantage of the different possibilities to represent interval graphs.

We give two new algorithms: the first one is based on matrices with consecutive ones on each row as data structures, while the second one uses so-called MPQ-trees. These two new algorithms are very efficient, as they outperform Fekete and Schepers' algorithm on most standard benchmarks.

The development of the prototype software platform has made good progress this year thanks to our junior engineer, F. Labat, who has been hired for one year on an ADT. The developments focus on: the redesign of the version manager along with continuous integration tools and automatic bug reports; a new compilation environment using cmake tools; code transfer to several platforms; a revised interface with MIP solvers; code profiling to identify bottlenecks; and performance improvements (by a factor of 10 on large scale applications). These developments of the environment and progress in software re-engineering were done in parallel with the implementation of new methodologies (such as generic primal heuristics and the prototyping of a simultaneous column-and-row generation approach for extended formulations).

We have also launched a new collaboration with EURODECISION (Versailles): the company is testing our prototype on industrial applications with the aim of fostering future exchanges on further methodological developments and efficient implementation of core modules.

The prototype also enables us to be very responsive in our industrial contacts. In particular, it was used in our approach to the power plant planning optimization challenge proposed by EDF.

A research proposal on the generic methodologies underlying the branch-and-price approach has been submitted. The purpose is to establish a collaboration with M. Poggi and E. Uchoa (from Rio) and the company GAPSO (a Brazilian spin-off launched by these academics). In this context, the software platform BaPCod shall serve as a proof-of-concept code, and it will benefit from the transfer of knowledge between the parties.

2D-KNAP is a software package available on LaForge INRIA that checks whether a 2D orthogonal packing instance admits a feasible solution. Fekete and Schepers introduced a tuple of interval graphs as the data structure to store a feasible packing and gave a very efficient algorithm. In 2D-KNAP, feasibility checks are based on an alternative graph-theoretic characterization of interval graphs: Fulkerson and Gross's decomposition into maximal cliques. The algorithm uses consecutive-ones matrices as data structures.

Our contract with SNCF, “Innovation et Recherche”, was concluded with the PhD defence of L. Gely. In his Master thesis work, L. Gely produced timetables with the aim of maximizing the throughput (number of trains) that can be handled by a given network. In this project, we considered the problem of managing perturbations. Network managers must re-optimize train schedules in the event of a significant unforeseen event that translates into new constraints on the availability of resources. The control parameters are the speed of the trains, their routing and their sequencing. The aim is to re-schedule trains so as to return as quickly as possible to the theoretical timetable and to restrict the consequences of the perturbation to a limited area. The question of formulation is again central to the approach developed here. The models of the literature are not satisfactory: continuous time formulations have poor quality due to the presence of discrete decisions (re-sequencing or re-routing), while other standard models based on arc flows in a time-space graph blow up in size. Formulations in time-space graphs have therefore been limited to tackling single line timetabling problems. We have developed a discrete time formulation that strikes a compromise between these two previous models. We further proposed a hybrid model combining the advantages of the continuous and discrete time formulations. These mathematical programming contributions are completed by a deep analysis of the real-life questions that need consideration in the mathematical model, and by an effort to design the integration into the information system of the SNCF along with reflections on the integration with simulation tools.

While recruited on an internship within RealOpt, Andeol Evain, an ENS student, studied the logistics of waste containers. The problem, submitted by Exeo Solutions, consists in planning the pick-up of full containers and the delivery of empty containers at customer sites, using either simple vehicles that can carry a single container, or vehicles with a trailer attached that have a total capacity of 2 containers but require more time when handling containers. We developed a prototype branch-and-price approach for this problem. The study is ongoing. This work should relaunch our long-term collaboration with Exeo Solutions.

RealOpt took part in the EURO/ROADEF challenge proposed by EDF on optimization problems that arise in planning the maintenance of nuclear power plants. This rich experience (RealOpt ranked 4th out of around 40 teams at the qualification stage) is about to be followed by a research contract with EDF taking the form of a PhD project. In the context of a partnership between INRIA and EDF, RealOpt and DOLPHIN shall join efforts on this research project, whose aim is to model and solve stochastic combinatorial optimization problems that arise in the management of maintenance schedules and power production.

With Jeffrey T. Linderoth and James Luedtke of the University of Wisconsin-Madison, and Sven Leyffer and Todd R. Munson of Argonne National Laboratory (a research unit of the United States Department of Energy), Andrew Miller was awarded two grants in 2008 from United States government sources for the project “Next Generation Mixed Integer Nonlinear Programming Solvers: Structure, Search and Implementation”.

The first grant (Department of Energy grant number DE-PS02-08ER08-13) began on August 15, 2008 and runs through August 14, 2011. The second grant (grant number CCF 0830153 of the National Science Foundation) started on January 1, 2009 and continues through December 31, 2011.

André Raspaud launched in 2005 a fruitful cooperation with the Department of Applied Mathematics of the Sun Yat-Sen University of Kaohsiung, Taiwan.

The ANR project GraTel (submitted by A. Pêcher in 2009) is a follow-up of this cooperation: it is a France-Taiwan project devoted to telecommunications, with the help of graph colorings and polyhedral graph theory. It is a 4-year project, which started in January 2010: see https://

Gautier Stauffer co-organized the First Cargese Workshop on Combinatorial Optimization. This workshop was focused on Extended Formulations and involved the main international actors in this field (cf. http://)

Pierre Pesneau is an active member of the organizing committee of the working group on Polyhedra and Combinatorial Optimization affiliated with the French operations research society (ROADEF) and the operations research group of the CNRS. The purpose of this working group is to promote the field of polyhedra within combinatorial optimization research. Among the events organized by this group, Pierre Pesneau is in charge of the scientific days: once or twice a year, these one-day meetings gather young and established researchers around a particular theme. On December 7th, 2010, one of these meetings was organized on "Graph partitioning": http://

Finally, F. Vanderbeck is pursuing his two-year mandate as vice-president in charge of scientific matters on the board of the French operations research society.

*C. Joncour*, S. Michel, R. Sadykov, D. Sverdlov, F. Vanderbeck. Column generation based heuristics, International Symposium on Combinatorial Optimization, Hammamet, Tunisia, 2010.

*C. Joncour*, A. Pêcher. Consecutive ones matrices for multi-dimensional orthogonal packing problems, International Symposium on Combinatorial Optimization, Hammamet, Tunisia, 2010.

*C. Joncour*, A. Pêcher, P. Valicov. MPQ-trees for the orthogonal packing problem, International Symposium on Combinatorial Optimization, Hammamet, Tunisia, 2010.

*A. Pêcher*, A. Wagler. Clique and chromatic number of circular-perfect graphs, International Symposium on Combinatorial Optimization, Hammamet, Tunisia, 2010.

M. T. Godinho, L. Gouveia, *P. Pesneau*. Hop-indexed circuit-based formulations for the Travelling Salesman Problem, International Symposium on Combinatorial Optimization, Hammamet, Tunisia, 2010.

*R. Sadykov*. A polynomial algorithm for a simple scheduling problem at cross docking terminals, Project Management and Scheduling, Tours, France, 2010.

*R. Sadykov*, F. Vanderbeck. Bin packing with conflicts: a generic branch-and-price algorithm, Annual Conference of the French Operations Research Society (ROADEF 2010), Toulouse.

*R. Sadykov*. Solving a scheduling problem at cross docking terminals, European Conference on Operational Research (EURO'10), Lisbon, Portugal, 2010.

Y. Faenza, G. Oriolo, *G. Stauffer*. An algorithmic decomposition of claw-free graphs leading to an O(n^3)-algorithm for the weighted stable set problem, ACM-SIAM Symposium on Discrete Algorithms (SODA) 2011, San Francisco, United States, September 2010.

*G. Stauffer*, G. Massonet, C. Rapine, J.-P. Gayon. A simple and fast 2-approximation for deterministic lot-sizing in one warehouse multi-retailer systems, ACM-SIAM Symposium on Discrete Algorithms (SODA) 2011, San Francisco, United States, September 2010.

C. Joncour, S. Michel, R. Sadykov, *F. Vanderbeck*. Primal heuristics for branch-and-price, European Conference on Operational Research (EURO'10), Lisbon, Portugal.

R. Sadykov, *F. Vanderbeck*. Column generation for extended formulations, First Cargese Workshop on Combinatorial Optimization.

A. Pêcher: 2010 International Conference on Graph Theory, Combinatorics and Applications, Jinhua, China. “On the Lovász Theta function of powers of chordless cycles”

G. Stauffer: Aussois 14th Workshop in Combinatorial Optimization. "The hidden matching structure of the composition of strips: a polyhedral perspective". January 2010, Aussois, France.

Pierre Pesneau was invited by Luis Gouveia at the University of Lisbon for collaboration on the Time Dependent Travelling Salesman Problem. January 26th - 31st, May 26th - 29th, July 27th - 31st 2010.

Gautier Stauffer was invited by Sebastian Pokutta at TU Darmstadt for collaboration on the Chvátal-Gomory rank of 0/1 polytopes. April 5th - 17th 2010.

G. Stauffer: Seminar at INP Grenoble (hosted by Andras Sebo). The hidden matching structure of the composition of strips: a polyhedral perspective. February 2010, Grenoble, France.

G. Stauffer: Seminar at TU Darmstadt (hosted by Sebastian Pokutta). The p-median Polytope of Y-free Graphs: An Application of the Matching Theory. April 2010, Darmstadt - Germany.

G. Stauffer: Seminar at IBM Zurich Research Lab (hosted by Eleni Pratsini). Managing inventories in Distribution Networks: when consulting practice yields good theoretical approximations. September 2010, Zurich - Switzerland.

G. Stauffer: Seminar at CORE (hosted by Laurence Wolsey). A simple and fast 2-approximation for the one-warehouse multi-retailer problem. November 2010, Louvain-la-Neuve, Belgium.

F. Vanderbeck: invited talk at the International Symposium on Combinatorial Optimization (ISCO'10), Tunisia (at the invitation of the stream organizers M. Jünger and F. Rendl). Column generation based heuristics.

Several international and national colleagues visited us for short scientific exchanges and seminar presentations:

Amélie Lambert, Research and Teaching Assistant, École Nationale Supérieure d'Informatique pour l'Industrie et l'Entreprise, February 16. Integer quadratic programming.

Semi Gabteni, Amadeus, April 16. Generic "Branch-and-Price" solver.

Denis Montaut and Frédéric Fabien, Eurodecision, May 18. “Decomposition approaches for Time Tabling Problems”.

Marcos Goycoolea, Associate Professor, School of Business, Universidad Adolfo Ibáñez, May 21 - 24. Mixed Integer NonLinear Programming.

Andeol Evain, ENS Cachan, May 17 - July 10. "Logistics of waste containers"

Ted Ralphs, Associate Professor, Lehigh University, June 2-7. Generic Frameworks for Decomposition Methods in Integer Programming.

Sylvie Borne, University Paris 13, France, June 7 - 11. "Hop Constrained Travelling Salesman Problem"

Nicola Bianchesi, University of Brescia, Italy, June 7 - 12.

Gianpaolo Oriolo, University of Rome Tor Vergata, May 17 - June 5. "Decomposition algorithm for claw-free graphs"

Sebastian Pokutta, Technical University of Darmstadt, June 28 - July 10. "Extended Formulations"

Marcus Poggi, PUC Rio, Sept 13-17. Decomposition approach in MIP

Yuri Faenza has defended his PhD thesis on extended formulations for several polyhedra in combinatorial optimization and related topics. Advisers: G. Oriolo and G. Stauffer.

Cédric Joncour has defended his PhD in December 2010. His doctoral study was on 2-D packing problems. Advisers: A. Pêcher and F. Vanderbeck.

Laurent Gely has defended his PhD in December 2010. His doctoral study was on rescheduling of trains (SNCF CIFRE PhD). Advisers: G. Dessagne, P. Pesneau and F. Vanderbeck.

Benoit Vignac has defended his PhD in January 2010 on traffic routing in telecommunication networks. Advisers: B. Jaumard, G. Laporte, F. Vanderbeck.

Guillaume Massonnet started his PhD thesis in September 2009 on cost balancing techniques for inventory control with uncertainty. Advisers: Jean-Philippe Gayon, Christophe Rapine and G. Stauffer.

Each member of the team is heavily involved in teaching in the thematic specialties of the project, including the research track of the Masters in applied mathematics or computer science and the Operations Research track in the computer science department of the engineering school ENSEIRB-MATMECA. Moreover, we play a large part in the organization of the curriculum:

Andrew Miller is the head of the operations management specialty in the Master of Applied Mathematics, Statistics and Econometrics.

Arnaud Pêcher is the head of the special year of the IUT Computer Science department.

Pierre Pesneau is head of the professional curriculum of the operations management specialty.

François Vanderbeck was head of the Master of Applied Mathematics, Statistics and Econometrics until September. He then took charge of establishing a joint curriculum in Operations Research with ENSEIRB-MATMECA.

Gautier Stauffer is the project organizer for the operations management specialty of the Master of Applied Mathematics, Statistics and Econometrics.