COPRIN is a joint project between INRIA, CNRS, the University of Nice-Sophia Antipolis (UNSA) and Certis (École des Ponts et Chaussées, ENPC). Its scientific objective is to develop and implement systems-solving algorithms based on constraint propagation methods, interval analysis and symbolic computation, with interval arithmetic as the primary tool.

Our research aims at developing algorithms that can be used for any problem or that are specific to a given class of problems, especially problems arising from application domains in which we have in-house expertise (such as mechanism theory and software engineering).

Implementation of the algorithms is performed within the framework of the generic software tool `IcosAlias`, currently under development, whose purpose is to allow the design and testing of solving algorithms obtained as the combination of various software modules. `IcosAlias` is based on the existing libraries `ICOS` and `ALIAS`, which are themselves still under development.

As a theoretical complexity analysis of solving algorithms is usually extremely difficult, the efficiency of the algorithms is evaluated experimentally, through `IcosAlias` or `ALIAS`, on various realistic test examples.

Meanwhile, dissemination also represents an essential part of our activities, as interval analysis based methods are not sufficiently known in the engineering and academic communities.

The scientific objective of the COPRIN project is to develop and implement systems solving and optimization algorithms based on constraint propagation methods, interval analysis and symbolic computation, with interval arithmetic as the primary tool.

The results obtained with these algorithms are certified in the sense that no solution can be missed and that, for 0-dimensional system solving, solutions can be calculated with an arbitrary accuracy. Furthermore, some of our algorithms allow us to deal with systems involving uncertain coefficients.

A system is constituted by a set of relations that may use all the usual mathematical operators and functions (hence we may deal, for example, with the relation sin(x + y) + log(cos(e^x) + y^2) ≤ 0).

We are interested in real-valued constraint satisfaction problems (f(X) = 0, f(X) ≤ 0), in optimization problems, and in proving the existence of properties (for example, proving that there exists an X such that f(X) = 0, or two values X_1, X_2 such that f(X_1) > 0 and f(X_2) < 0).

Solutions are searched for within a finite domain (called a *box*), which may be either continuous or mixed (i.e. some variables must belong to a continuous range while other variables may only take values within a discrete set). Importantly, we try to find all the solutions within the domain, as far as the computer arithmetic allows it: in other words, we are looking for *certified* solutions. For example, for 0-dimensional system solving, we provide a domain that includes one, and only one, solution, together with a numerical approximation of this solution, which may be refined at will using multi-precision arithmetic.

Our approach aims at developing various operators that will be applied in sequence on a box:

*exclusion operators*: these operators determine that there is no solution to the problem within a given box

*contractors*: these operators may reduce the size of the box, i.e. decrease the width of the variables' allowed ranges

*existence operators*: these operators make it possible to determine that there is a unique solution within a given box, and are usually associated with a numerical scheme that enables this solution to be computed safely

If a given box is not rejected by the exclusion operators and is not modified by the other operators, then we will bisect one of the variables in order to create two new boxes that will be processed later on. Methods for choosing the bisected variable also clearly fall within the scope of the project.

Our research aims at developing operators that can be used for any problem or that are specific to a given class of problems, especially problems arising from application domains in which we have in-house expertise (such as mechanism theory and software engineering).

We are also studying symbolic computation based methods in order to:

develop a user-friendly interface that will automatically generate and run an executable program, and return the result to the interface;

analyze the semantics and syntax of the relations involved in a problem, so as to automatically generate specific operators, or to obtain a better interval evaluation of the expressions (as the evaluation of an expression using interval arithmetic is very sensitive to the expression's syntax)

We also aim at allowing the calculation of the solutions with an arbitrary accuracy. For this purpose, two approaches are currently being used:

certified interval solutions are obtained through a compiled solving program, and then a symbolic computation procedure is used to calculate the solutions up to the desired accuracy

a multi-precision interval arithmetic package is used within a symbolic-numeric solving program, which makes it possible to obtain the solutions with an arbitrary accuracy

The first approach is usually much faster than the second one (by a factor of approximately 1,000 to 10,000) but may fail to determine solutions if the system is numerically badly conditioned, while the second approach will always find the solutions, although the computation time may be problematic.

While the methods developed in the project may be used for a very broad set of application domains (for example, this year has seen the start of a prospective research project on the application of interval analysis for control theory), the project's size does not allow for all domains to be addressed. As a consequence, we decided to focus our applications on two domains in which we already have expertise: mechanism theory (including robotics) and software engineering.

mechanism theory: our research focuses on optimal design and geometric modelling of mechanisms, especially for the machine-tool industry, automotive suspensions, virtual reality and medical robotics. As other domains may exhibit problems equivalent to those of mechanism theory (e.g. molecular chemistry), they may also be addressed, but will not become one of the project's main research axes.

software engineering: our research focuses on the automatic generation of test data sets, i.e. the generation of input data allowing a given step of a software module to be executed. As of 2006, this topic will probably no longer be part of the project, because M. Rueher and C. Michel, the main researchers involved in it, are leaving.

Software development is an essential part of the research within the COPRIN project, as our research can only be validated experimentally. Software developments are addressed along the following axes:

interval arithmetic: although this very specialised area is not our specific target (we generally rely on existing packages), interval arithmetic is an important part of our interval analysis algorithms, and we may have to extend the existing packages, especially in order to deal with multi-precision and arithmetic extensions

interval analysis libraries: we use daily two libraries that have been designed within the project and are still under development. In the long run, we aim at developing a generic programming framework that allows for modularity and flexibility, so as to be able to easily test new functionalities and to build specific solvers by a simple juxtaposition of existing modules

interface to interval analysis: in our opinion, interval analysis software must be available within general purpose scientific software (such as `Maple`, `Mathematica`, `Scilab`) and not only as a stand-alone tool. Most end-users are indeed reluctant to learn a new programming language just to solve problems that are only small elements of a more general problematic context. Furthermore, interval analysis efficiency may benefit from the functionalities available in general purpose scientific software.

In order to extend interval arithmetic to generalized intervals, we propose an efficient C package. Together with the concept of modal intervals, this kind of arithmetic allows the use of universally quantified variables, where classical interval arithmetic only deals with existential quantification. The code structure is similar to, and compatible with, the well-known `BIAS/Profil` library. We plan to use this type of arithmetic to solve robot kinematics problems.

The `ALIAS` library (*Algorithms Library of Interval Analysis for Systems*) is a collection of procedures based on interval analysis for systems solving and optimization. Its development started in 1998.

`ALIAS` is composed of two parts:

`ALIAS-C++`: the C++ library (90,000 lines of code), which forms the algorithms' core

`ALIAS-Maple`: the `Maple` interface for `ALIAS-C++` (40,000 lines of code). This interface allows a solving problem to be specified within `Maple` and the results to be retrieved within the same `Maple` session. Its role is not only to generate the C++ code automatically, but also to perform an analysis of the problem in order to improve the solver's efficiency. Furthermore, a distributed implementation of the algorithms is available directly within the interface.

Our effort this year has particularly focused on:

experimentation on grid computing: in collaboration with the OASIS project, we experimented with the ProActive library for solving a difficult problem of quantum mechanics on the GRID 5000 platform. Although the calculation is not over yet, the total computation time involved is currently equivalent to 14 years of computation on a single PC. The experimental nature of the GRID 5000 platform has not allowed us to fully deploy our solver on the platform

development of a `Maple`-based system solver: feedback from the on-line solver (see next section) has shown that the numerical accuracy of our C++ based solvers was not sufficient to deal with borderline systems, for which the numerical accuracy must be higher than that of the `double` type in C++. We are currently developing analogs of these solvers, based on the `Maple` interval arithmetic package `intpakX`, that will allow an arbitrary accuracy to be specified for the solutions.

The current version of the `ALIAS` library is available through the Web page http://www-sop.inria.fr/coprin/logiciels/ALIAS.

The `IcosAlias` software is an interval-based constraint solver that has been tested and used by different members of COPRIN. The current version, whose modularity has been improved, includes contractors (hull consistency, 3B, Box), search heuristics, and interval analysis methods (univariate and multivariate Newton) with outer approximations of linear systems computed by various methods (Krawczyk, Gauss-Seidel, Hansen-Sengupta, Gauss elimination). The main upgrade is the introduction of quantified parameters into the language. On-going work includes, in the linear case, extending some interval analysis results to generalized solution sets (i.e., solutions to systems whose parameters include uncertainties).

As to dissemination, Y. Papegay wrote a web interface for two `ALIAS`-based solvers. Through this on-line service, available since the end of September 2004, a set of up to 5 equations/inequalities may be submitted to our solvers, and all the solutions of the system may possibly be obtained (see http://www-sop.inria.fr/coprin). Around 150 systems have been submitted since September 2004. A new on-line facility should be added soon: the possibility of using the multi-precision `Maple`-based solvers.

Besides, we are implementing a `Mathematica` interface to the `ALIAS` library, aiming at providing transparent access to the functionalities of `ALIAS` for the community of `Mathematica` users, and thus extending the dissemination of our library. Our main goal is to provide `ALIAS` with a high-level modular interface in order to quickly prototype and easily test new combinations of interval analysis algorithms, with the added benefit of using a computational environment that includes arbitrary precision interval arithmetic and high-level symbolic computation functionalities.

We are currently developing a `Scilab` interface to efficient interval arithmetic, based on the C/C++ library `BIAS/Profil`. The aim is to offer end-users the same simple `Scilab` syntax for intervals as for other data types, and to make it easy to prototype interval-based algorithms.

There are currently three packages which allow the use of high-level functionalities:

*Univariate Polynomial Tools* (solving, computing bounds on real roots, ...) for univariate polynomials having interval or real coefficients

*Linear solver* gives access to various methods for solving a set of linear interval equations

*Kaucher arithmetic* to deal with universally or existentially quantified parameters.

These functions are essentially based on the `ALIAS`, `IcosAlias` and `BIASK` libraries.

The core of our activities in robotics and mechanism theory is the optimal design of mechanisms and the analysis of parallel robots. This year, we specifically dealt with:

the long-term work on the optimal design of parallel robots

kinematic analysis and design of parallel robots using cams as passive constraints for the legs

a theoretical analysis of the use of classical accuracy indexes for parallel robots

Meanwhile, a long-term research effort continues on solving methods for distance equations and on wire robots (i.e. robots actuated by wires whose lengths may change).

Our methodology for optimal design is to determine an approximation of the domain including all the possible values of the n design parameters, so that any design requirement (or a set of them) may be satisfied. Such an approximation is obtained as a set of boxes in the parameter space, an n-dimensional space in which each axis represents the value of one design parameter. If a domain approximation A_i is obtained for each design requirement R_i in the set of requirements, then the possible design parameter values are obtained as the intersection of all the A_i. This methodology has the following advantages over more classical approaches:

it allows us to deal with *imperative* requirements, i.e. requirements that must necessarily be satisfied by a design solution

it allows us to present various possible compromises between antagonistic requirements

it allows us to deal with uncertainties: for example, the physical instance of a theoretical solution may differ from it because of manufacturing tolerances. In our approach, the approximation only includes boxes whose width is at least twice the manufacturing tolerance of the corresponding parameter. This makes it possible to choose nominal values for the design parameters such that the point representing their real values is still guaranteed to lie in a box.

However, this approach may be difficult to apply because we must calculate the approximations A_i: here interval analysis is a tool of choice. We have shown that it is possible to compute the A_i in a 26-dimensional parameter space, so that the robot workspace includes a large, arbitrary set of pre-defined poses, while the robot's positioning errors at these poses remain below a pre-defined threshold.

Classically, parallel robot legs are submitted to passive constraints generated by higher kinematic pairs such as universal and spherical joints. But other passive constraints may be considered as well, and we have investigated the use of cams to impose passive constraints. The robot's base and platform include pairs of arbitrary surfaces that are constrained to remain in contact during the robot's motion. This type of robot may be used, for example, to model complex human joints, such as the knee. With such surface constraints, the robot's kinematics (i.e. the relations between the actuated joint variables and the end-effector's pose) may become quite complex. An interval-based kinematic solver, which allows for the use of arbitrary constraint surfaces, has been developed and successfully tested.

The synthesis problem for a 1-d.o.f. robot of this type was also addressed: the problem is to determine the parameters of the parametric constraint surfaces so that the robot goes through specified way points. This may allow, for example, the determination of knee joint models whose motion would be close to experimental data.

The analysis of accuracy for serial robots is a well-established research domain. First, the robot's Jacobian matrix, which relates sensor measurement errors to the end-effector's positioning errors, is established. Then, various accuracy indexes based on this matrix are calculated (such as the manipulability index or the condition number). These indexes may be local (i.e. valid for one given pose) or global (i.e. valid over a given workspace). Until now, these indexes have been used for parallel robots as well, the duality between serial and parallel robots only imposing the use of the robot's inverse Jacobian matrix instead of the Jacobian matrix.

We have shown that, in theory, this approach may not be valid. First, multiple inverse Jacobian matrices may be defined, and not all of them include the same amount of information. For example, for robots with n < 6 d.o.f., singularities may not all be described by the n × n matrix relating the actuated joint velocities to the end-effector velocities. We also considered the classical accuracy indexes and showed that they do not always give a consistent accuracy ranking when they are calculated for a given set of poses. Indeed, we are able to calculate the maximal positioning errors of the end-effector at a given pose, and hence to order the poses according to their accuracy: we then expect the accuracy indexes to reflect this ranking, and this is not what has been observed. Consequently, we proposed new, global, consistent accuracy indexes, but we showed that their calculation is a challenging task.

We intend to build a wire robot that will be used as a force-feedback haptic device for the workbench of the research unit. Although several wire robots have been developed in many laboratories, most suffer from a lack of precision and flexibility. Indeed, performance requirements may slightly change between two virtual reality tasks, and it is not realistic to assume that a system with a fixed geometry may be able to deal with all of them. To solve these problems, we focused on the following objectives:

*mechanical design*: the wire actuation system should be designed to allow fast changes in the robot's geometry, a precise evaluation of wire lengths (for optimal measurement of the platform location) and of the force applied at the end-effector level, together with the possibility
of quickly modifying the maximal change in the wire lengths

*modularity*: the performances of a parallel robot are very sensitive to the robot's geometry. We intend to develop algorithms to determine the robot's optimal geometry, given the task requirements. The relations between the force limitations (the wires can only pull and their tensions are limited) and the workspace are currently being investigated by M. Gouttefarde.

The necessary hardware has been studied and the robot's mechanical construction will start early in 2006.

Identification of a 4-RPR planar robot's kinematic parameters is a basic problem, which in turn illustrates a core issue of parallel robot calibration: the calibration equations involve both kinematic parameters that have to be identified and variables that are specific to each calibration configuration. To reduce the size of the system and the number of unknown variables, the classic approach is to manipulate the calibration equations in order to get a smaller set of equations that involves a reduced number of pose-dependent variables. However, this approach has drawbacks: the elimination is not simple to perform (the equations are highly non-linear) and the resulting equations, which are far more complex than the initial set, may be numerically badly conditioned. We have compared classic elimination strategies (1), elimination based on an algebraic formulation of the calibration equations (2) and a calibration algorithm based on the initial set of calibration equations (3). Method (3), applied to the 4-RPR planar robot, leads to the best results. This research was performed in collaboration with I. Emiris of Athens University, within the framework of a PAI Platon.

Another item of research addressed the problems of calibration strategies and of checking the validity of the robot's kinematic model, which is used to obtain the calibration equations. The calibration equations are obtained by setting the robot in various calibration configurations and partially measuring the robot's state using its proprioceptive sensors. These sensor measurements are then plugged into the robot's kinematic model in order to get one calibration equation per configuration, which may be written as a relation between the kinematic parameters to be identified, the unknown pose parameters, the robot's unmeasured state parameters, and the measured state parameters. A calibration equation is obtained for each of the n calibration configurations and, as the measurements are uncertain, the kinematic parameters are classically obtained as the ones that minimize an error criterion over these equations (typically a least-squares residual). Using experimental data, we have shown that this approach leads to a paradox: the calculated kinematic parameters may be such that some calibration equations will not be satisfied whatever the measurement values within their (known) uncertainty ranges. We suggest, instead, an interval-based approach that allows the determination of the set of all possible values of the kinematic parameters such that all calibration equations may be satisfied. If this set is empty, then the kinematic model used for establishing the calibration equations is not valid.

A second problem is that simplifying assumptions are used to establish the robot's kinematic model (e.g. that joint axes are parallel or perpendicular), as otherwise the model might become very complex. However, the validity of these assumptions has to be checked; for example, if the set of admissible kinematic parameters is empty, i.e. the kinematic model is not valid, the level of error of the kinematic model must be ascertained. As a consequence, we suggested an index that measures how much the calibration equations have to be modified so that this set is no longer empty. This index may be used to gain insight into the quality of the model.

A challenging problem in robotics is to certify manipulators' performances. These performances are dependent upon the robot's nominal geometry, but clearance and manufacturing tolerances also have a very negative influence and should be taken into account.

Statistical tools cannot be applied to performance evaluation, as tolerances and clearance do not follow classic statistical distributions such as the Gaussian distribution. However, it is usually possible to obtain uncertainty ranges. Then, interval analysis methods may be used to check a given robot's kinematic performance.

Robot geometry is usually described with the Denavit-Hartenberg parameterization. We have developed a dedicated interval evaluation of the kinematic equations based on the Denavit-Hartenberg model, using this model's particular structure to provide a sharp evaluation of these equations.

A lot of research has been done in the field of system theory, specifically in order to analyze the stability of linear systems, *i.e.* to study the eigenvalues of the state matrix or, equivalently, the zeros of its polynomial transfer function. Nevertheless, we know that the stability or instability of a system is highly sensitive to the data, *i.e.* to the coefficients of the state matrix or of the system's transfer function. One consequence is that two types of problems arise when using numerical methods for the stability analysis of a system. The first problem is related to numerical certification: the errors due to numerical rounding in a computer may have an impact on the final result of a system's stability analysis. The second problem is related to modeling errors. Usually, the system (through its state matrix or its polynomial transfer function) is only known up to some errors (uncertainty). Hence the *real* system is known to belong to a set of systems. More precisely, the state matrix is known to belong to some interval matrix, or the polynomial transfer function is known to belong to a set of polynomials with interval coefficients. An important point is to address the stability problem for a whole set of systems, and to be able to determine with certainty whether every matrix in a set described by an interval matrix is stable or unstable.

Interval analysis is a good tool to handle these kinds of problems, since it considers not a single *object* but *interval objects*, here interval matrices or polynomials with interval coefficients.

Within the large range of existing methods for analyzing the stability of linear systems, we selected some that could be extended to uncertain systems. We mainly worked on the extension and optimization of the Gershgorin circles method, and on some exclusion methods (specifically, using Dedieu and Yakoubsohn's exclusion functions). The first numerical tests are promising.

Meanwhile, we started programming the global optimization methods, using interval approach. This will be useful for the final implementation of the above-mentioned stability analysis method. This particular research is being performed in collaboration with S. Icart of the I3S laboratory.

Interval methods have shown their ability to accurately locate global optima and to prove their existence safely and rigorously but, unfortunately, these methods are rather slow. Many efficient solvers for optimization problems are based on linear relaxations. However, these relaxations are numerically ill-conditioned, and thus may overestimate or, worse, underestimate global minima. We have developed `QuadOpt`, an efficient and safe framework to rigorously bound the global optimum as well as the corresponding values of the parameters. `QuadOpt` uses consistency techniques to speed up the initial convergence of the interval narrowing algorithms. A lower bound is computed from a linear relaxation of the constraint system and of the objective function. All these computations are based on a safe and rigorous implementation of linear programming techniques. The first experimental results are very promising.

This research represents the main body of a long-term on-going collaboration with Airbus, whose objective is to directly translate the work of aeronautics engineers into digital simulators, in order to make aircraft design more efficient. This project already has applications in the development departments of aircraft makers.

Modeling and simulation processes usually begin with scientific theories describing physical features through formulas and computation algorithms. Using these models, numerical codes are implemented in order to simulate and visualize the features. In an industrial context, the large number of parameters and equations involved in the models makes the whole process very long, complex and expensive, especially as safe and reliable codes are required.

Previous research led to a model edition environment, based on symbolic computation tools, which makes it possible to enter the models' formulas and algorithms and to validate them numerically on a reduced set of data. From then on, we have tried to use these models to fully and automatically generate a numerical real-time simulation engine to be plugged into flight simulators, while producing the technical documentation associated with such simulations, a necessary tool for corporate memory.

A first prototype of C code generation for real-time simulation was designed last year. In 2005, we mainly worked on

scheduling of sub-model evaluation in large models,

instantiation of generic models,

optimization of the generated C code,

and we extended our prototype to a more robust and complete tool. Our long-term objectives are to use the computational model

to perform an automated sensitivity analysis (to find out which of the model's parameters have the largest influence on the output)

to determine the parameters' possible values, so that the model's output satisfies given constraints

A distance equation states that the distance between two points in an n-dimensional space is known, while the coordinates of at least one of these points are unknown. Systems of distance equations frequently occur in robotics, but also in other domains such as molecular biology.

Within the `ALIAS` library, we have developed a solver dedicated to distance equations. It features specific versions of most of the theorems that we use within our general purpose solvers, revisited for distance equations, leading to stronger versions of the theorems. For example, the unicity regions (i.e. the regions in which there is a unique solution that may be safely computed using a given iterative scheme) obtained with the Kantorovitch theorem or the Neumaier exclusion theorem are usually about 20 times larger than those obtained with the general purpose versions. This solver was used, for example, to efficiently solve the difficult problem of the direct kinematics of parallel robots.

This solver was further improved recently, through work done on the interval Newton scheme embedded within it.

The aim of our research is to propose and study new approaches for solving systems of distance equations with uncertainties in the parameters. In this case, the problem does not have isolated solutions; its solution set consists of a hyper-volume. Our objective is to approximate the solution set as accurately as possible. A classical approach for solving systems with uncertainties combines a filtering phase with a bisection phase in order to compute an outer approximation of the solution set.

We proposed and studied two approaches aimed at improving the results of the classic algorithms applied to distance equations. The first approach performs a space separation algorithm (SSA) in order to isolate the different solution domains. This algorithm improves our knowledge of the geometry of the solution space, but does not reduce the computation time.

The second approach relies on a method for detecting inner boxes (boxes that are included inside the solution space of the uncertain distance equations). Different methods for detecting inner boxes, based on classic interval analysis, quantifier elimination and generalized interval arithmetic, were implemented and analyzed. Preliminary results show that generalized interval arithmetic seems to be the most promising, even in terms of computation time.

All the methods studied up to now give sufficient but not necessary conditions for detecting inner boxes in a system of distance equations. We are currently studying a new test based on generalized interval arithmetic and geometric considerations, in order to obtain necessary and sufficient conditions, and we are also studying its general extension.

Classic interval theory makes it possible to deal rigorously with uncertain quantities, such as measurement errors. The modal intervals theory provides a richer interpretation, which allows different quantifiers to be attached to the uncertainties. As a consequence, it is a promising framework for the approximation of quantified constraints.

In spite of its promising applications, modal intervals theory is not widely used, due to its complicated construction. We proposed a new formulation of modal intervals theory, which uses generalized intervals (intervals whose bounds are not constrained to be ordered). This new formulation makes the theory easier to understand and use, while allowing us to introduce new developments: namely, a linearization process that is compatible with the concepts of modal intervals theory. This led to two new analysis tools: a new mean-value extension to generalized intervals, and an extension of the interval Newton operator which deals with parametric systems of equations whose parameters can be quantified. Together with this generalized Newton operator, a generalized Hansen-Sengupta operator was designed. Although all these developments are still theoretical, we believe that they will be useful for the approximation of quantified constraints.

We have continued developing the C++ `INCOP` library, which implements incomplete methods for solving combinatorial optimization problems. This library offers classic local search methods, such as simulated annealing and tabu search, as well as a population-based method, Go With the Winners. Several problems were encoded, including constraint satisfaction problems, graph coloring, and frequency assignment.

A new and simple local search meta-heuristic named `IDW` (Intensification Diversification Walk) was developed last year. The next move is selected by exploring only one part of the neighborhood, and the first move leading to a better or equal configuration is chosen. When no such move is found in that part of the neighborhood, the algorithm selects a worsening move. The new version of the algorithm developed this year has two parameters. The first parameter is the size of the part of the neighborhood that is examined at each move (intensification). The second parameter defines which worsening neighbor, called the spare neighbor, to choose for escaping local minima when no better or equal neighbor has been found: its value indicates among how many visited worsening neighbors the least worsening one is selected, and consequently how much the evaluation function may be deteriorated to escape a local minimum (diversification).
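The move selection described above can be sketched as follows (an illustrative sketch, not the `INCOP` code; for determinism the examined part of the neighborhood is taken as a prefix, whereas a real implementation would draw it at random):

```python
def idw_step(current, cost, neighbors, intens_size, spare_size):
    """One IDW move. intens_size is the intensification parameter
    (how many neighbors are examined per move); spare_size is the
    diversification parameter (among how many visited worsening
    neighbors the least worsening "spare" one is selected)."""
    base = cost(current)
    spare, worsening_seen = None, 0
    for n in neighbors[:intens_size]:
        c = cost(n)
        if c <= base:
            return n            # first better-or-equal move is taken
        if worsening_seen < spare_size and (spare is None or c < cost(spare)):
            spare = n           # least worsening neighbor so far
        worsening_seen += 1
    # no improving move found: escape the local minimum with the spare
    return spare if spare is not None else current
```

With `cost = abs` and `current = 5`, the neighbors `[6, 4, 7]` yield the improving move `4`, while the all-worsening neighborhood `[7, 6, 8]` yields the least worsening spare neighbor `6`.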

An automatic tuning tool determines the parameter values. This meta-heuristic gave good results on several benchmarks (graph coloring, car sequencing, frequency assignment, etc.).

Meta-heuristics are considered interesting methods for solving combinatorial optimization problems when complete methods are intractable. The aim of this study is to extend these methods to global optimization problems with continuous variables, for which classic complete methods such as branch and bound are not efficient enough. One key issue is the definition of a neighborhood. We implemented a first prototype in Java with the main common meta-heuristics (simulated annealing, tabu search) and tested them on benchmarks.

``Geometric constraint solving'' is a challenging problem appearing in several fields, such as industrial drawing and computer aided design (CAD). One version of this problem can be defined as follows: find one or several solutions (i.e., position, orientation, dimension) of a system of geometric objects (e.g., points, lines, planes) subject to geometric constraints (e.g., incidences, parallelism, distances, angles). The corresponding equation system is generally nonlinear and sometimes non-generic (i.e., with singularities). Symbolic and numerical methods (Newton-Raphson, continuation) may have difficulties solving large real-life geometric constraint systems.
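A toy instance of such a system (an illustrative sketch; the points, distances and starting point are ours): placing a point C at given distances from two fixed points A = (0, 0) and B = (4, 0) yields two quadratic equations with two mirror solutions, and a plain Newton-Raphson iteration only reaches the one selected by its starting point.

```python
def solve_point(d1, d2, x0=1.0, y0=1.0, iters=50):
    """Find C = (x, y) with |CA| = d1 (A at the origin) and |CB| = d2
    (B = (4, 0)) by a plain 2x2 Newton-Raphson iteration."""
    x, y = x0, y0
    for _ in range(iters):
        f1 = x * x + y * y - d1 * d1
        f2 = (x - 4.0) ** 2 + y * y - d2 * d2
        # Jacobian of (f1, f2) with respect to (x, y)
        j11, j12 = 2 * x, 2 * y
        j21, j22 = 2 * (x - 4.0), 2 * y
        det = j11 * j22 - j12 * j21
        if abs(det) < 1e-14:   # singular configuration (e.g. y = 0)
            break
        # Solve J . (dx, dy) = (f1, f2) by Cramer's rule
        dx = (f1 * j22 - f2 * j12) / det
        dy = (j11 * f2 - j21 * f1) / det
        x, y = x - dx, y - dy
    return x, y
```

Starting from (1, 1) with d1 = d2 = sqrt(8) converges to (2, 2); starting below the axis yields the mirror solution (2, -2). Interval-based solvers, by contrast, can certify that all solutions are found.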

The most common approach to this problem is to adopt a ``divide and conquer'' strategy and to solve the resulting equation subsystems with numerical methods or, preferably, with hard-coded procedures corresponding to geometric theorems. Our long-term effort in this domain has resulted in the following original contributions:

we designed a graph-based decomposition algorithm, called `GPDOF`, and applied it to large scale geometric constraint systems extracted from a specific scene modeling problem in computer vision (an article on `GPDOF` has been submitted).

an algorithm called Inter-Block Backtracking (`IBB`) can construct a total solution by combining the partial solutions found in the subsystems by an interval-based solver, whatever decomposition of the system is used. A new implementation of `IBB`, using the `IcosAlias` interval-based constraint solver, was developed this year.

We enjoy well-established working relations with the national research community in computational geometry and CAD. Moreover, we are currently preparing a comprehensive survey of decomposition techniques for geometric constraint systems, in collaboration with D. Michelucci (Dijon), P. Schreck and P. Mathis (Strasbourg), and C. Jermann (Nantes).

Long-term research will now focus on gathering all the available algorithmic bricks (decomposition algorithms, interval-based computation techniques, etc.) in order to develop a new geometric constraint solver. This work is led by the COPRIN project in collaboration with several French researchers.

Filtering variable domains usually means narrowing their bounds, but some basic operators (such as projections) may also reveal ``gaps'' within intervals, i.e., subdomains that contain no solution to the problem at hand. It is interesting to know whether these gaps can be exploited by propagation algorithms in order to achieve arc-consistency.
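For instance (a minimal sketch, with a hypothetical function name), projecting the constraint x² ∈ [y_lo, y_hi] onto x naturally produces a union of two intervals; keeping only their hull would hide the gap:

```python
import math

def project_square(y_lo, y_hi):
    """Project x**2 in [y_lo, y_hi] (with 0 <= y_lo <= y_hi) onto x.
    Returns a union of intervals: for [1, 4] it returns [-2, -1] and
    [1, 2], whereas the hull [-2, 2] would hide the gap (-1, 1) in
    which no solution exists."""
    r_lo, r_hi = math.sqrt(y_lo), math.sqrt(y_hi)
    if r_lo == 0.0:
        return [(-r_hi, r_hi)]            # no gap in this case
    return [(-r_hi, -r_lo), (r_lo, r_hi)]
```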

As a consequence, we pursued and finalized our work on box-set consistency, a method that addresses this issue by combining hull consistency with a gap-guided splitting strategy.

We also investigated a very different approach that avoids the subdivision problem by representing domains as unions of intervals. We devised a new class of partial consistencies over continuous CSPs related to unions of intervals, and derived an algorithm called *interval global consistency (IGC)*. The originality of this algorithm lies in the BDD-like data structure used to maintain the *union* consistency, while the main loop remains as simple as that of the classic HC4 algorithm.

We introduced a new filtering algorithm for handling systems of quadratic equations and inequalities. Such constraints are widely used to model distance relations in numerous application areas, ranging from robotics to chemistry. Classic filtering algorithms are based upon local consistencies and handle the constraints independently; they are thus often unable to achieve a significant pruning of the variable domains of quadratic constraint systems. We therefore introduced a global filtering algorithm that works on a tight linear relaxation of the quadratic constraints, the Simplex algorithm then being used to narrow the domains. Since most implementations of the Simplex use floating point numbers and are thus numerically unsafe, we provided a procedure to generate a safe linearization, and we used a procedure worked out by Neumaier and Shcherbina to obtain a safe objective value when calling the Simplex algorithm. Together, these two procedures prevent the Simplex algorithm from removing any solution while filtering linear constraint systems. Experimental results on classic benchmarks show that this new algorithm yields a much more effective pruning of the domains than local consistency filtering algorithms.
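The kind of linear relaxation involved can be illustrated on a single square term (a standard tangent/secant construction, shown here without the safe outward-rounding step; the function name is ours):

```python
def square_relaxation(l, u):
    """Linear relaxation of y = x**2 for x in [l, u].
    Returns (tangents, secant), where each pair (a, b) encodes the
    line a*x + b: the tangents are under-estimators (a*x + b <= x**2)
    and the secant is an over-estimator (x**2 <= a*x + b)."""
    tangents = [(2 * l, -l * l), (2 * u, -u * u)]  # tangents at l and u
    secant = (l + u, -l * u)                       # chord through the endpoints
    return tangents, secant
```

Each quadratic term is thus sandwiched between linear constraints, and the Simplex algorithm can tighten the variable bounds on the relaxation; the safe version additionally rounds the coefficients outward so that no solution of the original system can be lost.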

Classic methods for solving numerical constraint satisfaction problems (CSPs) are based on branch and prune algorithms, a dichotomous enumeration process interleaved with consistency filtering algorithms. In many interval solvers, the pruning step is based on local consistencies (hull-consistency, box-consistency) or partial consistencies (kB-consistencies, bound-consistency). The associated pruning algorithms may identify gaps within some domains, i.e., inconsistent intervals that are strictly included in the domain. However, the identification of such gaps is generally not used in the bisection strategy, neither when choosing the variable to bisect nor when choosing the cutting point within its range. We introduced a search strategy, named `MindTheGaps`, which takes advantage of the gaps identified during the filtering process. Gaps are collected with a negligible overhead, and are used both to select the variable to be bisected and to define relevant cutting points within its domain. Splitting the domain by removing such gaps definitely reduces the search space, allows one to discard some redundant solutions, and furthermore helps the search algorithm to isolate distinct solutions. First experimental results show that `MindTheGaps` significantly improves the performance of the search process.
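The core idea of splitting on a gap rather than at the midpoint can be sketched as follows (an illustrative sketch, not the actual `MindTheGaps` implementation; the choice of the widest gap is ours):

```python
def split_on_gap(domain, gaps):
    """Split an interval domain on a known inconsistent gap instead of
    bisecting at the midpoint.

    domain: (lo, hi); gaps: list of open intervals (g1, g2) proven to
    contain no solution. Returns the two sub-domains obtained by
    removing the widest gap strictly inside the domain, or a plain
    midpoint bisection when no usable gap is known."""
    lo, hi = domain
    inside = [(a, b) for (a, b) in gaps if lo < a and b < hi]
    if not inside:
        mid = (lo + hi) / 2.0
        return [(lo, mid), (mid, hi)]
    a, b = max(inside, key=lambda g: g[1] - g[0])   # widest gap
    return [(lo, a), (b, hi)]
```

For the domain [0, 10] with gaps (2, 3) and (5, 9), the split produces [0, 5] and [9, 10]: the search space shrinks by the width of the removed gap instead of staying the same size as with a midpoint bisection.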

We worked with an automotive supplier on the design of a wire-based measurement system that measures the 6D pose of a motor element during extensive car trials on a special track. The purpose of these trials is to record the motions of the motor element during a typical car run, and then to submit this element to similar motions during extensive tests on dedicated vibration machines. Our task was to determine a design of the measurement system satisfying the workspace and accuracy requirements, subject to stringent constraints on the overall size of the system.

To improve the production of numerical (flight) simulators from aerodynamic models, Airbus France is interested in methods and tools of this kind.

In 2003, a two-year contract for prototyping code generation features was signed. In 2005, this contract was extended for another two-year period, in order to improve code generation and to start working on the sensitivity analysis problem. For confidentiality reasons, no further details can be given here.

Amadeus is a company that manages flight fares for several airlines. A collaboration was started in 1998 with a view to developing new optimization algorithms, based on constraint programming and graph methods, for fare quotation problems. We are also heavily involved in developing the test suite used for evaluation by the developers of Amadeus.

Several experimental prototypes were developed and many successful ideas have been embedded in software sold by Amadeus to travel agencies. In 2005, 12 days were spent in consulting work.

The aim of this project was to study the calibration of a deployable mechanism used by a satellite for the positioning of an Earth Observation Telescope. The project was extended for a second year.

The use of floating point numbers to represent real numbers is the cause of a significant number of failures and potential faults in software for critical systems. Modeling such systems, combined with model checking, proof, and test case generation techniques, enhances the quality of the development process and improves the reliability of systems that embed software. Unfortunately, the currently available approaches, notations and techniques do not really take floating point numbers into account, although the usual way to compute over real numbers on a computer is to use floating point numbers. The main difficulty in getting a correct account of floating point numbers comes from:

the poor properties of floating point arithmetic,

the dependency of floating point properties on the computer architecture (even when the floating point unit is IEEE 754 compliant).
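Two lines suffice to illustrate the first point (standard IEEE 754 double behavior): floating point addition is not associative, so algebraic reasoning that is valid over the reals is unsound over floats.

```python
# Two ``poor properties'' of IEEE 754 double arithmetic.

# 1. Decimal literals are not represented exactly:
print(0.1 + 0.2 == 0.3)     # False

# 2. Addition is not associative: near 1e16 the spacing between
#    consecutive doubles is 2, so adding 1.0 is absorbed by rounding.
left = (1e16 + 1.0) + 1.0   # each 1.0 rounds away: result is 1e16
right = 1e16 + (1.0 + 1.0)  # 1e16 + 2.0 is exactly representable
print(left == right)        # False
```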

The aim of the V3F ACI project is to provide the tools required to evaluate the representation of real numbers by floating point numbers during the software validation and checking phases. More precisely, our aim is to develop a framework, relying on CSP approaches, for validating program computations against hypotheses coming from the modeling phase. Constraint methods have been successfully used in many applications related to software validation and checking. They have already shown their capabilities in automatic test case generation, in model checking, and in code analysis. However, these CSP techniques are still restricted to integer, rational and real numbers, and the challenge is thus to devise solving techniques able to handle floating point numbers. We are developing such solving techniques in order to validate and check critical software, and we are studying the use of the resulting solver for model checking, automatic test case generation, and static code checking.

The V3F ACI project is a joint research project with:

LIFC, Laboratoire d'Informatique de l'Université de Franche-Comté (CNRS - INRIA),

IRISA, Institut de Recherche en Informatique et Systèmes Aléatoires, Rennes,

CEA, Commissariat à l'Énergie Atomique, Saclay, Paris.

The DANOCOPS project aims at exploring an innovative technique to automatically detect non-conformities between a program and its specifications. Our approach is based on constraint programming techniques: one CSP is built from the program while another CSP is built from the specifications, and information extracted from these two CSPs is used to exhibit non-conformities. While constraint programming has shown its ability in the field of test case generation, either structural or functional, there has been no attempt, to the best of our knowledge, to take advantage of constraint programming to check that a program conforms to its specifications. While full conformity checking seems to be out of the reach of constraint programming, the detection of non-conformities appears to be a more attainable aim. The DANOCOPS RNTL project is a joint research project with:

THALES SYSTEMES AEROPORTES, Paris,

AXLOG ingénierie, Arcueil,

LIFC, Laboratoire d'Informatique de l'Université de Franche-Comté (CNRS - INRIA),

LSR, Laboratoire Logiciels Systèmes Réseaux (UMR 5526), Saint Martin d'Hères.

H. Batnini and G. Chabert participated in Journées Francophones de programmation par contraintes (JFPC 2005), Lens, June 8-10.

D. Daney participated in the "Journées Nationales de la Recherche en Robotique (JNRR)", Guidel, October 5-7.

A. Goldsztejn presented his work in two sessions of the French working group MEA (http://www-lag.ensieg.inpg.fr/gt-ensembliste/).

B. Neveu presented a paper at ROADEF 2005, 6ème congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision, Tours, February 14-16. He also participated in the Journées Francophones de Programmation par Contraintes (JFPC 2005), Lens, June 8-10, and in the ``AS Contraintes géométriques'' workshops in Saint-Ouen (January 6-7) and Marseille (June 2-3).

M. Rueher participated in the ``ACI Sécurité'' workshop.

G. Trombettoni participated in the Journées Francophones de Programmation par Contraintes (JFPC 2005), Lens, June 8-10, and, together with G. Chabert, in the ``AS Contraintes géométriques'' workshops in Saint-Ouen (January 6-7) and Marseille (June 2-3).

H. Batnini participated in the Eleventh International Conference on Principles and Practice of Constraint Programming (CP2005), Sitges, Spain (October 1-5).

D. Daney participated in IEEE ICRA (Barcelona) and in the 8th International Workshop on Computer Algebra in Scientific Computing, September 12-16, 2005, Kalamata, Greece.

A. Goldsztejn presented a paper at the QCP2005 workshop, held in conjunction with the CP2005 conference (October 1-5, 2005, Sitges, Spain).

C. Grandon presented a paper at the CP2005 conference, October 1-5, Sitges, Spain.

J-P. Merlet has presented papers at IEEE ICRA (Barcelona), Computational Kinematics workshop (Cassino), 2nd Int. Colloquium (Braunschweig), Robotics: Science and Systems (MIT, Boston), Int. Symp. on Robotics Research (San Francisco), Interval methods and their applications (Copenhagen), Applications of interval analysis (Sitges).

B. Neveu participated in the CP2005 conference, Sitges, Spain (October 1-5), and was an invited speaker at the Franco-Japanese Workshop on Constraint Programming, November 2005, Le Croisic, France.

Y. Papegay gave a presentation at the Computational Kinematics workshop CK2005 in Cassino (Italy), attended the 7th International Mathematica Symposium (IMS'2005) in Perth (Western Australia) in August, and gave a talk at the Wolfram 2005 Technology Conference in October in Urbana-Champaign (Illinois, USA) to present his ongoing work on interfacing Mathematica with `ALIAS`.

M. Rueher participated in the CP-AI-OR'05 and CP2005 conferences, and was an invited speaker at the 8th International Conference on Computer Graphics and Artificial Intelligence (3IA'2005).

O. Pourtallier presented a paper at the ``Optimization Days'' held at GERAD, Montreal, May 2005, and participated in the workshop organized for the 25th anniversary of GERAD, Montreal, May 2005.

G. Trombettoni participated in the CP2005 conference on October 1-5, Sitges, Spain.

The ``Conseil Général'' has allocated a grant of 10,000 euros for the development of the wire robot for our virtual reality workbench.

Our proposal `ARES` for a STREP ``Adventure'' has been accepted by the EU. The partners of this project are: Scuola Superiore Sant'Anna, Pisa (coordinator), ETH Zürich, Universitat de Barcelona, and Centre de Recerca en Bioelectrònica i Nanobiociència.

The purpose of this project is to investigate the concept of, and develop a prototype for, a system for endoluminal surgery based on a fleet of modular micro-robots that may be connected to form a reconfigurable mechanism. The project will primarily focus on the critical theoretical and technological issues involved in implementing this concept, including the kinematic analysis of the reconfigurable internal mechanisms, locking/unlocking systems, control and communication, and the design, fabrication and integration of prototype sensing and actuation modules. `ARES` will start at the beginning of 2006.

D. Daney was a member of the program committee of the ``Journées Nationales de la Recherche en Robotique (JNRR)''.

J-P. Merlet holds the position of CISC (Advisor for Scientific Information and Communication) at INRIA Sophia-Antipolis, is a member of the INRIA Evaluation Board, a substitute member of the ``commission de spécialistes'' (61st section) of Nice University, Chairman of the IFToMM (International Federation for the Theory of Machines and Mechanisms) Technical Committee on ``Computational Kinematics'' (until September 2005), and Chairman of the French section of IFToMM. In the latter capacity, he proposed during the IFToMM World Congress (the largest conference in the field of mechanism theory, held every 4 years) that France host the 2007 World Congress; this proposal was accepted and J-P. Merlet will be the General Chairman. He was an Associate Editor of IEEE Transactions on Robotics (until June 2005), is an Associate Editor of Mechanism and Machine Theory (since September 2005), and has been a member of the program committees of the IEEE Int. Conf. on Robotics and Automation and of the Int. Symp. on Robotics Research, while serving as session chair at these conferences. He was also a member of the program committee of the workshop Applications of Interval Analysis, a satellite workshop of IntCP. He has been a reviewer for the ASME J. of Mechanical Design, the Int. J. of Robotics Research, IEEE Trans. on Systems, Man and Cybernetics, and Mechanism and Machine Theory, and a book reviewer for Meccanica. He is a member of an informal advisory committee of academic and industrial partners that promotes nanobiotechnology at Sophia-Antipolis. He has also launched, in collaboration with the ICARE project, a prospective study on the future of robotics in Sophia-Antipolis.

M. Rueher and J-P. Merlet are members of the `Ensemble` working group, which promotes the use of interval analysis in the field of control theory.

B. Neveu has been a member of the program committees of JFPC 2005 and of Soft-2005 (7th International Workshop on Preferences and Soft Constraints), and a reviewer for the conferences CPAIOR 2005, CP 2005, SAC 2006, EA05 and GECCO 2005, and for the journal IJPLM.

Y. Papegay is a member of the ``Commission de Spécialistes numéro 4'' of the University of French Polynesia and a member of the program committee of the ``Journées Nationales de Calcul Formel'' held in November at Luminy.

M. Rueher was Co-Chair of the RCA'2005 (Reliable Computations and their Applications) technical track at the 20th ACM Symposium on Applied Computing (SAC'2005), and a member of the program committees of the CP-AI-OR'05 and CP2005 conferences. He was a member of the committee for the Specif thesis prize 2005. He is a member of the national committee CNU 27 and the President of the ``commission de spécialistes'' (27th section) of Nice University.

G. Trombettoni served on the program committees of the CPAIOR'05 and JFPC 2005 conferences, and has been a reviewer for the CP, CPAIOR, JFPC, SAC and SOCG conferences and for the CARI journal.

This year we worked, with the support of INRIA's multimedia service, on the scenario of a scientific movie that will illustrate the use of interval analysis for system solving. The interval analysis community indeed experiences difficulties when trying to demonstrate the advantages of this tool to other scientific or industrial communities, although, ironically, the mathematical theory underlying interval analysis can easily be illustrated graphically. The movie will thus illustrate the interval analysis-based solution of a system of two equations in two unknowns; it should be ready at the beginning of 2006.

H. Batnini taught image processing (Licence I), system administration (Licence I and II) and databases (Licence III).

G. Chabert taught algorithmics in Java (Licence I), functional programming with Scheme (Licence II) and compilers (Licence III).

D. Daney taught 15 hours on medical robotics in the DESS ``Génie Bio-médical'' at UNSA.

J-P. Merlet taught 6 hours of robotics at ISIA and was an instructor during the IEEE Robotics and Automation Society International Robotics course in Tokyo.

O. Pourtallier taught 6 hours on game theory in the OSE Master at École des Mines de Paris, Sophia Antipolis; 9 hours on game theory at the CIMPA-UNESCO summer school ``Modèles et outils mathématiques pour l'analyse et la régulation des systèmes halieutiques'', Nouadhibou, Mauritania, July 2005; and 6 hours on optimization in the DESS IMAFA at UNSA.

M. Rueher, B. Neveu and G. Trombettoni have given lectures on constraint programming in the Computer Science Master at UNSA (30 hours).

M. Rueher has taught databases as well as logic programming and Prolog.

G. Trombettoni is an assistant professor in computer science at IUT R&T (networks and telecoms) of Sophia Antipolis.

J-P. Merlet has been a jury member of 2 PhD and 1 HDR defenses.

J-P. Merlet and D. Daney act as advisors for the post-doctorate of M. Gouttefarde.

B. Neveu has been reviewer and jury member of 1 PhD defense.

M. Rueher has been a jury member of 4 PhD and 4 HDR defenses.

M. Rueher is the head of the Master Degree Program STIC (Spécialité ISI) and of the 3rd year of ESSI.

**Current PhD thesis:**

H. Batnini, *Contraintes globales sur le continu*, University of Nice-Sophia Antipolis (to be defended in December).

G. Chabert, *Langage de pilotage et de paramétrage d'algorithmes de résolution de contraintes par intervalles*, University of Nice-Sophia Antipolis.

A. Goldsztejn, *Définition et Applications des Extensions des Fonctions Réelles aux Intervalles Généralisés. Révision de la Théorie des Intervalles Modaux et Nouveaux Résultats*, University of Nice-Sophia Antipolis (defended in November).

C. Grandon, *Résolution de systèmes d'équations avec incertitudes*, University of Nice-Sophia Antipolis.