COPRIN is a joint project between INRIA, CNRS, the University of Nice-Sophia Antipolis (UNSA), ENPC and Certis. Its scientific objective is to develop and implement systems solving and optimization algorithms based on constraint propagation methods, interval analysis and symbolic computation, with interval arithmetic as the primary tool.
We are interested in real-valued constraint satisfaction problems (f(X) = 0, f(X) <= 0), in optimization problems and in the proof of the existence of properties (for example, there exists X such that f(X) = 0, or there exist two values X1, X2 such that f(X1) > 0 and f(X2) < 0).
Solutions will be searched for within a finite domain (called a box) which may be either continuous or mixed (i.e. some variables must belong to a continuous range while other variables may only take values within a discrete set). An important point is that we aim to find all the solutions within the domain whenever the computer arithmetic allows it: in other words, we are looking for certified solutions.
Our research aims to develop algorithms that can be used for any problem or that are specific to a given class of problems, especially problems arising from application domains in which we have internal expertise (such as mechanism theory and software engineering).
Implementation of the algorithms will be performed within the framework of the generic software tool IcosAlias, currently under development, whose purpose is to allow one to design and test solving algorithms obtained as the combination of various software modules. IcosAlias will be based on the existing libraries ICOS and ALIAS.
As a theoretical complexity analysis of the solving algorithms is usually extremely difficult, and as the usual worst-case analysis leads to exponential complexity on problems that are not representative of the application cases we consider, the efficiency of the algorithms will be evaluated experimentally through IcosAlias on various realistic benchmark examples.
Dissemination is also an essential component of our activity, as interval analysis based methods are not sufficiently known in the engineering and academic communities.
The scientific objective of the COPRIN project is to develop and implement systems solving and optimization algorithms based on constraint propagation methods, interval analysis and symbolic computation, with interval arithmetic as the primary tool.
The results obtained with these algorithms are certified in the sense that all solutions are obtained and can be calculated with an arbitrary accuracy. Furthermore, some of our algorithms allow us to deal with systems involving uncertain coefficients.
A system will be constituted by a set of relations that may use all the usual mathematical operators and functions (hence we may deal, for example, with the relation sin(x + y) + log(cos(e^x) + y^2) <= 0).
We are interested in real-valued constraint satisfaction problems (f(X) = 0, f(X) <= 0), in optimization problems and in the proof of the existence of properties (for example, there exists X such that f(X) = 0, or there exist two values X1, X2 such that f(X1) > 0 and f(X2) < 0).
Solutions will be searched for within a finite domain (called a box) which may be either continuous or mixed (i.e. some variables must belong to a continuous range while other variables may only take values within a discrete set). An important point is that we aim to find all the solutions within the domain whenever the computer arithmetic allows it: in other words, we are looking for certified solutions.
Our approach is to develop various operators that will be applied in sequence on a box:
exclusion operators: these operators determine that there is no solution to the problem within a given box
contractors: these operators may reduce the size of the box i.e. decrease the width of the allowed ranges for the variables
existence operators: they allow one to determine that there is a unique solution within a given box and are usually associated with a numerical scheme that enables one to compute this solution in a safe way
If a given box is not rejected by the exclusion operators and is not modified by the other operators, then we will bisect one of the variables in order to create two new boxes that will be processed later on. Methods for choosing the bisected variable are also clearly within the scope of the project.
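The branch-and-prune scheme described above (exclusion test, then bisection) can be sketched in a few lines. The sketch below is purely illustrative, not the actual IcosAlias operators: it uses a naive interval arithmetic without outward rounding, a toy system (x^2 + y^2 = 1, x = y) and a simple largest-variable-first bisection rule.

```python
# Minimal branch-and-prune sketch: exclusion by natural interval evaluation,
# then bisection of the widest variable. Intervals are (lo, hi) tuples.

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
def contains_zero(a): return a[0] <= 0.0 <= a[1]

def f(box):
    """Interval evaluation of the toy system x^2 + y^2 - 1 = 0, x - y = 0."""
    x, y = box
    return [isub(iadd(imul(x, x), imul(y, y)), (1.0, 1.0)), isub(x, y)]

def solve(box, eps=1e-6):
    todo, candidates = [box], []
    while todo:
        b = todo.pop()
        if any(not contains_zero(g) for g in f(b)):    # exclusion operator
            continue
        widths = [hi - lo for lo, hi in b]
        i = widths.index(max(widths))
        if widths[i] < eps:                            # small enough: keep
            candidates.append(b)
            continue
        lo, hi = b[i]
        mid = 0.5 * (lo + hi)                          # bisection
        for half in ((lo, mid), (mid, hi)):
            nb = list(b)
            nb[i] = half
            todo.append(nb)
    return candidates

boxes = solve([(-2.0, 2.0), (-2.0, 2.0)])
# candidate boxes cluster around the two solutions (s, s) and (-s, -s),
# with s = 1/sqrt(2)
```

A contractor would be applied between the exclusion test and the bisection; it is omitted here to keep the sketch short.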
Our research aims to develop operators that can be used for any problem or that are specific to a given class of problems, especially problems arising from application domains in which we have internal expertise (such as mechanism theory and software engineering). Furthermore, we will study symbolic computation based methods:
to develop a user-friendly interface that automatically generates an executable program, runs it and returns the result to the interface
to analyze the semantics and syntax of the relations involved in a problem in order to automatically generate specific operators or to obtain a better interval evaluation of the relations (as interval arithmetic is very sensitive to the syntax of the relations)
to allow for the calculation of the solutions with an arbitrary accuracy: a certified interval solution will first be obtained through a compiled program, which is much more efficient than its symbolic computation equivalent, and a symbolic computation procedure will then be used to calculate the solution up to the desired accuracy
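The sensitivity of interval arithmetic to the syntax of the relations can be illustrated with a toy sketch (these few lines are illustrative, not the ALIAS evaluator): evaluating x*x with the generic product rule ignores the dependency between the two occurrences of x, while recognizing the expression as a square gives the exact range.

```python
def imul(a, b):
    """Generic interval product: treats both operands as independent."""
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def isqr(a):
    """Exact interval square, exploiting the fact that both factors are x."""
    lo, hi = a
    m = max(abs(lo), abs(hi))
    return (0.0, m*m) if lo <= 0.0 <= hi else (min(lo*lo, hi*hi), m*m)

x = (-1.0, 1.0)
print(imul(x, x))   # (-1.0, 1.0): pessimistic, the dependency is lost
print(isqr(x))      # (0.0, 1.0): the exact range of x^2 over [-1, 1]
```

This is exactly the kind of syntactic rewriting a symbolic analysis of the relations can perform automatically.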
While the methods developed in the project may be used for a very broad set of application domains (for example, a difficult problem in quantum mechanics has been addressed this year), it is clear that the size of the project does not allow all of them to be addressed. Hence we have decided to focus our applicative activities on two domains in which we already have expertise: mechanism theory (including robotics) and software engineering.
mechanism theory: our research focuses on the optimal design and geometrical modeling of mechanisms, especially for the machine-tool industry, automotive suspensions and medical robotics. As some problems in mechanism theory and molecular chemistry are basically equivalent, the latter will also be addressed
software engineering: our research focuses on the automatic generation of test data sets, i.e. generating input data so that a given step in a software module will be executed
The ALIAS library (Algorithms Library of Interval Analysis for Systems) is a collection of procedures based on interval analysis for systems solving and optimization. Its development started in 1998.
ALIAS is constituted of two parts:
ALIAS-C++: the C++ library (90,000 lines of code) which is the core of the algorithms
ALIAS-Maple: the Maple interface for ALIAS-C++ (40,000 lines of code). This interface allows one to specify a solving problem within Maple and to get the results within the same Maple session. The role of this interface is not only to generate the C++ code automatically, but also to perform an analysis of the problem in order to improve the efficiency of the solver. Furthermore, a distributed implementation of the algorithms can be used directly from within the interface
Our effort this year has focused on:
improving the implementation of the 2B filtering method. This method may decrease the size of the ranges of the unknowns by rewriting an equation as an equality between a left term involving only one unknown x and a right term. Such an equation has a solution only if the interval evaluations of the left and right terms have a non-empty intersection, and by applying the inverse function of the left term to this intersection we may improve the interval for x. Up to now, only algebraic terms of degree 1 and 2 were considered as possible left terms; G. Chabert has written an implementation that handles almost any expression for the left term.
Linear interval systems play an important role in our algorithms and in practical applications. We have implemented an improved linear interval analysis solver (see the section on interval linear systems) that has been incorporated into ALIAS
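The 2B projection step described in the first item above can be sketched for the relation x^2 = y. The helper below is a hypothetical illustration, restricted to x >= 0 and a non-negative right-hand interval to avoid handling the union of the two branches of the inverse; the actual implementation manages general terms and both branches.

```python
import math

def contract_square_eq(x, y):
    """2B-style projection for the relation x**2 = y, with x >= 0 (sketch).
    Returns the contracted domains of x and y, or None if inconsistent."""
    # forward step: intersect y with the interval evaluation of the left term
    y2 = (max(y[0], x[0] ** 2), min(y[1], x[1] ** 2))
    if y2[0] > y2[1]:
        return None          # empty intersection: no solution in the box
    # backward step: apply the inverse function (sqrt) of the left term
    x2 = (max(x[0], math.sqrt(y2[0])), min(x[1], math.sqrt(y2[1])))
    if x2[0] > x2[1]:
        return None
    return x2, y2

print(contract_square_eq((0.0, 10.0), (4.0, 9.0)))   # ((2.0, 3.0), (4.0, 9.0))
```

Applying such projections in turn for every unknown of every equation, until a fixed point is reached, is the essence of 2B filtering.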
For dissemination purposes, Y. Papegay has also written a web interface for two of our solvers. This on-line service, available since the end of September 2004, allows one to submit a set of up to 5 equations/inequalities to these solvers and possibly get all the solutions of the system (see http://www-sop.inria.fr/coprin). Approximately 100 systems were submitted in October 2004.
As mentioned previously, our algorithms are well suited to a distributed implementation. The ALIAS-Maple interface already allows one to use our algorithms in that way on a cluster, using the master-slave paradigm and the message-passing system PVM. But this paradigm is limited to a relatively small number of slaves. We have started a cooperation with the OASIS, APACHE and PARIS projects to use a grid-computing approach, with as application example a difficult problem in quantum mechanics (see the corresponding section).
The current version of the ALIAS library is available through the Web page http://www-sop.inria.fr/coprin/logiciels/ALIAS.
ICOS rigorously solves nonlinear problems modeled with the AMPL mathematical programming language (see www.ampl.com). ICOS computes a small and safe box for each solution of such systems. The constraint-solving algorithm of ICOS is based on a combination of interval analysis methods, constraint programming and linear programming relaxation techniques. ICOS has been evaluated on a variety of benchmarks from kinematics, mechanics and robotics. It outperforms interval methods as well as CSP solvers, and it compares well with state-of-the-art optimization solvers. An on-line implementation of ICOS will soon be available through the web site of the COPRIN project.
Our current software ALIAS and ICOS provide various interval analysis modules but lack flexibility for constructing and testing new solvers.
Icosalias is a new platform developed by the project. It can be used as a classical interval-based problem solver, but it also aims at being a research support tool, providing means to integrate and combine different algorithms via a simple language or interface. The platform has been augmented with the following features:
Filtering modules: 2B, 3B, univariate Newton and box consistency.
A library for solving interval linear systems (classical and extended methods, preconditioning)
Newton methods (and handling of over-constrained problems)
A more flexible interface that accepts any valid mathematical expression (including all trigonometric and hyperbolic functions).
Generic kernel structures to let the user choose between automatic evaluations (projection, differentiation) and hard-coded procedures.
An extension of the search process to deal with parameters, coefficients, inner boxes and border boxes for non 0-dimensional systems.
A statistics tool for evaluating performance
Interesting results were obtained on several benchmarks. Icosalias was also used to solve significant problems, notably a robot calibration problem based on real experimental data.
We have continued the development of the C++ INCOP library of incomplete methods for solving combinatorial optimization problems. This library offers classical local search methods such as simulated annealing and tabu search, as well as a population-based method, Go With the Winners. Several problems have been encoded, including constraint satisfaction problems, graph coloring and frequency assignment.
This year we have extended the library in three directions:
A new simple local search metaheuristic named IDW (Intensification-Diversification Walk) has been developed. For selecting the next move, only a part of the neighborhood is explored, and the first move leading to a better or equal configuration is chosen. When no such move is found in that part of the neighborhood, the algorithm selects a worsening move. The main parameter of this algorithm is the size of the part of the neighborhood that is examined at each move (intensification). A second parameter defines two variants for escaping local minima when no better or equal neighbor has been found (diversification): IDW(any) chooses any worsening neighbor, while IDW(best) chooses the least worsening one. This metaheuristic has given good results on various benchmarks (graph coloring, car sequencing, frequency assignment, etc.).
An automatic parameter tuning tool has been proposed: it can robustly tune the one or two parameters of simple local search algorithms such as simulated annealing, tabu search, IDW, etc.
New problems have been encoded, such as the well-known car sequencing problem defined in the CSPLib, and weighted n-ary CSPs defined in the wcsp format proposed by de Givry.
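The IDW move rule described above can be sketched on a toy benchmark: 2-coloring a 5-cycle, where the best reachable cost (number of monochromatic edges) is 1. The code is an illustrative sketch under these toy assumptions, not the INCOP implementation; the partial neighborhood is approximated by a random sample.

```python
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle

def cost(x):
    """Number of conflicting (monochromatic) edges of the coloring x."""
    return sum(x[u] == x[v] for u, v in EDGES)

def neighbor(x):
    """A random neighbor: flip the color of one vertex."""
    y = list(x)
    v = random.randrange(len(y))
    y[v] = 1 - y[v]
    return tuple(y)

def idw(x, sample_size=3, max_moves=500, variant="any"):
    """IDW sketch: explore sample_size neighbors, take the first better-or-equal
    move; otherwise take a worsening move ('any' = random, 'best' = least bad)."""
    best, best_cost = x, cost(x)
    for _ in range(max_moves):
        cand = [neighbor(x) for _ in range(sample_size)]        # intensification
        move = next((y for y in cand if cost(y) <= cost(x)), None)
        if move is None:                                        # diversification
            move = random.choice(cand) if variant == "any" else min(cand, key=cost)
        x = move
        if cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost

best, best_cost = idw((0, 0, 0, 0, 0))
# an odd cycle cannot be 2-colored without conflict, so the optimum cost is 1
```

The sample size plays the role of the intensification parameter, and the variant flag the role of the diversification parameter described above.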
The new version 1.1 of this library, named INCOP, is now available at http://www-sop.inria.fr/coprin/neveu/incop.
Systems solving and optimization are clearly the core of our research activities. We focus on systems solving, as many applications in the engineering sciences require finding all isolated solutions of systems of constraints over the real numbers. This problem is difficult to solve, as the inherent computational complexity is NP-hard and numerical issues are critical in practice: for example, it is far from obvious to guarantee correctness and completeness, and to ensure termination. The overall complexity of our solvers cannot be estimated in general; consequently, only extensive experiments allow us to estimate their practical complexity, which is in general quite different from the worst-case exponential complexity.
Our research focuses on the following axes:
developing new algorithms for local and global filtering, exclusion and existence operators. This is one of the main axes of our theoretical work. It involves numerical analysis, symbolic computation and constraint programming.
developing specific solvers for systems sharing the same type of structure (e.g. systems of distance equations or linear interval systems, as mentioned later on). Here also, theoretical work allows us to specialize the mathematical tools we use according to the problem at hand, for better efficiency. In parallel, specific data structures are used in the implementation
systems decomposition: the objective is to decompose large systems into sub-systems that are independent or loosely connected and can be solved in sequence, allowing an important improvement in efficiency compared to a general solver
developing our generic systems solving software IcosAlias. Existing solvers lack flexibility: our objective is to develop a framework that allows one to easily modify the solving strategy, to test new algorithms and to develop solvers for specific systems
The theoretical work of this year addresses systems of geometrical constraints and distance equations (subjects on which we have been working since the very beginning of the project), optimization (a theme that we have long planned to consider, as interval analysis is one of the very few methods that allows for global optimization) and linear systems (a topic that is important both for the applications and for interval analysis algorithms).
The purpose of our research is to introduce and to study a new branch and bound algorithm called QuadSolver. The essential feature of this algorithm is a global constraint (called Quad) that works on a tight and safe linear relaxation of the polynomial relations of the constraint systems. More precisely, QuadSolver is a branch and prune algorithm that combines Quad, local consistencies and interval methods.
QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics and robotics. On these benchmarks, it outperforms classical interval methods as well as CSP solvers and it compares well with state-of-the-art optimization solvers.
The relaxation of nonlinear terms is adapted from the classical ``Reformulation-Linearisation Technique'' (RLT). The simplex algorithm is used to narrow the domain of each variable with respect to the subset of the linear constraints generated by the relaxation process. The coefficients of these linear constraints are updated with the new values of the bounds of the domains, and the process is restarted until no more significant reduction can be achieved. We have demonstrated that the Quad algorithm yields a more effective pruning of the domains than local consistency filtering algorithms (e.g., 2B-consistency or box-consistency). Indeed, the drawback of classical local consistencies comes from the fact that the constraints are handled independently and in a blind way: for example, when dealing with quadratic constraints, classical local consistencies do not exploit the semantics of quadratic terms for reducing the domains of the variables. Conversely, linear programming techniques do capture most of the semantics of nonlinear terms (e.g., the convex and concave envelopes of these particular terms). Extending Quad to handle any polynomial constraint system requires replacing non-quadratic terms by new variables and adding the corresponding identities to the initial constraint system. However, a complete quadrification would generate a huge number of linear constraints. We have introduced a heuristic based on a good trade-off between a tight approximation of the nonlinear terms and the size of the generated constraint system.
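The effect of such a linear relaxation can be sketched on a single quadratic term. The snippet below is an illustrative sketch, not the Quad implementation: it narrows the domain of x subject to x^2 in [yl, yu], using the tangent under-estimators of x^2 at the endpoints of [l, u] and its secant over-estimator, and the one-variable "linear programs" are solved in closed form instead of calling the simplex algorithm. For simplicity it assumes 0 <= l < u and consistent inputs, and the arithmetic is left unsafe (no directed rounding).

```python
def quad_relax_narrow(l, u, yl, yu):
    """Narrow the domain [l, u] of x subject to x**2 in [yl, yu] via the
    linear relaxation of x**2 on [l, u]. Sketch: assumes 0 <= l < u."""
    lo, hi = l, u
    # tangent under-estimators: x**2 >= 2*c*x - c**2 at c = u (and c = l);
    # combined with x**2 <= yu they bound x from above
    hi = min(hi, (yu + u * u) / (2.0 * u))
    if l > 0:
        hi = min(hi, (yu + l * l) / (2.0 * l))
    # secant over-estimator: x**2 <= (l + u)*x - l*u;
    # combined with x**2 >= yl it bounds x from below
    lo = max(lo, (yl + l * u) / (l + u))
    return lo, hi

print(quad_relax_narrow(0.0, 4.0, 4.0, 9.0))   # (1.0, 3.125), vs the exact [2, 3]
```

The pruned interval [1.0, 3.125] is strictly inside the initial [0, 4], whereas a blind evaluation of x^2 over [0, 4] gives [0, 16] and prunes nothing: this is the gain the RLT relaxation provides.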
A safe rounding process is a key issue for the Quad framework. The simplex algorithm is used to narrow the domain of each variable with respect to the subset of the linear constraints generated by the relaxation process, but most implementations of the simplex algorithm are numerically unsafe. Moreover, the coefficients of the generated linear constraints are computed with floating-point numbers. So two problems may occur in the Quad-filtering process:
the whole linearisation may become incorrect due to rounding errors when computing the coefficients of the generated linear constraints;
some solutions may be lost when computing the bounds of the domains of the variables with the simplex algorithm.
We propose a safe procedure for computing the coefficients of the generated linear constraints. The second problem has recently been addressed by Neumaier and Shcherbina, who have proposed a simple and cheap procedure to get a rigorous upper bound of the objective function. The incorporation of these procedures in the Quad-filtering process allows us to call the simplex algorithm without worrying about solutions possibly lost due to numerical round-off errors.
Most of the methods for solving constraints on variables with interval domains are based on a branch and prune technique, basically a combination of local consistencies and bisection. We propose a global pruning method and a strategy for splitting the domains of the variables:
We have introduced a global filtering algorithm for handling systems of distance relations. This new method, named QuadDist, is derived from Quad, a global filtering algorithm for handling systems of quadratic equations and inequalities. Quad computes a tight linear relaxation of the terms of the quadratic equations and uses the simplex algorithm to reduce the domains of the variables. We propose a new linear approximation for handling distance relations. The key point of this new method is that the approximations are generated not for each quadratic term but for each distance constraint. Thus, QuadDist defines a tighter approximation than Quad without the need to generate any additional variables. Experimental results are very promising.
We have also proposed a strategy, named SDD (Semantic Domain Decomposition), for choosing splitting points in the domains of the variables. These choices are guided by the monotonicity and convexity properties of the distance constraints and by the topology of the local solution spaces. Experimental results show that this heuristic improves the performance of the classical branching algorithm.
We have studied how to manage small uncertainties in the parameters of a system of distance equations. The main aim is to approximate the sub-spaces of solutions as precisely as possible. Classical interval solvers have difficulties describing them, because the solutions are not isolated points but continuous subspaces. Our work is based on the following idea: we have the solutions of the distance equation system without uncertainties (or we compute them); then we use these solutions to determine a dynamic splitting policy of the domains (using these solutions as N-dimensional points to compute subspaces, in the manner of Voronoi diagrams). Filtering algorithms are then applied to each subspace to obtain boxes containing the solutions of the system with uncertainties.
We have studied the limitations of this method. Although the method is complete (no region with solutions is lost), we have identified the following difficulties:
A given distance system without uncertainties may have no solution, while introducing uncertainties allows one to get solutions. The difficulty is then to find the different continuous subspaces.
The solutions found when uncertainties are taken into account do not always correspond to an extension around the solution points without uncertainties: if new solution regions appear when the uncertainties are taken into account, the method is not able to isolate them.
When regions overlap, our splitting technique will separate two regions that will each be smaller than the extensions of the solution points. A method to identify this case consists in searching for a solution in the hyperplane that separates the solutions: if no solution is found, then there is no overlap between these regions.
We are now studying how to solve these difficulties and how to extend the method to other systems of equations.
This work has been performed in collaboration with Marta Wilczkowiak, working in the MOVI project at INRIA Rhône-Alpes.
In 2002 and 2003, we designed and implemented a new approach to 3D scene modeling based on geometric constraints (published in 2003 in the main conferences in constraint programming and computer vision). Contrary to existing methods, we can quickly obtain 3D scene models that respect exactly the given constraints. Our system can describe a large variety of linear and non-linear constraints in a flexible way.
In 2004, this work has been continued in two ways:
In computer vision, the tool was improved; a full description appears in the PhD thesis of Marta Wilczkowiak (defended in April 2004; this work represents about half of the thesis contribution) and has been submitted to a journal in computer vision.
Feedback on real-world constraint problems (modeling scenes) has encouraged us to better understand the properties of the main algorithm, called GPDOF. GPDOF can be viewed as a general algorithm for decomposing geometric constraint systems and has been presented to the French community working in the CAD and geometric constraint fields.
The use of interval methods provides computational proofs of existence and location of global optima. These methods find the global optimum and provide bounds on its value and location.
Efficient global optimization software such as BARON uses linear relaxations to compute a lower bound of the objective function, and local search methods to obtain an upper bound of the optima. However, these tools are not safe and may provide wrong solutions.
We have introduced an efficient and safe framework to find a global optimum and bounds on its value. Local search methods are combined with interval techniques to compute a safe upper bound. Consistency techniques are also used to speed up the initial convergence of the interval narrowing algorithms. A lower bound is computed on a linear relaxation of the constraint system and the objective function. This computation is based on a safe and rigorous implementation of linear programming techniques.
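The safe upper bound idea can be sketched as follows. This is a toy illustration, not the actual framework: the local search is a crude grid scan, the objective is a hypothetical one-variable polynomial, and the interval arithmetic uses plain floats whereas a real implementation rounds outward.

```python
# Toy objective: f(x) = x^4 - 4*x^2, global minimum -4 at x = +-sqrt(2).
# A local search supplies a good point xhat; an interval evaluation of f at
# xhat then yields a certified upper bound of the global minimum.

def isqr(a):
    """Interval square (exact range)."""
    lo, hi = a
    m = max(abs(lo), abs(hi))
    return (0.0, m * m) if lo <= 0.0 <= hi else (min(lo * lo, hi * hi), m * m)

def f_enclosure(x):
    """Interval enclosure of f(x) = x^4 - 4*x^2 over the interval x."""
    s = isqr(x)                       # x^2
    q = isqr(s)                       # x^4 = (x^2)^2, since s >= 0
    return (q[0] - 4.0 * s[1], q[1] - 4.0 * s[0])

def f(x):
    return x**4 - 4.0 * x**2

# crude "local search": pick the best point on a uniform grid over [-2, 2]
xhat = min((k / 1000.0 for k in range(-2000, 2001)), key=f)

# the supremum of the enclosure at xhat is a rigorous upper bound of min f
upper = f_enclosure((xhat, xhat))[1]
```

Any feasible point gives such a bound; the better the local search, the closer the upper bound is to the true minimum -4.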
Since the beginning of the integration of CP methods in continuous domains, the achievement of arc consistency has been an open question. Arc consistency gives a way to tighten the search space by removing values incompatible with a constraint. While it is an underlying method of most algorithms for finite CSPs, it was still not clear whether it could be applied to continuous CSPs. We have investigated this problem by focusing on very simple examples, in order to perform a thorough analysis of the failure cases. We proved the unfeasibility of a pure backtrack-free arc consistency filtering, and deduced from our study a new and strong property, box-set consistency, that can be enforced at the cost of a few choice points. We have also dealt with practical issues, in particular a projection operator managing unions of intervals. We have implemented a solving process using a lazy variant of box-set filtering on the Icosalias platform.
Solving an interval linear system of equations is crucial for two reasons:
a module solving this problem is a basic component of numerous interval analysis algorithms (such as interval Newton)
this problem appears in many applications (see the example in the robotics section)
We have implemented a C++ package which collects the classical methods known in the field of interval analysis (Gauss-Seidel, Gauss elimination, Krawczyk's method, with or without preconditioning) together with additional methods based on constraint programming and linear programming. This package is connected to the libraries ALIAS and ICOSALIAS and is used within a specific solver for non-linear over-constrained systems of equations.
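As an illustration of the kind of method collected in this package, here is a minimal interval Gauss-Seidel contractor (an illustrative sketch with naive rounding and without preconditioning, not the C++ package itself):

```python
# Intervals are (lo, hi) tuples; division assumes 0 is not in the divisor.

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
def idiv(a, b):
    return imul(a, (1.0 / b[1], 1.0 / b[0]))
def isect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    assert lo <= hi, "empty intersection: no solution in the box"
    return (lo, hi)

def gauss_seidel(A, b, x, sweeps=10):
    """Interval Gauss-Seidel: contract the box x enclosing the solutions of
    A x = b, where A and b have interval entries (diagonal must avoid 0)."""
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            s = b[i]
            for j in range(n):
                if j != i:
                    s = isub(s, imul(A[i][j], x[j]))
            x[i] = isect(x[i], idiv(s, A[i][i]))   # x_i = (b_i - sum)/a_ii
    return x

# a diagonally dominant 2x2 interval system
A = [[(4.0, 5.0), (-1.0, 1.0)],
     [(-1.0, 1.0), (4.0, 5.0)]]
b = [(3.0, 4.0), (3.0, 4.0)]
x = gauss_seidel(A, b, [(-10.0, 10.0), (-10.0, 10.0)])
# x still encloses the solution of every point system inside A, b,
# e.g. (1, 1), which solves 4x - y = 3, -x + 4y = 3
```

The contraction never loses a solution of any point system contained in A and b, which is why such a module can serve as a building block of the interval Newton method mentioned above.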
Solving a linear interval system AX = b consists in determining a box that includes all the solutions in X of the corresponding set of linear systems. But most of the time the real problem may be written as A(p)X = b(p), where p is a set of parameters with interval values. Classical interval analysis methods (such as Gauss elimination) usually largely overestimate the box including all the solutions in X, as the dependencies in p between the coefficients Aij, bk are not taken into account. We have shown that using monotonicity to improve the interval evaluations of the expressions used in the Gauss elimination scheme (by considering their derivatives with respect to p) may drastically improve the box for X. This algorithm has been incorporated into ALIAS and has been used for a robotics application.
We have proposed an algorithm to compute inner and outer approximations of a linear AE (All Exists)-solution set in the form of skew boxes, i.e. hyper-parallelograms. This algorithm has two advantages:
skew boxes are more general than boxes, therefore the exact AE-solution set is more accurately approximated by skew boxes than by boxes
the convergence of the inner approximation operator is drastically improved by introducing a post-conditioning (or "right-preconditioning").
An article has been submitted to the journal Reliable Computing.
In our thesis, which will soon be defended, we introduce a reformulation of modal interval theory. This reformulation includes the main results of modal interval theory and proposes a new extension, the Taylor AE-extension.
The core of our activity in robotics and mechanism theory is the optimal design of mechanisms and the analysis of parallel robots. The following points have been addressed this year:
dimensional synthesis of the 3-dof 3R positioning device: a set of poses that have to be reached by the wrist of the robot is specified, and the problem is to determine the geometries of all the robots that can reach all the poses in the set. We have proposed an improved algorithm to solve this problem, which had no known solution up to now
trajectory planner for parallel robots: we have improved our trajectory planner, which allows one to verify whether a given, almost arbitrary, trajectory lies fully within the workspace of a parallel robot. This planner takes into account the uncertainties in the trajectory execution and in the robot modeling
Jacobian matrix of parallel robots: we have improved our algorithm that determines the minimal and maximal eigenvalues of the Jacobian matrix of a robot whose pose is constrained to lie within a given workspace. These values give valuable indications on the maximal positioning error of the robot
optimal design of a parallel robot with respect to workspace and accuracy requirements: we have developed a design algorithm that allows one to compute almost all design solutions for given workspace and accuracy requirements
forward kinematics of the Gough platform with general geometry: we propose a fast and efficient algorithm to solve this difficult problem
wire interference in parallel robots: we have developed an algorithm to take into account wire interference for the workspace analysis of wire robots
parallel robot calibration: we have proposed algorithms based on interval analysis to solve and certify the calibration equations, while algebraic geometry has been used to provide more robust calibration equations
Our methodology for optimal design is to determine an approximation of all the possible values of the n design parameters such that a design requirement (or a set of them) is satisfied. Such an approximation is obtained as a set of boxes in the parameter space, an n-dimensional space in which each axis represents the value of one design parameter. If an approximation Ai may be obtained for each design requirement Ri in the set of requirements {R1, ..., Rm}, then the possible design parameter values are obtained as the intersection of all the Ai. This approach has the following advantages over more classical approaches:
it allows one to deal with imperative requirements, i.e. requirements that must absolutely be satisfied by a design solution
it offers all the possible compromises between antagonistic requirements
it allows one to deal with uncertainties: indeed the physical instance of a theoretical solution will differ from it due to manufacturing tolerances. In our approach the approximation includes only boxes whose width is at least twice the manufacturing tolerance εi. For example, if a solution is provided for the design parameter D1 as the range [a1, b1], then we may choose as manufacturing value any value in the range [a1 + ε1, b1 - ε1], so that we can guarantee that the physical instance of D1 will lie in the range [a1, b1]
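The tolerance argument above can be checked numerically. The helper below is hypothetical (its name and signature are ours), with eps standing for the manufacturing tolerance of the parameter:

```python
def nominal_range(a, b, eps):
    """Range of nominal values that may be chosen for a design parameter whose
    certified solution range is [a, b], so that any physical instance made
    with tolerance +/- eps is guaranteed to stay inside [a, b]."""
    lo, hi = a + eps, b - eps
    # the solver only keeps boxes of width at least 2*eps, so this is non-empty
    assert lo <= hi, "box narrower than twice the manufacturing tolerance"
    return lo, hi

print(nominal_range(10.0, 10.5, 0.1))   # (10.1, 10.4)
```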
The difficulty in this approach is to calculate the approximations Ai. Interval analysis is a tool of choice for such a problem. We have shown that it is possible to compute the Ai in a 6-dimensional parameter space so that the robot workspace includes an arbitrarily large set of pre-defined poses, while the positioning errors of the robot at these poses do not exceed a pre-defined threshold.
Recently we have also shown that it is possible to compute the Ai in a 26-dimensional parameter space so that the positioning errors of the robot over a given workspace are lower than a pre-defined threshold.
It may be thought that this "old" but difficult problem has been solved, but this is not the case in practice. Indeed, with one exception (the combination of the FGb and RS software of Faugère and Rouillier of the SALSA project), the proposed algorithms either deal only with special geometries of the robot, or do not provide certified answers (solutions may be wrong or lost), or are not fully automated. Furthermore, it must be remembered that the real problem is not to determine all the solutions but only the one corresponding to the actual pose of the robot (and there is no known algorithm for determining which of the solutions is the current pose).
We have developed within the ALIAS library a distance equations solver that may be used for this problem. Most of the theorems we use within our general purpose solvers have been revisited for distance equations, allowing us to obtain stronger versions of these theorems (for example, the exclusion regions obtained with the Kantorovitch theorem or the Neumaier exclusion theorem, which are guaranteed to include a unique solution, are usually about 20 times larger than those obtained with the general purpose versions).
Our tests have shown that, while providing certified solutions (no solution is lost and the solutions can usually be computed with an arbitrary accuracy), our algorithm is outperformed only by FGb and RS when looking for all solutions. When the search space is restricted (as will be the case in practice), our algorithm is the fastest available. In particular, for real-time applications the algorithm is almost as fast as the Newton scheme while guaranteeing to provide either the exact current pose (which the Newton scheme does not) or an emergency signal indicating that multiple solutions exist, in which case the robot must be stopped as it can no longer be controlled.
Accurate identification of the kinematic parameters of a robot is difficult due to two main problems:
taking into account the influence of measurement noise on the calibration results when the error distribution is not known,
choosing an appropriate geometrical model of the robot, sufficiently simplified to provide manageable calibration equations while still describing the robot behavior realistically.
We propose to use interval methods (2B, 3B and a specific interval Newton method), which make it possible to bound the set of solutions. Additionally, this approach makes it possible to check the validity of the calibration equations and to propose possible corrections to the robot model. The method has been successfully verified experimentally on a Deltalab Gough platform. This work was performed within the framework of the national Robea/MP2 project in collaboration with N. Andreff of IFMA.
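As a one-variable illustration of the kind of narrowing a 2B (hull-consistency) operator performs (a toy sketch, not the solver actually used), a constraint x^2 in [d_lo, d_hi] can be projected back onto a nonnegative domain for x:

```python
# One-variable sketch of 2B narrowing: project the constraint
# x^2 in [d_lo, d_hi] onto a nonnegative domain for x.
import math

def narrow(x, d):
    """Contract the nonnegative interval x under the constraint x^2 in d."""
    (xl, xh), (dl, dh) = x, d
    return max(xl, math.sqrt(dl)), min(xh, math.sqrt(dh))

print(narrow((0.0, 10.0), (4.0, 9.0)))  # (2.0, 3.0): the domain shrinks
```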
An interesting measurement device for calibration is the double-ball
bar (DBB) mechanism, constituted of two ball-and-socket joints connected
by an extensible leg whose length is measured.
This device may be used for the control or calibration of
a parallel robot by linking the joints to the base and platform of the
robot. Usually the internal state of an n-d.o.f. parallel robot is measured
by exactly n sensors. Introducing at least one additional
measurement (for example with a DBB) theoretically allows the
calibration of the robot using only its internal
sensors. But with only one additional measurement it
is difficult to obtain a numerically stable system of calibration
equations. We have proposed algebraic methods to construct such a
system. Moreover, we have also studied the influence of the number of
additional DBBs on the numerical stability with respect to measurement noise.
The aim of this work is to propose a simple and robust calibration
scheme adapted to a deployable robot.
We intend to build a wire robot that will be used as a force-feedback haptic device for the workbench of the research unit. Although several wire robots have been developed in many laboratories, we have the following specific objectives:
mechanical design: the wire actuation system should be designed to allow fast and easy changes in the geometry of the robot, a precise evaluation of the wire lengths (for an optimal measurement of the platform location) and a measurement of the force applied at the end-effector level
modularity: the performances of a parallel robot are very sensitive to its geometry. We intend to develop algorithms to determine the optimal geometry (i.e. the location of the actuation system) for given task requirements. For that purpose it is necessary to better understand the specificities of wire robots, and we have already addressed the workspace problem
The workspace is an important task criterion and a design strategy must
address this performance index. Compared to classical parallel robots,
the workspace of wire parallel robots is deeply influenced by the
additional constraints that the
wire tensions must be positive (to avoid slack wires) and lower than a
pre-defined threshold. The relationship between the wrench
applied on the end-effector, the location X of the
end-effector,
the wire tensions and the
kinematic parameters defines an under-constrained system of
equations that is linear in the
wire tensions and
in the end-effector
wrench. We have applied interval analysis methods to
determine an approximation of the allowed region for the end-effector
such that, for a given wrench, the wire tensions
satisfy the tension constraints. This approximation is constituted
of a list of possible ranges for X, which allows the design of an
efficient trajectory verifier/planner.
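A planar toy version of this feasibility test (a hypothetical 2-wire point-mass robot, not our actual design) shows how the balance equations, linear in the tensions, decide whether a pose is admissible:

```python
# Planar sketch (hypothetical 2-wire point-mass robot): the static balance
# sum_i t_i * u_i = f is linear in the wire tensions t; a pose x is
# admissible only if the tensions solving it satisfy 0 <= t_i <= t_max.
import math

def tensions(attach, x, f):
    """Solve the 2x2 balance sum_i t_i * u_i = f for the tensions t."""
    u = []
    for a in attach:
        d = (a[0] - x[0], a[1] - x[1])
        n = math.hypot(*d)
        u.append((d[0] / n, d[1] / n))       # unit vector along each wire
    det = u[0][0] * u[1][1] - u[1][0] * u[0][1]
    t1 = (f[0] * u[1][1] - f[1] * u[1][0]) / det   # Cramer's rule
    t2 = (u[0][0] * f[1] - u[0][1] * f[0]) / det
    return t1, t2

# Two anchors above the end-effector; the wrench to balance is gravity.
t = tensions([(-1.0, 1.0), (1.0, 1.0)], (0.0, 0.0), (0.0, 9.81))
print(all(0.0 <= ti <= 50.0 for ti in t))  # True: tensions positive and bounded
```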
Our work has focused on the mechanical design of a modular wire robot which should allow:
to change easily the location of the attachment points of the wires for adapting the robot's performances (workspace, accuracy, ...) to the task at hand
to drive and control the end-effector with a very good accuracy
to allow for the measurement of the forces/torques applied on the end-effector
to minimize the influence of internal and external disturbances on the positioning of the end-effector
Our design is based on independent reconfigurable driving modules, each including an electrical motor, a modular hoist, an elastic element and a tension sensor.
At the beginning of 2004 we were contacted to help solve a difficult problem in quantum mechanics related to the Kochen-Specker theorem, which shows that the hidden-variables hypothesis is not valid. For that purpose it is necessary to examine sets of 4D unit vectors (denoted 1,2,...,9,A,B,...), called KS vectors, that are used in a diagram constituted of groups of four 4D vectors (although any finite dimension larger than 2 may be used for the vector space). An example of such a diagram is
FGHI,BCDE,789A,3456,1256,2ADE,49BC,18HI,37FG,CEGI
with groups FGHI, BCDE, .... All vectors in a group must be orthogonal to all the other vectors in the group, and no two vectors may be collinear or opposite. We may try to assign a state 0 or 1 to each vector so that one vector in each group has state 0 while the three others have state 1. Kochen was able to exhibit a system of 117 3D vectors for which such an assignment is not possible, i.e. at least one vector belongs to a group that imposes state 0 on it while the same vector belongs to another group in which its state should be 1.
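The 0-1 state test just described is easy to implement by brute force. The following sketch (an illustration, not the actual software discussed below) checks whether a diagram admits such an assignment, using Cabello's 18-vector diagram that appears later in this section:

```python
# Brute-force check of the 0-1 state condition: every group of four vectors
# must contain exactly one vector in state 0. (Illustrative sketch only.)
from itertools import product

def has_01_assignment(groups):
    vectors = sorted({v for g in groups for v in g})
    idx = {v: i for i, v in enumerate(vectors)}
    grp = [[idx[v] for v in g] for g in groups]
    for states in product((0, 1), repeat=len(vectors)):
        if all(sum(states[i] == 0 for i in g) == 1 for g in grp):
            return True
    return False

cabello = ["1234", "4567", "789A", "ABCD", "DEFG", "GHI1", "35CE", "29BI", "68FH"]
print(has_01_assignment(["1234"]))   # True: a single group is easy to satisfy
print(has_01_assignment(cabello))    # False: no assignment exists (KS candidate)
```

That the second call returns False can also be seen by a parity argument: each of the 18 vectors appears in exactly two of the 9 groups, so the number of 0-slots filled would be even, while 9 groups require exactly 9 zeros.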
Diagrams whose vectors have real-valued components are called Kochen-Specker systems, and the existence of such diagrams is a key point in the proof of the Kochen-Specker theorem. But finding Kochen-Specker systems is also important for experiments, as the vectors describe measurements that can be carried out on a finite-dimensional quantum system to verify the theorem. Note that different diagrams may correspond to the same measurement arrangement: they are called isomorphic systems.
So far, known Kochen-Specker systems have been found using approaches that rely on human ingenuity, and the complexity of finding such systems grows exponentially with the number of vectors and the dimension of the vector space.
A team constituted of Mladen Pavičić (University of Zagreb),
Brendan McKay (Australian National University), Norman Megill
(Boston Information Group) and COPRIN has proposed another approach.
A first program was designed to provide all non-isomorphic systems with a
given number a of vectors and a given number b of groups. A second
program is then used on this output to eliminate the diagrams that admit a
0-1 state assignment for the vectors. The remaining diagrams are KS
system candidates, but it remains to show that the components of the
vectors admit at least one real-valued solution satisfying the
orthogonality, non-collinearity and unit-norm
constraints. Note that it is not necessary to determine
all possible solution vectors: one set is sufficient.
The problem is not well posed: if a set of
solution vectors {S1, S2, ..., Sa} is found, then the set
{RS1, RS2, ..., RSa}, where R is an arbitrary rotation
matrix, is also a solution. This may be avoided by imposing that one
group is an orthonormal basis of R4. There then remain
4(a-4) unknowns (the components of the vectors),
with the constraints that the vectors are unitary (a-4
constraints) and that the vectors in a group are mutually orthogonal: this
induces 6 constraint equations per group and a total of 6(b-1)
equations. We end up with a system of 4(a-4) unknowns and
a-4 + 6(b-1) equations. Classical algebraic methods were initially used
for the solving, but in general without success due to the size of the
system. Interval analysis appeared to be a good candidate for the
solving: indeed, as the unknowns are components of unit vectors, their
values must lie in the range [-1,1]. Furthermore, if S is a solution
vector, so is -S: this allows the range of one component
of each vector to be restricted to [0,1]. Finally, looking for only
one solution is convenient for interval analysis-based solving methods.
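The bookkeeping above can be checked mechanically:

```python
# After fixing one group as an orthonormal basis, a diagram with a vectors
# and b groups leaves 4(a-4) unknowns facing (a-4) unit-norm constraints
# plus 6 orthogonality constraints for each of the remaining b-1 groups.
def ks_system_size(a, b):
    unknowns = 4 * (a - 4)
    equations = (a - 4) + 6 * (b - 1)
    return unknowns, equations

print(ks_system_size(18, 9))    # (56, 62): Cabello's system
print(ks_system_size(18, 12))   # (56, 80)
```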
Preliminary trials with ALIAS have shown that the
approach indeed works well for reasonable values of a, b. But the
exhaustive generation is
not appropriate for large values, as the
number of generated diagrams grows exponentially with the number of
vectors. For example, for a = 18, b = 12 the exhaustive generation would
produce more than 2.9 × 10^16 diagrams and would require more than
30 million years on a 2 GHz CPU. To solve this problem the generation
program has been
modified to create diagrams incrementally, i.e. when a sequence of n
groups has been created, all the diagrams which share these n groups
are generated before the n-th group is changed. We have then
implemented within the generation program a filter based on constraint
programming that, in many cases, shows that for the current
sequence the orthogonality, non-collinearity and unit-norm constraints
cannot be satisfied. Eliminating sequences early in the generation
process drastically improves the generation time. For example,
for a = 18, b = 12 the software generates only 100220 systems in less
than 30 minutes, among which only 26800 cannot have a 0-1 state
assignment and are submitted to the solver.
Using this approach, all 4D KS vector systems with up to 24 vectors and all 3D systems with up to 30 vectors were generated. This exhaustive approach has led us to solve approximately 200 × 10^6 non-linear systems (each having between 30 and 200 equations), which probably constitutes a world record.
Among the various results that we have found, we may mention that
Cabello's system 1234,4567,789A,ABCD,DEFG,GHI1,35CE,29BI,68FH with
a = 18, b = 9 is the smallest 4-dimensional real Kochen-Specker system. Further
results can be found in .
For larger values of a we have encountered solving difficulties in
some cases. For example, we have not been able to solve a case with
a = 40, b = 20 that involves 108 unknowns and 150 equations. A
theoretical work has been started to better understand the structure
of the equations. At the same time we have started a collaboration
with the projects OASIS, APACHE and PARIS to determine whether a
grid-computing approach is able to solve the problem.
The presented algorithms can easily be generalized beyond the Kochen-Specker theorem. One can use the diagrams to generate Hilbert lattice counterexamples, partial Boolean algebras, and general quantum algebras which could eventually serve as an algebra for quantum computers.
Industrial modeling and simulation processes are usually based on scientific theories, using formulas to describe physical features and computation algorithms. Based on these formulas, numerical methods are developed and numerical codes are implemented for the simulation and visualization of these features. Due to the large number of parameters and equations involved in industrial models and to the diversity of physical contexts, producing and testing such numerical codes is a huge amount of work.
Our contribution to this domain consists in specifying, designing and implementing various methods and tools for the automatic generation of numerical simulators from symbolic formulas. It is a joint activity with several departments of the Airbus aerospace company in Toulouse, based on a former development of a framework for the edition, communication and documentation of models.
In 2004 we focused on
unit coherence analysis in large sets of formulas,
automatic orientation of formulas for solving purposes,
model description and C code generation for real-time simulation.
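As a minimal illustration of what a unit-coherence check can look like (a sketch, not the actual tool developed with Airbus), units can be encoded as exponent vectors over the SI base units, so that products add exponents and a formula is coherent when both sides carry the same exponents:

```python
# Units as exponent vectors over (m, kg, s): multiplying quantities adds
# exponents, dividing subtracts them; a formula is unit-coherent when both
# sides end up with identical exponents.
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

M, KG, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)

accel = div(M, mul(S, S))            # m/s^2
force = mul(KG, accel)               # kg.m/s^2 (newton)
rho = div(KG, mul(M, mul(M, M)))     # air density, kg/m^3
v2 = div(mul(M, M), mul(S, S))       # (m/s)^2
area = mul(M, M)                     # wing area, m^2
lift = mul(rho, mul(v2, area))       # lift formula L = 1/2 rho v^2 S

print(lift == force)  # True: the aerodynamic lift formula is unit-coherent
```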
The COPRIN web pages now offer some practical examples of the use of interval analysis, some explanations of the basic methods used for constraint solving, and a set of about 100 system-solving examples (among which the well-known, difficult Katsura system, solved for n=20).
We also propose an on-line solving service that allows one to submit up to 5 equations/inequalities to our solver and possibly obtain all the solutions of the system (see http://www-sop.inria.fr/coprin).
We started at the end of 2002 a collaboration with Alcatel on the optimal design of a space-based observation telescope. The objectives of this study are to develop an instrument with a low inertia (to reduce the energy necessary for orienting the instrument and to allow fast orientation changes), that is deployable, and that allows an accurate active control of the location of the secondary mirror.
In 2002/2003 we proposed an innovative structure and completed a first optimal design study for a prototype. We were then chosen by Alcatel as a subcontractor for an ESA contract for the development of a small-scale prototype. The Technical University of Braunschweig was chosen to provide a key component for this prototype. Unexpected difficulties with this component have delayed the tests of the prototype, although some deployment tests have succeeded. These mechanical difficulties have imposed important changes in the mechanical design (with the same optimal arrangement for the structure), and preliminary tests will take place at the beginning of 2005. Alcatel has signed another contract with COPRIN for determining the optimal geometry of the measurement system (based on laser interferometers) that will be used to locate precisely the secondary mirror of the telescope.
To improve the production of numerical (flight) simulators from models of aerodynamics, Airbus France is interested in methods and tools like those described in .
In 2003, a two-year contract was set up for prototyping some code generation features. For confidentiality reasons, no further details can be given here.
We propose to study the calibration of a deployable mechanism used by a satellite for the positioning of an Earth Observation Telescope.
COPRIN is a member of the Grid5000 project, which intends to explore the possible use of grid computing for application problems. Our contribution will be to provide realistic application problems and interval analysis-based algorithms to solve them.
This project, funded by the CNRS, is a follow-up of the "MAX" project that was completed in September 2003. Its objective is to improve the accuracy of complex mechanical systems. The partners are:
LIRMM (Montpellier)
LASMEA, IFMA (Clermont-Ferrand)
IRCCYN (Nantes)
Our contribution is the use of interval analysis-based methods for performance analysis and for solving the systems that arise when dealing with the design, control and calibration of such mechanical systems.
The purpose of this project is to make available within a single software platform various software tools dealing with geometrical constraints, to develop exchange mechanisms for the communication between solvers, to build a database of geometric problems, to compare these tools on the same types of problems and to make the results available to the community.
The use of floating-point numbers to represent real numbers is at the root of a large number of failures and potential faults in software for critical systems. The modeling of such systems, combined with model checking, proof and test-case generation techniques, enhances the quality of the development process and improves the reliability of systems that integrate pieces of software. Unfortunately, the currently available approaches, notations and techniques do not really take floating-point numbers into account, although the usual way to compute over the reals with a computer is to use floating-point numbers. The main difficulty in getting a correct account of floating-point numbers comes from:
the poor properties of floating-point arithmetic,
the dependency of floating-point properties on the computer architecture (even if the floating-point unit is IEEE 754 compliant).
The aim of the V3F ACI project is to provide the tools required to evaluate the representation of reals by means of floating-point numbers during the software validation and checking phases. More precisely, our aim is to develop a framework relying on CSP approaches for the validation of program computations under hypotheses coming from the modeling phase. Constraint methods have been successfully used in many applications related to software validation and checking. They have already shown their capabilities in automatic test-case generation, in model checking as well as in code analysis. However, CSP techniques are currently restricted to integer, rational and real numbers. Thus, the challenge is to provide solving techniques able to handle floating-point numbers. We are developing solving techniques adapted to floating-point numbers to validate and check critical software. We are also studying the use of such a solver in the processes of model checking, automatic test-case generation and static code checking.
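Two of the "poor properties" mentioned above can be seen directly with IEEE 754 doubles:

```python
# Properties that hold over the reals fail over floats: addition is not
# associative, and common decimal literals are not represented exactly.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c == a + (b + c))  # False: (a+b)+c is 1.0, a+(b+c) is 0.0
print(0.1 + 0.2 == 0.3)           # False: 0.1 + 0.2 is 0.30000000000000004
```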
The V3F ACI project is a joint research project with:
LIFC, Laboratoire d'Informatique de l'Université de Franche-Comté (CNRS-INRIA),
IRISA, Institut de Recherche en Informatique et Systèmes Aléatoires, Rennes,
CEA, Commissariat à l'Energie Atomique, Saclay, Paris.
The DANOCOPS project aims at exploring an innovative technique to automatically detect some non-conformities between a program and its specifications. Our approach is based on constraint programming techniques: one CSP is built from the program while another CSP is built from the specifications. We will use these two CSPs to extract information that reveals non-conformities.
While constraint programming has shown its ability in the field of test-case generation, both structural and functional, to our knowledge there has been no attempt to take advantage of constraint programming to check whether a program conforms to its specifications. Even if the latter seems to be out of the range of constraint programming, the detection of non-conformities appears as a more reachable aim.
The DANOCOPS RNTL project is a joint research project with:
THALES SYSTEMES AEROPORTES, Paris,
AXLOG ingénierie, Arcueil,
LIFC, Laboratoire d'Informatique de l'Université de Franche-Comté (CNRS - INRIA),
LSR, Laboratoire Logiciels Systèmes Réseaux (UMR 5526), Saint Martin d'Hères.
J-P. Merlet holds the position of CISC (Advisor for Scientific Information and Communication) of INRIA Sophia, is a member of the INRIA Evaluation Board, has been a member of the CERT/ONERA Review Board, is a deputy member of the "commission de spécialistes" (61st section) of Nice University, is chairman of the IFToMM (International Federation for the Theory of Machines and Mechanisms) Technical Committee on "Computational Kinematics" and chairman of the French section of IFToMM. As chairman he proposed during the IFToMM World Congress (the largest conference in the field of mechanism theory, held every 4 years) that France host the 2007 World Congress. This proposal has been accepted and J-P. Merlet will be the General Chairman.
He is an Associate Editor of the IEEE Transactions on Robotics and has been a member of the program committees of the IEEE Int. Conf. on Robotics and Automation, the Parallel Kinematics Seminar (Chemnitz) and the IMG workshop (Genova), while serving as a session chair for these conferences. He has been a reviewer for the ASME J. of Mechanical Design, Int. J. of Robotics Research, J. of Intelligent and Robotic Systems, Mechanism and Machine Theory, Robotics and Autonomous Systems, Robotica, European J. of Mechanics, and Int. J. for Numerical Methods in Engineering. He is a member of an informal Advisory Committee involving academic and industrial partners that promotes nanobiotechnology at Sophia-Antipolis.
C. Michel was the local organizer of CP-AI-OR, the International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, April 20-22, 2004, Nice, France.
B. Neveu was a member of the program committee of the ``Journées Nationales sur la résolution pratique des problèmes NP-complets'' (JNPC 2004) conference, and a reviewer for CP-AI-OR 2004 and CARI 2004.
Y. Papegay is a member of specialist commission number 4 of the University of French Polynesia.
M. Rueher has been co-chair and member of the program committees of CP-AI-OR'04 and PDMC 2004 (3rd International Workshop on Parallel and Distributed Methods in Verification), a member of the committee for the Specif thesis prize 2004, and a member of the CS'27 committee (UNSA).
M. Rueher and J-P. Merlet are members of the Ensemble working group that promotes the use of interval analysis in the field of control theory.
G. Trombettoni has been a member of the CP-AI-OR'04 conference program committee (a conference about constraint programming and operations research).
H. Batnini has participated in an ``AS contraintes géométriques'' workshop, Strasbourg, France, and in JNPC 2004, June 21-23, 2004, Angers, France.
B. Neveu has participated in the ``Journées Nationales sur la résolution pratique des problèmes NP-complets'' (JNPC 2004), Angers, June 21-23, and in GTMG 2004 (working group on geometric modeling), March 11-12, 2004, Lyon, France.
M. Rueher has participated in the ``ACI Sécurité'' workshop, November 2004, Toulouse, France, and in JFPL/JNPC (Angers, June).
G. Trombettoni has given a talk at the national conference GTMG 2004 (working group on geometric modeling), March 11-12, 2004, Lyon, France.
D. Daney has presented papers at the IFToMM World Congress and at the IEEE Int. Conf. on Robotics and Automation.
G. Chabert has participated in CP-AI-OR, April 20-22, 2004, Nice, France, and in CP, September 27-October 1, 2004, Toronto, Canada.
J-P. Merlet has presented papers at the IFToMM World Congress, at the IEEE Int. Conf. on Robotics and Automation, at the History of Machines and Mechanisms workshop, at the Int. Conf. on Polynomial System Solving and at the Advances in Robot Kinematics workshop.
C. Michel has presented a paper at SCAN 2004, the 11th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic, and Validated Numerics, October 4-8, 2004, Fukuoka, Japan.
B. Neveu has participated in the Tenth International Conference on Principles and Practice of Constraint Programming (CP 2004), Toronto, Canada, September 27-October 1, 2004, in CP-AI-OR, the First International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, April 19-22, 2004, Nice, France, and in the CORS-INFORMS Joint International Meeting, May 16-19, 2004, Banff, Canada.
Y. Papegay has been invited to give a presentation at the Mathematica Conference, Paris, September 2004, about modeling and simulation with Mathematica, and has presented a paper at the Wolfram 2004 Technology Conference, October 2004, Urbana-Champaign, Illinois, USA.
M. Rueher has participated in CP-AI-OR, April 19-22, 2004, Nice, France, and was an invited speaker at the Franco-Japanese Workshop on Constraint Programming, October 2004, Tokyo, Japan.
G. Trombettoni has participated in CP-AI-OR, Nice, and has presented a paper at the CORS/INFORMS symposium on operational research, May 16-19, 2004, Banff, Canada.
H. Batnini is a teaching assistant in computer science and algorithmics for students of the second year of the D.E.U.G. Mathématiques et Informatique, and a teaching assistant in language theory and compilation for students of the Licence Informatique.
G. Chabert has given an algorithmics course for undergraduate students (DEUG), UNSA (90h).
D. Daney has given a medical robotics course, DESS Génie Bio-médical, UNSA (15h).
C. Michel took part in the teaching of constraint programming to Master students (8h).
B. Neveu has participated in the AI course at the ENTPE in Lyon (6h).
O. Pourtallier has given lectures on game theory (9h) and optimization (12h) at the Mastère OSE of the Ecole des Mines de Paris, and lectures on optimization (12h) at the DESS IMAFA of UNSA.
M. Rueher, B. Neveu, G. Trombettoni and Y. Papegay have given lectures on constraint programming in the master program in computer science of UNSA and at ESSI (30h).
M. Rueher has taught constraint programming, databases, and logic programming and Prolog.
G. Trombettoni is an assistant professor in computer science at the IUT GTR (telecoms and networks) of Sophia Antipolis.
J-P. Merlet has been a jury member for 2 PhDs and 1 HDR.
B. Neveu
M. Rueher has been a member of 2 HdR committees and of 2 PhD committees.
G. Trombettoni has been a jury member for 1 PhD.
J-P. Merlet and D. Daney have acted as advisors for the postdoctorate of Y. Cheng.
M. Rueher is head of the Master Degree Program STIC (Spécialité ISI) and head of the 3rd year of ESSI.
Current PhD thesis:
H. Batnini, Contraintes globales sur le continu, University of Nice-Sophia Antipolis.
G. Chabert, Langage de pilotage et de paramétrage d'algorithmes de résolution de contraintes par intervalles, University of Nice-Sophia Antipolis.
A. Goldsztejn, Définition et Applications des Extensions des Fonctions Réelles aux Intervalles Généralisés. Révision de la Théorie des Intervalles Modaux et Nouveaux Résultats, University of Nice-Sophia Antipolis.
C. Grandon, Résolution de systèmes d'équations avec incertitudes, University of Nice-Sophia Antipolis.
A. Sanchez-Gonzales, Conception d'un robot parallèle à câbles modulaire, University of Nice-Sophia Antipolis.