The project-team's activity focuses on the study of mobile robotic systems designed to accomplish complex tasks involving strong interactions with the system's environment. The underlying spectrum of research is vast due to the variety of devices amenable to automation (ground, underwater, and aerial vehicles...), of environments in which these devices are intended to operate (structured/natural, known/unknown, static/dynamic...), and of applications for which they have been designed (assistance to disabled people, environmental monitoring, rescue deployment after natural disasters, observation and tactical support...).

A fundamental issue in autonomous mobile robotics is to build consistent representations of the environment that can be used to trigger and execute the robot's actions. In its broadest sense, perception requires detecting, recognizing, and localizing elements of the environment, given the limited sensing and computational resources of the robot. The performance of a mobile robotic system crucially depends on its ability to process sensory data in order to achieve these objectives in real-time. Perception is a fundamental issue both for the implementation of reactive behaviors (based on feedback control loops) and for the construction of the representations used at the task level. Among the sensory modalities, artificial vision and range finders are of particular importance and interest due to their availability and extended range of applicability. They are used for the perception and modeling of the robot's environment, and also for the control of the robot itself. Sensor-based control refers to the methods and techniques dedicated to the use of sensor data and information in automatic control loops. Mastering it is essential to the development of many (existing and future) robotic applications and a cornerstone of research on autonomous robotics.

Most tasks performed by robots rely on the control of their displacements. Research on robot motion control largely stems from the fact that the equations relating the actuator outputs to the displacements of the robot's constitutive bodies are nonlinear. The extent of the difficulties induced by nonlinearity varies from one type of mechanism to another. Whereas the control of classical holonomic manipulator arms was addressed very early by roboticists, and may now be considered a well-investigated issue, studies on the control of nonholonomic mobile robots are more recent. They also involve more sophisticated control techniques, whose development contributes to extending Control Theory. Another source of
difficulty is underactuation, i.e. when the number of independent means of actuation is smaller than the number of degrees of freedom of the robotic mechanism. Most marine and aerial vehicles
are underactuated. A particularly challenging case is when underactuation renders all classical control techniques, either linear or nonlinear, inoperative because it yields a system of
linearized motion equations which, unlike the original nonlinear system, is not controllable. Such systems are sometimes called
*critical*. Research in this area of automatic control is still largely open.
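As a minimal illustration (our example; the text does not name a specific model), consider the kinematic unicycle, a standard critical system: the nonlinear system is controllable, yet its linearization at rest is not, as a quick rank computation shows.

```python
import numpy as np

# Kinematic unicycle: xdot = v cos(theta), ydot = v sin(theta),
# thetadot = w. Linearizing at the equilibrium (0, 0, 0) with v = w = 0:
A = np.zeros((3, 3))                  # the dynamics vanish at the equilibrium
B = np.array([[1.0, 0.0],             # xdot     = v
              [0.0, 0.0],             # ydot     = 0: lateral motion disappears
              [0.0, 1.0]])            # thetadot = w

# Kalman controllability matrix [B, AB, A^2 B]
C = np.hstack([B, A @ B, A @ A @ B])
rank = np.linalg.matrix_rank(C)
print(rank)  # 2 < 3: the linearized system is not controllable
```

The lateral direction, reachable by the nonlinear system through maneuvers, is invisible to the linear approximation; this is exactly why classical linear techniques become inoperative for such systems.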

ARobAS genuinely tries to balance and confront theoretical developments and application-oriented challenges. In this respect, validation and testing on physical systems are essential, not only as a means to bring together all aspects of the research done in ARobAS – and thus maintain the coherence and unity of the project-team – but also to understand the core problems on which research efforts should focus in priority. To this end, a significant part of our resources is devoted to the development of experimentation facilities that are specific to the project and constitute an experimental workbench for its research. In parallel, we try to develop other means of experimentation through partnership research programs, for example with Ifremer concerning underwater robotics, and with the CenPRA of Campinas (Brazil), I.S.T. of Lisboa (Portugal), and Bertin Tech. Inc. for the control of unmanned aerial vehicles (blimps and drones).

**Ezio Malis** defended his HDR, entitled *Méthodologies d'estimation et de commande à partir d'un système de vision* (estimation and control methodologies based on a vision system), at the Université de Nice-Sophia-Antipolis in March 2008.

The meaning of *autonomy* in the context of mobile robotics covers a large variety of aspects, from the capability of moving safely and interacting with the environment, to planning, reasoning, and deciding at a high level of abstraction. *ARobAS pursues a bottom-up approach with a sustained focus on autonomous navigation and the monitoring of interactions with unknown, variable, and complex environments.*

The project-team is organized around two research themes: *Perception and autonomous navigation* and *Control*. Nonetheless, it is important to keep in mind that the borderline between the themes is porous, since several of the associated issues, and the tools to address them, are clearly interdependent and complementary. To highlight this interdependency, we describe in a separate section the issues that are transverse to the two themes.

Autonomy in robotics largely relies on the capability of processing the information provided by exteroceptive sensors. Perception of the surrounding environment involves data acquisition, via sensors endowed with various characteristics and properties, and data processing in order to extract the information needed to plan and execute actions. In this respect, the fusion of complementary information provided by different sensors is a central issue. Much research effort is devoted to the modeling of the environment and the construction of maps used, for instance, for localization, estimation, and navigation purposes. Another important category of problems concerns the selection and treatment of the information used by low-level control loops. Much of the processing must be performed in real-time, with a good degree of robustness so as to accommodate the large variability of the physical world. Computational efficiency and well-posedness of the algorithms are constant concerns.

A key point is to strike the right compromise between the simplicity of the models and the complexity of the real world. For example, numerous computer vision algorithms have been proposed under the implicit assumptions that the observed surfaces are Lambertian and that the illumination is uniform. These assumptions are only valid in customized environments. For applications such as the exploration of an outdoor environment, the robustness of vision-based control schemes can be improved by using more realistic photometric models (including color information). Even though such models have already been used in the computer vision and augmented reality communities, their applicability to real-time robotic tasks has not been much explored.

In the same way that sensor models currently in use in robotics are often too simple to capture the complexity of the real world, the hypotheses underlying the geometric structure of the scene are often restrictive. Most methods assume that the observed environment is rigid. For many applications, such as autonomous navigation in variable and dynamic environments, this assumption is violated. In these cases, it is important to distinguish between the observed global (dominant) motion and the true motion, or even the deformations, of particular objects.

More generally, the question is how to estimate, robustly and in real-time, the information needed for the visual task. *Real-time processing of a complete model of a deformable environment (i.e. the three-dimensional shape, the deformations of the surfaces, textures, colors, and other physical properties that can be perceived by robotic sensors) has not yet been achieved*. Recent studies on *visual tracking* (i.e. tracking of visual cues in the image without feedback control of the camera pose), using a stereo pair of cameras or a single camera, are essentially concerned with parametric surfaces. To the best of our knowledge, the use of deformable visual information for navigation or feedback control has been limited to deformable contours or simple articulated planar objects.

In many applications, using only one sensor may not be the optimal way to gather the information needed to perform the robot's task. Many exteroceptive sensors provide complementary information (for example, unlike a single camera, a laser telemeter can directly measure the distance to an object), while proprioceptive sensors (odometry) are convenient for estimating the local displacements of a robot. *We participate in the development of "intelligent" devices composed of several complementary sensors well-suited to the tasks involved in autonomous robotics.* Developing such sensors requires solving different aspects of the problem: calibration, data representation, estimation, and filtering. A theory for the proper integration of multi-sensor information within a general unified framework is still critically lacking.
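The benefit of fusing complementary measurements can be shown with a scalar illustration (hypothetical numbers; this is the inverse-variance weighting at the core of Kalman-style estimation, not a specific device of the project):

```python
# Hypothetical numbers: two complementary range estimates of the same
# object, e.g. camera-derived (imprecise) and laser telemeter (precise).
z1, var1 = 2.10, 0.09     # camera estimate (m) and its variance
z2, var2 = 2.00, 0.01     # telemeter estimate (m) and its variance

# Inverse-variance weighting: the scalar core of Kalman-style fusion.
w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
fused = w1 * z1 + (1.0 - w1) * z2
fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
print(round(fused, 3), round(fused_var, 4))  # 2.01 0.009
```

The fused variance is smaller than that of either sensor alone, which is precisely the rationale for combining complementary sensing modalities.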

Most applications involving mobile robotic systems (ground vehicles, aerial robots, automated submarines,...) require a reliable localization of the robot in its environment. The problem of localization given a map of the environment in the form of a set of landmarks or, conversely, the problem of constructing a map assuming that the vehicle's situation (position + orientation) is known, has been addressed and solved using a number of different approaches. A more challenging problem arises when neither the robot's path nor the map is known; localization and mapping must then be considered concurrently. This problem is known as *Simultaneous Localization And Mapping* (SLAM). In this case, the vehicle moves from an unknown location in an unknown environment and proceeds to incrementally build a navigation map of the environment, while simultaneously using this map to update its estimated position. Two recent tutorials by Hugh Durrant-Whyte and Tim Bailey describe some of the standard methods for solving the SLAM problem, as well as some more recent algorithms. More recently, a new class of approaches has appeared, based on *graphical inference techniques*, which represents the SLAM problem as a set of links between robot and landmark poses and formulates a global optimization algorithm for generating a map from such constraints. Unfortunately, when a robot explores a large-scale environment, such methods lead to a dramatic growth of the state vector during the motion. *We are investigating well-founded methods which allow us to automatically introduce, when needed, a new local submap while preserving the consistency (in the probabilistic sense) of the global map*.
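The growth issue can be made concrete with a minimal bookkeeping sketch (hypothetical class and dimensions, assuming the classical EKF-style joint state of robot pose and 2-D point landmarks): the joint covariance grows quadratically with the number of mapped landmarks, which is exactly what motivates submapping.

```python
import numpy as np

# Minimal bookkeeping sketch (hypothetical, planar robot + 2-D point
# landmarks): the joint state is [x, y, theta, l1x, l1y, l2x, l2y, ...]
# with a full joint covariance matrix.
class SlamState:
    def __init__(self):
        self.x = np.zeros(3)                 # robot pose (x, y, theta)
        self.P = 0.01 * np.eye(3)            # joint covariance

    def add_landmark(self, lm_xy, lm_cov):
        n = self.x.size
        self.x = np.concatenate([self.x, lm_xy])
        P = np.zeros((n + 2, n + 2))
        P[:n, :n] = self.P                   # keep all existing correlations
        P[n:, n:] = lm_cov                   # initial block for the new landmark
        self.P = P

s = SlamState()
for i in range(100):                         # exploration: 100 new landmarks
    s.add_landmark(np.array([float(i), 0.0]), np.eye(2))
print(s.x.size, s.P.shape)  # 203 (203, 203)
```

With 100 landmarks the covariance already has over 40,000 entries, and each update touches all of them; splitting the map into local submaps keeps this cost bounded.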

Compared to lasers, the use of vision in SLAM provides rich perceptual information and yields a low level of data-association ambiguity. However, real-time visual SLAM has only become possible recently, with faster computers and ways of selecting sparse but distinctive features. The main difficulty comes from the loss of the depth dimension due to the projective model of the camera. Consequently, monocular vision leads to the specific configuration of *bearing-only SLAM*, in which only the directions of sight of the landmarks can be measured. This leads to observability problems during initialization. It is well known in the computer vision community that specific motions of the camera, or very distant landmarks, also lead to observability problems. To overcome this type of problem, *delayed* landmark-insertion techniques, such as local bundle adjustment or particle filtering, have been proposed. More recently, *undelayed* approaches have been investigated. These approaches generally rely on a probabilistic model of the depth distribution along the sight ray and require the use of particle filtering techniques or Gaussian multi-hypothesis methods. Another approach relies on the use of dense representations, instead of sparse ones based on landmarks. *We are applying these ideas to visual SLAM by stating the problem in terms of the optimization of a warping function directly expressed in the image space*. The function parameters capture not only the geometrical and photometrical aspects of the scene but also the camera motion. Robustness is enhanced by using a dense approach taking advantage of all the information available in the regions of interest, instead of a sparse representation based on features like Harris or SIFT points.
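The warping-function idea can be illustrated on a toy 1-D example (our sketch, not the team's actual algorithm): estimate a translation by Gauss-Newton minimization of the dense photometric error, i.e. by optimizing a warp directly in image space using every pixel of the region rather than sparse features.

```python
import numpy as np

# Toy 1-D analogue of direct, dense image alignment: estimate a
# translation p such that the warped observation I(x + p) matches the
# template T(x), by Gauss-Newton on the photometric error.
x = np.arange(0.0, 100.0)
T = np.exp(-((x - 40.0) ** 2) / 50.0)            # template "image"
I = np.exp(-((x - 43.0) ** 2) / 50.0)            # observation, shifted by 3

p = 0.0
for _ in range(50):
    warped = np.interp(x + p, x, I)              # I(x + p)
    g = np.gradient(warped)                      # photometric Jacobian w.r.t. p
    e = T - warped                               # dense photometric error
    p += np.sum(g * e) / np.sum(g * g)           # Gauss-Newton update
print(round(p, 2))  # converges to the true shift of 3.0
```

In the team's formulation the warp is of course far richer (geometry, photometry, and camera motion), but the principle is the same: all pixels in the region of interest contribute to the estimate.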

Nevertheless, solving the SLAM problem is not sufficient to guarantee autonomous and safe navigation. The choice of the representation of the map is, of course, essential. The representation has to support the different levels of the navigation process: motion planning, motion execution and collision avoidance, and, at the global level, the definition of an optimal strategy of displacement. The original formulation of the SLAM problem is purely metric (since it basically consists in estimating the Cartesian situations of the robot and a set of landmarks), and it does not involve complex representations of the environment. *However, it is now well recognized that several complementary representations are needed to perform exploration, navigation, mapping, and control tasks successfully. Like several authors, we have proposed to use composite models of the environment which mix topological, metric, and grid-based representations.* Each type of representation is well adapted to a particular aspect of autonomous navigation: the metric model allows one to locate the robot precisely and plan Cartesian paths, the topological model captures the accessibility of the different sites in the environment and allows a coarse localization, and the grid representation is useful for characterizing the free space and designing potential functions used for reactive obstacle avoidance. However, ensuring the consistency of these various representations during the robot's exploration, and merging observations acquired from different viewpoints by several cooperating robots, are difficult problems. This is particularly true when different sensing modalities are involved. New studies are needed to derive efficient algorithms for manipulating these hybrid representations (merging, updating, filtering...) while preserving their consistency.
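A composite model of this kind could be sketched as follows (hypothetical interfaces and field names, not the project's implementation; each layer mirrors one of the representations discussed above):

```python
# Sketch of a composite environment model: metric landmarks for precise
# localization, a topological graph for accessibility, and an occupancy
# grid for reactive obstacle avoidance. All names are illustrative.
class CompositeMap:
    def __init__(self, resolution=0.1):
        self.landmarks = {}      # metric layer: id -> (x, y)
        self.places = {}         # topological layer: place -> neighbours
        self.grid = {}           # grid layer: (i, j) -> occupancy in [0, 1]
        self.res = resolution

    def add_landmark(self, lid, xy):
        self.landmarks[lid] = xy

    def connect(self, a, b):     # accessibility link between two places
        self.places.setdefault(a, set()).add(b)
        self.places.setdefault(b, set()).add(a)

    def mark_free(self, x, y):   # grid cell used for obstacle avoidance
        self.grid[(round(x / self.res), round(y / self.res))] = 0.0

m = CompositeMap()
m.add_landmark(0, (1.2, 3.4))
m.connect("corridor", "room_A")
m.mark_free(1.2, 3.4)
print(len(m.landmarks), "room_A" in m.places["corridor"], (12, 34) in m.grid)
```

The hard problem discussed above is not storing the three layers, but keeping them mutually consistent as observations accumulate; this sketch only shows the structure on which such consistency mechanisms would operate.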

The exploration of an unknown environment relies on a robot motion strategy which makes it possible to construct a complete representation of the environment in minimal time or, equivalently, with displacements of minimal length. Few works have addressed these aspects so far. Most exploration approaches use a topological representation such as the *Generalized Voronoï Diagram (GVD)*. Assuming an infinite range for the sensors, the GVD provides an aggregated representation of the environment and an elegant means to solve the optimality problem. Unfortunately, the usual generalized Voronoï diagram, which is based on the L_2 metric, does not cope well with real environments and the bounded range of the sensors used in robotic applications. Building topological representations supporting exploration strategies in real-time remains a challenging issue which is pursued in ARobAS.
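A toy sketch can show how a GVD-like ridge is extracted from a grid in practice (hypothetical 7 x 5 grid world; note this uses the L1 grid distance of a wavefront expansion, not the L_2 metric discussed above):

```python
from collections import deque

# Brushfire-style wavefront propagation from two obstacle walls; cells
# where wavefronts with different origins meet approximate the GVD ridge.
W, H = 7, 5
walls = {0: [(0, y) for y in range(H)],        # left wall
         1: [(W - 1, y) for y in range(H)]}    # right wall

owner, q = {}, deque()
for label, cells in walls.items():
    for c in cells:
        owner[c] = label
        q.append(c)

gvd = set()
while q:
    x, y = q.popleft()
    for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if not (0 <= nb[0] < W and 0 <= nb[1] < H):
            continue
        if nb not in owner:
            owner[nb] = owner[(x, y)]          # propagate the wavefront
            q.append(nb)
        elif owner[nb] != owner[(x, y)]:
            gvd.add(nb)                        # two wavefronts meet here

# Grid discretization makes the ridge two cells thick here.
print(sorted({cx for cx, cy in gvd}))  # [3, 4]: midway between the walls
```

The sketch also hints at the difficulty noted above: a bounded sensor range means the wavefronts can only be grown from what has actually been observed, so the ridge must be built and revised incrementally.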

For large-scale environments and long-term survey missions, the SLAM process can rapidly diverge due to the uncertainties and drift inherent to dead-reckoning methods, and to the unavailability of absolute position measurements (as provided, for example, by a GNSS, whose drawback is that it is not operational everywhere nor at all times). The problem of motion control is rarely considered as a constitutive part of the SLAM problem. We advocate that autonomous navigation and SLAM should not be treated separately, but rather addressed in a unified framework involving perception, modeling, and control. Reactive navigation and sensor-based control constitute the core of our approach. Sensor-based control, whose design relies on the modeling of the interactions between the robot and its nearby environment, is particularly useful in this case. We have shown, in simulation and experimentally, that embedding the SLAM problem in a sensor-based control framework amounts to adding constraints on the relative pose between the robot and its local environment. In other words, the sensor-based control approach makes it possible to guarantee, under certain observability conditions, a uniformly bounded estimation error in the localization process. *We pursue research on the design of navigation functions capable, at the reactive control level, of ensuring collision-free robot motions and, at the navigation level, of implementing a (topologically) complete exploration of the environment in autonomous mode.*

Since robotic, or "robotizable", mechanisms are structurally nonlinear systems which, in practice, need to be controlled in an efficient and *robust* manner, the project-team ARobAS has a natural interest and activities in the domain of Automatic Control related to the theory of nonlinear control systems. Nonlinear control systems can be classified on the basis of the stabilizability properties of the linear systems which approximate them around equilibrium points. With this terminology, an autonomous controllable nonlinear system is called *critical* when the corresponding linearized systems are not asymptotically stabilizable (and therefore not controllable either). Whereas local stabilizers for non-critical systems can often be derived from their linear approximations, one has to rely on other – truly nonlinear – methods in the case of critical systems.

For robotic applications, one is concerned in the first place with the design of feedback laws which stabilize state-reference trajectories in the sense of ensuring small tracking errors despite adverse phenomena resulting from modeling errors, control discretization, measurement noise,...

The set of critical systems strictly encompasses that of controllable driftless systems affine in the control input (e.g. kinematic models of *nonholonomic wheeled vehicles*). Most of the existing literature on the subject has focused on these latter systems, due to their well-delimited and well-understood structural properties. On the other hand, nonlinear control-affine systems with a drift term that cannot be removed without rendering the system uncontrollable have been much less studied, even though many locally controllable *underactuated mechanical systems* (e.g. manipulators with non-actuated degrees of freedom, hovercraft, blimps, submarines,...) belong to this category of critical systems. However, there also exist underactuated mechanical systems which are not critical in the sense evoked above. Such is the case of flying machines with vertical take-off capabilities (helicopters, VTOL devices,...), whose linear approximations at an equilibrium are controllable due to the action of an external field of forces (the field of gravity, in the present case). Understandably, the control techniques used for these systems rely heavily on this property, even though, mathematically, the absence of such a field would not necessarily render the system itself (as opposed to its linear approximation) uncontrollable. This latter observation is important because it means that not all the structural controllability properties of the system have been exploited in the control design. It also implies that general control methods developed for critical systems could be applied to these non-critical systems, with their performance being less critically dependent on the existence and modeling of an external "stabilizing" field. To our knowledge, this research direction has never been explored before.

*To summarize, the problem of controlling critical nonlinear systems is relevant for most robotic devices other than fully-actuated holonomic manipulators. It is, of course, also relevant for other physical systems presenting similar structural control properties (an example of which is the induction electric motor).* We have been advocating for a few years that this problem needs to be investigated further by developing new control design paradigms and tools. Our conviction is based on a number of elements, a summary of which follows.

*Asymptotic stabilization of an equilibrium combining fast (say, exponential) convergence and a degree of robustness similar to what can be achieved for linear systems (e.g. stability against structured modeling errors, control discretization, time-delays, and manageable sensitivity with respect to measurement noise,...) has never been obtained.* Studies that we, and a few other researchers, have conducted towards this goal have met with mixed success, and we now strongly believe that no solution exists: basically, for these systems, fast convergence rules out robustness.

It is known that asymptotic stabilization of *admissible* state trajectories (i.e. trajectories obtainable as solutions of the considered control system) is "generically" solvable using classical control methods, in the sense that the set of trajectories for which the linear approximation of the associated error system is controllable is dense. Although this is a very interesting result which can be (and has been) thoroughly exploited in practice, it is also a misleading result whose limitations have been insufficiently pondered by practitioners, because it tends to convey the idea that all tracking problems can be solved by applying classical control techniques. The application of *Brockett's Theorem* to the particular case of a trajectory reduced to a single equilibrium of the system indicates that no smooth pure-state feedback can be an asymptotic stabilizer, and thus clearly invalidates this idea. If an asymptotic stabilizer exists, it has to involve a non-trivial dynamic extension of the initial system. The time-varying feedbacks that we were the first to propose for solving this type of problem in the case of nonholonomic systems are one example. *However, solving the problem for fixed equilibria still does not mean that "any" admissible trajectory can be asymptotically stabilized, nor that there exists a "universal" controller, even a complicated one, capable of stabilizing any admissible trajectory – whereas simple solutions to this latter problem are well known for linear systems. This lack of completeness of the results has severe practical implications which have not been sufficiently addressed.*

For instance, the non-existence of a "universal" stabilizer of admissible (feasible) trajectories has been proven in the case of nonholonomic systems. This result is conceptually important because it definitively ruins the hope of finding a complete solution to the tracking problem (in the usual sense of ensuring asymptotic stabilization), even for the simplest of the critical systems.

*To our knowledge, the problem of stabilizing* non-admissible *trajectories had never been addressed systematically, even in the case of fully-actuated nonholonomic systems, until our recent work.* A decade of active research devoted to the control of these systems (in the 1990's) left this issue wide open, even though it was known that, for a nonholonomic driftless system, the property of local controllability implies that any continuous non-admissible trajectory in the state space can be approximated with arbitrarily good precision by an admissible trajectory. While several open-loop control methods for calculating such an approximation have been proposed by various authors, the *practical* stabilization of non-admissible trajectories – the feedback control version of the problem – seems to have been completely overshadowed by the problem of asymptotic stabilization of admissible trajectories.

The range of feedback control design methods for nonlinear systems, especially those based on geometrical concepts, is limited and needs to be enlarged. Existing methods are often inspired by ideas and techniques borrowed from linear control theory. Whereas this makes good sense when the system is non-critical (including feedback linearizable systems), we contend that critical systems, being structurally different, call for revisiting and adapting the basic concepts and objectives on which control design methods lean. The notion of practical stabilization is an example of such an adaptation.

The objective of *practical stabilization* is weaker than the classical one of asymptotic stabilization: any asymptotic stabilizer is a practical stabilizer, whereas the converse is not true. However, this objective is not "much" weaker. In particular, instead of ensuring that the error converges to zero, a practical stabilizer ensures that this error is ultimately bounded by some number which can be made as small as desired (but different from zero). We assert that this "small" difference in the objective changes everything at the control design level, in the sense that none of the obstructions and impossibilities evoked previously holds any more: fast convergence to a set contained in a small neighborhood of the desired state can be achieved in a robust fashion, universal practical stabilizers of state trajectories exist, and, moreover, these trajectories do not have to be admissible. Furthermore, by accepting to weaken the control objective slightly, the set of control solutions is considerably enlarged, so that new control design methods can be elaborated. One of them is the *Transverse Function* approach, which we initiated a few years ago and continue to develop. It is based on a theorem which states the equivalence between the satisfaction of the Lie Algebra Rank Condition (LARC) by a set of vector fields and the existence of particular (bounded) periodic functions whose infinitesimal variations are *transversal* to the directions associated with these vector fields. For control purposes, the time-derivatives of the variables on which such transverse functions depend can be used as extra control inputs which facilitate the control of systems whose dynamics are either completely (the case of nonholonomic systems) or partially (the case of underactuated systems) driven by the vector fields with which the transverse function is associated. In the case of mechanical systems, these new control inputs are directly related to the frequency of the "manœuvres" that the system has to perform in order to track a given reference trajectory. With this interpretation in mind, one can say that the approach provides a way of adapting the frequency of the manœuvres automatically.
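In symbols (our sketch in generic notation, which is not fixed by the text above), the transversality condition can be stated as follows for a driftless system with fewer controls than states:

```latex
% Driftless system: m controls, n states, m < n
\dot{x} \;=\; \sum_{i=1}^{m} u_i\, X_i(x), \qquad x \in \mathbb{R}^n .

% Transversality: a bounded periodic function
% f : \mathbb{T}^p \to \mathbb{R}^n is a transverse function if
\operatorname{rank}\!\left[\, X_1(f(\alpha)),\, \dots,\, X_m(f(\alpha)),\,
  \frac{\partial f}{\partial \alpha_1}(\alpha),\, \dots,\,
  \frac{\partial f}{\partial \alpha_p}(\alpha) \right] \;=\; n,
\qquad \forall \alpha \in \mathbb{T}^p .
```

The partial derivatives of f complete the controlled directions X_i to full rank at every point, which is what allows the time-derivatives of the extra variables α to act as additional control inputs.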

We first tested feedback controllers derived with this approach on our laboratory unicycle-type mobile robot, with the goal of tracking an omnidirectional vehicle (the target) observed by a camera mounted on the robot (vision-based tracking). To our knowledge, this experiment is still unique of its kind. The results we obtained show a clear improvement with respect to earlier attempts based on the use of time-varying feedback techniques. Theoretically, the approach can be applied to any nonholonomic vehicle – car-like vehicles without or with trailers, in particular.

*More recently, we have adapted the approach to the problem of controlling nonholonomic mobile manipulators, i.e. manipulators mounted on nonholonomic mobile platforms, and have derived a general methodology for the coordinated motion of this type of robot.* It is based on the concept of an *omnidirectional companion frame*, which basically makes it possible to control the mobile platform as if it were omnidirectional. Feedback control laws devised with this methodology have properties never demonstrated before, such as the possibility of ensuring the perfect execution of a manipulation task on a moving object whose motion is not known in advance, with the guarantee that the manipulator will never run into its joint limits.

*Even more recently, we have started to extend the approach to the control of critical underactuated mechanical systems, a problem which is more difficult than the control of fully-actuated nonholonomic systems due to the necessity of including dynamical effects in the modeling equations of the system, which yields a drift term that cannot be treated as a perturbation to be pre-compensated.* For these systems, the objective is again to practically stabilize any desired trajectory (admissible or not) defined in the configuration space. To our knowledge, this problem had never been solved before, even for the simplest critical underactuated system (namely, the three-dimensional second-order chained system). Although we have already made much progress on this subject, and devised a control design method which applies to classical examples of critical underactuated mechanical systems involving a single rigid body, many aspects of the problem have not been explored yet, or need to be studied further. Several are related to the definition and exploitation of criteria to qualify and compare different implementations of the control design method, such as the property of making velocities tend to zero when the reference trajectory is reduced to a fixed point. Others concern the applicability and usefulness of the approach when the system is not critical (due, in particular, to the action of dissipative/friction forces combined with the gravity field).

Robustness is a central and vast issue for feedback control. Any feedback control design approach has to be justified in terms of the robustness properties associated with it. In the case of advanced robotic applications based on the use of exteroceptive sensors, robustness concerns in the first place the capacity to deal with imprecise knowledge of the transformations relating the space in which sensor signals live to the Cartesian space in which the robot evolves. A vast literature, including several book chapters and a large part of the publications on vision-based control, has addressed this issue in the case of fully-actuated holonomic manipulators. Comparatively, very little has been done on this subject in the case of nonholonomic and underactuated mobile robots. We have thus initiated studies (constituting the core of a PhD work) in order to determine i) how feedback control schemes based on the use of transverse functions can be adapted to the use of exteroceptive sensors when the above-mentioned transformations are not known precisely, and ii) how robust the resulting control laws are. The initial results we have obtained are encouraging, but the complexity of the analyses also tells us that future research efforts in this direction will have to rely heavily on simulation and experimentation.

Interacting with the physical world requires addressing perception and control aspects appropriately within a coherent framework. Visual servoing and, more generally, sensor-based robot control consist in using exteroceptive sensor information in feedback control loops which monitor the dynamic interactions between a robot and its environment. Since the beginning of the 1990's, much work has been done on sensor-based control in the case of fully-actuated holonomic systems. The control of these systems is much simplified by the fact that instantaneous motion along any direction of the configuration space is possible and can be monitored directly. *However, this assertion is not true in the case of critical or underactuated systems, like most ground, marine, or aerial robots. New research trends have to be investigated to extend the sensor-based control framework to these kinds of mechanisms.*
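For the fully-actuated case, the classical machinery can be made concrete with a textbook sketch (not the project's own formulation): the interaction matrix of a point feature for a perspective camera relates the six camera velocities to the image motion of the feature, and the standard control law inverts it.

```python
import numpy as np

# Classical interaction matrix of a normalized image point (x, y) at
# depth Z; columns correspond to the 6 camera velocities
# (3 translations, 3 rotations).
def interaction_matrix(x, y, Z):
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

# One step of the classical law v = -lambda L^+ (s - s*):
s = np.array([0.2, -0.1])          # current feature position
s_star = np.zeros(2)               # desired feature position
L = interaction_matrix(s[0], s[1], Z=1.0)
v = -0.5 * np.linalg.pinv(L) @ (s - s_star)
print(L.shape, v.shape)  # (2, 6) (6,)
```

The pseudo-inverse step assumes that any commanded camera velocity can actually be realized; for nonholonomic or underactuated vehicles this assumption fails, which is precisely the extension problem raised above.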

Robustness is needed to ensure that the controlled system will behave as expected. It is an absolute requirement for most applications, not only to guarantee the good execution of the assigned tasks, but also for safety reasons, especially when these tasks involve direct interactions with humans (robot-aided surgery, automatic driving,...). A control law can be called "robust" if it is able to perform the assigned stabilization task despite modeling and measurement errors. Determining the "size" of "admissible" errors is understandably important in practice. However, carrying out this type of analysis is usually technically quite difficult. For standard vision-based control methods, only partial results have been obtained in a limited number of cases. Recently, we have studied the robustness of classical vision-based control laws (relying on feedback linearization) with respect to uncertainties in the structure parameters, and proved that small estimation errors on these parameters can render the control laws unstable. This study has been extended to central catadioptric cameras. *One of our objectives is to develop tools for the evaluation of the robustness properties of sensor-based control schemes, for generic vision devices (by extending existing results).*
*One of our objectives is to develop tools for the evaluation of robustness properties of sensor-based control schemes, for generic vision devices (by extending existing results).*
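A scalar toy model (ours; the parameters are illustrative, not taken from the studies above) already shows how an estimation error can destroy stability: if the control law is computed with an estimated depth `Z_hat` while the true dynamics involve the true depth `Z`, the closed loop is stable only when the two have the same sign.

```python
import numpy as np

def simulate(e0, Z, Z_hat, lam=1.0, dt=0.01, steps=1000):
    # Toy closed loop: the commanded velocity is computed with the
    # *estimated* depth Z_hat, but the true error dynamics involve the
    # *true* depth Z, giving  e_dot = -lam * (Z_hat / Z) * e.
    e = e0
    for _ in range(steps):
        e += dt * (-lam * (Z_hat / Z) * e)
    return e

print(abs(simulate(1.0, Z=2.0, Z_hat=1.5)))   # converges: same sign
print(abs(simulate(1.0, Z=2.0, Z_hat=-0.5)))  # diverges: wrong sign
```

In the full multi-dimensional visual-servoing case, the stability condition involves the eigenvalues of a product of interaction matrices rather than a simple sign, which is what makes the analysis technically difficult.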

Sensor-based robot tasks were originally designed in the context of manipulation, with the control objective stated in terms of positioning and stabilizing the end-effector
of a manipulator with respect to a structured object in the environment. Autonomous navigation in an open indoor or outdoor environment requires the conceptualization and definition of new
control objectives. To this aim, a better understanding of the natural facilities that animals and human beings demonstrate when navigating in various and complex environments can be a source
of inspiration. Few works have addressed this type of issue with a focus on how to define navigation control objectives and formulate them mathematically in a form which can be exploited at
the control level by application of methods and techniques of Control Theory. Numerous questions arise. For instance, what is the right balance between planned (open-loop) and reactive
(feedback) navigation? Also, what is the relative importance of topological-oriented versus metric-oriented information during navigation? Intuitively, topological aspects encompassing the
accessibility of the environment seem to play an important role. They allow for navigation which does not heavily rely on the knowledge of Cartesian distances. For example, when navigating along a corridor, it is more important to have information about possibilities of access than to calculate the distance between walls precisely. The nature of the “percepts” at work in animal or
human autonomous navigation is still poorly known and understood. However, it would seem that the implicit use of an ego-centered reference frame with one of its axes aligned with the
gravitational direction is ubiquitous for attitude (heading and trim) control, and that specific inertial and visual data are somehow directly acquired in this frame. We have exploited a similar idea for the automatic landing of an aerial vehicle by implementing a visual feedback which uses features belonging to the plane at infinity (vanishing point and horizon line).
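A small geometric sketch (synthetic numbers of our choosing) of why such features are convenient: the image intersection of two parallel 3D lines, the vanishing point, depends only on their common direction, not on where the lines are, so it provides attitude information independent of position.

```python
import numpy as np

def project(X):
    # Pinhole projection with identity intrinsics (assumed for simplicity).
    return X[:2] / X[2]

def line_through(p, q):
    # Homogeneous image line through two image points.
    return np.cross(np.append(p, 1.0), np.append(q, 1.0))

# Two parallel 3D lines sharing direction D (think of the runway edges).
D = np.array([0.0, 0.1, 1.0])
X0a, X0b = np.array([-1.0, 0.5, 2.0]), np.array([1.0, 0.5, 2.0])

la = line_through(project(X0a), project(X0a + 3.0 * D))
lb = line_through(project(X0b), project(X0b + 3.0 * D))

# Their image intersection is the vanishing point: the projection of the
# direction D itself, independent of the offsets X0a, X0b.
vp_h = np.cross(la, lb)
vp = vp_h[:2] / vp_h[2]
print(vp, project(D))   # the two coincide
```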
*It is also probable that the pre-attentive and early cognitive vision emphasized by Gestalt theory provides useful inputs to the navigation process in terms of velocity, orientation or symmetry vector fields. Each of these “percepts” contributes to the constitution of sub-goals and elementary behaviors which can be adaptively inhibited or reinforced during the navigation process. Currently, little is known about the way animals and humans handle these different, and sometimes antagonistic, sub-goals to produce "effective" motions. Monitoring concurrent sub-goals, within a unified sensor-based control framework, is still an open problem which involves both perception and control issues.*

Advanced robotics offers a wide spectrum of application possibilities entailing the use of mechanical systems endowed, to some extent, with capacities of autonomy and capable of operating in automatic mode: intervention in hostile environments, long-range exploration, automatic driving, observation and surveillance by aerial robots,... without forgetting emerging and rapidly expanding applications in the domains of robotic domestic appliances, toys, and medicine (surgery, assistance to handicapped persons, artificial limbs,...). A characteristic of these emerging applications is that the robots assist, rather than compete with, human beings. Complementarity is the central concept: the robot helps the operator take decisions or extends his physical capacities. The recent explosion of applications and new scientific horizons is a tangible sign that Robotics, at the crossroads of many disciplines, will play a ubiquitous role in the future of Science and Technology.

We are currently involved in a number of applications, a list of which follows. Our participation in these applications is limited to the transfer of methods and algorithms; implementation and validation are left to our partners.

*Ground robotics:* Since 1995, Inria has been promoting research in the field of intelligent transport systems. Our activity concerns the domain of future transportation systems, with a participation in the national Predit project *MobiVIP*. In this project, we address autonomous and semi-autonomous navigation (driving assistance) of city cars using data provided by visual or telemetric sensors. This is closely related to the problems of localization in an urban environment, and of path planning and following subject to stringent safety constraints (detection of pedestrians and obstacles) within large and evolving structured environments. The ANR project *CityVIP*, beginning in 2008, follows the Predit project *MobiVIP*, which ended in 2006.

We are also involved in the ANR project *LOVe*, with Renault and Valeo as industrial partners. Associated with the *Pôle de compétitivité System@tic*, this project aims at preventing pre-crash accidents by real-time vision-based detection and tracking of pedestrians and dynamic obstacles.

Finally, since 2004 we have participated in two projects conducted by the DGA (French Defense) in the field of military robotics. PEA MiniRoc addresses a typical SLAM problem based on sensory data fusion, complemented with control/navigation issues: on-line indoor environment exploration, modeling and localization with a mobile robot platform equipped with multiple sensors (laser range-finder, omnidirectional vision, inertial gyrometer, odometry). More recently, PEA Tarot addresses autonomy issues for military outdoor robots. Our contribution focuses on the transfer and adaptation to operational conditions of our results in real-time visual tracking for platooning applications.

*Aerial robotics* will grow in importance for us. Existing collaborations with the Robotics and Vision Group at CenPRA in Campinas (Brazil) and the Mechanical Engineering Group at IST in Lisbon (Portugal) will be pursued to develop an unmanned airship for civilian observation and survey missions. Potential end-user applications for such vehicles are either civilian (environmental monitoring, surveillance of rural or urban areas, rescue deployment after natural disasters...) or military (observation or tactical support...). The experimental setup AURORA (*Autonomous Unmanned Remote Monitoring Robotic Airship*), located in Campinas, consists of a 9-meter-long airship instrumented with a large set of sensors (GPS, Inertial Navigation System, vision,...). Vision-based navigation algorithms are also studied in the FP6 STReP European project Pegase, led by Dassault, which is devoted to the development of on-board systems for autonomous take-off and landing when dedicated airport equipment is not available.

Aerial vehicles with vertical take-off and maneuvering capabilities (VTOLs, blimps) also involve difficult control problems. These vehicles are underactuated and locally controllable. Some of them are critical systems, in the sense that their linearized equations of motion are not controllable even under the action of gravity (like blimps in the horizontal plane), whereas others are not critical thanks to this action (like VTOLs). Our objective is to propose control strategies well suited to these systems for different stabilization objectives (e.g. teleoperated or fully autonomous modes). For example, a question of interest to us is to determine whether the application of control laws derived with the transverse function approach is pertinent and useful for these systems. The main difficulties associated with this research are related to practical constraints. In particular, strong external perturbations, like wind gusts, constitute a major issue for the control of these systems. Another issue is the difficulty of estimating the situation of the system precisely, due to limitations on the information that can be obtained from the sensors (e.g. in terms of measurement precision or data acquisition frequency). Currently, we address these issues in the ANR project SCUAV (Sensory Control of Unmanned Aerial Vehicles), involving several academic research teams and the French company BERTIN Technologies.

*Underwater robotics:* We have a long-standing collaboration with Ifremer's Robotics Center in Toulon in the field of underwater robotics. The objective of the *Themis* project is to design an active stereovision head controlled via visual-servoing techniques, in order to build an accurate metrology system for underwater environments. Methodologies have been developed and prototyped using our robotic platform, and their transfer to Ifremer's Robotics Center in Toulon is currently in progress. An industrial device has been designed, and first trials were conducted in August 2006 on the Lucky Strike site (depth: 1700 m) in the Azores.

ESM Tracking and Control Software: this software allows for visual tracking and servoing with respect to planar objects. It has been successfully tested on the CyCabs in a car platooning application. This software is distributed under a license agreement.

The Omnidirectional Calibration Toolbox is Matlab software developed for the calibration of different types of single-viewpoint omnidirectional sensors (parabolic, catadioptric, dioptric), based on a new calibration approach that we have proposed. The toolbox is freely available over the Internet.

Methodological solutions to the multi-faceted problem of robot autonomy have to be combined with the ever-present preoccupation of robustness and real-time implementability. In this respect, validation and testing on physical systems is essential, not only as a means to bring together all aspects of the research done in ARobAS –and thus maintain the coherence and unity of the project-team– but also to understand the core of the problems on which research efforts should focus in priority. The instrumented indoor and outdoor wheeled robots constitute a good compromise in terms of cost, security, maintenance, complexity and usefulness to test much of the research conducted in the project-team and to address real-size problems currently under investigation in the scientific community. For the next few years, we foresee on-site testbeds dedicated to ground robotic applications (figure ).

*HANNIBAL cart-like platform*

Succeeding our old indoor mobile robot ANIS, a new cart-like platform, built by Neobotix and capable of moving on flat surfaces both indoors and outdoors, was acquired last year. This platform will be equipped with the various sensors needed for SLAM purposes, autonomous navigation and sensor-based control. Once its programming is further developed to become user-friendly, it should be one of the team's main testbeds for fast prototyping of perception, control and autonomous navigation algorithms.

*CyCab urban electrical car*

Two instrumented electrical cars of the *CyCab* family are destined to validate research in the domain of *Intelligent urban vehicles*. The *CyCabs* are used as experimental testbeds in several national projects.

The realization of complex robotic applications, such as autonomous exploration of large-scale environments or observation and surveillance by aerial robots, requires developing and combining methods from various research domains: sensor modeling, active perception, visual tracking and servoing, etc. This raises several issues.

To simplify the setup, it is preferable to integrate, as far as possible, methods in computer vision and robotic control in a unified framework. Our objective is to build a generic, flexible and robust system that can be used for a variety of robotic applications.

To facilitate the transfer of control methods to different systems, it is preferable to design control schemes which rely weakly on a priori knowledge about the environment; this knowledge is instead reconstructed from sensory data.

To get reliable results in outdoor environments, the visual tracking and servoing techniques should be robust against uncertainties and perturbations. In the past, lack of robustness has hindered the use of vision sensors in complex applications.

Many computer vision applications, including robot navigation, visual servoing, and virtual and augmented reality, can take advantage of a camera with a large field of view. An effective way to enhance the field of view is to use omnidirectional cameras, which provide a 360° panoramic view of the scene. However, such devices have a non-isotropic resolution. As a consequence, standard computer vision methods designed for perspective cameras (e.g. filtering operators) may produce very poor results. We aim at developing a new formulation of these operators well adapted to catadioptric sensors. The difficulty of image processing with these devices comes from the non-linear projection model, which results in changes of shape in the image that almost forbid the direct use of methods designed for perspective cameras. A possible solution to avoid the deformation problem is to define the image processing on a sphere. Indeed, all central catadioptric cameras can be described by a unified model that includes a projection from a unit sphere. Thus, image processing operators can be designed on the sphere by using, for example, spherical harmonics theory. We have shown that spherical image processing improves the accuracy and repeatability of feature extraction in omnidirectional images (figure ).
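The unified projection model mentioned above can be sketched in a few lines (an illustration of ours; the mirror parameter `xi` follows common usage and the sample point is arbitrary): a 3D point is first projected onto the unit sphere, then reprojected onto the image plane from a point shifted by `xi` along the sphere's axis.

```python
import numpy as np

def unified_project(X, xi):
    # Unified central projection model:
    # 1) project the 3D point onto the unit sphere;
    # 2) reproject from a center shifted by xi along the axis
    #    onto the normalized image plane.
    Xs = X / np.linalg.norm(X)       # point on the unit sphere
    return Xs[:2] / (Xs[2] + xi)     # normalized image coordinates

X = np.array([0.5, -0.2, 2.0])
print(unified_project(X, xi=0.0))  # xi = 0: ordinary perspective projection
print(unified_project(X, xi=1.0))  # xi = 1: parabolic catadioptric camera
```

Designing filtering operators on the sphere (step 1) rather than on the distorted image plane (step 2) is what removes the shape changes described above.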

Tracking is a fundamental step for various computer vision applications, but very few articles on the subject have been published for catadioptric systems. Parametric models such as homography-based approaches are well adapted to this problem. Our results on ESM tracking have been extended to all single-viewpoint sensors, and the algorithm has been successfully applied to the precise motion estimation of a 6 degrees-of-freedom mobile robot from a calibrated omnidirectional camera. We have studied how to extend the approach to the uncalibrated visual tracking case. The main idea is to solve an optimization problem where all the parameters are unknowns. We found that the solution of the optimization problem allows one to obtain the true parameters (i.e. the homography and the parameters of the catadioptric camera) from only two images of the same planar object. Thus, the uncalibrated direct visual tracking algorithm can be used to self-calibrate the catadioptric system. We are currently working on the proof of the uniqueness of the solution to the optimization problem for any central camera (including perspective cameras).
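For reference, the homographies manipulated by such trackers can also be estimated from point correspondences with the standard Direct Linear Transform (a generic textbook sketch with synthetic data, not the ESM algorithm itself):

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: estimate H such that dst ~ H @ src
    # (homogeneous equality), from at least 4 correspondences.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)      # null-space vector of the system
    return H / H[2, 2]            # fix the projective scale

# Synthetic check: warp 4 points with a known H and recover it.
H_true = np.array([[1.1, 0.05, 10.0], [-0.02, 0.95, -5.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
dst = []
for p in src:
    q = H_true @ np.append(p, 1.0)
    dst.append(q[:2] / q[2])
H = homography_dlt(src, np.array(dst))
print(np.allclose(H, H_true, atol=1e-6))  # True
```

Direct methods such as ESM instead estimate the same 8 parameters from raw pixel intensities, avoiding explicit point extraction and matching.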

The data association problem can be formulated as parametric image registration. The aim is to find the parameters that align a current image with a reference one. However, this formulation demands efficient optimization algorithms with a wide convergence domain. In this context, we proposed the Efficient Second-order approximation Method (ESM), which achieves a faster convergence rate and a wider convergence domain than standard optimization methods. Parametric image registration was initially proposed for tracking planar structures only. We have since extended the approach to the tracking of rigid and deformable surfaces and to handling generic illumination changes. The efficiency of our approach for industrial applications has also been demonstrated. However, we have only considered gray-scale images so far. In many situations color cameras provide much richer information than gray-scale cameras. When considering color images the alignment problem becomes harder, since illumination changes can also produce variations in color perception. To simplify the problem, standard methods generally suppose that the scene is Mondrian.

We propose an approach that exploits the results of the dense image registration performed by the visual tracking. The parameters involved in the image alignment process depend on physical parameters (camera displacement, 3D structure, ...). Once calculated, these parameters can be used to drive a robot to a desired position in Cartesian space. The precision of the positioning depends on the quality of the visual tracking. Simulation results show that visual tracking using dense information can be accurate enough to perform robot navigation in relatively large-scale environments. Instead of calculating physical parameters, one can use estimated parameters obtained via a teaching-by-showing approach (i.e. a reference image is given as input). We have proposed to use the output of the ESM visual tracking algorithm directly for visual servoing. The proposed method is called “direct” visual servoing because (i) pixel intensities are used without any feature extraction process (this means that all possible image information is exploited), and (ii) the proposed control error as well as the control law are exclusively based on image data, i.e. no metric 3D measure is either required or explicitly estimated. We prove that the proposed control error is globally isomorphic to the camera pose, that the error is motion- and shape-independent, and that the derived control law is locally stable. Since only image information is used to compute the error, the proposed scheme is robust to large camera calibration errors.
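The "direct" idea of aligning raw intensities rather than extracted features can be illustrated in one dimension (a toy sketch of ours with a synthetic intensity profile; the actual method optimizes over full homography and photometric parameters):

```python
import numpy as np

def intensity(x):
    # Smooth synthetic 1D "image", defined everywhere.
    return np.sin(x) + 0.5 * np.sin(2.3 * x)

def estimate_shift(xs, I_ref, steps=50):
    # Feature-less alignment: find the shift t minimizing
    # sum (I(x + t) - I_ref(x))^2 by Gauss-Newton on raw intensities.
    t, h = 0.0, 1e-4
    for _ in range(steps):
        r = intensity(xs + t) - I_ref                   # photometric residual
        g = (intensity(xs + t + h) - intensity(xs + t - h)) / (2*h)  # gradient
        t -= (g @ r) / (g @ g)                          # Gauss-Newton update
    return t

xs = np.linspace(0.0, 6.0, 200)
t_true = 0.15
I_ref = intensity(xs + t_true)
print(estimate_shift(xs, I_ref))  # recovers the shift, about 0.15
```

Every sample contributes to the estimate, which is what gives dense methods their accuracy; the price is the limited convergence domain that efficient second-order schemes are designed to enlarge.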

State observers are needed to make systems perform autonomously. Typically, such an observer is in charge of estimating the state of a robot from a set of measurements acquired by on-board sensors. Linear observers have been extensively studied in the literature, but they are not always best adapted to nonlinear robotic systems whose state evolves on a Lie group. For example, the position and the orientation of a robot in Cartesian space can be represented as an element of the Special Euclidean group SE(3). It is important to design observers that take into account the nonlinear properties of the state equations. Standard approaches are based on the linearization of the state equations. Working on Lie groups directly may present the distinct advantage of yielding global, or semi-global, stability results. This is important for several applications, like vision-based control of aerial robotic systems, because fast accelerations and unmodeled environmental perturbations may temporarily produce large estimation errors. Along this direction we have worked out an observer in the measurement space of a vision sensor, by identifying elements of the Special Linear group SL(3) with homographies measured by a camera observing a planar object. We have proved the semi-global stability of the observer when the relative camera/object velocity –in sl(3), the Lie algebra associated with the SL(3) group– is constant.
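A small numeric illustration of the identification used above (our sketch, with arbitrary matrices): since a homography is defined only up to scale, normalizing its determinant to one places it in SL(3), and the group is closed under composition.

```python
import numpy as np

def to_sl3(H):
    # A homography is defined up to scale; fixing det(H) = 1 identifies
    # it with an element of the Special Linear group SL(3).
    return H / np.cbrt(np.linalg.det(H))

H1 = to_sl3(np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -1.0], [0.01, 0.0, 1.0]]))
H2 = to_sl3(np.array([[1.0, -0.2, 0.5], [0.1, 1.1, 2.0], [0.0, 0.02, 1.0]]))

# SL(3) is closed under composition: composed homographies stay in it,
# which is what allows an observer to be posed directly on the group.
print(np.linalg.det(H1 @ H2))  # 1.0 up to rounding
```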

In the *Simultaneous Localisation And Mapping* (SLAM) paradigm, the robot moves from an unknown location in an unknown environment and proceeds to incrementally build up a navigation map of the environment, while simultaneously using this map to update its estimated position.

Standard methods for solving the SLAM problem are classically based on the Extended Kalman Filter (EKF-SLAM) or on particle filters (FastSLAM). These methods aim to compute on-the-fly a current estimate of the robot pose x_k conjointly with a current estimate of the map M_k = [m_0 ... m_n]_k from the sensor data z_k observed at time k. Such approaches are local and prohibit the update of the past robot poses (x_0, ..., x_i, ..., x_{k-1}) from the observations at time k. It is now well known that they are not consistent and lead to an important drift in the trajectory estimation when the robot is moving in a large-scale environment. More recently, global approaches have stated the estimation problem as an *optimization* problem (often referred to as SAM (*Smoothing and Mapping*) approaches in the SLAM community, or *bundle adjustment* approaches in the Computer Vision community): instead of computing P(x_k, M_k | z_k, u_k), we aim to compute P(x_0, ..., x_i, ..., x_k, M_k | z_0, ..., z_i, ..., z_k, u_0, ..., u_i, ..., u_k). Even if global approaches are time and memory consuming, they appear less sensitive to inconsistencies than EKF-SLAM or FastSLAM. We studied and compared two approaches: GraphSLAM, a probabilistic method based on a Gaussian hypothesis, and "interval SLAM", a deterministic approach based on an error-bounded framework. A comparison of the algorithms was made in simulation, for the bearing-only case. Landmarks are 3D points from which we measure the bearing and elevation angles. The results show the consistency of both algorithms when the errors are centered. In this case, if we look at the size of the belief areas provided by the algorithms, GraphSLAM delivers better results than interval SLAM. Finally, we show that the GraphSLAM algorithm becomes inconsistent when input data are biased. In the latter case, interval SLAM appears as a good alternative providing consistent results (figure ).
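A minimal sketch (with made-up measurements) of the smoothing-and-mapping idea: the whole 1D trajectory is estimated at once by least squares, so a loop-closure measurement corrects past poses, which a filter keeping only the current pose cannot do.

```python
import numpy as np

# Three robot poses x0, x1, x2 linked by measurements (unit variance):
#   prior:        x0        = 0.0
#   odometry:     x1 - x0   = 1.0
#   odometry:     x2 - x1   = 1.1
#   loop closure: x2 - x0   = 2.0
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [-1.0, 0.0, 1.0]])
b = np.array([0.0, 1.0, 1.1, 2.0])

# Global least-squares solve over the *whole* trajectory, as opposed to
# filtering approaches that only update the current pose x2: the loop
# closure redistributes the odometry inconsistency over x1 and x2.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

In a real SAM problem the measurement functions are nonlinear and the system is solved iteratively, but the sparse least-squares structure is the same.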

Safe and efficient navigation in large-scale unknown environments remains a key problem which has to be solved to improve the autonomy of mobile robots. The complexity of global SLAM methods increases dramatically with the size of the environment, and the accuracy and integrity of the estimation process can no longer be guaranteed. Many of these problems arise from the intrinsic nonlinearity of the SLAM equations, which have to be linearized around equilibrium points and are consequently only locally valid. We are investigating a new approach to reduce the complexity of the algorithms thanks to the definition of local submaps. In contrast with other approaches dealing with local maps, our main purpose is not only to reduce the computational cost but also the uncertainties in the state estimation. We show that it is possible to improve the local covariance parameters by choosing a good point for conditioning the solution. We deduce from this result a theoretically well-founded criterion for starting a new map. This criterion, based on the comparison of two covariance determinants, makes it possible to adapt the size of the map. Experiments show that this is relevant; for instance, it succeeds in separating two maps which are intuitively independent. We also provide a rule to propagate information from the previous map into the new submap. Finally, a method to construct consistent and accurate local maps was also established and validated both in simulation and with real data in the case of bearing-only SLAM (figure ).
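A determinant-based rule of this kind can be sketched as follows (a hypothetical toy version with parameters of our choosing, for illustration only; the actual criterion compares two specific covariance determinants derived from the conditioning analysis):

```python
import numpy as np

def propagate(P, Q):
    # One prediction step for a static linear model: the covariance
    # simply grows by the process noise Q.
    return P + Q

# Toy rule: monitor the covariance determinant (a scalar measure of the
# uncertainty volume) and open a new submap once it exceeds a chosen
# multiple of its value at the submap's creation.
P = 0.01 * np.eye(2)     # covariance when the submap is created
Q = 0.002 * np.eye(2)    # per-step process noise
det0 = np.linalg.det(P)
threshold = 50.0

steps = 0
while np.linalg.det(P) / det0 <= threshold:
    P = propagate(P, Q)
    steps += 1
print(steps)  # steps elapsed before a new submap is started
```

Because the determinant aggregates uncertainty over all state dimensions, such a rule naturally adapts the submap size to how fast the estimation degrades.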

Using monocular vision in SLAM provides rich perceptual information but leads to unobservability problems due to the loss of the depth dimension. We have investigated how to improve visual SLAM using direct methods for image registration. Traditionally in monocular SLAM, interest features are extracted and matched in successive images; outliers are rejected a posteriori during a pose estimation process, and then the structure of the scene is reconstructed. In this work we depart from this paradigm and propose a new approach to perform the core of monocular SLAM. The proposed technique computes the 3D pose and the scene structure simultaneously, directly from image intensity discrepancies. In this way, instead of depending on particular features, all possible image information is exploited. In other words, motion and structure are directly used to align multiple reference image patches with the current image so that each pixel intensity is matched as closely as possible. Besides these global and local geometric parameters, global and local photometric ones are also included in the optimization process. This enables the system to work under illumination changes and to achieve more accurate alignments. In turn, the global variables related to motion directly enforce the rigidity constraint of the scene during the minimization. Hence, besides increasing accuracy, the technique becomes naturally robust to outliers.

The proposed technique also differs from existing direct methods in many other aspects. Standard strategies usually do not consider the strong coupling between motion and structure. Furthermore, we propose a suitable structure parameterization which enforces, during the optimization, the positive depth (cheirality) constraint. Moreover, we advocate the parameterization of the visual tracking by the Lie algebra, which further improves its stability and accuracy. In addition, it is well known that representing a scene as composed of planes improves computer vision algorithms in terms of accuracy, stability and rate of convergence. For this reason, we suppose that any regular surface can be locally approximated by a set of planar patches. To respect real-time requirements, an appropriate selection of a subset is performed. Another contribution concerns the initialization of the visual SLAM. This is not a trivial issue since, at the beginning of the task, any scene can be viewed as composed of a single plane: the plane at infinity. The scene structure only appears when the translation of the camera becomes sufficiently large with respect to the depths of the objects in the scene. We propose a new solution for initializing the system whereby the environment is not assumed to be non-planar. The experimental and simulation results demonstrate that the image regions are stable for larger camera motions and variations of illumination than with traditional methods. Hence, by exploiting the same information over long periods of time, one avoids an early accumulation of drift or even a total failure of the system.

A novel approach to 3D visual odometry, which avoids explicit reconstruction of the structure of the scene, has also been proposed.

An appearance-based tracking method has been developed using image transfer with the quadrifocal tensor. In this way, the stereo relationship between positions in a sequence may be obtained, which describes the trajectory of the cameras in projective space. If the stereo pair is calibrated, then the only unknown is the pose between two pairs of images, and it is possible to develop an accurate nonlinear iterative minimization procedure to track the multi-focal tensor across the sequence. The advantages of this approach are multiple, including that the 3D model of the scene is not explicitly reconstructed, since it is encoded in the dense 2D correspondences between the stereo image pair. No initialisation between a 3D model and the camera is required, since tracking is performed relative to the camera position. Only one single global estimation process is necessary, thereby eliminating sources of error and allowing robust statistics to be used. In the visual odometry approach, the dense correspondences encoding the 3D structure are computed for each image pair without using knowledge of prior correspondences. This allows for faster computation, but it does not enforce the constraint that the observed scene is rigid. We are currently studying how to enforce this constraint efficiently.

In the context of the *FP6 STReP European Project Pegase*, we are in charge of developing novel vision-based embedded systems for autonomous take-off and landing when dedicated airport equipment is not available. The basic idea is to estimate the pose of the aircraft with respect to the runway during the final approach, from the computation of the homography between the current image and a georeferenced image taken above the touchdown point, previously acquired during a learning phase.

The *Tarot* project addresses autonomy issues for military ground vehicles with a particular emphasis on platooning applications. A classical scenario is a convoy of vehicles going across a dangerous area (e.g. a minefield), where each vehicle has to track the trajectory of the vehicle ahead perfectly. Such a task can be formulated as a visual tracking task and implemented using the ESM visual tracker presented above. First experiments are currently being carried out on the unmanned vehicle developed by Thalès, equipped with a pan-and-tilt unit (figure ).

The aim of the *LOVe* project is to detect and track pedestrians seen by a camera mounted on a car. The trajectory of the pedestrian with respect to the car is estimated in order to decide whether the pedestrian may collide with the vehicle. We have used the ESM visual tracker presented above to estimate the pedestrian's motion. The experiments show that the tracker performs well when the appearance of the pedestrian does not change too much. We are currently addressing the problem of continuing the tracking despite strong changes in the pedestrian's appearance and despite temporary occlusions.

This joint collaboration with Ifremer aims at developing, implementing and testing an original robotic method to compute 3D metric reconstructions in order to describe and quantify the biodiversity in deep-sea fragmented habitats. The images used for the reconstruction are acquired from an underwater vehicle lying on the seafloor. A stereo rig is carried by a 6-DOF manipulator arm mounted on the vehicle. The images are subjected to several constraints related to the underwater environment. First, the observed scenes are not known in advance, and the objects reconstructed from these scenes have random texture and form. We only know that the objects are rigid and that they are roughly vertically shaped. Refraction combined with the presence of particles, light absorption, and other lighting-related problems considerably alter the quality of the images. From noisy images and a model of the object with many unknown parameters, it is very difficult to obtain a precise 3D reconstruction. The idea is to constrain the image acquisition process via a visual servoing approach in order to reduce the number of unknown parameters in the reconstruction computation. It consists in capturing a reference image with the right camera at a given position, and then converging towards this position with the left camera. The distance and the angle between the two cameras constrain the trajectory followed by the cameras as the visual servoing is iterated (see figure -left). Because the underwater cameras are not identical, and the intrinsic parameters may vary according to the environment, we have focused our attention on an intrinsic-free visual servoing method. The proposed visual servoing scheme has been validated experimentally on two different robots: the robot ANIS at INRIA and the robot *MAESTRO* at IFREMER. During the Momareto cruise in 2006 in the Azores, we tested the method on Ifremer's Remotely Operated Vehicle (ROV) *Victor6000*, with the stereovision rig hung from the tip of the robotic arm *MAESTRO* (see figure -right). Underwater images were acquired by the stereo rig on two different sites (Menez Gwen and Lucky Strike) at depths down to 1700 meters.

For control design purposes, the Transverse Function (TF) approach applies directly to driftless systems which are invariant on a Lie group, with the complementary advantage of yielding (semi-)global –as opposed to local– practical stabilization in this case. When the driftless system to be controlled is not invariant on a Lie group, one can think of several ways to adapt the approach. One consists in working with a controllable homogeneous approximation of the system (possibly complemented with a dynamic extension) which is invariant on a Lie group. Such an approximation always exists when the original system is locally controllable, so this method is very general. However, it has the disadvantage of only ensuring local practical stability, under the constraint of using a "small" enough TF. Another possibility consists in synthesizing the control expression from an invariant feedback-equivalent system. In this case the domain of stability usually coincides with the domain of feedback-equivalence, which is typically larger than the stability domain obtained with a homogeneous approximation. However, this method is not always applicable, because the existence of such an invariant equivalent system is only granted for a small subclass of driftless systems. A third possibility consists in considering a non-invariant feedback-equivalent system possessing "adequate" structural properties, together with a complementary method for the TF design (based on an invariant local approximation, for instance). As in the previous case, the obtainable domain of practical stability may coincide with the domain of feedback-equivalence (with the non-invariant system, this time), so it can be even larger. But again, it only concerns a specific subclass of driftless systems.
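For reference, the transversality condition underlying the approach can be recalled in its standard form (the notation follows the TF literature and is ours, not taken from the report): for a driftless system with m control vector fields on an n-dimensional manifold M, a smooth function f defined on the torus T^(n-m) is transverse if the control directions and the tangent directions of f jointly span the whole tangent space at every point:

```latex
\dot{x} = \sum_{i=1}^{m} u_i\, X_i(x), \qquad
\operatorname{rank}\!\left(
  X_1(f(\alpha)), \ldots, X_m(f(\alpha)),
  \frac{\partial f}{\partial \alpha_1}(\alpha), \ldots,
  \frac{\partial f}{\partial \alpha_{n-m}}(\alpha)
\right) = n, \quad \forall \alpha \in \mathbb{T}^{\,n-m}.
```

This condition is what allows the tracking error to be made arbitrarily small (practical stabilization) even for non-feasible reference trajectories, with the "size" of f setting the precision.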

The kinematic equations of a unicycle with *N* trailers form a control system which can be used to illustrate these possibilities. In particular, the control solution obtained with the latter one yields the practical stabilization, with arbitrary pre-specified precision, of any (not necessarily feasible) desired trajectory for the last trailer, provided that the initial angle between two consecutive trailers is within the interval . Moreover, by a proper choice and tuning of the transverse function, the tracking errors converge to zero exponentially when the reference trajectory is feasible and makes the linearized error equation controllable. Therefore, whereas the proposed TF control solution can handle situations for which conventional control schemes fail to operate properly, it achieves the same performance as these schemes in their own specific domain of operation. The results of this study are reported in two papers. The first one , submitted to a robotics journal, provides a survey of the TF approach applied to wheeled vehicles with simulation results for the unicycle and car-like cases. The second one , accepted for presentation at the 2008 IEEE CDC, provides a detailed generalization of the car case to the *N*-trailers case, with simulation results for a unicycle with two trailers (which is kinematically equivalent to a car or truck pulling a single trailer).
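For reference, the kinematic model underlying these systems can be sketched as follows. This is a generic simulation of a unicycle pulling a single on-axle trailer (the simplest N = 1 case); the function and variable names are ours and not taken from the cited papers:

```python
import math

def step(state, v, w, dt, d=1.0):
    """One Euler integration step of a unicycle pulling one on-axle trailer.
    state = (x, y, th0, th1): unicycle position/heading and trailer heading.
    v, w : linear and angular velocity inputs; d : hitch length."""
    x, y, th0, th1 = state
    x += v * math.cos(th0) * dt
    y += v * math.sin(th0) * dt
    th0 += w * dt
    # Rolling-without-slipping constraint on the trailer wheels:
    th1 += (v / d) * math.sin(th0 - th1) * dt
    return (x, y, th0, th1)

# Straight-line motion: the trailer heading converges to the unicycle heading.
s = (0.0, 0.0, 0.0, 1.0)   # trailer initially misaligned by 1 rad
for _ in range(5000):
    s = step(s, v=1.0, w=0.0, dt=0.01)
```

For forward motion the hitch-angle dynamics are contracting, which is why simple maneuvers are easy; the difficulty addressed by the TF approach arises for arbitrary (e.g. non-feasible or backward) reference trajectories.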

This work is the subject of a collaboration with Dr. M. Ishikawa from the University of Kyoto.

The basic trident snake mechanism, as described in , is depicted in Figure . It is composed of a triangular-shaped body and three actuated rotary articulations –located at the triangle's vertices– which connect this body to three arms equipped with (non-actuated) wheels at their extremities. The body is displaced by modifying the articulation angles, assuming that the wheels roll without slipping. Like more classical wheeled mobile robots, this is a locally controllable nonholonomic system whose kinematics can be written in the form of a driftless control system. Its configuration consists of the body's position/attitude complemented with the three actuated angles. The dimension of the mechanism's configuration space is thus equal to six, and it has three control inputs. However, it cannot be controlled exactly like a unicycle or car-like vehicle (with, or without, trailers) because its controllability algebra is not generated in the same manner. For instance, it cannot be transformed locally into the chained form. Two other challenging features of this system –for its control– are that it is not invariant on a Lie group and that it has mechanical singularities (when the three wheels' axles are parallel or cross at the same point), the passage through which can be destructive –as in the well-documented case of parallel manipulators. Along with the monitoring of the body's motion, the avoidance of these singularities is thus one of the control objectives. Initial results concerning the open-loop control of this nonholonomic system can be found in the above-mentioned reference but, to our knowledge, the feedback control and stabilization problems have never been addressed before.
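The "controllability algebra" invoked above is generated by iterated Lie brackets of the control vector fields. As an illustration on a much simpler system than the trident snake, the following generic sketch (our own code, not from the cited work) computes the first bracket for a unicycle numerically, recovering the sideways direction that neither input produces directly:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of a vector field f at state x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = Jg(x) f(x) - Jf(x) g(x)."""
    return jacobian(g, x) @ f(x) - jacobian(f, x) @ g(x)

# Unicycle state (x, y, theta); two control vector fields:
X1 = lambda q: np.array([np.cos(q[2]), np.sin(q[2]), 0.0])  # forward motion
X2 = lambda q: np.array([0.0, 0.0, 1.0])                    # rotation in place
q0 = np.zeros(3)
b = lie_bracket(X1, X2, q0)   # the lateral direction, spanned by neither input
```

At the origin the bracket is (0, -1, 0): together with X1 and X2 it spans the whole tangent space, which is the local controllability property; for the trident snake the analogous computation involves different bracket structures, which is why chained-form techniques do not apply.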

Beyond the sheer satisfaction of managing to control this "exotic" vehicle, we view this study as an opportunity to complement the development of the TF control approach, and as a first step towards the control of other snake systems with this approach. First control attempts, the most successful of which have so far been obtained by applying the TF approach to a controllable homogeneous approximation of the system, have already proved to be informative and rewarding in this respect. A conference paper reporting these results will be submitted in early 2009.

We have continued to work on the foundations of a generic Lyapunov-based control approach for the large family of thrust-propelled underactuated vehicles (either terrestrial, marine, or aerial). The results of this study are reported in a paper which has been accepted for publication in the IEEE Transactions on Automatic Control. When considering flying vehicles more specifically, an important complementary issue concerns the estimation of the pose (i.e. position and orientation) of these vehicles. While the problem is relatively well solved for heavy vehicles whose acceleration is small compared to the gravitational acceleration, difficulties remain for small VTOL (Vertical Take-Off and Landing) vehicles due to a combination of factors: the use of low-cost/low-weight sensors which do not provide high-quality measurements, the high sensitivity to wind gusts which can induce large accelerations in both position and orientation, the low signal-to-noise ratio of (raw) GPS absolute position measurements in the case of (quasi-)stationary flight, etc. Of prime importance is the estimation of the attitude (i.e. orientation) for a certain number of vehicles whose control input (force and torque) intensities critically depend on this information. Attitude estimation is typically obtained by fusing GPS, magnetometer, and IMU (Inertial Measurement Unit) measurements. When the vehicle's linear acceleration is small, it is theoretically possible to reconstruct the vehicle's attitude by using only accelerometer and magnetometer measurements. To build attitude observers/filters, most existing ("classical") methods rely on this small-acceleration assumption. For many VTOL vehicles, however, linear accelerations may be important and induce significant errors in the attitude estimation. In a recent paper , Martin and Salaun have proposed a new attitude observer which uses linear velocity measurements (obtained e.g. via a GPS) in order to take linear accelerations into account in the estimation algorithm. The proposed solution shows a significant improvement with respect to classical methods when linear accelerations are not negligible. However, no analysis of the observer's stability and convergence is provided. Motivated by Martin and Salaun's result, we have worked on a new attitude observer based on linear velocity, accelerometer, and magnetometer measurements, for which we have been able to prove stability and convergence properties. More precisely, we show –in the ideal case of unbiased and noiseless measurements– that, for any initial condition of the estimator outside a set of zero measure, the estimation errors converge to zero. We also show that the set of "bad" initial conditions (i.e. those for which convergence of the estimation errors cannot be guaranteed) is unstable. As with the solution proposed in , simulation results show a clear improvement of the attitude estimation with respect to classical solutions when the small linear acceleration assumption does not apply. A conference paper reporting the results of this study will be submitted shortly.
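The classical small-acceleration reconstruction mentioned above can be illustrated with the standard TRIAD algorithm, which recovers the attitude from two non-collinear reference vectors (here gravity and the magnetic field). This is a generic sketch with assumed frame conventions and an assumed magnetic-field direction, not the observer discussed in the text:

```python
import numpy as np

def triad(v1_b, v2_b, v1_i, v2_i):
    """TRIAD: rotation matrix R (body -> inertial) from two vector pairs.
    v1 (accelerometer/gravity) is trusted exactly; v2 (magnetometer)
    only fixes the remaining rotational freedom about v1."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(t1, v2); t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))
    return frame(v1_i, v2_i) @ frame(v1_b, v2_b).T

# Illustrative check: synthesize noiseless measurements from a known attitude.
g_i = np.array([0.0, 0.0, 1.0])      # gravity direction, inertial frame
m_i = np.array([0.5, 0.0, 0.866])    # magnetic field direction (assumed)
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # 0.3 rad yaw
a_b = R_true.T @ g_i                 # accelerometer reading (hover, no accel.)
m_b = R_true.T @ m_i                 # magnetometer reading
R_est = triad(a_b, m_b, g_i, m_i)
```

The construction fails exactly when the small-acceleration assumption does, since the accelerometer then no longer measures the gravity direction; this is the failure mode that velocity-aided observers address.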

This project aims at developing vision-based functions in the context of autonomous military terrestrial vehicles dedicated to survey missions. Among the various issues addressed by the project, let us cite the detection and tracking of natural or artificial landmarks, and visual platooning. Developments are currently carried out in the context of the *Programme d'Etude Amont: Tarot* funded by the *DGA (Délégation Générale de l'Armement)*. Within this program, ARobAS, jointly with the INRIA project-team Lagadic, is a subcontractor of the company Thalès.

The objective is to design an active stereovision head controlled via visual servoing techniques. An industrial device has been designed and first trials were carried out in August 2006 on the Lucky Strike site (depth: 1700 m) in the Azores. This work, initially funded by a research contract, is currently pursued in the context of a PhD thesis funded by the Ifremer Institute and the PACA region.

Associated with the *Pôle de compétitivité System@tic*, this project aims at preventing accidents by real-time, pre-crash, vision-based detection and tracking of pedestrians and dynamic obstacles. Our partners are INRIA/E-motion, INRIA/Imara, INRETS/LIVIC, CEA/LIST, CNRS/IEF, CNRS/Heudiasyc, CNRS/LASMEA, ENSMP/CAOR, Renault, and Valéo.

This project aims at studying the contribution of omnidirectional vision to aerial robotics. With a SLAM-like approach, we propose to develop methods and algorithms based on catadioptric omnidirectional vision in order to perform the mapping and 3D modeling of urban surroundings. Our partners are CNRS/CREA (Université de Picardie Jules Verne), CNRS/LAAS, CNRS/Le2i (Université de Bourgogne), and INRIA/Perception.

This project concerns the control of small underactuated aerial vehicles with Vertical Take-Off and Landing capabilities (systems also referred to as VTOLs). Our participation is more specifically dedicated to the development of feedback control strategies in order to stabilize the system's motion despite diverse adverse phenomena, such as modeling errors associated with the vehicle's aerodynamics or perturbations induced e.g. by wind gusts.

Our partners are I3S UNSA-CNRS (Sophia-Antipolis), IRISA/Lagadic (Rennes), CEA/LIST (Fontenay-aux-roses), Heudiasyc (Compiègne), and Bertin Technologies (Montigny-le-Bretonneux).

This CityVIP project, following the "Automated Road Scenario", focuses on low-speed applications (<30 km/h) in an open, or partly open, urban context. The level of automation can vary from limited driving assistance to full autonomy. An important effort is devoted to the use of on-board vision for precise vehicle localization and for urban environment modeling. Such a model is then used in automatic guidance by applying visual servoing techniques developed by the research partners.

Our partners are Lasmea (Clermont-Ferrand), IRISA/Lagadic (Rennes), Heudiasyc (Compiègne), LCPC (Nantes), IGN/Matis (Paris), Xlim (Limoges), and BeNonad (Sophia Antipolis).

This project, led by Dassault, aims at developing embedded systems for autonomous take-off and landing when dedicated airport equipment is not available. We are in charge, jointly with the INRIA project-team Lagadic and the IST/DEM project-teams, of developing visual-servoing solutions adapted to the flight dynamics constraints of planes. Our partners are Dassault, EADS, ALENIA, EUROCOPTER, IJS, INRIA/Lagadic, INRIA/Vista, CNRS/I3S, IST/DEM (Portugal), Università di Parma (Italy), EPFL (Switzerland), ETHZ (Switzerland), and the Institut "Jozef Stefan" (Slovenia).

The project AURORA (*Autonomous Unmanned Remote Monitoring Robotic Airship*), led by the LRV/IA/CenPRA, aims at the development of an airship dedicated to observation. Collaboration agreements on this theme were signed between INRIA, the Brazilian CNPq and FAPESP, and the Portuguese GRICES. In this context, Geraldo Silveira is carrying out a PhD thesis in the ARobAS team with funding from the Brazilian national agency CAPES.

Since June 2005, P. Rives has been an Associate Editor of the journal IEEE Transactions on Robotics (T-RO).

P. Rives has been a member of the Program Committees of the following conferences: ICRA, IROS, Omnivis, MFI, and RFIA. He was the Awards General Chair of the IROS'08 conference in Nice.

P. Rives was a member of the French delegation and represented INRIA at the First France-Keio Co-Mobility Society Workshop at the invitation of Prof. H. Kawashima in Tokyo (Japan). During his stay, he visited the AIST laboratory in Tsukuba and the Robotics Group of the University of Tokyo at the invitation of Prof. Y. Nakamura.

E. Malis has been a member of the Program Committees of the following conferences: ICRA, VISAPP, and RFIA. He was a Special Session chair at the IROS conference.

C. Samson spent two weeks in Japan in March, at the invitation of Prof. T. Sugie, Head of the Mechanical Control Lab of Kyoto University. During his stay, he initiated a collaboration with Dr. M. Ishikawa on the control of trident-snake systems, and gave lectures at this university and at the Tokyo Institute of Technology, at the invitation of Prof. M. Sampei.

P. Morin spent one week in Mexico in October, at the invitation of Dr. M. Maya Mendez of the University of San Luis Potosí. During his stay, he gave two lectures.

ARobAS members have presented their work at the following conferences:

The International Conference on Computer Vision Theory and Applications, Funchal, Portugal, January 2008,

First France-Keio Co-Mobility Society Workshop, January, 2008,

IFAC World Congress, Seoul, South Korea, July 2008,

The 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras, in conjunction with ECCV, Marseille, France, October 2008,

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, October 2008,

IEEE Conference on Decision and Control (CDC), Cancun, Mexico, December 2008.

ARobAS members have also presented their work at the following national events:

RFIA, Amiens, France, January 2008,

Periodic meetings of work groups of the CNRS Research Program (GDR) in Robotics.

C. Samson is a member of the “Bureau du Comité des Projets” at INRIA Sophia-Antipolis.

P. Rives is a member of the “Comité de Suivi Doctoral de l'U.R. de Sophia Antipolis”.

P. Rives is a member of the *61^{e} Commission de Spécialistes de l'Université de Nice - Sophia Antipolis*.

*H.D.R.* : E. Malis, HDR title: “Méthodologies d'estimation et de commande à partir d'un système de vision”, Université de Nice-Sophia-Antipolis, March 2008.

*Ph.D. Graduates* :

G. Silveira, Thesis title: “Contributions aux méthodes directes d'estimation et de commande basées sur la vision”, Ecole des Mines de Paris, October 2008, supervisors : E. Malis, P. Rives.

V. Brandou, Thesis title: “Stéréovision Locale et Reconstruction 3D/4D”, Université de Nice-Sophia-Antipolis, December 2008, supervisors : P. Rives, E. Malis.

*Current Ph.D. Students* :

M.-D. Hua, Commande de systèmes mécaniques sous-actionnés, Université de Nice-Sophia Antipolis, supervisors : P. Morin, T. Hamel, C. Samson.

G. Gallegos, Exploration et navigation autonome dans un environnement inconnu, Ecole des Mines de Paris, supervisor : P. Rives.

C. Joly, Conditionnement des méthodes de VSLAM en environnement extérieur, Ecole des Mines de Paris, supervisor : P. Rives.

A. Salazar, SLAM en environnement extérieur dynamique, Ecole des Mines de Paris, supervisor : E. Malis.

T. Ferreira-Goncalves, Contrôle d'un aéronef par asservissement visuel, Université de Nice-Sophia Antipolis / Universidade Tecnica de Lisboa, supervisors : P. Rives, J.R. Azineira (IST Lisboa).

*Current Postdocs* :

Hicham Hadj-Abdelkader, Low-level image processing with catadioptric cameras, Caviar Project, supervisors : P. Rives, E. Malis.

*Participation in Ph.D. and H.D.R committees* :

P. Rives has participated in six Ph.D. defense juries and four H.D.R. juries.

P. Morin has participated in one Ph.D. defense jury.

E. Malis evaluated one Ph.D. thesis at the University of Zaragoza, Spain.

*Training periods* :

N. Alt, “Sélection de régions d'intérêts pour le suivi visuel”, 2 months, supervisor : E. Malis.

Course on nonlinear control in the Master EEA of the University of Nice-Sophia Antipolis (P. Morin, 25 hours Eq. TD).

Course on linear control at the Ecole Polytechnique Universitaire of Nice (EPU) (P. Morin, 17 hours Eq. TD).

Lecture course on mobile robotics, Ecole Nationale des Ponts et Chaussées, (P. Rives, 3 hours).