The project-team's activity focuses on the study of mobile robotic systems designed to accomplish complex tasks involving strong interactions with the system's environment. The underlying spectrum of research is vast, due to the variety of devices amenable to automation (ground, underwater and aerial vehicles...), of environments in which these devices are expected to operate (structured/natural, known/unknown, static/dynamic...), and of applications for which they have been designed (assistance to disabled people, environmental monitoring, rescue deployment after natural disasters, observation and tactical support...).
A fundamental issue in autonomous mobile robotics is to build consistent representations of the environment that can be used to trigger and execute the robot's actions. In its broadest sense, perception requires detecting, recognizing, and localizing elements of the environment, given the limited sensing and computational resources of the robot. The performance of a mobile robotic system crucially depends on its ability to process sensory data so as to achieve these objectives in real-time. Perception is a fundamental issue both for the implementation of reactive behaviors (based on feedback control loops) and for the construction of the representations used at the task level. Among the sensory modalities, artificial vision and range finding are of particular importance and interest due to their availability and extended range of applicability. They are used for the perception and modeling of the robot's environment, and also for the control of the robot itself. Sensor-based control refers to the methods and techniques dedicated to the use of sensor data and information in automatic control loops. Mastering it is essential to the development of many (existing and future) robotic applications, and a cornerstone of research on autonomous robotics.
Most tasks performed by robots rely on the control of their displacements. Research on robot motion control largely stems from the fact that the equations relating the actuator outputs to the displacements of the robot's constitutive bodies are nonlinear. The extent of the difficulties induced by nonlinearity varies from one type of mechanism to another. Whereas the control of classical holonomic manipulator arms was addressed very early by roboticists and may now be considered a well-investigated issue, studies on the control of nonholonomic mobile robots are more recent. They also involve more sophisticated control techniques, whose development contributes to the extension of Control Theory. Another source of difficulty is underactuation, i.e. when the number of independent means of actuation is smaller than the number of degrees of freedom of the robotic mechanism. Most marine and aerial vehicles are underactuated. A particularly challenging case is when underactuation renders all classical control techniques, whether linear or nonlinear, inoperative, because it yields a system of linearized motion equations which, unlike the original nonlinear system, is not controllable. Such systems are sometimes called critical. Research in this area of automatic control is still largely open.
ARobAS genuinely tries to balance and confront theoretical developments and application-oriented challenges. In this respect, validation and testing on physical systems is essential, not only as a means to bring together all aspects of the research done in ARobAS (and thus maintain the coherence and unity of the project-team), but also to understand the core problems on which research efforts should focus in priority. To this aim, a significant part of our resources is devoted to the development of experimentation facilities that are specific to the project and constitute an experimental workbench for its research. In parallel, we try to develop other means of experimentation through partnership research programs, for example with Ifremer concerning underwater robotics, and with CenPRA of Campinas (Brazil) and I.S.T. of Lisbon (Portugal) on the control of unmanned aerial vehicles (drones and blimps).
Selim Benhimane has received the ASTI Award in the category Recherche Appliquée Innovante (innovative applied research) for his PhD thesis entitled « Vers une approche unifiée pour le suivi temps-réel et l'asservissement visuel » (towards a unified approach for real-time tracking and visual servoing), supervised by E. Malis and P. Rives.
The paper entitled « Accurate Quadrifocal Tracking for Robust 3D Visual Odometry », by A. Comport, E. Malis and P. Rives, was selected as one of the three finalists for the Best Vision Paper Award at the IEEE ICRA'07 conference held in Rome in May 2007.
The meaning of autonomy in the mobile robotics context covers a large scope of different aspects, from the capability of moving safely and interacting with the environment, to planning, reasoning and deciding at a high level of abstraction. ARobAS pursues a bottom-up approach with a sustained focus on autonomous navigation and the monitoring of interactions with unknown, variable, and complex environments.
The project-team is organized under the headings of two research themes: Perception and autonomous navigation, and Control. Nonetheless, it is worth keeping in mind that the boundary between the themes is somewhat thin, since several of the associated issues, and the tools to address them, are clearly interdependent and complementary. To highlight this interdependency, we describe in a separate section the issues transverse to these two themes.
Autonomy in robotics largely relies on the capability of processing the information provided by exteroceptive sensors. Perception of the surrounding environment involves data acquisition, via sensors endowed with various characteristics and properties, and data processing in order to extract the information needed to plan and execute actions. In this respect, the fusion of complementary information provided by different sensors is a central issue. Much research effort is devoted to the modeling of the environment and the construction of maps used, for instance, for localization, estimation, and navigation purposes. Another important category of problems concerns the selection and treatment of the information used by low-level control loops. Much of the processing must be performed in real-time, with a good degree of robustness so as to accommodate the large variability of the physical world. Computational efficiency and well-posedness of the algorithms are constant preoccupations. Among the multitude of issues related to perception in robotics, ARobAS has been addressing a few central ones, with a particular focus on visual and range sensing.
A key point is to strike the right compromise between the simplicity of the models and the complexity of the real world. Numerous computer vision algorithms have been proposed under the implicit assumptions that the observed surfaces are Lambertian and the illumination is uniform. These assumptions are only valid in customized environments. For applications such as the exploration of an outdoor environment, the robustness of vision-based control schemes can be improved by using more realistic photometric models (including color information). Even though such models have already been used in the computer vision and augmented reality communities, their applicability to real-time robotic tasks has not been much explored. One of our objectives is to estimate in real-time both the structure of the scene and the position of the light sources, in order to bypass the standard “brightness constancy assumption”. This information can in turn be used at the control design level to overcome problems associated with specular reflections or shadows in the image.
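As a toy illustration of why the brightness constancy assumption breaks down under illumination changes, the sketch below compares the photometric residual with and without an affine (gain/bias) illumination model. The synthetic 1-D intensity profile and the gain/bias values are made-up illustrative assumptions, far simpler than the photometric models discussed above:

```python
import math

def make_image(n=64):
    # Synthetic 1-D "image": a smooth intensity profile (two bumps).
    return [math.exp(-((i - 20) / 6.0) ** 2) + 0.5 * math.exp(-((i - 45) / 4.0) ** 2)
            for i in range(n)]

def ssd(a, b):
    # Sum of squared differences: the photometric error.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit_gain_bias(template, current):
    # Least-squares fit of current ~= gain * template + bias (affine illumination).
    n = len(template)
    st, sc = sum(template), sum(current)
    stt = sum(t * t for t in template)
    stc = sum(t * c for t, c in zip(template, current))
    det = n * stt - st * st
    gain = (n * stc - st * sc) / det
    bias = (sc - gain * st) / n
    return gain, bias

template = make_image()
# Same scene observed under a global illumination change (gain 1.7, offset 0.2).
current = [1.7 * v + 0.2 for v in template]

naive = ssd(template, current)            # brightness constancy: large residual
g, b = fit_gain_bias(template, current)
compensated = ssd([g * v + b for v in template], current)  # affine model: ~zero
```

The naive residual is large even though the scene is unchanged, while the compensated one vanishes; richer models (per-region gains, color, light source position) follow the same logic with more parameters.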
In the same way that the sensor models currently used in robotics are often too simple to capture the complexity of the real world, the hypotheses underlying the geometrical structure of the scene are often restrictive. Most methods assume that the observed environment is rigid. For many applications, such as autonomous navigation in variable and dynamic environments, this assumption is violated. In these cases, it is important to distinguish between the observed global (dominant) motion and the true motion, or even the deformations, of particular objects. More generally, the question is how to estimate robustly and in real-time the information needed for the visual task. Real-time processing of a complete model of a deformable environment (i.e. the three-dimensional shape, the deformations of the surfaces, textures, colors and other physical properties that can be perceived by robotic sensors) has not yet been achieved. Recent studies on visual tracking (i.e. tracking of visual cues in the image without feedback control of the camera pose), using a stereo pair of cameras or a single camera, are essentially concerned with parametric surfaces. To the best of our knowledge, the use of deformable visual information for navigation or feedback control has been limited to deformable contours or simple articulated planar objects.
In many applications, using only one sensor may not be the optimal way to gather the information needed to perform the robot's task. Many exteroceptive sensors provide complementary information (for example, unlike a single camera, a laser telemeter can directly measure the distance to an object), while proprioceptive sensors (odometry) are convenient for estimating local displacements of a robot. We intend to participate in the development of new “intelligent” devices composed of several complementary sensors well suited to the tasks involved in autonomous robotics. Developing such sensors requires solving different aspects of the problem: calibration, data representation, estimation and filtering. A theory for the proper integration of multi-sensor information within a general unified framework is still critically lacking. An example of such a multi-sensor system is a perspective or omnidirectional vision sensor coupled with a laser scanner or radar, and also with odometry or GNSS/INS. A system of this type, consisting of a laser scanner coupled with a catadioptric vision sensor, is currently under development in the project-team. Preliminary results based on this system are encouraging, and the acquisition and processing of composite data at video rate should be possible in the near future.
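The benefit of fusing complementary sensors can be illustrated with the textbook minimum-variance fusion of two independent estimates of the same quantity. The sensor readings and variances below are invented numbers for illustration only, not measurements from the system described above:

```python
def fuse(est_a, var_a, est_b, var_b):
    # Minimum-variance (Kalman-style) fusion of two independent estimates.
    k = var_a / (var_a + var_b)              # weight given to estimate B
    fused = est_a + k * (est_b - est_a)
    fused_var = var_a * var_b / (var_a + var_b)
    return fused, fused_var

# Distance to an obstacle: a coarse camera-based estimate and a fine laser estimate.
cam_d, cam_var = 2.30, 0.25   # metres, variance (hypothetical values)
las_d, las_var = 2.05, 0.01

d, v = fuse(cam_d, cam_var, las_d, las_var)
```

The fused variance is always smaller than that of either sensor alone, which is the quantitative argument for multi-sensor devices; the real difficulty, as noted above, lies in calibration and in a unified representation of heterogeneous data.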
Most applications involving mobile robotic systems (ground vehicles, aerial robots, automated submarines,...) require a reliable localization of the robot in its environment. The problem of localization, given a map of the environment in the form of a set of landmarks or, conversely, the problem of constructing a map assuming that the vehicle's situation (position + orientation) is known, has been addressed and solved using a number of different approaches. A more challenging problem arises when neither the robot path nor the map is known. Localization and mapping must then be considered concurrently. This problem is known as Simultaneous Localization And Mapping (SLAM). In this case, the vehicle moves from an unknown location in an unknown environment and proceeds to incrementally build a navigation map of the environment, while simultaneously using this map to update its estimated position. After the seminal articles by Smith, Self and Cheeseman, stochastic mapping became, in the 1990s, the dominant approach to SLAM. The term SLAM itself appeared a few years later. This problem is central to building autonomous robots and has attracted a great deal of research since the 1980s. Surveys of mapping are available, as is a more recent book dedicated to the subject. Two recent tutorials by Hugh Durrant-Whyte and Tim Bailey describe some of the standard methods for solving the SLAM problem, as well as some more recent algorithms; they contain up-to-date references to online software and datasets.
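The core structure of stochastic mapping can be sketched with a deliberately minimal 1-D EKF-SLAM toy (one robot coordinate, one landmark; the noise variances are arbitrary illustrative values). The point is the joint estimate: prediction inflates the robot's uncertainty, and each observation of the mapped landmark both reduces it and correlates the robot and landmark estimates:

```python
def ekf_slam_1d_step(s, P, u, z, Q=0.04, R=0.01):
    """One predict/update cycle of a 1-D EKF-SLAM toy.

    s = [x_robot, x_landmark]; P = 2x2 covariance (list of lists).
    Motion model: x_robot += u (noise variance Q).
    Measurement model: z = x_landmark - x_robot (noise variance R), so H = [-1, 1].
    """
    # --- prediction: only the robot moves, so only its variance grows ---
    s = [s[0] + u, s[1]]
    P = [[P[0][0] + Q, P[0][1]],
         [P[1][0],     P[1][1]]]
    # --- update ---
    innov = z - (s[1] - s[0])
    S = P[0][0] - P[0][1] - P[1][0] + P[1][1] + R          # H P H^T + R
    K = [(P[0][1] - P[0][0]) / S, (P[1][1] - P[1][0]) / S]  # P H^T / S
    s = [s[0] + K[0] * innov, s[1] + K[1] * innov]
    # P <- (I - K H) P, with K H = [[-K0, K0], [-K1, K1]]
    P = [[(1 + K[0]) * P[0][0] - K[0] * P[1][0], (1 + K[0]) * P[0][1] - K[0] * P[1][1]],
         [K[1] * P[0][0] + (1 - K[1]) * P[1][0], K[1] * P[0][1] + (1 - K[1]) * P[1][1]]]
    return s, P

# Robot starts perfectly known; landmark initialized from a first observation.
s = [0.0, 3.0]
P = [[0.0, 0.0], [0.0, 0.01]]
true_x, landmark = 0.0, 3.0
for _ in range(5):
    true_x += 1.0
    z = landmark - true_x            # noiseless relative measurement, for clarity
    s, P = ekf_slam_1d_step(s, P, u=1.0, z=z)
```

After a few cycles the off-diagonal term of P is nonzero: robot and landmark estimates are correlated, which is precisely why the map and the pose cannot be estimated independently.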
Using vision in SLAM provides rich perceptual information compared to lasers, which results in low data-association ambiguity. However, real-time visual SLAM has only become possible recently, with faster computers and ways of selecting sparse but distinctive features. The main difficulty comes from the loss of the depth dimension due to the projective model of the camera. Consequently, monocular vision leads to the specific configuration of bearing-only SLAM. In such a configuration, only the directions of sight of the landmarks can be measured, leading to unobservability problems during initialization. It is also well known in the computer vision community that specific motions of the camera, or very distant landmarks, lead to observability problems. To overcome this type of problem, delayed landmark insertion techniques, such as local bundle adjustment or particle filtering, have been proposed. More recently, undelayed approaches have been investigated. These approaches generally rely on a probabilistic model of the depth distribution along the sight ray, and require the use of particle filtering techniques or Gaussian multi-hypothesis methods. More recently still, a new approach based on graphical inference techniques has appeared, which represents the SLAM prior knowledge as a set of links between robot poses and formulates a global optimization algorithm for generating a map from such constraints. In the project-team, we are applying these ideas to visual SLAM by stating the problem in terms of the optimization of a warping function directly expressed in the image space. The function parameters capture not only the geometrical and photometrical aspects of the scene but also the camera motion. Robustness is enhanced by using a dense approach taking advantage of all the information available in the regions of interest, instead of a sparse representation based on features like Harris or SIFT points. This approach is currently being extended to cope with complex urban scenes with difficult traffic conditions.
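The idea of estimating motion by optimizing a warping function over a dense photometric error can be sketched in one dimension: below, a Gauss-Newton iteration recovers a translation t by minimizing the sum of squared intensity differences between a warped reference profile and the current one. A synthetic analytic signal stands in for real image data, and the warp is a pure translation; the warping functions used in the project are of course far richer:

```python
import math

def intensity(x):
    # Smooth synthetic 1-D intensity profile standing in for an image row.
    return math.exp(-(x - 2.0) ** 2) + 0.6 * math.exp(-((x - 5.0) ** 2) / 0.5)

def gradient(x, h=1e-5):
    # Numerical image gradient I'(x).
    return (intensity(x + h) - intensity(x - h)) / (2 * h)

def align(current, xs, t0=0.0, iters=30):
    # Gauss-Newton minimization of the dense photometric error
    #   E(t) = sum_x [ I(x - t) - J(x) ]^2,
    # where J is the current image and t parametrizes the warp.
    t = t0
    for _ in range(iters):
        num = den = 0.0
        for x, j in zip(xs, current):
            r = intensity(x - t) - j        # photometric residual at this pixel
            g = gradient(x - t)             # d/dt of I(x - t) is -I'(x - t)
            num += g * r
            den += g * g
        t += num / den                      # Gauss-Newton step
    return t

xs = [i * 0.05 for i in range(160)]              # pixel coordinates 0..8
t_true = 0.3
current = [intensity(x - t_true) for x in xs]    # image after camera motion
t_est = align(current, xs)
```

Every sample contributes to the estimate, which is the robustness argument for dense approaches over sparse feature points.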
Nevertheless, solving the SLAM problem is not sufficient to guarantee autonomous and safe navigation. The choice of the map representation is, of course, essential. The representation has to support the different levels of the navigation process: motion planning, motion execution and collision avoidance and, at the global level, the definition of an optimal displacement strategy. The original formulation of the SLAM problem is purely metric (it basically consists in estimating the Cartesian situations of the robot and a set of landmarks), and it does not involve complex representations of the environment. However, it is now well recognized that several complementary representations are needed to perform exploration, navigation, mapping, and control tasks successfully. Like several other authors, we have proposed to use composite models of the environment which mix topological, metric, and grid-based representations. Each type of representation is well adapted to a particular aspect of autonomous navigation: the metric model allows the robot to be located precisely and Cartesian paths to be planned, the topological model captures the accessibility of different sites in the environment and allows a coarse localization, and the grid representation is useful for characterizing the free space and designing potential functions used for reactive obstacle avoidance. However, ensuring the consistency of these various representations during the robot's exploration, and merging observations acquired from different viewpoints by several cooperative robots, are difficult problems. This is particularly true when different sensing modalities are involved. New studies are needed to derive efficient algorithms for manipulating hybrid representations (merging, updating, filtering...) while preserving their consistency.
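As an illustration of the grid-based layer mentioned above, here is the standard Bayesian log-odds update of a single occupancy-grid cell; the increment values are assumed, typical textbook figures rather than the project's own sensor model:

```python
import math

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed sensor model)

def update_cell(logodds, hit):
    # Bayesian log-odds update of one grid cell from one range reading:
    # additions in log-odds space correspond to products of likelihood ratios.
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                      # prior: p = 0.5 (unknown)
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
p = probability(cell)
```

Because the update is a simple addition per cell, grids scale well and directly provide the free-space characterization needed for reactive obstacle avoidance; maintaining consistency between this layer and the metric and topological ones is where the open problems lie.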
The exploration of an unknown environment relies on a robot motion strategy which allows the construction of a complete representation of the environment in minimal time or, equivalently, with displacements of minimal length. Few works have addressed these aspects so far. Most exploration approaches use a topological representation such as the Generalized Voronoï Diagram (GVD). Assuming an infinite sensor range, the GVD provides an aggregated representation of the environment and an elegant means to solve the optimality problem. Unfortunately, the usual generalized Voronoï diagram, which is based on the L2 metric, does not cope well with real environments and the bounded range of the sensors used in robotic applications. Building topological representations supporting exploration strategies in real-time remains a challenging issue which is pursued in ARobAS.
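A grid-based approximation of such a skeleton can be computed with the classical "brushfire" breadth-first distance transform, whose ridge (local maxima of the distance to the nearest obstacle) approximates the medial axis. The toy sketch below uses the 4-connected L1 distance on a tiny corridor map; it only illustrates the construction and does not address the bounded-sensor-range issue discussed above:

```python
from collections import deque

def brushfire(grid):
    # BFS ("brushfire") distance-to-nearest-obstacle on a 4-connected grid.
    # grid: list of strings, '#' = obstacle, '.' = free space.
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '#':
                dist[r][c] = 0
                q.append((r, c))
    while q:                                  # wavefront propagation
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A corridor bounded by two walls: the medial axis (GVD-like ridge) is the middle row.
grid = ["#######",
        ".......",
        ".......",
        ".......",
        "#######"]
dist = brushfire(grid)
ridge = [c for c in range(7) if dist[2][c] >= dist[1][c] and dist[2][c] >= dist[3][c]]
```

The ridge cells (here, the whole middle row) trace paths that stay maximally far from obstacles, which is the property that makes Voronoï-like skeletons attractive for exploration.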
For large-scale environments and long-term survey missions, the SLAM process can rapidly diverge due to the uncertainties and drift inherent to dead-reckoning methods, and to the unavailability of absolute position measurements (as provided, for example, by a GNSS, whose drawback is that it is not operational everywhere or at all times). The problem of motion control is rarely considered a constitutive part of the SLAM problem. We advocate that autonomous navigation and SLAM should not be treated separately, but rather addressed in a unified framework involving perception, modeling, and control. Reactive navigation and sensor-based control constitute the core of our approach. Sensor-based control, whose design relies on the modeling of the interactions between the robot and its nearby environment, is particularly useful in such a case. We have shown, in simulation and experimentally, that embedding the SLAM problem in a sensor-based control framework acts as adding constraints on the relative pose between the robot and its local environment. In other words, the sensor-based control approach makes it possible to guarantee, under certain observability conditions, a uniformly bounded estimation error in the localization process. Moreover, assuming that the world is static and bounded, we can derive navigation functions capable, at the reactive control level, of ensuring collision-free robot motions and, at the navigation level, of implementing a (topologically) complete exploration of the environment in autonomous mode. ARobAS will pursue research on reactive navigation with a special emphasis on the use of vision sensors for both indoor and outdoor environments. This research theme is closely related to the studies on robust perception and sensor-based control.
Since robotic, or “robotizable”, mechanisms are structurally nonlinear systems which, in practice, need to be controlled in an efficient and robust manner, the ARobAS project has natural interests and activities in the domain of Automatic Control related to the theory of nonlinear control systems. Nonlinear control systems can be classified on the basis of the stabilizability properties of the linear systems which approximate them around equilibrium points. An autonomous controllable nonlinear system is called critical when the corresponding linearized systems are not asymptotically stabilizable (and therefore not controllable either). Whereas local stabilizers for non-critical systems can often be derived from their linear approximations, one has to rely on other, truly nonlinear, methods in the case of critical systems.
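The notion of a critical system can be made concrete with the kinematic unicycle: the system itself is controllable, yet its linearization at a fixed posture is not, as the Kalman rank test below shows (small hand-rolled matrix routines keep the sketch self-contained):

```python
def mat_mul(A, B):
    # Plain matrix product for small dense matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def rank(M, eps=1e-9):
    # Rank by Gauss-Jordan elimination (adequate for tiny matrices).
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][col]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > eps:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Unicycle kinematics: x' = v cos(th), y' = v sin(th), th' = w.
# Linearized at the equilibrium (x, y, th) = 0 with (v, w) = 0:
A = [[0.0] * 3 for _ in range(3)]   # the state Jacobian vanishes entirely
B = [[1.0, 0.0],                    # d(x')/d(v, w) at th = 0
     [0.0, 0.0],                    # lateral motion is invisible to the linearization
     [0.0, 1.0]]                    # d(th')/d(v, w)

# Kalman controllability matrix [B, AB, A^2 B].
AB = mat_mul(A, B)
A2B = mat_mul(A, AB)
C = [B[i] + AB[i] + A2B[i] for i in range(3)]
r = rank(C)   # 2 < 3: the linearized system is not controllable
```

The rank deficit reflects exactly the nonholonomic constraint: no instantaneous sideways motion, so any design based on the linear approximation is bound to fail and truly nonlinear methods are required.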
For robotic applications, one is concerned in the first place with the design of feedback laws which stabilize state-reference trajectories, in the sense of ensuring small tracking errors despite adverse phenomena resulting from modeling errors, control discretization, measurement noise, etc.
The set of critical systems strictly encompasses that of controllable driftless systems affine in the control input (e.g. kinematic models of nonholonomic wheeled vehicles). Most of the existing literature on the subject has focused on these latter systems, due to their well-delimited and well-understood structural properties. On the other hand, nonlinear control-affine systems with a drift term which cannot be removed without rendering the system uncontrollable have been much less studied, even though many locally controllable underactuated mechanical systems (e.g. manipulators with non-actuated degrees of freedom, hovercraft, blimps, submarines,...) belong to this category of critical systems. However, there also exist underactuated mechanical systems which are not critical in the sense evoked above. Such is the case of flying machines with vertical take-off capabilities (helicopters, VTOL devices,...) whose linear approximations at an equilibrium are controllable due to the action of an external field of forces (the field of gravity, in the present case). Understandably, the control techniques used for these systems heavily rely on this property, even though, mathematically, the absence of such a field would not necessarily render the system itself (as opposed to its linear approximation) uncontrollable. This latter observation is important because it means that not all the structural controllability properties of the system have been exploited in the control design. It also implies that general control methods developed for critical systems could be applied to these non-critical systems, with their performance being less critically dependent on the existence and modeling of an external “stabilizing” field. To our knowledge, this research direction has never been explored before.
To summarize, the problem of control of critical nonlinear systems is relevant for most robotic devices other than fully-actuated holonomic manipulators. It is, of course, also relevant for other physical systems presenting similar structural control properties (an example of which are induction motors).
We have been advocating for a few years that it needs to be investigated further by developing new control design paradigms and tools. In this respect, our conviction is based on a certain number of elements, a summary of which follows.
Asymptotic stabilization of an equilibrium combining fast (say, exponential) convergence and a degree of robustness similar to what can be achieved for linear systems (e.g. stability against structured modeling errors, control discretization, time-delays, and manageable sensitivity to measurement noise) has never been obtained. Studies that we, and a few other researchers, have conducted towards this goal have met with mixed success, and we now strongly believe that no solution exists: basically, for these systems, fast convergence rules out robustness.
It is known that asymptotic stabilization of admissible state trajectories (i.e. trajectories obtainable as solutions to the considered control system) is “generically” solvable using classical control methods, in the sense that the set of trajectories for which the linear approximation of the associated error system is controllable is dense. Although this is a very interesting result which can be (and has been) thoroughly exploited in practice, it is also a deceptive result whose limitations have been insufficiently pondered by practitioners. The reason is that it tends to convey the (widespread) idea that all tracking problems can be solved by applying classical control techniques. The application of Brockett's theorem to the particular case of a trajectory reduced to a single equilibrium of the system shows that no smooth pure-state feedback can be an asymptotic stabilizer, and thus clearly invalidates this idea. If an asymptotic stabilizer exists, it has to involve a non-trivial dynamic extension of the initial system. The time-varying feedbacks which we were the first to propose to solve this type of problem in the case of nonholonomic systems constitute an example. However, solving the problem for fixed equilibria still does not mean that “any” admissible trajectory can be asymptotically stabilized, nor that there exists a “universal” controller, even a complicated one, capable of stabilizing any admissible trajectory, whereas simple solutions to this latter problem are well known for linear systems. This lack of completeness of the results has severe practical implications which, in our opinion, have not been sufficiently recognized.
As a matter of fact, the non-existence of a “universal” stabilizer of admissible (feasible) trajectories has been proven in the case of nonholonomic systems. This result is conceptually important because it definitively ruins the hope of finding a complete solution to the tracking problem (in the usual sense of ensuring asymptotic stabilization), even for the simplest of the critical systems.
To our knowledge, the problem of stabilizing non-admissible trajectories has never been addressed systematically, even in the case of fully-actuated nonholonomic systems, except by us recently. This is, in our opinion, a gap that a decade of active research devoted to the control of these systems (in the 1990s) has left wide open. Indeed, for a nonholonomic driftless system, the property of local controllability implies that any continuous non-admissible trajectory in the state space can be approximated with arbitrarily good precision by an admissible trajectory. While several open-loop control methods for calculating such an approximation have been proposed by various authors, practical stabilization of non-admissible trajectories (the feedback control version of the problem) seems to have been completely overshadowed by the problem of asymptotic stabilization of admissible trajectories.
The range of feedback control design methods for nonlinear systems, especially those based on geometrical concepts, is limited and needs to be enlarged. Existing methods are often inspired by ideas and techniques borrowed from linear control theory. Whereas this makes good sense when the system is non-critical (including feedback-linearizable systems), we contend that critical systems, being structurally different, call for revisiting and adapting the basic concepts and objectives on which control design methods rest. The notion of practical stabilization is an example of such an adaptation.
The objective of practical stabilization is weaker than the classical one of asymptotic stabilization: any asymptotic stabilizer is a practical stabilizer, whereas a practical stabilizer is not necessarily an asymptotic stabilizer. However, this objective is not “much” weaker. In particular, instead of ensuring that the error converges to zero, a practical stabilizer ensures that this error is ultimately bounded by some number which can be made as small as desired (but different from zero). Nevertheless, we assert that this “small” difference in the objective changes everything at the control design level, in the sense that none of the obstructions and impossibilities evoked previously holds any more: fast convergence to a set contained in a small neighborhood of the desired state can be achieved in a robust fashion, universal practical stabilizers of state trajectories exist and, moreover, these trajectories do not have to be admissible. Furthermore, by accepting to weaken the control objective slightly, the set of control solutions is considerably enlarged, so that new control design methods can be elaborated. One of them is the Transverse Function approach, whose development we initiated a few years ago.
The basics of this approach and its application to nonholonomic systems have been described in our earlier publications. The approach is best formulated in the context of systems which are invariant on Lie groups, using (classical) tools of differential geometry; this indicates the mathematical background needed to become acquainted with it. Since we view this geometrical flavor as an asset of the approach, we hope that a time will come when it will no longer be a handicap for the diffusion of the approach within the Automatic Control and Robotics communities, and more particularly among practitioners.
The approach is based on a theorem which states the equivalence between the satisfaction of the Lie Algebra Rank Condition (LARC) by a set of vector fields and the existence of particular (bounded) periodic functions whose infinitesimal variations are transversal to the directions associated with these vector fields. For control purposes, the time-derivatives of the variables on which such transverse functions depend can be used as extra control inputs, which facilitate the control of systems whose dynamics are either completely (in the case of nonholonomic systems) or partially (in the case of underactuated systems) driven by the vector fields with which the transverse function is associated. In the case of mechanical systems, these new control inputs are directly related to the frequency of the manœuvres that the system has to perform in order to track a given reference trajectory. With this interpretation in mind, one can say that the approach provides a way of adapting the frequency of the manœuvres automatically.
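The transversality condition can be checked numerically for the unicycle. With control vector fields X1 = (cos θ, sin θ, 0) and X2 = (0, 0, 1), a periodic function f is transverse when det[X1(f(α)), X2(f(α)), f'(α)] never vanishes. The candidate function below is one simple illustrative form; its exact expression is our assumption for this sketch, not necessarily the one used in the publications referred to above:

```python
import math

EPS = 0.3   # "size" of the transverse function: bounds the ultimate tracking error

def f(a):
    # Candidate transverse function for the unicycle (illustrative form).
    return (EPS * math.sin(a), (EPS ** 2 / 4.0) * math.sin(2 * a), EPS * math.cos(a))

def df(a, h=1e-6):
    # Numerical derivative f'(a).
    p, q = f(a + h), f(a - h)
    return tuple((pi - qi) / (2 * h) for pi, qi in zip(p, q))

def transversality_det(a):
    # det [ X1(f(a)) | X2(f(a)) | f'(a) ], with X1 = (cos th, sin th, 0),
    # X2 = (0, 0, 1). Expanding along the X2 column, the determinant reduces to
    # -(cos(th) * dy - sin(th) * dx).
    x, y, th = f(a)
    dx, dy, dth = df(a)
    return -(math.cos(th) * dy - math.sin(th) * dx)

# Transversality must hold for ALL a: the three directions then always span R^3,
# so the time-derivative of a can serve as a third, "virtual" control input.
dets = [transversality_det(k * 2 * math.pi / 360) for k in range(360)]
min_abs_det = min(abs(d) for d in dets)
```

The determinant stays bounded away from zero over the whole period, which is the property that turns the extra variable α into a usable control input; shrinking EPS shrinks the guaranteed tracking error bound but also the determinant, i.e. the available "manœuvring" authority.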
We first experimented with feedback controllers derived from this approach on our laboratory unicycle-type mobile robot Anis, with the goal of tracking an omnidirectional vehicle (target) observed by a camera mounted on the robot (vision-based tracking). To our knowledge, this experiment is still unique of its kind. The results we obtained show a clear improvement with respect to earlier attempts based on the use of time-varying feedback techniques. This application has also served as a guide to point out and study complementary aspects of the control design: tuning of the transverse function parameters, optimization of initial transient phases, coupling with state estimation and prediction of the target's velocity, etc. We have also shown, for an important sub-class of nonholonomic systems (chained systems), how the approach can be used to derive practical stabilizers which are also asymptotic stabilizers when the conditions for asymptotic stabilization are met. This has been accomplished via the definition of generalized transverse functions. Theoretically, the approach can be applied to any nonholonomic vehicle, in particular car-like vehicles with or without trailers.
More recently, we have adapted the approach to the problem of controlling nonholonomic mobile manipulators, i.e. manipulators mounted on nonholonomic mobile platforms, and have derived a general methodology for the coordinated motion of this type of robot. It is based on the concept of an omnidirectional companion frame, which basically allows the mobile platform to be controlled as if it were omnidirectional. Feedback control laws devised with this methodology have properties never demonstrated before, such as the possibility of ensuring the perfect execution of a manipulation task on a moving object whose motion is not known in advance, with the assurance that the manipulator will never run into its joint limits.
Even more recently, we have started to extend the approach to the control of critical underactuated mechanical systems, a problem which is more difficult than the control of fully-actuated nonholonomic systems due to the necessity of including dynamical effects in the modeling equations of the system, yielding a drift term which cannot be treated as a perturbation that can be pre-compensated. For these systems, the objective is again to practically stabilize any desired trajectory (admissible or not) defined in the configuration space. To our knowledge, this problem had never been solved before, even for the simplest critical underactuated system (namely, the three-dimensional second-order chained system). Although we have already made much progress on this subject, and devised a control design method which applies to classical examples of critical underactuated mechanical systems involving a single rigid body, many aspects of the problem have not yet been explored, or need to be studied further. Several are related to the definition and exploitation of criteria to qualify and compare different implementations of the control design method, such as the property of making velocities tend to zero when the reference trajectory is reduced to a fixed point. Others concern the applicability and usefulness of the approach when the system is not critical (due, in particular, to the action of dissipative/friction forces combined with the gravity field).
Robustness is a central and vast issue for feedback control. Any feedback control design approach has to be justified in terms of the robustness properties associated with it. In the case of advanced robotic applications based on the use of exteroceptive sensors, robustness concerns in the first place the capacity to deal with imprecise knowledge of the transformations relating the space in which sensor signals live to the Cartesian space in which the robot evolves. A vast literature, including several book chapters and a large part of the publications on vision-based control, has addressed this issue in the case of fully actuated holonomic manipulators. Comparatively, very little has been done on this subject in the case of nonholonomic and underactuated mobile robots. We have thus initiated studies (constituting the core of a PhD work) in order to figure out i) how feedback control schemes based on the use of transverse functions can be adapted to the use of exteroceptive sensors when the above-mentioned transformations are not known precisely, and ii) how robust the resulting control laws are. Initial results that we have obtained are encouraging, but the complexity of the analyses also tells us that future research efforts in this direction will have to rely heavily on simulation and experimentation.
Interacting with the physical world requires perception and control aspects to be addressed appropriately within a coherent framework. Since the beginning of the 1990's, the Icare and Lagadic (in Rennes) project teams have acquired much experience in sensor-based control applied to holonomic mechanisms like eye-in-hand systems. The control of these systems is much simplified by the fact that instantaneous motion along any direction of the configuration space is possible and can be monitored directly. A consequence is that “generic” sensor-based control laws can be designed independently of the mechanism's specificities. However, this assertion does not hold for critical or underactuated systems like most ground, marine or aerial robots. New research trends have to be investigated to extend the sensor-based control framework to this kind of mechanism.
Visual servoing and, more generally, sensor-based robot control consists in using exteroceptive sensor information in feedback loops to monitor dynamic interactions between a robot and its environment. Combining artificial vision and control is not a trivial task. In particular, it cannot be reduced to selecting a set of computer vision algorithms on the one hand, and control algorithms on the other, and making them run in parallel. This does not work well in general. Vision processing and control design have to be tailored with their respective specificities and requirements taken into account. This theme has motivated a lot of research over the last 30 years. It initially focused on the problem of controlling the pose of a camera, assuming that the camera can (locally) move freely in all directions (either in SE(2) or in SE(3)). Such is the case, for instance, when the camera is mounted on an omnidirectional mobile robot, or on the end-effector of a classical manipulator endowed with (at least) 6 degrees of freedom. This is equivalent to viewing the camera as an ideal Cartesian motion device. The control part of the problem is then simplified, and most vision-based control techniques have been proposed in this context. One of our contributions has been to propose a class of vision-based control methods for which the robustness of the stability property with respect to different types of uncertainties and modeling errors can be demonstrated. The case of robotic vision-carriers subjected to either nonholonomic constraints (like car-like vehicles) or underactuation (like most aerial vehicles) raises a new set of difficulties which open up the research on sensor-based robot control further.
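As an illustration of the classical setting where the camera is treated as an ideal Cartesian motion device, the following sketch (hypothetical helper names; point features with known depths are assumed) implements the standard image-based control law v = -λ L⁺ e built from the well-known interaction matrix of a normalized image point:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction (image Jacobian) matrix of a normalized image
    point (x, y) at depth Z, relating its image velocity to the camera's
    6-DOF velocity twist (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,       -x*y,  -x],
    ])

def ibvs_velocity(points, points_ref, depths, lam=0.5):
    """One step of image-based visual servoing: stack the point-feature
    errors and interaction matrices, then apply v = -lambda * L^+ * e."""
    e = (points - points_ref).reshape(-1)            # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -lam * np.linalg.pinv(L) @ e              # 6-DOF camera twist
```

At the reference configuration the feature error vanishes and so does the commanded twist; the robustness issues discussed below arise precisely because the depths Z and the calibration parameters entering L are only known approximately.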
Robustness is needed to ensure that the controlled system will behave as expected. It is an absolute requirement for most applications, not only to guarantee the good execution of the assigned tasks, but also for safety reasons, especially when these tasks involve direct interactions with humans (robot-aided surgery, automatic driving,...). A control law can be called "robust" if it is able to perform the assigned stabilization task despite modeling and measurement errors. Determining the "size" of "admissible" errors is understandably important in practice. However, carrying out this type of analysis is usually quite difficult technically. For standard vision-based control methods, only partial results have been obtained in a limited number of cases. Recently, we have studied the robustness of classical vision-based control laws (relying on feedback linearization) with respect to uncertainties on structure parameters, and proved that small estimation errors on these parameters can render the control laws unstable. This study has been extended to central catadioptric cameras. One of our objectives is to develop tools for evaluating the robustness properties of sensor-based control schemes, for omnidirectional robotic devices (by extending existing results) and also for nonholonomic and underactuated ones.
Sensor-based robot tasks were originally designed in the context of manipulation, with the control objective stated in terms of positioning and stabilizing the end-effector of a manipulator with respect to a structured object in the environment. Autonomous navigation in an open indoor or outdoor environment requires the conceptualization and definition of new control objectives. To this aim, a better understanding of the natural facilities that animals and human beings demonstrate when navigating in varied and complex environments can be a source of inspiration. Few works have addressed this type of issue with a focus on how to define navigation control objectives and formulate them mathematically in a form which can be exploited at the control level by the methods and techniques of Control Theory. Numerous questions arise. For instance, what is the right balance between planned (open-loop) and reactive (feedback) navigation? Also, what is the relative importance of topological versus metric information during navigation? Intuitively, topological aspects encompassing the accessibility of the environment seem to play an important role: they allow for a navigation which does not rely heavily on the knowledge of Cartesian distances. For example, when navigating along a corridor, it is more important to have information about possibilities of access than to calculate the distances between the walls precisely. The nature of the “percepts” at work in animal or human autonomous navigation is still poorly known and understood. However, it would seem that the implicit use of an ego-centered reference frame, with one of its axes aligned with the gravitational direction, is ubiquitous for attitude (heading and trim) control, and that specific inertial and visual data are somehow directly acquired in this frame.
We have exploited a similar idea for the automatic landing of an aerial vehicle by implementing a visual feedback which uses features belonging to the plane at infinity (vanishing point and horizon line). It is also probable that the pre-attentive and early cognitive vision emphasized by Gestalt theory provides useful inputs to the navigation process in terms of velocity, orientation or symmetry vector fields. Each of these “percepts” contributes to the constitution of sub-goals and elementary behaviors which can be adaptively inhibited or reinforced during the navigation process. Currently, little is known about the way animals and humans handle these different, and sometimes antagonistic, sub-goals to produce "effective" motions. Monitoring concurrent sub-goals within a unified sensor-based control framework is still an open problem which involves both the Perception and autonomous navigation and Control themes.
Methodological solutions to the multi-faceted problem of robot autonomy have to be combined with the ever-present preoccupation of robustness and real-time implementability. In this respect, validation and testing on physical systems is essential, not only as a means to bring together all aspects of the research done in ARobAS (and thus maintain the coherence and unity of the project-team), but also to understand the core of the problems on which research efforts should focus in priority. The instrumented indoor and outdoor wheeled robots constitute a good compromise in terms of cost, security, maintenance, complexity and usefulness to test much of the research conducted in the project-team and to address real-size problems currently under investigation in the scientific community. For the next few years, we foresee on-site testbeds dedicated to ground robotic applications.
ANIS II cart-like platform
A new cart-like platform, built by Neobotix and capable of moving on flat surfaces both indoors and outdoors, was acquired this year. This platform will be equipped with the various sensors needed for SLAM, autonomous navigation and sensor-based control. Conceived to be user-friendly, it will be the main testbed for fast prototyping of perception, control and autonomous navigation algorithms.
CyCab urban electrical car
Two instrumented electrical cars of the CyCab family are destined to validate research in the domain of intelligent urban vehicles. CyCabs are used as experimental testbeds in several national projects.
Advanced robotics offers a wide spectrum of application possibilities entailing the use of mechanical systems endowed, to some extent, with capacities of autonomy and capable of operating in automatic mode: intervention in hostile environments, long-range exploration, automatic driving, observation and surveillance by aerial robots, not to mention emerging and rapidly expanding applications in the domains of robotic domestic appliances, toys, and medicine (surgery, assistance to handicapped persons, artificial limbs,...). A characteristic of these emerging applications is that the robots assist, rather than compete with, human beings. Complementarity is the central concept: the robot helps the operator take decisions or extends his physical capacities. The recent explosion of applications and new scientific horizons is a tangible sign that Robotics, at the crossroads of many disciplines, will play a ubiquitous role in the future of Science and Technology.
We are currently involved in a number of applications, listed below. Our participation in these applications is limited to the transfer of methods and algorithms; implementation and validation are left to our partners.
Ground robotics: Since 1995, Inria has been promoting research in the field of intelligent transport systems. Our activity concerns the domain of future transportation systems, with a participation in the national Predit project MobiVIP. In this project, we address autonomous and semi-autonomous navigation (driving assistance) of city cars using data provided by visual or telemetric sensors. This is closely related to the problems of localization in an urban environment, and of path planning and following subjected to stringent safety constraints (detection of pedestrians and obstacles) within large and evolving structured environments. The ANR/Predit project CityVIP, beginning in 2008, follows the Predit project MobiVIP, which ended in 2006.
We are also involved in the ANR-Predit project LOVE, with Renault and Valeo as industrial partners. Associated with the Pôle de compétitivité System@atic, this project aims at pre-crash accident prevention by real-time vision-based detection and tracking of pedestrians and dynamic obstacles.
Finally, since 2004 we have participated in two projects conducted by the DGA (French Defense) in the field of military robotics. PEA MiniRoc is a typical SLAM problem based on sensory data fusion, complemented with control/navigation issues. It addresses on-line indoor environment exploration, modeling and localization with a mobile robot platform equipped with multiple sensors (laser range-finder, omnidirectional vision, inertial gyrometer, odometry). More recently, PEA Tarot addresses autonomy issues for military outdoor robots. Our contribution focuses on the transfer and adaptation of our results in real-time visual tracking for platooning applications to operational conditions.
Aerial robotics will grow in importance for us. Existing collaborations with the Robotics and Vision Group at CenPRA in Campinas (Brazil) and the Mechanical Engineering Group at IST in Lisbon (Portugal) will be pursued to develop an unmanned airship for civilian observation and survey missions. Potential end-user applications for such vehicles are either civilian (environmental monitoring, surveillance of rural or urban areas, rescue deployment after natural disasters...) or military (observation, tactical support...). The experimental setup AURORA (Autonomous Unmanned Remote Monitoring Robotic Airship), located in Campinas, consists of a 9-meter-long airship instrumented with a large set of sensors (GPS, inertial navigation system, vision,...). Vision-based navigation algorithms are also studied in the FP6 STReP European project Pegase, led by Dassault, which is devoted to the development of on-board systems for autonomous take-off and landing when dedicated airport equipment is not available.
Aerial vehicles with vertical take-off and maneuvering capabilities (VTOLs, blimps) also involve difficult control problems. These vehicles are underactuated and locally controllable. Some of them are critical systems, in the sense that their linearized equations of motion are not controllable even under the action of gravity (like blimps in the horizontal plane), whereas others are not critical thanks to this action (like VTOLs). Our objective is to propose control strategies well suited to these systems for different stabilization objectives (e.g. teleoperated or fully autonomous modes). For example, a question of interest to us is to determine whether the application of control laws derived with the transverse function approach is pertinent and useful for these systems. The main difficulties associated with this research are related to practical constraints. In particular, strong external perturbations, like wind gusts, constitute a major issue for the control of these systems. Another issue is the difficulty of estimating the situation of the system precisely, due to limitations on the information that can be obtained from the sensors (e.g. in terms of measurement precision or data-acquisition frequency). Currently, we address these issues in the ANR project SCUAV (Sensory Control of Unmanned Aerial Vehicles), involving several academic research teams and the French company BERTIN.
Underwater robotics: We have a long-standing collaboration with Ifremer's Robotics Center in Toulon in the field of underwater robotics. The objective of the Themis project is to design an active stereovision head controlled via visual-servoing techniques, in order to build an accurate metrology system for underwater environments. Methodologies have been developed and prototyped using our robotic platform, and their transfer to Ifremer's Robotics Center in Toulon is currently in progress. An industrial device has been designed, and first trials were conducted in August 2006 on the Lucky Strike site (depth: 1700 m) in the Azores.
ESM Tracking and Control Software. This software allows for visual tracking and servoing with respect to planar objects. It has been successfully tested on the CyCabs in a car platooning application. Benoit Vertut is in charge of developing a real-time library implementing the ESM functionalities.
The Omnidirectional Calibration Toolbox is Matlab software developed for the calibration of different types of single-viewpoint omnidirectional sensors (parabolic, catadioptric, dioptric), based on a new calibration approach that we have proposed. The toolbox is freely available over the Internet.
The realization of complex robotic applications, such as the autonomous exploration of large-scale environments or observation and surveillance by aerial robots, requires developing and combining methods from various research domains: sensor modeling, active perception, visual tracking and servoing, etc. This raises several issues.
To simplify the setup, it is preferable to integrate, as far as possible, methods in computer vision and robotic control in a unified framework. Our objective is to build a generic, flexible and robust system that can be used for a variety of robotic applications.
To facilitate the transfer of control methods to different systems, it is preferable to design control schemes which rely weakly on “a priori” knowledge about the environment; this knowledge is instead reconstructed from sensory data.
To get reliable results in outdoor environments, the visual tracking and servoing techniques should be robust against uncertainties and perturbations. In the past, lack of robustness has hindered the use of vision sensors in complex applications.
Many computer vision applications, including robot navigation, visual servoing, and virtual and augmented reality, require a camera with a large field of view, whereas conventional cameras have a very limited one. One effective way to enlarge the field of view is to use omnidirectional cameras (vision systems providing a 360° panoramic view of the scene). However, such devices have a non-isotropic resolution, so standard computer vision methods designed for perspective cameras (e.g. filtering operators) may produce very poor results. The aim of our work is thus to develop a new formulation of these operators well adapted to catadioptric sensors.
The difficulty of image processing with these devices comes from the nonlinear projection model, which results in shape changes in the image that make the direct use of methods designed for perspective cameras nearly impossible. A possible solution to avoid the deformation problem is to define the image processing on a sphere. Indeed, all central catadioptric cameras can be described by a unified model that includes a projection from a unit sphere. Image processing operators can thus be designed on the sphere, for example using spherical harmonics theory. To date, we have obtained encouraging preliminary results that show improvements over standard techniques.
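The unified model mentioned above can be sketched as follows (a minimal illustration; the function name is ours and `xi` denotes the mirror parameter):

```python
import numpy as np

def unified_projection(P, xi):
    """Unified central catadioptric model: project a 3D point by first
    mapping it onto the unit sphere, then perspectively from a point
    located at distance xi above the sphere center along the optical
    axis.  xi = 0 recovers the standard perspective camera; xi = 1
    corresponds to a parabolic mirror."""
    Ps = P / np.linalg.norm(P)                     # point on the unit sphere
    X, Y, Z = Ps
    return np.array([X / (Z + xi), Y / (Z + xi)])  # normalized image point
```

With xi = 0 the model reduces to the usual perspective projection, which is why operators defined on the sphere specialize correctly to conventional cameras.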
Tracking is a fundamental step for various computer vision applications, but very few articles on the subject have been published for catadioptric systems. Parametric models such as homography-based approaches are well adapted to this problem. The results on ESM tracking have been extended to all single-viewpoint sensors. The algorithm was successfully applied to the precise 6 degrees-of-freedom motion estimation of a mobile robot from a calibrated omnidirectional camera. We have also studied how to extend the approach to uncalibrated visual tracking; the main idea is to solve an optimization problem in which all the parameters are unknowns.
In many applications, using only one sensor may not be the optimal way to gather the information needed to perform the robot task, since many exteroceptive sensors provide complementary information. For example, unlike a single camera, a laser telemeter can directly measure the distance to an object. A system of this type, consisting of a laser scanner coupled with a catadioptric vision sensor, is currently under development in the project-team. Calibration aspects have been considered in view of the fusion of sensory data. Preliminary results based on this system are encouraging, and the acquisition and processing of composite data at video rate should be possible in the near future.
The data association problem can be formulated as a parametric image registration: the aim is to find the parameters that align a current image with a reference one. However, this formulation demands efficient optimization algorithms with a wide convergence domain. In this context, we proposed the Efficient Second-order approximation Method (ESM), which achieves a faster convergence rate and a wider convergence domain than standard optimization methods. As image registration becomes more and more central to many biomedical imaging applications as well, the efficiency of the algorithms becomes a key issue. We have shown that the ESM provides interesting theoretical roots for the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric-forces variant of the demons algorithm.
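The key idea behind the ESM, using the mean of the reference and current gradients as the Jacobian to obtain second-order accuracy at first-order cost, can be illustrated on a toy 1D registration problem (an illustrative sketch, not the actual image-alignment implementation):

```python
import numpy as np

def esm_shift(ref, cur, x, iters=20):
    """Estimate the translation t aligning cur(x + t) with ref(x) using
    the ESM idea: the Jacobian is the *mean* of the gradients of the
    warped current signal and of the reference signal, giving a
    second-order approximation at the cost of a first-order method."""
    t = 0.0
    g_ref = np.gradient(ref, x)
    for _ in range(iters):
        warped = np.interp(x + t, x, cur)            # current signal warped by t
        r = warped - ref                             # residual
        J = 0.5 * (np.gradient(warped, x) + g_ref)   # ESM Jacobian
        t -= (J @ r) / (J @ J)                       # Gauss-Newton-like step
    return t
```

In the 2D image case the scalar shift becomes a homography parameterization and the gradients become image gradients, but the averaged-Jacobian structure is the same.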
Parametric image registration was initially proposed for tracking planar structures only. We have since extended the approach to handle generic illumination changes and to track rigid and deformable surfaces.
We have investigated how to improve the robustness of visual tracking methods with respect to generic lighting changes. We proposed a new approach to the direct image alignment of either Lambertian or non-Lambertian objects under shadows, inter-reflections and glints, as well as ambient, diffuse and specular reflections which may vary in power, type, number and space. The method is based on a model of illumination changes together with an appropriate geometric model of image motion.
We have also proposed a parameterization that is well adapted either to track deformable objects or to recover the structure of 3D objects. Furthermore, the formulation leads to an efficient implementation that considerably reduces the computational load, thus making it more suitable for real-time robotic applications.
We have proposed to use the output of the ESM visual tracking algorithm directly for visual servoing. In this case, the output of the visual tracking is a homography linking the current and the reference image of a planar target. Using the homography, we have designed a task function isomorphic to the camera pose, and proposed a new image-based control law which does not need any measure of the 3D structure of the observed target (e.g. the normal to the plane). We provide the theoretical proof of the existence of the isomorphism between the task function and the camera pose, and the theoretical proof of the stability of the control law.

We have also extended the visual servoing technique to any object. The method is direct in the sense that: (i) pixel intensities are used directly, without any feature extraction process, which means that all possible image information is exploited; (ii) the proposed control error as well as the control law are fully based on image data, i.e. no metric 3D measure is either required or explicitly estimated. We provide the theoretical proof that the proposed control error is globally isomorphic to the camera pose, that this error is motion- and shape-independent, and that the derived control law is locally stable. Since only image information is used to compute the error, the proposed scheme is robust to large camera calibration errors. Furthermore, the proposed control error allows for a simple, smooth, physically valid, singularity-free path planning associated with a very large domain of convergence.
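To give the flavor of a homography-based task function (an illustrative construction, not necessarily the exact error used in the work above), one can combine a translational part (H - I)m* with the skew-symmetric part of H; both components vanish when the current and reference images coincide, i.e. when H = I:

```python
import numpy as np

def homography_task_error(H, m_ref):
    """Illustrative homography-based task error: a translational part
    (H - I) m* plus a rotational part extracted from the skew-symmetric
    component of H.  Both parts vanish at convergence (H = I), so no 3D
    reconstruction of the target is needed to drive the error to zero."""
    e_v = (H - np.eye(3)) @ m_ref
    S = H - H.T
    e_w = np.array([S[2, 1], S[0, 2], S[1, 0]])   # vex of the skew part
    return np.concatenate([e_v, e_w])
```

The point of such constructions is that the error is computed entirely from image measurements (the tracked homography and a reference image point), which is what makes the scheme robust to calibration errors.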
In the context of the FP6 STReP European project Pegase, we are in charge of developing novel vision-based on-board systems for autonomous take-off and landing when dedicated airport equipment is not available. The basic idea is to estimate the pose of the aircraft with respect to the runway during the final approach, from the computation of the homography between the current image and a georeferenced image taken above the touchdown point, previously acquired during a learning phase.
The Tarot project addresses autonomy issues for military ground vehicles, with a particular emphasis on platooning applications. A classical scenario is a convoy of vehicles crossing a dangerous area (e.g. a minefield), where each vehicle has to track the trajectory of the vehicle ahead perfectly. Such a task can be formulated as a visual tracking task and implemented using the ESM visual tracker presented above. First experiments are currently being carried out on the unmanned vehicle developed by Thalès, equipped with a pan-and-tilt unit.
This joint collaboration with Ifremer aims at developing, implementing and testing an original robotic method to compute a 3D metric reconstruction, in order to describe and quantify the biodiversity of deep-sea fragmented habitats. The images used for the reconstruction are acquired from an underwater vehicle lying on the seafloor. A stereo rig is carried by a 6-DOF manipulator arm mounted on the vehicle. The images are subjected to several constraints related to the underwater environment. First, the observed scenes are not known in advance, and the objects reconstructed from these scenes have random texture and form; we only know that the objects are rigid and roughly vertically shaped. Moreover, refraction combined with the presence of particles, light absorption, and other lighting-related problems considerably alters the quality of the images. From noisy images and a model of the object with many unknown parameters, it is very difficult to obtain a precise 3D reconstruction. The idea is to constrain the image acquisition process by means of a visual servoing approach, in order to reduce the number of unknown parameters in the reconstruction computation. It consists in capturing a reference image with the right camera at a given position, and then converging towards this position with the left camera. The distance and the angle between the two cameras constrain the trajectory followed by the cameras when iterating the visual servoing (see figure, left). Because the underwater cameras are not identical, and the intrinsic parameters may vary according to the environment, we have focused our attention on an intrinsic-free visual servoing method. The proposed visual servoing scheme has been validated experimentally on two different robots: the robot ANIS at INRIA and the robot MAESTRO at Ifremer.
During the Momareto cruise in 2006 in the Azores, we tested the method on Ifremer's Remotely Operated Vehicle (ROV) Victor6000, with the stereovision rig hung from the tip of the robotic arm MAESTRO (see figure, right). Underwater images were acquired by the stereo rig on two different sites (Menez Gwen and Lucky Strike), at depths down to 1700 meters.
In the Simultaneous Localization And Mapping (SLAM) paradigm, the robot moves from an unknown location in an unknown environment and incrementally builds a navigation map of the environment, while simultaneously using this map to update its estimated position.
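This paradigm can be illustrated by a minimal 1D example with one landmark (an illustrative sketch using an Extended Kalman Filter, one of the standard SLAM estimators; state ordering and noise values are ours):

```python
import numpy as np

def ekf_slam_step(x, P, u, z, q=0.1, r=0.05):
    """One predict/update cycle of a minimal 1D EKF-SLAM with state
    x = [robot, landmark].  u is the odometry displacement (noise
    variance q); z is the measured range landmark - robot (variance r).
    The update correlates robot and landmark estimates, so observing
    the map also refines the robot's position."""
    x = x + np.array([u, 0.0])              # predict: only the robot moves
    P = P + np.diag([q, 0.0])               # odometry noise inflates robot variance
    H = np.array([[-1.0, 1.0]])             # measurement model: z = landmark - robot
    y = z - (x[1] - x[0])                   # innovation
    S = H @ P @ H.T + r                     # innovation variance
    K = (P @ H.T) / S                       # Kalman gain (2x1)
    x = x + (K @ np.atleast_1d(y)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Even this toy version shows the essential SLAM behavior: the prediction step grows the robot's uncertainty, and each landmark observation shrinks the joint covariance of robot and map.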
Using monocular vision in SLAM provides rich perceptual information but leads to unobservability problems due to the loss of the depth dimension. We have investigated how to improve visual SLAM using direct methods for image registration. Traditionally in monocular SLAM, interest features are extracted and matched in successive images; outliers are rejected a posteriori during a pose estimation process, and the structure of the scene is then reconstructed. In this work, we depart from this paradigm and propose a new approach to the core of monocular SLAM. The proposed technique computes the 3D pose and the scene structure simultaneously, directly from image intensity discrepancies. In this way, instead of depending on particular features, all possible image information is exploited. In other words, motion and structure are directly used to align multiple reference image patches with the current image, so that each pixel intensity is matched as closely as possible. Besides these global and local geometric parameters, global and local photometric ones are also included in the optimization process. This enables the system to work under illumination changes and to achieve more accurate alignments. In turn, the global variables related to motion directly enforce the rigidity constraint of the scene during the minimization. Hence, besides increasing accuracy, the technique is naturally robust to outliers.
The proposed technique also differs from existing direct methods in many other respects. Some standard strategies do not consider the strong coupling between motion and structure. Other methods, though using a unified framework, suppose very slow camera motions in order to use an Extended Kalman Filter. The proposed unified approach is instead based on the efficient second-order approximation procedure, so that higher convergence rates and larger domains of convergence are obtained. Furthermore, we propose a suitable structure parameterization which enforces the positive depth (cheirality) constraint during the optimization. Moreover, we advocate the parameterization of the visual tracking by the Lie algebra, which further improves its stability and accuracy. In addition, it is well known that representing a scene as composed of planes improves computer vision algorithms in terms of accuracy, stability and rate of convergence. For this reason, we suppose that any regular surface can be locally approximated by a set of planar patches; to respect real-time requirements, an appropriate subset of patches is selected. Another contribution concerns the initialization of the visual SLAM. This is not a trivial issue since, at the beginning of the task, any scene can be viewed as composed of a single plane: the plane at infinity. The scene structure only appears when the translation of the camera becomes sufficiently large with respect to the depths of the objects in the scene. We propose a new solution for initializing the system whereby the environment is not assumed to be non-planar. The experimental and simulation results demonstrate that the image regions remain stable for larger camera motions and illumination variations than with traditional methods.
Hence, by exploiting the same information in long periods of time, one avoids an early accumulation of drifts or even a total failure of the system.
A novel approach to 3D visual odometry has been proposed which avoids explicit reconstruction of the structure of the scene.
An appearance-based tracking method has been developed using image transfer with quadrifocal tensors. In this way, the stereo relationship between positions in a sequence may be obtained, which describes the trajectory of the cameras in projective space. If the stereo pair is calibrated, then the only unknown is the pose between two pairs of images, and it is possible to develop an accurate nonlinear iterative minimization procedure to track the multi-focal tensor across the sequence. This approach has several advantages. The 3D model of the scene is not explicitly reconstructed, since it is encoded in the dense 2D correspondences between the stereo image pair. No initialization is required between a 3D model and the camera, since tracking is performed relative to the camera position. Only a single global estimation process is necessary, thereby eliminating sources of error and allowing robust statistics to be used. Furthermore, the use of second-order minimization techniques provides faster convergence rates. In this visual odometry approach, the dense correspondences encoding the 3D structure are computed for each image pair without using knowledge of prior correspondences. This allows for faster computation, but it does not enforce the constraint that the observed scene is rigid; we are currently studying how to enforce this constraint efficiently.
The figure shows two examples of reconstructed trajectories. The first shows the trajectory of a car moving in Versailles, superposed on a bird's-eye view of the city taken from Google Earth. The second illustrates the estimation of the 6-DOF trajectory of an airship.
Much progress has been made recently in the SLAM community on the application of filtering methods to the mapping of large-scale environments. However, sensor limitations still restrict the complexity of the environments that can be processed. We address some of the issues in map building and motion estimation through a novel combination of sensors: an omnidirectional camera and a 2D laser range finder. In particular, the problems of reliable data association, observability and loop closing are simplified by adding visual information. Figure illustrates a detected loop-closure situation. Figure shows how vision can help constrain the translation estimate in a corridor-like environment.
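The filtering machinery involved can be sketched with a minimal EKF landmark update (our own illustration, not the team's code): the robot state is (x, y, theta) and a single (range, bearing) observation of a known landmark corrects it. In the camera/laser combination described above, visual information would simply contribute additional measurement rows.

```python
import numpy as np

def ekf_update(mu, P, z, landmark, R):
    """One EKF correction step for a range-bearing landmark observation."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx * dx + dy * dy
    # Predicted measurement from the current state estimate.
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    # Jacobian of the measurement model wrt the robot state (x, y, theta).
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                  [dy / q, -dx / q, -1.0]])
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing
    return mu + K @ innov, (np.eye(3) - K @ H) @ P

# A perfectly predicted observation leaves the mean unchanged
# but still shrinks the covariance.
mu, P = np.zeros(3), 0.5 * np.eye(3)
landmark = np.array([2.0, 1.0])
z = np.array([np.sqrt(5.0), np.arctan2(1.0, 2.0)])
mu_new, P_new = ekf_update(mu, P, z, landmark, 0.01 * np.eye(2))
```

Loop closing corresponds to re-observing a landmark after a long excursion: the same update then propagates a large correction through the accumulated covariance.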
This work addresses the stabilization of admissible reference trajectories generated with constant inputs for driftless systems on Lie groups. It is commonly believed that for these systems non-stationary reference trajectories (as opposed to fixed points) can be asymptotically stabilized by applying classical control techniques. The reason is that the linearized system along such trajectories may be controllable, whereas the linearization of the system at a fixed point is never controllable (nor stabilizable). In particular, it follows from Sontag's results that for any controllable driftless system and “almost all” reference control inputs, the linearized system along any associated reference trajectory is “controllable”. This is a very interesting result, but it is important to interpret it correctly and not underestimate the importance of the “small” set of inputs and trajectories for which the controllability of the linearized system does not hold. For example, in the case of a car for which v_1 denotes the vehicle's driving velocity, the difficulty is only “moderately” acute because, roughly speaking, the linearized system is controllable whenever the reference velocity v_{r,1} is a smooth function not identically equal to zero. We have addressed this controllability issue from a general viewpoint by deriving the general linear approximation of the tracking-error system in terms of the system's constants of structure. Based on this expression, a necessary condition for the controllability of the linearized system has been obtained. It is specified in terms of the growth of the filtration of the Lie algebra generated by the system's vector fields. From a mathematical standpoint, most driftless systems do not satisfy this condition. For example, in contrast with nonholonomic mobile robots whose kinematic equations can be transformed into the chained form, the linearized system associated with the rolling sphere is never controllable. We have discussed the consequences of this lack of controllability for the stabilization problem, and addressed the case of the rolling sphere more specifically. Whereas the asymptotic stabilization of admissible trajectories for this system is still an open problem, we have designed practical stabilizers based on the transverse function approach which yield ultimately bounded and arbitrarily small tracking errors, even when the reference trajectory is not admissible. These results have been submitted to next year's IFAC World Congress .
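The car/unicycle case mentioned above can be made explicit. Writing the tracking errors (e_1, e_2, e_3) in the reference frame, with reference velocities (v_r, ω_r), the linearized tracking-error system of a unicycle is the standard relation (our notation):

```latex
\begin{aligned}
\dot e_1 &= \omega_r\, e_2 + u_1,\\
\dot e_2 &= -\omega_r\, e_1 + v_r\, e_3,\\
\dot e_3 &= u_2 .
\end{aligned}
```

This linear time-varying system is controllable whenever $v_r$ is a smooth function not identically zero, consistent with the discussion above; at a fixed point ($v_r = \omega_r \equiv 0$) the $e_2$ equation decouples from both inputs and controllability is lost.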
Another application of the work described above concerns the adaptation of practical stabilizers derived with the transverse function control approach for classical (unicycle-type and car-like) wheeled mobile robots, in order to achieve asymptotic stabilization of admissible trajectories. For these systems the task is facilitated by the fact that the linearized error system is controllable under mild “persistent excitation” conditions on the reference inputs (see previous section). We show that the problem can be solved by adequately “shaping” the transverse function involved in the control expression. In addition to ensuring the practical stabilization of any (not necessarily admissible) reference trajectory, the resulting control laws asymptotically stabilize most non-stationary admissible trajectories. We believe that this combination significantly reinforces the practicality of these control laws. A paper on this subject will be submitted to a robotics journal.
We have pursued the development of a general nonlinear control design method for underactuated vehicles initiated last year. The class of considered systems consists of rigid bodies actuated with one thrust control in a body-fixed direction, and as many independent torque controls as rotational degrees of freedom (i.e. one for vehicles evolving in 2D Cartesian space, and three for systems evolving in 3D space). This class covers most underactuated physical systems encountered in applications (e.g. ships, submarines, aeroplanes, blimps, rockets, VTOLs, etc.). The control objective is the stabilization of attitude, velocity, or position reference trajectories. The main assumption on which the control design relies is that the vehicle is subjected to a nonzero “apparent acceleration field” which depends on external reaction forces exerted by the environment (gravity, hydrodynamic and aerodynamic forces) and also on the considered reference trajectory. Basically, the satisfaction of this assumption corresponds to the controllability of the linearized error system along the considered reference trajectory. For example, in the case of a VTOL stationary flight, its satisfaction is a consequence of the gravity acceleration. In the case of a blimp, for which the effect of gravity is compensated for by buoyancy, it requires the presence of persistent wind. When this assumption is not satisfied, other techniques, such as the transverse function approach , must be considered, and their adaptation to realistic physical situations will be the subject of future studies.
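The vehicle class just described can be summarized by the following generic model (our notation; $S(\omega)$ denotes the skew-symmetric matrix associated with the cross product):

```latex
% m: mass, J: inertia matrix, v: linear velocity, R: body-to-inertial
% rotation, \omega: angular velocity, T: thrust intensity along the fixed
% body direction b, \Gamma: torque control, F_e: resultant of the external
% forces (gravity, aerodynamic/hydrodynamic reactions).
\begin{aligned}
m\,\dot v &= -T\,R\,b + F_e,\\
\dot R &= R\,S(\omega),\\
J\,\dot\omega &= J\omega \times \omega + \Gamma .
\end{aligned}
```

The scalar thrust $T$ and the torque vector $\Gamma$ are the only controls, so the translational dynamics are underactuated; the “apparent acceleration field” of the text is the term $F_e/m$ minus the reference acceleration, whose non-vanishing underlies the controllability assumption.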
The work done this year can be divided into two parts. First, the development of the control design method for vehicles moving in 2D space has been completed . Then, the approach has been extended to the 3D case. Even though this extension is conceptually simple, it required addressing a few extra technical issues. We have also provided a control solution that takes into account the (asymmetric) constraint of unidirectional thrust. The resulting method addresses a wide range of stabilization objectives: stabilization of reference velocities and positions, rejection of constant biases resulting from, e.g., imprecise estimation of actuator forces or unmodeled external forces and torques produced by the environment (wind, sea current), etc. Among the main specificities of the approach, let us mention that the control design is based on a crude model of the vehicle. In particular, the precise (and difficult) modeling of aero/hydrodynamic forces is not necessary. Also, the control design for attitude, velocity, and position tracking objectives is addressed incrementally, with a control structure reminiscent of classical linear controllers. In this way, the control modifications induced by changing the control objective are limited. Finally, strong robustness/stability results which exploit the dissipativity of aero/hydrodynamic forces have been proven. The robustness of the proposed control laws has also been verified through extensive simulations of a VTOL vehicle.
This project aims at developing vision-based functions for autonomous military ground vehicles dedicated to survey missions. Among the various issues addressed by the project, let us cite the detection and tracking of natural or artificial landmarks, and visual platooning. Developments are currently carried out in the context of the Programme d'Étude Amont Tarot, funded by the DGA (Délégation Générale à l'Armement). Within this program, ARobAS, jointly with the INRIA project-team Lagadic, is a subcontractor of the company Thales.
The objective is to design an active stereovision head controlled via visual servoing techniques. An industrial device has been designed, and first trials were carried out in August 2006 on the Lucky Strike site (depth: 1700 m) in the Azores. This work, initially funded by a research contract, is currently pursued in the context of a PhD thesis funded by the Ifremer Institute and the PACA region.
The field of intelligent transport systems, and more specifically the development of intelligent vehicles with fully automated driving capabilities, is becoming a promising domain of application and technology transfer for robotics research. The MobiVIP project, following the "Automated Road Scenario", focuses on low-speed applications (< 30 km/h) in an open, or partly open, urban context. The level of automation can vary from limited driving assistance to full autonomy. In all cases, an accurate (< 1 m) and frequently updated (10 Hz) localization of the vehicle in its environment is necessary. With GPS (differential and/or dynamic RTK) it is now possible to reach such an accuracy in open environments, but the problem has not yet been solved in dense urban areas (urban canyons). In the MobiVIP project, an important effort is devoted to the use of on-board vision for precise vehicle localization and for urban environment modeling. Such a model is then used for automatic guidance by applying visual servoing techniques developed by the research partners.
Experiments are carried out on the CyCab, a small electric vehicle equipped with stereo cameras, a hybridized differential GPS, and inertial sensors (gyrometer, odometers).
Associated with the Pôle de compétitivité System@tic, this project aims at preventing accidents by real-time vision-based detection and tracking of pedestrians and dynamic obstacles. Our partners are INRIA/E-motion, INRIA/Imara, INRETS/LIVIC, CEA/LIST, CNRS/IEF, CNRS/Heudiasyc, CNRS/LASMEA, ENSMP/CAOR, Renault, and Valéo.
This project aims at studying the contribution of omnidirectional vision to aerial robotics. Following a SLAM-like approach, we propose to develop methods and algorithms based on catadioptric omnidirectional vision in order to perform the mapping and 3D modeling of urban surroundings. Our partners are CNRS/CREA (Université de Picardie Jules Verne), CNRS/LAAS, CNRS/Le2i (Université de Bourgogne), and INRIA/Perception.
This project concerns the control of small underactuated aerial vehicles with vertical take-off and landing capabilities (systems also referred to as VTOLs). Our participation is more specifically dedicated to the development of feedback control strategies to stabilize the system's motion despite various adverse phenomena, such as modeling errors associated with the vehicle's aerodynamics or perturbations induced, e.g., by wind gusts.
Our partners are I3S UNSA-CNRS (Sophia-Antipolis), IRISA/Lagadic (Rennes), CEA/LIST (Fontenay-aux-roses), Heudiasyc (Compiègne), and Bertin Technologies (Montigny-le-Bretonneux).
This project, led by Dassault, aims at developing on-board systems for autonomous take-off and landing when dedicated airport equipment is not available. We are in charge, jointly with the INRIA project-team Lagadic and the IST/DEM project-team, of developing visual-servoing solutions adapted to the flight-dynamics constraints of airplanes. Our partners are Dassault, EADS, ALENIA, EUROCOPTER, IJS, INRIA/Lagadic, INRIA/Vista, CNRS/I3S, IST/DEM (Portugal), Università di Parma (Italy), EPFL (Switzerland), ETHZ (Switzerland), and the Institut "Jozef Stefan" (Slovenia).
The AURORA project (Autonomous Unmanned Remote Monitoring Robotic Airship), led by the LRV/IA/CenPRA, aims at the development of an airship dedicated to observation. Collaboration agreements on this theme have been signed between INRIA, the Brazilian CNPq and FAPESP, and the Portuguese GRICES. In this context, Geraldo Silveira is carrying out a PhD thesis in the ARobAS team with funding from the Brazilian national agency CAPES.
The objective of the PAI Picasso project entitled "Robustness of vision-based control for positioning and navigation tasks" is to design robust vision-based control laws for positioning tasks. Our contributions concern the analytical study of the decomposition of a homography (from which we can extract the camera displacement) that can be estimated from two views of a planar object. This analysis will allow us to find bounds on estimation errors as a function of calibration errors and improve the robustness of vision-based control laws. The project was conducted in collaboration with the University of Seville in Spain.
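The homography decomposition studied in this project follows the standard Euclidean relation (notation is ours): for two calibrated views of a plane with unit normal $n$ (in the first camera frame) at distance $d$ from the first camera, and relative displacement $(R, t)$,

```latex
H \;\propto\; R + \frac{t}{d}\, n^{\top} .
```

The displacement $(R, t/d, n)$ can be extracted from $H$ up to a well-known twofold ambiguity; calibration errors perturb the estimated $H$ and hence the extracted displacement, which is what the error bounds mentioned above quantify.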
C. Samson served for seven years as a member of the Reading Committee for the SMAI (Société de Mathématiques Appliquées et Industrielles) book collection “Mathematics and Applications”. His term ended in November this year.
Since June 2005, P. Rives has been an Associate Editor of the IEEE Transactions on Robotics (T-RO).
P. Rives has been a member of the Program Committee of the following conferences: IEEE-ICRA 2007, IEEE-ICRA 2008, IEEE-IROS 2007, IEEE-IROS 2008, Omnivis 2007, IAV 2007, JNRR 2007.
E. Malis has been a member of the Program Committee of the following conferences: RSS 2007, VISAPP 2007, ORASIS 2007.
ARobAS members have presented their work at the following conferences:
SEI&TI (Systèmes Électroniques, Informatique et Traitement de l'Information), Casablanca, Morocco, January 2007 (invited talk),
IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, May 2007,
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, USA, June 2007,
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, USA, October 2007,
IEEE Conference on Decision and Control (CDC), New Orleans, USA, December 2007.
ARobAS members have also presented their work at the following national events:
Intech “Robotique”, April 2007, Sophia Antipolis,
Journées Orasis, June 2007, Obernai,
JNRR (Journées Nationales de la Recherche en Robotique), October 2007, Obernai.
C. Samson is a member of the “Bureau du Comité des Projets” at INRIA Sophia-Antipolis.
P. Rives is a member of the “Comité de Suivi Doctoral de l'U.R. de Sophia Antipolis”.
P. Rives is a member of the 61e Commission de Spécialistes de l'Université de Nice - Sophia Antipolis.
E. Malis is a member of the “Commission de Développements Logiciels de l'U.R. de Sophia Antipolis”.
Ph.D. Graduates :
C. Mei, « Cartographie et navigation autonome dans un environnement dynamique », École des Mines de Paris, supervisor : P. Rives.
M. Maya Mendez, « Commande référencée capteur des robots non holonomes », École des Mines de Paris, supervisors : P. Morin, C. Samson.
S. Benhimane, « Vers une approche unifiée pour le suivi temps-réel et l'asservissement visuel », supervisors : E. Malis, P. Rives.
Current Ph.D. Students :
M.-D. Hua, « Commande de systèmes mécaniques sous-actionnés », université de Nice-Sophia Antipolis, supervisors : P. Morin, T. Hamel, C. Samson.
G. Silveira, « Application de l'asservissement visuel au contrôle d'un drone aérien », École des Mines de Paris, supervisors : P. Rives, E. Malis.
V. Brandou,« Stéréo locale et reconstruction 3D/4D », université de Nice-Sophia Antipolis, supervisors : E. Malis, P. Rives.
G. Gallegos, « Exploration et navigation autonome dans un environnement inconnu », École des Mines de Paris, supervisor : P. Rives.
C. Joly, « Conditionnement des méthodes de VSLAM en environnement extérieur », École des Mines de Paris, supervisor : P. Rives.
A. Salazar, « SLAM en environnement extérieur dynamique », École des Mines de Paris, supervisor : E. Malis.
T. Ferreira-Goncalves, « Contrôle d'un aéronef par asservissement visuel », Université de Nice-Sophia Antipolis / Universidade Técnica de Lisboa, supervisors : P. Rives, J.R. Azineira (IST Lisboa).
Current Postdocs :
Andrew Comport, « Vision-based Navigation in Urban environments », MobiVIP Project, supervisors : P. Rives, E. Malis.
Hicham Hadj-Abdelkader, « Low-level image processings with catadioptric cameras », Caviar Project, supervisors : P. Rives, E. Malis.
Youssef Rouchdy, « Real-time visual tracking of articulated and/or deformable objects », LOVe Project, supervisor : E. Malis.
Participation in Ph.D. and H.D.R committees :
P. Rives has participated in five PhD defense juries and two HDR juries.
C. Samson has participated in one PhD defense jury.
Training periods :
J.H. Garcia Sanchez, “Stabilisation pratique d'un dirigeable avec l'approche des fonctions transverses”, 3 months, supervisor : P. Morin.
Course on nonlinear control in the Master EEA of the university of Nice-Sophia Antipolis (P. Morin, 25 hours Eq. TD).
Course on linear control at the Ecole Polytechnique Universitaire of Nice (EPU), (P. Morin, 15 hours Eq. TD).
Lecture course on mobile robotics, Ecole Nationale des Ponts et Chaussées, (P. Rives, 3 hours).