

Section: Research Program

Design of cobotic systems

Architectural design

Is it necessary to cobotize, robotize or assist the human being? Which mechanical architecture meets the challenges of the task (a serial cobot, a specific mechanism, an exoskeleton)? What type of interaction (human/robot cohabitation, comanipulation, teleoperation)? These questions are the first ones our industrial partners bring to us. For the moment, we have few comprehensive methodological answers to offer them. Choosing a collaborative robot architecture is a difficult problem [27]. It is all the more so when the questions are approached from both a cognitive ergonomics and a robotics perspective, since there are major methodological and conceptual differences between these fields. It is therefore necessary to bridge these representational gaps and to propose an approach that takes into consideration both the expectations of the roboticist, who must model and formalize the general properties of a cobotic system, and those of the ergonomist, who defines what is expected of it as an assistance tool.

To do this, we propose a user-centered design approach, with a particular focus on human-system interactions. Methodologically, this first requires developing a structured experimental approach aimed at characterizing the task to be carried out through a “system” analysis, but also at capturing the physical markers of its realization: the movements and efforts required and the ergonomic stress. This characterization must be done through the prism of a systematic study of the exchanges of information (and of their nature) carried out by humans while performing the task considered. On the basis of these analyses, the main challenge is to define a decision-support tool for the choice of the robotic architecture and for the specification of the roles assigned to the robot and the operator, as well as of their interactions.

For the moment, the chosen methodology evolves empirically, based on the use cases regularly treated in the team (see the sections on contracts and partnerships).

For the moment, it can be summarized as follows:

  • identify difficult jobs on industrial sites, through visits to and exchanges with our partners (managers, production managers, ergonomists, etc.);

  • select some of them, then observe the human in his or her ecological environment. Our tools allow us to produce a motion analysis, currently based on ergonomic criteria (a minimal sketch of such a scoring is given after this list). In parallel, we carry out a physical evaluation of the task in terms of expected performance and an evaluation of the operator by means of questionnaires;

  • synthesize these first results to deduce the robotic architectures to be considered, the key points of human-robot interaction to be developed, and the difficulties in terms of human factors to be taken into account.
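
As an illustration of the kind of ergonomic criterion used in the motion analysis step, the following minimal sketch scores recorded postures with a simplified, RULA-inspired grid. The joints considered, the angle thresholds and the scores are purely illustrative assumptions, not the team's actual evaluation grid.

    # Minimal sketch of a RULA-inspired ergonomic scoring of observed postures.
    # Joints, thresholds and scores are illustrative assumptions.

    def upper_arm_score(flexion_deg):
        """Score shoulder flexion/extension (degrees); higher means more stressful."""
        a = abs(flexion_deg)
        if a <= 20:
            return 1
        if a <= 45:
            return 2
        if a <= 90:
            return 3
        return 4

    def posture_score(shoulder_flexion_deg, elbow_flexion_deg):
        """Combine individual joint scores into a coarse posture score."""
        elbow = 1 if 60 <= elbow_flexion_deg <= 100 else 2
        return upper_arm_score(shoulder_flexion_deg) + elbow

    # Example: score every frame of a recorded motion and flag the stressful ones.
    motion = [(15.0, 80.0), (70.0, 30.0), (95.0, 120.0)]   # (shoulder, elbow) in degrees
    flags = [posture_score(s, e) >= 4 for s, e in motion]
    print(flags)   # [False, True, True]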

In addition, the various human and task analyses take advantage of the different expertise available within the team. We would like to gradually introduce the evaluation criteria presented above. The team has already worked with the currently dominant approach: the use of a virtual human to design the cobotic cell through virtual tools. However, the very large dimension of the problems treated (modelling of the body's degrees of freedom and of the constraints applied to it) makes it difficult to carry out a certified analysis. We therefore choose to compute the body's workspace, representing its different performances, something not yet done in this field. The idea here is to apply set-based approaches, using the interval analysis already discussed in section 3.1.2. The goal is then to extend to intervals the constraints replayed in virtual reality during the simulation. This would allow the operator to check his trajectories and scenarios not only for a single case study but also for sets of cases. For example, it can be verified that the physical constraints of a simulated trajectory are not violated for any operator physiology within a bounded set. The computer-aided design tools thus certify whole sets of use cases. Moreover, the intersection between the human and robot workspaces provides the constraints needed to certify the feasibility of a task. This allows us to better design a cobotic system that integrates physical constraints. In the same way, we will look for ways to include human cognitive markers in this approach.
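
The following toy example illustrates this set-based verification on a single, deliberately simple constraint: interval arithmetic is used to check that a raise-above-shoulder limit holds for every operator morphology within a bounded set, rather than for one sampled virtual human. The bounds and the constraint itself are assumptions invented for the example, not outputs of our tools.

    # Toy illustration of the interval (set-based) verification idea: check that a
    # constraint holds for ALL operator physiologies in a bounded set.
    # Bounds and the constraint are assumptions chosen for the example.

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

    arm_length   = Interval(0.60, 0.75)   # [m], bounded set of operator morphologies
    trunk_height = Interval(1.30, 1.55)   # [m], shoulder height while standing
    shelf_height = Interval(1.80, 1.90)   # [m], where the part must be placed

    # Vertical raise the hand must produce, for every physiology in the set.
    required_raise = shelf_height - trunk_height          # [0.25, 0.60] m

    # Assumed ergonomic constraint: the hand should not be raised more than 90 %
    # of the arm length above the shoulder.  Worst case: largest raise, shortest arm.
    if required_raise.hi <= 0.9 * arm_length.lo:
        print("constraint certified for the whole set of physiologies")
    else:
        print("constraint cannot be certified: some physiologies may violate it")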

Thus, we bring together here the contributions of the other research axes, from the analysis of human behavior in its environment for an identified task to the choice of a mechanical architecture, via an evaluation of the human-robot couple and of its interactions. All the previous analyses provide design constraints. This methodological approach fits naturally into an Appropriate Design approach used for the dimensional design of robots, again based on interval analysis. Indeed, it suffices to add to the desired performance of the human-robot couple for a given task the constraints limiting the difficulty of the operator's gesture, as described above. The challenges are then the change of scale in models that consider the human-robot pair symbiotically, the uncertain, flexible and uncontrollable nature of human behavior, and the many evaluation indices needed to describe it.
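
The sketch below illustrates, under strong simplifying assumptions, this appropriate-design filtering step: candidate dimensional parameters (here the two link lengths of a planar 2R arm) are kept only if the task region assigned to the human-robot couple is guaranteed to be covered. The task box and the candidate grid are invented for the example.

    # Hedged sketch of the appropriate-design idea: keep only the link lengths of a
    # planar 2R arm for which an assumed task box is guaranteed to be reachable.

    import math
    from itertools import product

    def radius_bounds(x_box, y_box):
        """Tight bounds on sqrt(x^2 + y^2) over an axis-aligned box."""
        def abs_bounds(lo, hi):
            if lo <= 0.0 <= hi:
                return 0.0, max(abs(lo), abs(hi))
            return min(abs(lo), abs(hi)), max(abs(lo), abs(hi))
        ax_lo, ax_hi = abs_bounds(*x_box)
        ay_lo, ay_hi = abs_bounds(*y_box)
        return math.hypot(ax_lo, ay_lo), math.hypot(ax_hi, ay_hi)

    def design_is_appropriate(l1, l2, x_box, y_box):
        """Every point of the task box must lie in the annulus reachable by the arm."""
        r_lo, r_hi = radius_bounds(x_box, y_box)
        return abs(l1 - l2) <= r_lo and r_hi <= l1 + l2

    # Task box the human-robot couple must cover (assumed values, in metres).
    task_x, task_y = (0.30, 0.60), (-0.20, 0.25)

    # Keep only the candidate designs guaranteed to cover the whole box.
    candidates = [(l1, l2) for l1, l2 in product([0.3, 0.4, 0.5], repeat=2)
                  if design_is_appropriate(l1, l2, task_x, task_y)]
    print(candidates)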

Control design

The control of collaborative robots in an industrial context raises two main issues. The first is related to the macroscopic adaptation of the robot's behavior according to the phases of the production process. The second is related to the fine adaptation of the degree and/or nature of the robot's assistance according to the ergonomic state of the operator. While this second problem belongs to a long-standing trend in robotics that consists in placing safety constraints, particularly those related to the presence of a human being, at the heart of the control problem [20], [31], [24], it is not usually approached from the more subtle point of view of ergonomics, where the objective cannot be expressed only in terms of human life or death but rather in terms of long-term respect for the operator's physical and mental integrity. Thus, the mere progressive appropriation, by a human operator, of a collaborative robot intended to assist his or her gesture requires the control law to adapt itself over time. This self-adaptation is a fairly new subject in the literature [39], [40]. It must exist at several levels: the level of the mission and its macroscopic description (the plan) and the level of the task being executed.

For the first level, the task plan to be performed for a given industrial operation can be represented by a finite state machine. In order not to increase the human's cognitive load by explicitly asking him or her to manage transitions for the robot, a high-level controller can handle these transitions from one task (and the associated assistance mode) to another, based on an online estimate of the current state of the human-robot pair. From the control point of view, it is then a matter of exploiting the richness of the constrained multi-task control formalism in order to ensure a continuous transition from one control mode to another while guaranteeing compliance with a number of control constraints resulting from ergonomic specifications. Indeed, the reactive nature of the mission assigned to this type of robot implies the need to check at all times that the constraints intrinsic to any robot are respected: joint limits, actuator saturations, non-interpenetration of bodies, as well as those resulting from a complete ergonomic analysis. Such an analysis can be formally synthesized by an interval analysis approach. Formal compliance with all these constraints at all times is strictly necessary. While a certain number of guarantees can be provided a priori via interval analysis, compliance with the constraints resulting from it, as well as with the intrinsic constraints, cannot be ensured a priori alone: these constraints potentially depend on the state of the robot, whose movement is coupled to that of the human operator and is therefore, by nature, difficult to predict accurately. The control architecture to be developed must therefore make it possible both to specify potentially multi-task, constrained control problems and to integrate new constraints of an ergonomic nature, such as those resulting from interval analysis.
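
A minimal sketch of such a mission-level supervisor is given below. The states, events and assistance modes are invented for the example, and the online estimator producing the events is assumed to exist; this is not the team's actual controller.

    # Minimal sketch of the mission level: a finite state machine that switches the
    # assistance mode from an online estimate of the human-robot state, so that the
    # operator never has to trigger transitions explicitly.  States, events and
    # modes are illustrative assumptions.

    TRANSITIONS = {
        ("approach",       "part_grasped"):   "comanipulation",
        ("comanipulation", "part_placed"):    "hold",
        ("hold",           "hands_released"): "retract",
        ("retract",        "home_reached"):   "approach",
    }

    ASSISTANCE_MODE = {                 # control mode associated with each task
        "approach":       "autonomous_motion",
        "comanipulation": "force_amplification",
        "hold":           "stiff_position_hold",
        "retract":        "autonomous_motion",
    }

    def step(state, estimated_event):
        """Advance the plan only when the estimated human-robot event allows it."""
        return TRANSITIONS.get((state, estimated_event), state)

    state = "approach"
    for event in ["none", "part_grasped", "none", "part_placed", "hands_released"]:
        state = step(state, event)
        print(state, "->", ASSISTANCE_MODE[state])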

Fundamental work must also be carried out to show how the approaches generally considered for the control of robots interacting with humans (impedance control, active compliance, passivity, force amplification, gravity compensation, etc.) can be formulated generically, on the basis of an appropriate definition of tasks and safety constraints, within constrained multi-task control formalisms.
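
The following sketch suggests, with assumed numerical values, how such a compliant (impedance-like) behaviour can be cast as one task of a constrained least-squares controller: the task reference is a spring-like Cartesian velocity, and the joint velocity saturation acts as a hard constraint. The Jacobian, gains and limits are illustrative, and a real controller would stack several such tasks and constraints.

    # Hedged sketch: an impedance-like behaviour written as one task inside a
    # constrained least-squares (QP-type) velocity controller.
    # Jacobian, gains and limits are illustrative values.

    import numpy as np
    from scipy.optimize import lsq_linear

    J = np.array([[1.0, 0.5],        # task Jacobian (2D task, 2 joints), assumed
                  [0.0, 1.0]])
    K = np.diag([5.0, 5.0])          # Cartesian stiffness of the compliant task
    x, x_des = np.array([0.4, 0.1]), np.array([0.5, 0.3])

    xdot_ref = K @ (x_des - x)       # spring-like task: pull toward x_des

    qdot_max = np.array([0.5, 0.5])  # hard constraint: joint velocity saturation
    sol = lsq_linear(J, xdot_ref, bounds=(-qdot_max, qdot_max))

    print("commanded joint velocities:", sol.x)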

For the second level, the adaptation in question amounts to modulating the robot's involvement in the joint task according to the values of the robotic and ergonomic performance indicators judged relevant at the given time. The associated scientific challenge is complex because this adaptation requires establishing a link between the robot's level of involvement and a situation. While the nature of this link seems fairly simple for a robot acting as an effort amplifier, this is far from being the case for all possible forms of collaboration: mutual exclusion, coexistence, subordination, assistance, cooperation, etc. An approach that seeks to establish an analytical model between the ergonomic situation and the control law parameters is doomed to failure. Instead, we propose an incremental approach to learning this complex relationship and letting it evolve over time. This first requires identifying the general, relevant variables of the control law so as to conduct this learning in an efficient and reusable way, regardless of the particular method used to compute the control.
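
A minimal sketch of this incremental learning idea is given below: a linear map from ergonomic indicators to the robot's level of involvement, updated online from the level judged appropriate a posteriori. The indicators, targets and learning rate are illustrative assumptions; a real implementation would rely on the general control-law variables identified above.

    # Hedged sketch: incremental (online) learning of the map from ergonomic
    # indicators to the robot's level of involvement.  Data and learning rate
    # are illustrative assumptions.

    import numpy as np

    w = np.zeros(3)        # weights of the indicators -> involvement map
    lr = 0.05              # learning rate

    def involvement(indicators):
        """Robot involvement in [0, 1] predicted from the ergonomic indicators."""
        return float(np.clip(w @ indicators, 0.0, 1.0))

    def update(indicators, appropriate_level):
        """One incremental update from the involvement judged appropriate afterwards."""
        global w
        error = appropriate_level - w @ indicators
        w += lr * error * indicators

    # Stream of (indicators, a-posteriori appropriate level) pairs collected online.
    for indicators, level in [(np.array([0.8, 0.2, 0.1]), 0.6),
                              (np.array([0.9, 0.5, 0.3]), 0.9)]:
        update(indicators, level)
        print(involvement(indicators))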

Moreover, a purely reactive adaptation of the control law would make no sense given the slow dynamics of certain physiological phenomena such as fatigue. The project therefore aims to formulate the control problem as a predictive one, where the impact of a control decision taken at time t is anticipated over different time horizons. This requires a prediction of human movement and knowledge of the motor variability strategies the human employs. Such a prediction is possible in the short term, based on monitoring at all times the operational objectives (the task in progress). However, it requires the use of a virtual human model, and possibly a dynamic simulation, to quantify the impact of these potential movements in terms of performance, including ergonomics. Running predictive control with an advanced virtual manikin model simulated in the loop is not realistic. The central idea is therefore to adapt the prediction horizon and the complexity of the corresponding model in order to guarantee a reasonable computational cost.
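
The sketch below illustrates only the horizon/complexity trade-off: the richest prediction model whose horizon fits an assumed real-time budget is selected at each cycle. The models and their per-step simulation costs are invented for the example.

    # Hedged sketch of the horizon/complexity trade-off: pick the prediction model
    # and horizon so that the predictive loop stays within its real-time budget.
    # Costs and budget are assumed values.

    MODELS = {                      # cost (ms) of simulating one step of each model
        "full_manikin":    20.0,
        "reduced_dynamic":  2.0,
        "kinematic_only":   0.2,
    }

    def choose(budget_ms, desired_horizon):
        """Pick the richest model whose horizon fits the real-time budget."""
        for name, cost in sorted(MODELS.items(), key=lambda kv: -kv[1]):
            if int(budget_ms // cost) >= desired_horizon:
                return name, desired_horizon
        # Fall back to the cheapest model with a shortened horizon.
        cheapest = min(MODELS, key=MODELS.get)
        return cheapest, int(budget_ms // MODELS[cheapest])

    print(choose(budget_ms=10.0, desired_horizon=25))    # ('kinematic_only', 25)
    print(choose(budget_ms=100.0, desired_horizon=30))   # ('reduced_dynamic', 30)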

More generally, the current challenges of predictive control in robotics are related to the strong non-linearity of the models as well as to their dimensionality. While the use of very simplified models can be justified at the trajectory-generation scale, it is hardly acceptable from the point of view of real-time control. Indeed, it is necessary to guarantee the existence of a solution to the control problem at each instant of the considered horizon, by ensuring that the state of the system remains in a viable region of the state space (which can lead to very conservative control decisions), while guaranteeing a form of optimality over the horizon of control decisions. This is a major challenge, and the work on this theme will again consist in developing a method for automatically simplifying the robot model that can handle a maximum level of complexity and dimensionality. This will ensure that control decisions are fine-tuned in the very short term and that the same decisions are globally optimized over the longer term. This part of the project is ambitious, but the associated research perspectives are rich and have a high potential scientific impact. Alternatively, and in the shorter term, a method that does not reduce the dimensionality of the model (and thus makes it possible to account for constraints at the joint level) can be explored. It would consist in using locally linear models of the robot and discrete transitions from one model to another. This would allow the formulation of a linear predictive control problem that could be solved online.
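
A minimal sketch of this shorter-term alternative is given below: a condensed linear MPC built on one locally linear model (here a simple double integrator along a single axis), with input saturation handled as bound constraints and solved online as a bounded least-squares problem. The model, weights and limits are illustrative assumptions; switching between several local models is not shown.

    # Hedged sketch: condensed linear MPC for one locally linear model, with input
    # bounds, solved online as a bounded least-squares problem.
    # Model, weights and limits are illustrative assumptions.

    import numpy as np
    from scipy.optimize import lsq_linear

    dt, N = 0.05, 20                                  # sampling time, horizon length
    A = np.array([[1.0, dt], [0.0, 1.0]])             # local linear model (position, velocity)
    B = np.array([[0.5 * dt**2], [dt]])

    # Condensed prediction matrices: X = Phi @ x0 + Gamma @ U.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            Gamma[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

    x0 = np.array([0.0, 0.0])
    x_ref = np.tile([0.3, 0.0], N)                    # reach 0.3 m and stop there

    q, r, u_max = 10.0, 0.01, 2.0                     # state weight, input weight, bound
    A_ls = np.vstack([np.sqrt(q) * Gamma, np.sqrt(r) * np.eye(N)])
    b_ls = np.concatenate([np.sqrt(q) * (x_ref - Phi @ x0), np.zeros(N)])

    sol = lsq_linear(A_ls, b_ls, bounds=(-u_max, u_max))
    print("first control to apply:", sol.x[0])        # receding-horizon principle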

The planned developments require both an approach to modeling human sensorimotor behavior, particularly the way motor variability accommodates fatigue, and the validation of the related models in an experimental framework based on the observation of movement and the quantification of ergonomic performance. Experimental work must also focus on validating the proposed control approaches in concrete contexts. To begin with, the Woobot project on gesture assistance for carpenters (Nassim Benhahib's thesis) and a collaboration currently being set up with SAFRAN on assisting operators in shrink-wrapping tasks (manual knotting) in aeronautics provide rich enough application contexts to support the research conducted.