The main context of our research activities concerns the simulation of complex systems. Indeed, our research topics deal with lighting simulation, mechanical simulation, control of dynamic systems, behavioral simulation, real time simulation and modeling of virtual environments.
Our studies focus on the following topics:
Computer Graphics: our main work concerns the design and integration of models, the design of new algorithms, and the study of the complexity of the proposed solutions.
Simulation: our main goal is to be able to compare the results produced by our simulation algorithms with real data in order to experimentally validate our approaches.
Systemic approach: in order to validate the two previous points, we have to be able to treat real industrial test cases through realistic implementations of our solutions.
More precisely, our studies deal with three complementary research themes:
lighting simulation: realistic image synthesis algorithms give high-quality results through the use of physically based illumination models in order to evaluate the complex interactions between light and materials.
physical system simulation: first, our approach concerns the computation schemes needed to produce the state equations of the system (symbolic and/or numeric computation). Second, we address the control of these physical systems (virtual characters, etc.). In this field, we focus our attention on computer animation and simulation.
behavioral modeling and simulation: in order to simulate the behavior of living beings in specific tasks, we design tools dedicated to the specification and simulation of dynamic entities (autonomous or semi-autonomous). Our behavioral models integrate continuous and discrete aspects, on the one hand to control the motor capabilities of the entity and, on the other hand, to take into account its cognitive capabilities. We also focus our research activity on the virtual environment modeling process. In this field, we integrate geometrical information as well as topological and semantic information into the modeling process.
Two transverse topics are also very active:
Virtual Reality: this field draws on several of our research topics such as lighting simulation, animation and simulation. Our approach addresses real industrial problems and proposes new solutions based on our research results. The objective is to adapt the simulation of complex systems to the haptic constraints induced by interaction with human beings.
OpenMASK software simulation platform: the need to integrate our different research activities has produced a real-time and distributed Virtual Reality and simulation environment. This software is distributed under an Open Source model (see http://www.openMASK.org).
The Siames team works on the simulation of complex dynamic systems and on the 3D visual restitution of the results. These results can be produced in real time or in batch, depending on the nature of the simulated phenomena. Our scientific activity concerns the following points:
motion of dynamic models for animation and simulation: in this field, our work deals with the modeling of physical systems, the control of these systems, and all kinds of interactions that may occur during the simulation. Special attention is given to contact and collision algorithms.
behavioral simulation of autonomous entities: this topic concerns both the interaction between entities and the perception, by an entity, of its surrounding environment. Geometrical information alone is too poor to take into account the potential relationships between a behavioral entity and its static and dynamic environment. In order to provide high-level interaction, topological information on the organization of space and on the objects of the environment is added to the data structures.
lighting simulation: in complex architectural environments, light propagation and interaction with object materials require large amounts of computation and memory. Our work on this subject concerns the use of a standard workstation or a network of workstations to produce the simulation results. The simulation also has to provide tools for the visual characterization of the quality of the results from the human perception point of view.
Motion control: models and algorithms that produce motion according to the animator's specification.
Physically based animation: animation models which take physical laws into account in order to produce motion.
Hybrid system: dynamic system resulting from the composition of a differential, continuous part and a discrete event part.
State vector: data vector representing the system at time t, for example position and velocity.
As for realistic image synthesis, physically based animation introduces physical laws into the algorithms. Furthermore, natural motion synthesis (living beings) requires taking into account complex phenomena from mechanics, biomechanics and neurophysiology in order to treat aspects such as planning and neuro-muscular activation.
The generation of motion for 3D objects or virtual characters needs to implement dedicated dynamic models depending on different application contexts: natural motion simulation, animation for multimedia production or interactive animation.
The mathematical model of the motion equations and the algorithmic implementation are based on the theory of dynamic systems and use tools coming from mechanics, control and signal analysis. The general structure of the dynamic model of the motion is a hybrid one, where two parts interact: a differential part and a discrete event system. The differential part can be written in the standard state-equation form:

$$\dot{x}(t) = f(x(t), u(t), t)$$

In this equation, the state vector $x$ is the concatenation of discrete and continuous state parameters, $u$ is the command vector and $t$ is the time.
For example, the contact and collision mechanical computation is performed using a hybrid system. Physically, a collision is a discontinuity in the state vector space (impulse = velocity discontinuity).
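A classical illustration of such a hybrid model is the bouncing ball: the continuous part is a differential equation, and the impact is a discrete event that resets the velocity through a restitution coefficient $e$:

$$\dot{q} = v, \quad \dot{v} = -g \quad \text{while } q > 0; \qquad v^+ = -e\,v^- \quad \text{when } q = 0 \text{ and } v^- < 0.$$

This textbook example only illustrates the interplay between the differential part and the discrete event system.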
In this context, some emerging topics appear:
using a high-level specification language, the challenge consists in producing both the hybrid dynamic model and the control algorithm.
a synthetic model is always difficult to produce off-hand; a new method consists in observing real systems with structural and parametric identification tools in order to determine the model.
this trend is essential for treating complex models and can be applied to geometric as well as mechanical complexity.
Global illumination: direct and indirect illumination computation.
Rendering: computation of an image of a virtual world as seen from a camera.
Space subdivision: subdivision of a 3D model into cells.
Client-server: a server contains complex 3D scenes; a client sends requests for objects to the server.
Level of detail (LOD): an object is represented with a mesh at different resolutions.
A global illumination model describes the light transport mechanism between surfaces, that is, the way each surface interacts with the others. The global illumination model is therefore a key issue when accuracy is needed in the rendering process (photorealism or photosimulation). As global illumination is a computation-intensive process, our research consists in making it tractable even for large and complex environments.
Another objective is to propose a new navigation system built upon our client-server framework named Magellan. With this system one can navigate through 3D models or city models (represented with procedural models) transmitted to clients over a network. Regarding procedural models, their geometry is generated on the fly and in real time on the client side. These procedural models are described using an enhanced and open version of the L-system language we have developed. The navigation system relies on different kinds of preprocessing such as space subdivision, visibility computation as well as a method for computing some parameters used to efficiently select the appropriate level of detail of objects.
To attain realism in computer graphics, two main approaches have been adopted. The first one makes use of empirical and ad hoc illumination models. The second one makes use of the fundamental physical laws governing the interaction of light with materials and participating media, and integrates characteristics of the human visual system in order to produce images which are exact representations of the real world. Our work follows this second approach and relies on the real aspects of materials and on the simulation of global lighting using physics-based reflection and transmission models as well as a spectral representation of the emitted, reflected and refracted light powers. Unfortunately, global illumination is still a demanding process in terms of memory storage and computation time. Our objective is to rely on the radiance caching mechanism and on the performance of the new graphics cards to make interactive global illumination possible even for complex scenes.
For real-time remote navigation, the transmission and real-time visualization of massive 3D models are constrained by network bandwidth and graphics hardware performance. These constraints have led to two research directions: progressive transmission of 3D models over the Internet or a local area network, and real-time rendering of massive 3D models.
Regarding progressive 3D model transmission, one can use geometric levels of detail (LODs): as soon as one LOD is selected according to its distance from the viewpoint, the finer LOD is prefetched over the network. In the same spirit, one can select the LOD of the 3D objects to be transmitted based on the available bandwidth, the client's computational power and its graphics capabilities. Our work makes use of both approaches.
As for real-time rendering of massive 3D models on a single computer, many solutions can be found in the literature. The most commonly used one consists in subdividing the scene into cells and computing a potentially visible set (PVS) of objects for each view cell. During the walkthrough, only the PVS of the cell containing the current viewpoint is used for rendering. Our system for interactive building walkthrough follows this approach.
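The following minimal C++ sketch, with purely illustrative data structures and a toy cell layout (not our actual walkthrough system), shows the principle of PVS-based rendering:

```cpp
// Minimal sketch of cell-based walkthrough rendering with precomputed
// potentially visible sets (PVS); the scene layout here is hypothetical.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Object { int id; };

struct Cell {
    std::vector<const Object*> pvs;  // objects potentially visible from this cell
};

struct Scene {
    std::vector<Object> objects;
    std::vector<Cell>   cells;       // one PVS per view cell (precomputed offline)
    float cellSize = 10.f;
    // Toy cell lookup: cells laid out along the x axis.
    std::size_t cellAt(float x) const {
        std::size_t i = static_cast<std::size_t>(x / cellSize);
        return i < cells.size() ? i : cells.size() - 1;
    }
};

// Each frame, only the PVS of the cell containing the viewpoint is drawn.
void renderFrame(const Scene& scene, float viewX) {
    for (const Object* obj : scene.cells[scene.cellAt(viewX)].pvs)
        std::printf("draw object %d\n", obj->id);
}

int main() {
    Scene s;
    s.objects = {{0}, {1}, {2}};
    s.cells.resize(2);
    s.cells[0].pvs = {&s.objects[0], &s.objects[1]};  // visible from cell 0
    s.cells[1].pvs = {&s.objects[2]};                 // visible from cell 1
    renderFrame(s, 5.f);   // viewpoint in cell 0
    renderFrame(s, 15.f);  // viewpoint in cell 1
}
```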
Application fields of our research mainly concern activities where intensive relationships exist between the simulation of physical systems and the 3D visualization of the results. The concerned application fields are:
architectural and urban environments
energy propagation
virtual actors and biomechanics
virtual reality and augmented reality
Our activity in this field mainly concerns the multi-modality of human interaction. We focus our work on haptic and pseudo-haptic interaction and on local or distant cooperative work in the context of industrial applications. We are also concerned with the production of innovative software solutions.
Human motion is a very challenging field. We try to increase knowledge by producing parametric models of human movements. Indeed, by using motion capture systems and simulating our models, we can access internal state parameters that cannot be measured on real humans. Consequently, we are able to produce virtual experiments in order to validate scientific hypotheses on natural motion. We also work on the analysis-synthesis loop in order to produce very efficient motion models with motion blending, real-time constraint management, etc.
Virtual prototyping deals with the use of simulation results to validate specific functional features during the design process. In this field, we use an optimization technique based on evolutionary algorithms and on results coming from the CAD process.
In order to validate our scientific results, we develop prototype software with the capacity to treat industrial problems. The software packages presented in this section are all used in industrial cooperations.
OpenMASK (Open Modular Animation and Simulation Kit) is the federative platform for research developments in the Siames team. It is also recommended by PERF-RV (French national RNTL project on Virtual Reality). Technology transfer is a significant goal of our team.
OpenMASK is a software platform for the development and execution of modular applications in the fields of animation, simulation and virtual reality. The unit of modularity is the simulated object. It can be used to describe the behavior or motion control of a virtual object as well as the control of input devices such as haptic interfaces. Building a virtual environment with OpenMASK consists of selecting and configuring the appropriate simulated objects and choosing an execution kernel fulfilling the application needs. Of course, new classes of simulated objects have to be built first if they do not exist, but they can then be reused in other applications.
OpenMASK comes with multi-site (for distributed applications: distributed virtual reality, distributed simulation, etc.) and/or multi-threaded (for parallel computations) kernels. These kernels enable off-line simulation as well as interactive animation. Visualization can be powered by Performer (SGI) or by OpenSG (Fraunhofer Institute).
OpenMASK provides an Open C++ API dedicated to simulated object development and execution kernel tailoring. An OpenMASK application is made of kernels and simulated objects.
Hosting: creation and destruction of simulated objects.
Naming: simulated objects, classes and attributes are named.
Activating: regular activation (each object can have its own frequency) and/or occasional activation (on event reception) of simulated objects.
Communicating:
using data flows between simulated objects
using signal diffusion in the environment
using events between objects
thanks to the provided data-types or specialized data-types created for the application
with adaptation to the different activation models using interpolation and extrapolation
Time management: automatic data dating and unique time-stamp during computation.
Distributing: presently powered by Parallel Virtual Machine (PVM). Distribution is transparent to the programmer but can be controlled by the operator.
Mono or multi-pipe visualization, adapted to reality centers and workbenches. Multiple views and stereo vision.
Support of all geometrical file formats supported by Performer or by OpenSG.
Component extensibility to take new animation primitives into account (available: quaternions, rotations, translations, matrices).
Capture of X11 or GLUT events with forwarding to their owners.
2D or 3D picking with forwarding to subscribers.
We provide a set of simulated object classes that can be reused in new applications (an illustrative sketch follows this list):
visualizer as described previously
interaction services over 3D virtual scene
realtime animation of virtual humans
management of specialized sounds
management of VRPN devices
management of force feedback devices
physical scene simulation
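As an illustration of the simulated-object concept, the following C++ sketch mimics two of the ideas described above, activation at an object-specific frequency and a named output read through data flow; all class and method names are hypothetical and do not reproduce the actual OpenMASK API:

```cpp
// Illustrative sketch only: hypothetical names, not the OpenMASK C++ API.
// A simulated object has its own activation frequency; the kernel calls its
// compute() method, and other objects can read its output (here, angle()).
#include <cstdio>

class SimulatedObject {
public:
    explicit SimulatedObject(double frequencyHz) : period_(1.0 / frequencyHz) {}
    virtual ~SimulatedObject() = default;
    // Called by the kernel at the object's own frequency.
    virtual void compute(double simulationTime) = 0;
    double period() const { return period_; }
private:
    double period_;
};

class Pendulum : public SimulatedObject {
public:
    Pendulum() : SimulatedObject(50.0) {}        // activated at 50 Hz
    void compute(double t) override {
        angle_ = 0.5 * t;                        // placeholder motion law
        std::printf("t=%.2f angle=%.3f\n", t, angle_);
    }
    double angle() const { return angle_; }      // data-flow output
private:
    double angle_ = 0.0;
};

int main() {
    Pendulum p;
    for (double t = 0.0; t < 0.1; t += p.period())
        p.compute(t);                            // kernel activation loop
}
```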
Our technology transfer initiative is based on industrial partners and supported by Open Source distribution. We are supported by INRIA with dedicated resources (ODL 2001/02, ODL 2003/05 and a software development engineer 2005/06). First, we provided the platform itself, which is of general interest. Now, we are delivering simulated objects dedicated to Virtual Reality, most of them under an open-source licence: interactors, virtual humans, a force feedback processor, a collision manager, VRPN peripheral abstractions. OpenMASK is available on Irix, Linux and Windows systems.
We have developed a framework for animating human-like figures in real time, based on captured motions. This work was carried out in collaboration with the Laboratory of Physiology and Biomechanics of Physical Exercise (LPBEM) of University Rennes 2. The first part of this work deals with the reconstruction of captured motion files. It is done offline with a software tool that imports motions in the most usual formats, such as C3D (Vicon) or BVH (BioVision), and exports them in a morphology-independent file format which allows the same motion to be replayed on any avatar in a scene.
This format is based on a simplified skeleton which normalizes the global postural information. This new formalism allows the motion to be adapted automatically to a new morphology in real time (cf figure ), by taking kinematic constraints into account. This approach dramatically reduces post-production and allows animators to handle a general motion library instead of one library per avatar. In order to facilitate the design of constraints, we have developed an XML-based language and a user-friendly interface. Hence, a user can add and edit constraints that are intrinsically linked to the motion, such as ensuring foot contact with the ground or reaching targets for grasping motions.
The second part of the framework provides an animation library which blends several parametrized kinematic models and adapts them to the environment and to the avatar's morphology. The library provides motion synchronization, blending and adaptation to the skeleton and to constraints. All these processes are performed in real time in an environment that can change at any time, unpredictably. As the constraints are associated with time intervals during which their weight evolves continuously, the system can solve them at each time step without requiring knowledge of the whole sequence. An inverse kinematics and kinetics solver was developed, based on the morphology-independent representation of posture introduced in . Using inverse kinetics makes it possible to impose a position on the character's center of mass in order to deal with balance or dynamics (limited to the center-of-mass mechanical system).
This library has been used in several applications, for example in a virtual museum and in a presentation for Imagina 2002. It was improved in the RIAM project "AVA-Motion", which ended in June 2004, to become a complete, ready-to-use library for industrial companies. It is also part of the RIAM project "Semocap" (which will end in December 2005) that involves our partner LPBEM, University Rennes 2. It currently runs on Windows and Linux with different viewers and has been integrated in two different software architectures: AVA from the Daesign company and OpenMASK, our own platform. It was presented at the SIGGRAPH 2005 exhibition at the INRIA booth.
HPTS++ is a platform-independent toolkit for describing and handling the execution of multi-agent systems. It provides a specific object-oriented language encapsulating C++ code for easy interfacing, and a runtime kernel providing automatic synchronization and adaptation facilities.
HPTS++ is the latest evolution of the HPTS model. Initially designed for behavioural animation, it provides a generic and platform-independent framework for describing multi-agent systems. It is composed of a language allowing agent description through finite state machines and a runtime environment handling parallel state machine execution and offering synchronization facilities.
The language provides functionalities to describe state machines (states and transitions) and to attach user-specific C++ code to be called at given points during execution. It is object oriented: state machines can inherit from other state machines and/or C++ classes to provide easy interfacing. States and transitions can be redefined in the inheritance hierarchy, and state machines can be augmented with new states and transitions. Moreover, state machines are objects that can provide a C++ interface (constructor/destructor/methods) for external calls. The compilation phase translates a state machine into a C++ class that can be compiled separately and linked through static or dynamic libraries.

The runtime kernel handles parallel state machine execution and provides synchronization facilities. It includes recent research work on automatic behaviour synchronization. Each state of a state machine is annotated with a set of resources (or semaphores) to specify mutual exclusions between state machines. Each state machine is given a priority function specifying its importance at each simulation time step. Each transition is given a degree of preference describing possible adaptations with respect to resource availability or need. These three properties are combined by a scheduling algorithm in order to automatically and consistently adapt state machine execution with respect to the respective priorities and resource conflicts. Moreover, this algorithm provides an automatic deadlock avoidance mechanism. This property enables independent state machine descriptions and ensures consistent execution without knowledge of their internals and without explicit hand-coded synchronization. The kernel also supports dynamic state machine construction and dynamic resource declaration.
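The following C++ sketch illustrates, on a toy example, the spirit of this arbitration between priorities, degrees of preference and resources; the data structures and the greedy strategy are deliberate simplifications, not the actual HPTS++ scheduler:

```cpp
// Conceptual sketch (not HPTS++ syntax): each state machine proposes a
// transition with a degree of preference; the scheduler grants resources
// in decreasing order of priority-weighted preference.
#include <algorithm>
#include <cstdio>
#include <set>
#include <string>
#include <vector>

struct Proposal {           // one candidate transition of a state machine
    std::string machine;
    double priority;        // importance of the machine at this time step
    double preference;      // degree of preference of this transition
    std::set<std::string> resources;  // semaphores the target state needs
};

int main() {
    std::vector<Proposal> proposals = {
        {"walk",  1.0, 1.0, {"legs"}},
        {"drink", 0.8, 1.0, {"hand", "head"}},
        {"wave",  0.5, 1.0, {"hand"}},           // conflicts with "drink"
    };
    // Rank by priority * preference, then grant resources greedily.
    std::sort(proposals.begin(), proposals.end(),
              [](const Proposal& a, const Proposal& b) {
                  return a.priority * a.preference > b.priority * b.preference;
              });
    std::set<std::string> taken;
    for (const Proposal& p : proposals) {
        bool free = std::none_of(p.resources.begin(), p.resources.end(),
                                 [&](const std::string& r) { return taken.count(r); });
        if (free) {
            taken.insert(p.resources.begin(), p.resources.end());
            std::printf("run %s\n", p.machine.c_str());
        } else {
            std::printf("defer %s (resource conflict)\n", p.machine.c_str());
        }
    }
}
```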
This toolkit runs under Windows (Visual C++ 6.0 and .NET), Linux (g++ 2.96 - 3.2) and IRIX (CC). It has been used in different research fields such as behavioural animation, scenario description and automatic cinematography. Its scheduling system provides new paradigms for multi-agent system description while ensuring the overall consistency of the execution.
In 2005, the GVT project, whose context is presented in the ``industrial contracts'' section, led to the first official release of the GVT software. Giat-Industries is now completing the marketing process, and the GVT 1.0 software is ready for sale.
Our models and engineering developments have been validated by the Giat-Industries quality service. Data (VRML models, behaviors and scenarios) have been produced for our models by an industrial partner named Virtualis. GVT uses the latest release of OpenMASK and contributes to its future releases and functionalities.
The aim of the GVT 1.0 software is to offer personalized VR training sessions for industrial equipment. The most important features are:
human and equipment safety during VR training, as opposed to real training,
the optimization of the learning process,
the creation of dedicated scenarios,
multiple hardware configurations: laptop computer, immersion room, distribution on network, etc.
We work on the importation of huge digital mock-ups into Virtual Reality. This work is the subject of the PhD thesis of Jean-Marie Souffez, supervised by Georges Dumont, and is part of the RNTL SALOME 2 project. It is based on OpenMASK for the Virtual Reality applications and on the SALOME platform for the production of digital mock-ups. The goal is to interactively handle these models in a VR scene, to allow their virtual prototyping.
The Product Development Process has benefited from advances in design, simulation and validation processes. Digital mock-ups have thus become too complex to be straightforwardly handled by graphics hardware, as the associated meshes and computational results are usually huge and do not fit in core memory.
In this context, the use of Virtual Reality as a tool for Virtual Prototyping can provide an easier analysis of meshes and computational results by ensuring interactive manipulation of the model. This, in particular, makes it possible to test more parameters for the scientific computations and enables easier collaborative design.
As the digital mock-ups are too big to be straightforwardly handled by a single PC, it is necessary to implement a level-of-detail (LOD) framework that controls the size of the model at run-time.
The solution we implemented is a multi-resolution framework that provides easy out-of-core management of the whole model and ensures direct access to the original mock-up. It is based on a partition of the input mesh into several sub-meshes and on the dual graph of the partition. Several under-samplings are generated for each sub-mesh. Computational results can then be loaded on particular sub-meshes at run-time, allowing fast and easy analysis of the whole model as well as local analysis at its original resolution.
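The following C++ sketch outlines such a structure; identifiers, file names and the toy partition are illustrative, not our actual implementation:

```cpp
// Structural sketch of a multi-resolution, out-of-core mesh partition:
// sub-meshes stored at several resolutions, linked by the dual graph.
#include <cstdio>
#include <string>
#include <vector>

struct SubMesh {
    std::vector<std::string> lodFiles;   // one file per resolution, coarsest first
    std::vector<int>         neighbors;  // dual-graph edges to adjacent sub-meshes
    int loadedLod = -1;                  // -1: nothing resident in memory

    // Bring the requested resolution in core, evicting the previous one.
    void load(int lod) {
        if (lod == loadedLod) return;
        std::printf("loading %s\n", lodFiles[lod].c_str());
        loadedLod = lod;                 // real code would stream mesh data here
    }
};

int main() {
    // Two adjacent sub-meshes, each with a coarse and an original resolution.
    std::vector<SubMesh> part(2);
    part[0] = {{"m0_coarse.bin", "m0_full.bin"}, {1}, -1};
    part[1] = {{"m1_coarse.bin", "m1_full.bin"}, {0}, -1};
    for (SubMesh& s : part) s.load(0);   // global view at coarse resolution
    part[0].load(1);                     // local analysis at original resolution
}
```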
In comparison to state-of-the-art algorithms, our method is based on a graph-partitioning algorithm (rather than space-subdivision algorithms). Graph-partitioning algorithms make it possible to partition the model with regard to the attributes of the mesh (such as vertex colors, face normals, etc.), and thus with regard to the computational results associated with it.
The multi-resolution framework we propose handles both polygon-based and polyhedron-based models. Results of the pre-processing of a model are shown in Figure . Screenshots from the interactive, out-of-core analysis of meshes are shown in Figures and .
The application of the method to polygon-based models has been published in and , and in another paper (submitted for review). The application to volumetric meshes has also been submitted for review.
Dealing with three-dimensional frictional contact with impacts is a key point for applications with haptic feedback. This work aims at adapting outstanding methods from computational mechanics to the real-time constraints induced by Virtual Reality. For efficiency reasons, our work is based on the Non Smooth Contact Dynamics (NSCD) framework introduced by Moreau (1988). Two major advantages of the method can be highlighted for the real-time context: it uses a time-stepping numerical scheme without an explicit event-handling procedure, and a unilateral contact impact formulation associated with the 3D Coulomb friction law.
Most existing algorithms are based on an event-driven approach. In this context, constraint-based and impulse-based approaches are widespread and have proven their efficiency. Major drawbacks of these approaches remain the treatment of accumulations of events (Zeno behavior) and of large numbers of bodies in close contact. Moreover, the real-time constraint has not been taken into account.
As an alternative, we have chosen to adapt tools from computational mechanics to the real-time simulation of multibody systems, based on the Non Smooth Contact Dynamics (NSCD) framework. The time-stepping scheme is not handicapped by changes of contact status during the simulation. One important point is the unified treatment of collisions as well as potential, sticking or sliding contacts: it is not necessary to determine the time at which a change of status occurs. Thus each time step depends only on the geometry, the boundary conditions and the possible nonlinear behavior of the smooth dynamics. Consequently, the time step can be constant and large enough to ensure fast computations. The theoretical results on the convergence of such schemes are also a strong point of this time-integration scheme. Moreover, the generality of the formulation allows the use of a large panel of numerical methods for the time-discretized problem.
In a virtual environment, the model of a solid is built around two entities. As in CAD, the first one is a geometrical model. The second one is the rigid body model that drives the motion through space. The treatment of interactions (contact/impact) is therefore composed of two parts: the first concerns geometrical detection, the second concerns the resolution of the equations of motion. Once a geometrical interaction has been detected, one has to modify the resolution of the equations of motion to take this interaction into account and to ensure non-penetration of the concerned solids. The geometrical collision detection algorithm that we have implemented deals with spheres and bricks. It is not a general one, because our purpose is to deal with contact and impact phenomena; improvements are planned towards more general detection algorithms.
The typical algorithm for a contact/impact resolution is presented in figure .
When using a "time-stepping" algorithm, contacts and impacts treatments are unified in the velocities space.The equation of motion is then written as as discontinuous equation expressed in velocities:
where
dtis a Lebesgue measure,
is the measure of representing the acceleration and
dis a positive measure for which
earns a measure density and
is a density of impulsion. This leads to introduce on a time interval
]
t
i,
t
i+ 1], the unknown that represents a mean contact impulse:
Once this equation of motion is written, one has to determine the impact law. The Signorini contact condition stands for this impact law and is formulated in terms of relative normal velocities. This condition states that, for two objects initially not in contact (at time $t_0$, the gap $g(t_0)$ is greater than zero: $g(t_0) > 0$) and interpenetrating at time $t$ ($g(t) \leq 0$), the relative normal velocity $v_n$ of the two objects is positive or the normal contact force $R_n$ is positive for each $t$ in the time interval $I$ (repulsion):

$$g(t) \leq 0 \;\Rightarrow\; 0 \leq v_n \;\perp\; R_n \geq 0, \qquad t \in I$$
The Coulomb friction is modeled by a classical law, which states that the norm of the tangential impulsion $R_t$ is always smaller than the norm of the normal impulsion multiplied by the friction coefficient $\mu$:

$$\lVert R_t \rVert \;\leq\; \mu\,\lVert R_n \rVert$$
These equations are solved by using an iterative Gauss-Seidel algorithm .
The main advantages of these time-stepping methods are that no backward steps are performed and that the discrimination between contact and impact is no longer necessary.
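As an illustration, the following C++ sketch performs a Gauss-Seidel sweep restricted to the normal part of the problem, with friction and the coupling between contacts omitted for brevity; it shows the projection that enforces a non-negative impulse, not our actual solver:

```cpp
// Sketch of an iterative Gauss-Seidel sweep over contacts (normal part only):
// each contact is solved in turn, the projection enforcing Signorini's
// non-negative impulse condition. Friction would clamp the tangential
// impulse by mu times the normal impulse.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Contact {
    double relVel;     // relative normal velocity (negative: approaching)
    double effMass;    // effective mass along the contact normal
    double impulse;    // accumulated normal impulse
};

void gaussSeidel(std::vector<Contact>& contacts, int iterations) {
    for (int it = 0; it < iterations; ++it)
        for (Contact& c : contacts) {
            double target = -c.relVel * c.effMass;         // cancels the approach
            double old = c.impulse;
            c.impulse = std::max(0.0, c.impulse + target); // Signorini: impulse >= 0
            c.relVel += (c.impulse - old) / c.effMass;     // update local velocity
        }
}

int main() {
    std::vector<Contact> contacts = {{-1.0, 2.0, 0.0}, {-0.5, 1.0, 0.0}};
    gaussSeidel(contacts, 10);
    for (const Contact& c : contacts)
        std::printf("impulse=%.3f relVel=%.3f\n", c.impulse, c.relVel);
}
```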
In Figure , we show a simulation of masonry structures where the objects pack without any vibration.
In Figure , we reproduce the emergence of a big stone from a set of smaller stones by applying a vibration to the external box.
This work was done by Mathieu Renouf, a post-doctoral researcher, and was equally supported by the SIAMES project and the BIPOP project (INRIA Rhône-Alpes). We plan to work on adapting the developed algorithms to haptic interaction with objects in the virtual world.
Our aim is to be able to generate plausible motions for virtual humans. To do so, we develop a generic mechanical representation of skeletons and propose an algorithm that matches a motion onto a target skeleton. Using anthropometric tables and regression equations, the dynamic parameters of the body links are automatically calculated with respect to the gender of the virtual human. Finally, we estimate support phases, which gives us the possibility of dynamics calculations.
The purpose is to represent motion for virtual humans. As the motions obtained by methods using kinematics or kinetics are often a bit jerky, we try to use dynamics to adapt motions acquired on real subjects to modeled humans. A generic motion is represented by angular parameters expressed on a mechanical skeleton. The first step is to load a real captured motion or a file exported from our kinematic interpolator (see below). As motion acquisition protocols are not standard, we can, by analyzing the motion, create additional virtual markers (by replacement, renaming or averaging) for this motion. The description of the mechanical skeleton is based on the modified Denavit-Hartenberg notation and is extracted from the motion and the landmark configuration. This notation is well suited to the description of skeletons: four parameters are used to describe each degree of freedom (DOF) in the kinematic chain. Figure shows a set of real landmarks and the representation of a human kinematic chain for which the parameters are automatically extracted.
Forward kinematics on this chain yields the three-dimensional positions of the articulations. At this point, some treatments may be applied to the motion, such as smoothing by splines or global motion reorientation. According to the marker labels, the automatic identification of the limbs is then performed; this association may also be specified by the user. This identification allows us to use morphological tables for computing the dynamic parameters of the limbs. The obtained skeleton consists of rigid bodies representing the limbs. The mass and inertia due to the soft tissues (muscles) are taken into account, but the deformation of these muscles is not embedded in our model.
We then use a simplified inverse kinematics algorithm to identify the skeleton parameters. This algorithm iteratively processes the articulation systems by adjusting the DOFs of each joint in order to obtain a position as close as possible to the original one.
In order to solve the dynamics equations, we need to know the external forces applied to the kinematic chain. In the case of walking motion, these forces are only the ground reaction forces. We compute the norms of these forces from the motion, in particular by using the acceleration of the center of gravity. We need to determine the support and non-support phases of each effector in order to know whether there is a ground reaction (support phase) or not. To this end, we have compared different approaches with a large choice of parameters and discussed the best values for automatically determining the support phases, taking visual identification of the support phase as the reference. The best choice seems to be speed inversion with a threshold, as sketched below. The choice of the real markers representing the extremity (foot) of the chain is also very important: the best results are obtained with a complete set of landmarks on the foot (talus and foot). These evaluations allow us to perform the dynamics calculations leading to the forces and moments between links. Figure shows the user interface that controls the presented algorithms.
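A minimal C++ sketch of the retained criterion, with purely illustrative thresholds and data:

```cpp
// Sketch of support-phase detection: a foot marker is considered in support
// when its speed drops below a threshold around a velocity inversion.
#include <cmath>
#include <cstdio>
#include <vector>

// Marker heights sampled at a fixed rate; support when the vertical speed is
// below the threshold (the marker is momentarily still on the ground).
std::vector<bool> supportPhases(const std::vector<double>& height,
                                double dt, double speedThreshold) {
    std::vector<bool> support(height.size(), false);
    for (std::size_t i = 1; i < height.size(); ++i) {
        double speed = std::fabs(height[i] - height[i - 1]) / dt;
        support[i] = speed < speedThreshold;
    }
    return support;
}

int main() {
    std::vector<double> h = {0.20, 0.10, 0.02, 0.02, 0.02, 0.08, 0.18};
    std::vector<bool> s = supportPhases(h, 0.01, 1.0);
    for (std::size_t i = 0; i < s.size(); ++i)
        std::printf("frame %zu: %s\n", i, s[i] ? "support" : "swing");
}
```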
As previously mentioned, the original motion data are either files issued from motion capture or files exported from our kinematic interpolator. This interpolator was developed within the framework of an ATIP CNRS grant dedicated to the morphological and stance adaptation of models of locomotion for virtual humans. We have thus developed a computer tool for testing hypotheses and generating a plausible walk according to anatomical knowledge. To do so, we introduced an interpolation method based on morphological data and on both stance and footprint hypotheses . This interpolation is combined with an inverse kinematics solver in order to produce motion ensuring the respect of joint limits, the minimisation of the rotational kinetic energy and the respect of the reference posture . The main application field of this study was anthropology, contributing to reconstructing a plausible walk for early hominids from their anatomical and osteological data. We worked especially on the Australopithecus afarensis Lucy (A.L. 288-1) skeleton.
We have designed and implemented software for creating cities using procedural models based on our new scripting language.
Modeling large virtual environments raises several issues related to the complexity of the model and the data volume. Indeed, such models are as delicate to acquire or design as they are to use. In terms of modeling, they require heterogeneous data such as GIS information, terrain elevation, traffic networks, building geometry, etc. Furthermore, considering the multiple scales in play and the diversity of the objects involved, few modeling techniques are suitable for a task such as designing a virtual city. Moreover, large models imply large amounts of data: they need large storage capacities, are hard to maintain and/or update, and can be difficult to render in real time, let alone through a network.
We proposed a functional extension to L-systems, namely FL-systems. L-systems offer a powerful mechanism for biologically motivated modeling and have proved useful for describing plants, trees and street networks. Despite numerous extensions, L-systems remain essentially used for plant modeling. FL-systems have proved suitable for modeling urban features such as buildings, street networks, street lamps, etc. This representation is interesting for several reasons. First, as a grammar-based mechanism, it acts as a data compression scheme; it is therefore easy to transfer through a network as well as simple to evaluate in a lazy fashion. Second, it operates as a data amplifier: a single FL-system can generate a large diversity of models when provided with different sets of parameters or when using probabilistic rules. Finally, it offers a new modeling technique.
We proposed a new caching mechanism for FL-systems: the FL-system intelligent cache. This cache operates during the rewriting process of an FL-system. Based on a dynamic dependency calculus, it takes into account the formal properties of the FL-system using it, and is therefore able to determine which rules will generate similar results upon rewriting. While rewriting, it checks whether a similar term has already been rewritten and, if so, reuses it. This cache has two main goals: it speeds up the rewriting process of FL-systems and allows an implicit procedural instancing of objects. The instancing of geometric objects makes the rendering of the scenes faster, whereas its implicit nature takes the responsibility off the modeler's shoulders.
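The following C++ sketch illustrates the memoization principle on a toy rule; the actual cache additionally relies on the dependency calculus to decide which rules can safely be reused:

```cpp
// Conceptual sketch of the rewriting cache: rewritten terms are memoized on
// (rule, parameters), so a rule invoked twice with the same arguments reuses
// the previously rewritten term. The rule content below is purely illustrative.
#include <cstdio>
#include <map>
#include <string>
#include <utility>

using Key = std::pair<std::string, int>;   // (rule name, parameter)
std::map<Key, std::string> cache;

std::string rewrite(const std::string& rule, int param) {
    Key key{rule, param};
    auto it = cache.find(key);
    if (it != cache.end()) {
        std::printf("cache hit: %s(%d)\n", rule.c_str(), param);
        return it->second;              // reuse: implicit instancing of the result
    }
    // Toy rewriting: a "window" rule expands into param quad symbols.
    std::string result;
    for (int i = 0; i < param; ++i) result += "Q";
    cache[key] = result;
    return result;
}

int main() {
    rewrite("window", 4);   // computed
    rewrite("window", 4);   // reused from the cache
    rewrite("window", 6);   // different parameters: computed again
}
```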
These methods are currently being integrated into a modeling software package dedicated to urban environments, as shown in figure . (F)L-systems, as well as other rewriting methods, allow a novel modeling process. Furthermore, this multi-scale design process is enhanced by automatic generation capabilities and real-time visualization techniques.
We have designed and implemented software for interactive global illumination using programmable graphics hardware.
Computing global illumination amounts to solving the rendering equation, which is an integral equation. Unfortunately, this equation has no analytic solution in general. Consequently, Monte Carlo integration is the method of choice for solving it. However, Monte Carlo integration requires the computation of many samples, which makes it demanding in terms of computation time. Our objective is to propose an algorithm which allows interactive global illumination.
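For reference, the rendering equation in its standard hemispherical form, and the corresponding Monte Carlo estimator over $N$ directions $\omega_k$ drawn from a probability density $p$:

$$L_o(x,\omega_o) = L_e(x,\omega_o) + \int_{\Omega} f_r(x,\omega_i,\omega_o)\,L_i(x,\omega_i)\cos\theta_i\,d\omega_i$$

$$L_o(x,\omega_o) \approx L_e(x,\omega_o) + \frac{1}{N}\sum_{k=1}^{N} \frac{f_r(x,\omega_k,\omega_o)\,L_i(x,\omega_k)\cos\theta_k}{p(\omega_k)}$$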
Our approach makes use of ray tracing, Monte Carlo integration and caching. It aims at extending the ``irradiance caching'' algorithm. This algorithm is based on the observation that the diffuse component of radiance, reflected on a diffuse surface and due to indirect illumination, changes very slowly across the surface. This makes it possible to sample and cache the incoming radiance sparsely, then to reuse the cached samples to estimate the incoming radiance at nearby points. The method is computationally efficient since the sampling is sparse; however, it is limited to indirect diffuse lighting computation.
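The following C++ sketch shows the record reuse test in the spirit of Ward's irradiance caching; the weight formula is the classical one, while the threshold and scene data are illustrative:

```cpp
// Sketch of the irradiance-cache reuse test: a cached record contributes if
// its weight, which shrinks with distance and normal divergence, exceeds a
// user threshold; otherwise a new sample must be computed.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double dist(Vec3 a, Vec3 b) {
    Vec3 d{a.x-b.x, a.y-b.y, a.z-b.z}; return std::sqrt(dot(d, d));
}

struct Record { Vec3 p, n; double irradiance, harmonicMeanDist; };

// Ward's weight: large when the query point is close to the record and the
// surface normals agree.
double weight(const Record& r, Vec3 p, Vec3 n) {
    double d = dist(p, r.p) / r.harmonicMeanDist
             + std::sqrt(std::max(0.0, 1.0 - dot(n, r.n)));
    return d > 0.0 ? 1.0 / d : 1e9;
}

int main() {
    std::vector<Record> cache = {{{0,0,0}, {0,0,1}, 0.8, 1.0}};
    Vec3 p{0.2, 0.0, 0.0}, n{0, 0, 1};
    const double threshold = 2.0;   // inverse of the allowed error
    double sumW = 0.0, sumE = 0.0;
    for (const Record& r : cache) {
        double w = weight(r, p, n);
        if (w > threshold) { sumW += w; sumE += w * r.irradiance; }
    }
    if (sumW > 0.0)
        std::printf("interpolated irradiance: %.3f\n", sumE / sumW);
    else
        std::printf("no usable record: compute a new sample here\n");
}
```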
We focus on extending the irradiance caching approach to indirect glossy global illumination. Our algorithm relies on ``radiance caching'' (cf figure ): it is based on the caching of directional incoming radiance. We first designed a new set of basis functions defined on the hemisphere to represent directional incoming radiance and BRDFs. This representation, along with a new gradient-based interpolation method, forms the basis of our radiance caching algorithm.
The ``radiance cache splatting'' algorithm makes it possible to compute global illumination using programmable graphics hardware. Using a reformulation of irradiance and radiance caching, our method relies on the capabilities of GPUs to perform radiance interpolation. Moreover, we developed an efficient, GPU-based method to avoid the need for ray tracing (cf figure ). Our approach yields an overall speedup of 30-40× compared to the Radiance software, considered as the reference for irradiance caching.
For now, we have only considered static scenes: the irradiance and radiance caching methods suffer from flickering artifacts in dynamic environments. Therefore, we are now aiming at extending our work towards fast, high-quality global illumination in dynamic scenes.
We have designed and implemented software for interactive rendering of natural objects such as grass and trees.
Grass and other natural objects on the Earth's surface make up most natural 3D scenes. Real-time realistic rendering of grass (or leaves, flowers, etc.) has always been difficult due to the excessive number of grass blades. Overcoming this geometric complexity usually requires many coarse approximations to reach interactive frame rates; the performance then comes at the cost of poor lighting quality and lack of detail. We are interested in developing a rendering technique, valid for grass and other natural objects, that allows better lighting and parallax effects while maintaining real-time performance. We use a novel combination of geometry and lit volume slices, composed of Bidirectional Texture Functions (BTFs), to achieve this high-fidelity requirement. BTFs, generated in a fast pre-computation step, provide accurate, per-pixel lighting of the grass, leaves, etc. Our approach combines surface rendering and volume rendering, and levels of detail are also accounted for. Our method allows the rendering of a soccer field, containing approximately half a billion grass blades, with dynamic lighting in real time (cf figure ).
Our goal is to offer better interaction possibilities to end-users of 3D virtual environments. We first explore the different interaction possibilities in the fields of multi-user collaboration and multi-modal interaction, then try to provide generic tools to enable interactivity with virtual objects: to make virtual objects interactive, and to encapsulate physical virtual reality device drivers in a homogeneous way.
This work uses the OpenMASK environment to validate concepts, to create demonstrators, and to offer interaction solutions for all OpenMASK users.
Interaction distribution between several sites relies on the distribution mechanisms offered by OpenMASK: referentials and mirrors.
Multi-user and multi-modal interactions use the data-flow communication paradigm supported by OpenMASK, which allows data transfer from outputs towards inputs and facilitates the fusion of inputs coming concurrently from several outputs. They also use the event-sending communication paradigm of OpenMASK, which allows events to be sent even to objects located on distant sites.
During this year, we worked on the following topics.
First, we provide adapters to make simulated objects interactive. These adapters are divided into several classes to carry out three tasks:
the first task is to teach a simulated object the communication protocol useful to talk with an interactor.
the second task is to dynamically create new inputs in order to use the interaction data provided by an interaction tool.
the third task is to provide a way to connect an interactive object to an interaction tool, in order to be able to dynamically change the interaction behavior of an interactive object during a simulation.
It is possible to combine all these tasks in a modular way to obtain a great number of interaction possibilities.
This work is based on Design Patterns and software architectural models to allow good software reuse. These concepts and an associated implementation methodology were presented this year as a tutorial at the IHM'2005 conference .
Second, we studied the possibility of making virtual objects migrate from one process to another during a collaborative simulation.
This can be useful when there are network problems, when we want to interact as efficiently as possible with an interactive object, or simply when we want to withdraw a process without losing the objects it handles.
Distant interaction between a local interactor and a distant interactive object (located in another process on another site) can seem strange to a user, because there will always be a small time lag between the evolution of the interactor and the evolution of the object in interaction with it. This divergence increases with network latency, which can even lead to unusable interactions if the network latency is too high or not stable enough. So it can be useful to make an interactive object migrate (maybe temporarily) to the same process as the interactor that is controlling it. It is also very useful to make some objects migrate to the process of an interactor when we know that this interactor will have to interact with them and when we can predict that there will be network problems during these interactions. Of course, this does not solve network problems during simultaneous multi-user interactions with a shared object.
During a collaborative simulation, when we want to withdraw a process on a distant site, we need to be able to make some objects migrate if we want the virtual universe to persist.
A first version of the migration of virtual objects has been implemented within the OpenMASK kernel, and we are still working to improve it. All this work is detailed in .
Third, we proposed a new interaction tool to facilitate the 3D manipulation of virtual objects, dedicated to immersive interaction.
The idea is to offer end-users a ``natural'' way to move and rotate virtual objects collaboratively, allowing several users to grab various parts of an object and merging their actions to propose new positions and orientations for the interactive object, as illustrated in figure .
First we designed and implemented a new software protocol for interaction, then conducted experiments involving 48 users to evaluate the efficiency of our new mechanism, comparing it to classical collaborative mechanisms. The task the users had to perform was to move one interactive object collaboratively within the labyrinth presented in figure .
Some snapshots of these experiments, in figures and , show the physical environment used: the users are immersed in a Reality Center, they can talk together, and each of them has his own viewpoint on the universe and cannot see the other user's viewpoint. As each viewpoint is designed to facilitate manipulation in one area of the labyrinth and to make it hard in another, the two users must absolutely cooperate to complete the task successfully.
The first results have shown that our cooperative paradigm works and is at least as efficient as the more classical collaborative paradigms. We now have to examine the data collected during the experiments more precisely in order to better validate its efficiency.
Haptic interaction consists in providing the user of a Virtual Reality system with the sensations involved in touch (i.e. tactile and force feedback) during the manipulation of virtual objects. We describe hereafter our recent results in the field of haptic interaction, which concern: (1) perception issues (the influence of the Control/Display ratio on the perception of manipulated objects), (2) interaction techniques with haptics (the "A4" technique and the "Bubble" technique), and (3) a vocational simulator (the "Virtual Technical Trainer").
Haptic interaction consists in providing the user of a Virtual Reality system with the sensations involved in touch (i.e. tactile and force feedback), mainly during the manipulation of virtual objects. Historically, the development of haptic interfaces originates from tele-operation: the first force-feedback interfaces were developed for tele-operation within hazardous environments. Nowadays, a much larger number of applications is foreseen for haptic interaction in Virtual Reality, in various fields: medicine (surgical simulators, rehabilitation), education (display of physical or mathematical phenomena), industry (virtual prototyping, training, maintenance simulations), entertainment (video games, theme parks), arts and creation (virtual sculpture, virtual instruments), etc. Thus, the field of "haptics" concerns an increasing number of researchers and companies specialized in Virtual Reality.
The integration of haptic feedback within a virtual environment raises many problems at different levels, including hardware and software issues. Furthermore, a current major limitation for the design of haptic interfaces is our poor knowledge of human haptic perception. It is indeed fundamental to take into account the psychological and physiological issues of haptic perception when designing the technology and the use of virtual environments based on haptics. We therefore concentrated our work on both the perception issues and the implementation issues. We present hereafter our recent results in the field of haptic interaction in virtual reality:
the study of the influence of the Control/Display ratio on the perception of the mass of manipulated objects in VR,
the Bubble technique: a novel interaction technique for large VEs using haptic devices with a limited workspace,
the A4 technique: a novel interaction paradigm for contact rendering when using under-actuated haptic devices,
the Virtual Technical Trainer: a virtual environment dedicated to the technical training of milling machines in VR.
In order to reach and manipulate virtual objects, VEs generally provide the user with a virtual cursor (for instance a "virtual hand") which reproduces the movements of his/her real hand. The ratio between the amplitude of the movements of the user's real hand and the amplitude of the movements of the virtual cursor is called the Control/Display ratio (or C/D ratio). Our objective here was to study the influence of the Control/Display ratio on the perception of the mass of manipulated objects in Virtual Environments (VE).
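A toy C++ illustration of this definition (the values are arbitrary):

```cpp
// The C/D ratio is hand amplitude over cursor amplitude, so the cursor
// displacement is the hand displacement divided by the ratio: a ratio below 1
// amplifies the visual motion of the manipulated object.
#include <cstdio>

double cursorDisplacement(double handDisplacement, double cdRatio) {
    return handDisplacement / cdRatio;
}

int main() {
    double hand = 0.10;  // metres of real hand motion
    std::printf("C/D = 1.0 : cursor moves %.2f m\n", cursorDisplacement(hand, 1.0));
    std::printf("C/D = 0.5 : cursor moves %.2f m\n", cursorDisplacement(hand, 0.5));
}
```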
Thus, we conducted a series of two experiments. In both experiments, a discrimination task was used in which participants were asked to identify the heavier of two virtual balls (see Figure ). Participants could weigh each ball via a haptic interface and look at its synthetic display on a computer screen. Unknown to the participants, two parameters varied from trial to trial: the difference of mass between the balls and the C/D ratio used in the visual display when weighing the comparison ball. The data collected demonstrated that the C/D ratio significantly influenced the result of the mass discrimination task and sometimes even reversed it. The absence of gravity force largely increased this effect.
These results suggest that if the visual motion of a manipulated virtual object is amplified when compared to the actual motion of the user's hand (i.e. if the C/D ratio used is smaller than 1), the user tends to feel that the mass of the object decreases. Thus, decreasing or amplifying the motions of the user in a VE can strongly modify the perception of haptic properties of objects that he/she manipulates. Designers of virtual environments could use these results for simplification considerations and also to avoid potential perceptual aberrations.
These results were published at IEEE Virtual Reality 2005 . This work was achieved as a collaboration with CPNI Lab., University of Angers.
The objective of this work was twofold. First, it aimed at positioning the performance of under-actuated haptic devices with respect to fully actuated haptic devices and unactuated devices (i.e. input devices) in virtual reality. Second, it proposed a technique, called "A4" (Automatic Alignment with the Actuated Axes of the haptic device), to improve the perception of contacts when using an under-actuated haptic device in virtual reality.
The A4 technique focuses on point-based haptic exploration. When a contact occurs in the simulation, we rotate the virtual scene in order to align the contact normal with the direction of the actuated axis (or axes) of the haptic device (see Figure ). With this technique, the virtual scene moves automatically to provide a more "realistic" sensation of contact.
An experimental evaluation showed, first, that the performance of under-actuated force feedback lies between the no-haptic condition (worst performance) and the full-haptic condition (best performance). Second, the use of the A4 technique strongly decreased the "penetration" inside virtual objects, and thus globally improved the performance of the participants in situations of under-actuation.
These results were published at World Haptics Conference 2005 .
Haptic interfaces have been shown to greatly enhance interaction with Virtual Environments (VE). Such interfaces enable the user to touch, grasp and feel physical properties of virtual objects. However, grounded interfaces such as the VIRTUOSE force feedback arm allow haptic interaction only inside their limited physical workspace: the user cannot easily reach and interact with virtual objects located outside this workspace.
The "Bubble" technique is thus a novel interaction technique to interact with large Virtual Environments (VE) using a haptic device with a limited workspace. It is based on a hybrid position/rate control which enables both accurate interaction and coarse positioning in a large VE (see Figure ).
The haptic workspace is displayed visually using a semi-transparent sphere (looking like a bubble) that surrounds the manipulated cursor. When the cursor is located inside the bubble, its motion is position-controlled; when it is outside, it is rate-controlled. The user may also "feel" the inner surface of the bubble, since the spherical workspace is "haptically" displayed by applying an elastic force feedback when crossing the surface of the bubble.
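The following C++ sketch illustrates this hybrid control over one time step; the gains and dimensions are illustrative and do not correspond to the VIRTUOSE implementation:

```cpp
// Sketch of the hybrid position/rate control: inside the bubble the cursor
// follows the device (position control); outside, the penetration distance
// drives a drift velocity (rate control) and an elastic restoring force
// renders the inner surface of the bubble.
#include <cmath>
#include <cstdio>

struct V3 { double x, y, z; };
double norm(V3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

int main() {
    const double radius = 0.10;   // bubble radius (device workspace)
    const double gain   = 5.0;    // rate-control gain
    const double k      = 200.0;  // elastic force stiffness (N/m)
    const double dt     = 0.01;
    V3 bubbleCenter{0, 0, 0};     // position of the bubble in the VE
    V3 device{0.15, 0, 0};        // device offset from its workspace centre

    double r = norm(device);
    if (r <= radius) {
        // Position control: cursor = bubbleCenter + device offset.
        std::printf("position control\n");
    } else {
        // Rate control: the bubble drifts along the crossing direction, and
        // the user feels an elastic force pushing back toward the surface.
        double depth = r - radius;
        V3 dir{device.x / r, device.y / r, device.z / r};
        bubbleCenter.x += gain * depth * dir.x * dt;
        bubbleCenter.y += gain * depth * dir.y * dt;
        bubbleCenter.z += gain * depth * dir.z * dt;
        double force = k * depth;  // magnitude sent to the haptic device
        std::printf("rate control: drift %.4f m, force %.1f N\n",
                    gain * depth * dt, force);
    }
}
```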
This technique proved very useful for interacting with a large VE using a haptic device with a limited workspace. The Bubble technique was presented to the Haption company, where it was much appreciated; it will be available in the next release of the VIRTUOSE API (Haption's commercial haptic programming interface).
These results were published at World Haptics Conference 2005 . This work was achieved as a collaboration with CPNI Lab., University of Angers.
Hundreds of people are trained in the use of milling machines in AFPA centers each year. Learning to use a milling machine is a long and complex process. It is expensive, since it requires a large amount of material and implies maintenance costs.
Therefore, we have proposed a new system called the Virtual Technical Trainer (VTT), dedicated to the technical training of milling in Virtual Reality. VTT simulates the milling activity and provides milling trainees with real haptic feedback using a PHANToM force-feedback arm. This force feedback is used to simulate resistance when the tool mills the material.
We have also investigated the use of pseudo-haptic feedback to simulate force feedback within VTT. Pseudo-haptic feedback is incorporated in the VTT environment by using a passive input device, a SpaceMouse, associated with the visual motion of the tool on the screen. Different sensations of resistance can be simulated by appropriately modifying the visual feedback of the tool's motion.
The latest version of VTT can use a haptic device specifically designed for our pedagogical application (see Figure ). Furthermore, realistic audio feedback (recorded in real situations) and additional visual assistance can be added in order to increase the perception and understanding of the milling task.
A preliminary evaluation of VTT showed that this simulator can be used successfully by vocational trainers. It can help them teach the basic principles of machining in the first stages of vocational training courses on numerically controlled milling machines.
This work was published at VRIC 2004, EuroHaptics 2004, and more recently at IEEE Virtual Reality 2005 . It was achieved in collaboration with a consortium of industrial and academic partners: CLARTE (Centre Lavallois de Ressources Technologiques), AFPA (Association Nationale pour la Formation Professionnelle des Adultes), and University of Paris 5. It was also related to the French RNTL platform for Virtual Reality "PERF-RV".
Brain-Computer Interaction consists in using the cerebral activity of a person to directly control a machine (e.g. a robot, a computer, or a Virtual Reality simulation). We describe hereafter our recent results in this field: a virtual environment called OpenViBE (Open Platform for Virtual Brain Environments) for the 3D visualisation, in virtual reality, of whole-brain activity in real time, using an EEG (electroencephalography) acquisition machine.
When the physiological activity of the brain (e.g., electroencephalogram, functional magnetic resonance imaging, etc.) is monitored in real time, feedback can be returned to the subject, who can then try to exercise some control over it. This idea is at the basis of research on neurofeedback and Brain-Computer Interfaces. Current advances in the speed of microprocessors, graphics cards and digital signal processing algorithms allow significant improvements of these methods: more meaningful features can be extracted from the continuous flow of brain activation, and feedback can be more informative.
Borrowing technology so far employed only in Virtual Reality, we have created Open-ViBE (Open Platform for Virtual Brain Environments). Open-ViBE is a general purpose platform for the development of 3D real-time virtual representation of brain physiological and anatomical data. Open-ViBE is a flexible and modular platform that integrates modules for brain physiological data acquisition, processing, and volumetric rendering.
When input data is the electroencephalogram, Open-ViBE uses the estimation of intra-cranial current density to represent brain activation as a regular grid of 3D graphical objects. The color and size of these objects co-vary with the amplitude and/or direction of the electrical current. This representation can be superimposed onto a volumetric rendering of the subject's MRI data to form the anatomical background of the scene. The user can navigate in this virtual brain and visualize it as a whole or only some of its parts (see Figure and ). This allows the user to experience the sense of presence ("being there") in the scene and to observe the dynamics of brain current activity in its original spatio-temporal relations.
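As an illustration of this visualization principle, the following sketch (hypothetical names; not the Open-ViBE API) maps the estimated current density amplitude of one grid element to the color and size of its 3D glyph:

// Illustrative mapping from current density amplitude to glyph appearance.
#include <algorithm>
#include <cmath>

struct Glyph { float r, g, b; float scale; };

Glyph amplitudeToGlyph(double amplitude, double maxAmplitude)
{
    // Normalize the amplitude to [0, 1].
    double a = std::clamp(std::abs(amplitude) / maxAmplitude, 0.0, 1.0);
    Glyph g;
    g.r = static_cast<float>(a);            // strong activity -> red
    g.g = 0.0f;
    g.b = static_cast<float>(1.0 - a);      // weak activity -> blue
    g.scale = static_cast<float>(0.5 + a);  // glyph grows with amplitude
    return g;
}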
The platform is based on publicly available frameworks such as OpenMASK and OpenSG and is open source itself. In this way we aim to enhance the cooperation of researchers and to promote the use of the platform on a large scale.
This work was published this year in the Journal of Neurotherapy, with Mr. C. Arrouet and Dr. M. Congedo.
Virtual reality has previously been used in several domains to train people to perform costly and complex tasks. In such applications, metaphors are generally used to interact with virtual objects, and the subjects consequently do not react exactly as in the real world. In these applications, the feeling of being there (called presence) that ensures realism can thus only be analyzed through questionnaires. In sports, realism and presence can also be evaluated through the gestures performed by the subjects. Let us consider the thrower / goalkeeper duel in handball. Previous results demonstrated that real goalkeepers react realistically to virtual opponents. We also verified that a small modification in the opponents' gestures engendered modifications in the goalkeepers' parry, whereas no modification was found for the same throw repeated twice. In neuroscience and sports science, anticipation is a control skill involved in duels between two players: according to elements perceived in an opponent's gestures, people are able to predict events that will occur in the near future. In , we demonstrated that this phenomenon also occurs in a virtual environment. In this environment, the opponents' gestures are animated through a kinematic model that could engender unrealistic trajectories. Nevertheless, the animation module, even though it is based on simplifications, seems to reproduce the visual elements considered by goalkeepers.
A ViconMX motion capture system was bought in January 2005. This system is able to capture the motion of reflective markers in real time (with an acceptable time shift). Thanks to this system, we are able to capture the motion of handball goalkeepers and react to their gestures in real time. First, we have developed a real-time collision checker that determines whether the virtual ball is intercepted by the real goalkeeper's avatar (see figure ).
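A plausible core of such a checker, shown here only as a sketch under assumed geometry (not the actual code), treats the ball as a sphere and each limb of the avatar as a thick segment, and reports an interception when the distance between the ball center and a limb segment falls below the sum of the radii:

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 sub(const Vec3& a, const Vec3& b)
{ return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Distance from point p to segment [a, b] (assumes a != b).
double pointSegmentDistance(const Vec3& p, const Vec3& a, const Vec3& b)
{
    Vec3 ab = sub(b, a), ap = sub(p, a);
    double t = std::clamp(dot(ap, ab) / dot(ab, ab), 0.0, 1.0);
    Vec3 c = { a.x + t*ab.x, a.y + t*ab.y, a.z + t*ab.z };
    Vec3 d = sub(p, c);
    return std::sqrt(dot(d, d));
}

bool ballIntercepted(const Vec3& ball, double ballRadius,
                     const Vec3& limbA, const Vec3& limbB, double limbRadius)
{
    return pointSegmentDistance(ball, limbA, limbB) <= ballRadius + limbRadius;
}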
In addition, the point of view is recalculated at each time step according to the subject's head position. We are currently designing a protocol, in collaboration with the LPBEM of University Rennes 2 and UMR 6152 "Mouvement et Perception" in Marseille, in order to make our system applicable in neuroscience. The main applications are sports (providing performance indicators to trainers), neuroscience (understanding perception and decision-making) and virtual reality (understanding how to make interactions in virtual worlds feel realistic). Future work will aim to develop more complex interactions involving several other players. We also wish to investigate how to couple this approach with more classical eye-tracking techniques.
In past years, we proposed a new formalism to model human skeletons and postures. This formalism is not linked to morphology and allows very fast motion retargeting and adaptation to geometric constraints that can change in real time. Captured motions are consequently stored using this formalism. However, motion is not limited to a sequence of postures: it also involves intrinsic constraints, such as ensuring foot contacts or reaching targets while grasping objects. We have proposed an XML-based language to design such constraints off-line. A user can then use a graphical interface to edit those constraints and define their beginning, end and properties while playing the captured motion. Those constraints can deal with points of the body or of the environment, both of which can change during real-time animation. Several types of constraints are addressed with this language: contacts and distances between points, restricted and authorized subspaces for a given point, and orientation in space for a given body segment. All those constraints are converted into a unique formalism that allows them to be solved by a single solver.
This solver offers inverse kinematics and inverse kinetics capabilities. Indeed, controlling the position of the center of mass makes it possible to prevent unrealistic postures even when all the other geometric constraints are satisfied. For example, if geometric constraints are placed far in front of the character, it could adopt a posture that does not preserve balance. In order to ensure balance, the user can ask the system to impose that the center of mass stays on a vertical line going through its position in the initial posture; in that case, we assume that balance is verified in the original captured motion. Our inverse kinematics and kinetics module is based on an improvement of the Cyclic Coordinate Descent method. In the latter method, body segments are rotated individually to solve geometric constraints, leading to unrealistic postures when numerous body segments are used. To overcome this limitation, we gathered some body segments into groups, leading to the use of the minimum set of required body segments. Moreover, we also introduced the control of the center of mass position in this algorithm (see figure ). As a perspective, we also wish to control the Zero Moment Point position in order to deal with balance in very fast and dynamic motions.
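For readers unfamiliar with Cyclic Coordinate Descent, here is a deliberately simplified planar sketch of one CCD pass (illustrative only; the actual solver operates on groups of body segments and additionally constrains the center of mass): each joint, from the extremity back to the root, is rotated so that the end effector moves toward the target.

#include <cmath>
#include <vector>

struct Joint { double x, y, angle; };  // world position and joint angle

void ccdPass(std::vector<Joint>& chain,
             double ex, double ey,    // current end-effector position
             double tx, double ty)    // constraint target
{
    for (int i = static_cast<int>(chain.size()) - 1; i >= 0; --i) {
        double aToEnd    = std::atan2(ey - chain[i].y, ex - chain[i].x);
        double aToTarget = std::atan2(ty - chain[i].y, tx - chain[i].x);
        double delta = aToTarget - aToEnd;
        chain[i].angle += delta;
        // Rotate the end effector around joint i by delta.
        double c = std::cos(delta), s = std::sin(delta);
        double rx = ex - chain[i].x, ry = ey - chain[i].y;
        ex = chain[i].x + c*rx - s*ry;
        ey = chain[i].y + s*rx + c*ry;
        // (A full implementation would also update the joints located
        // between joint i and the end effector, and re-check constraints.)
    }
}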
The solver described above can obviously deal with captured motions, but it is also able to deal with gestures calculated by other modules. Indeed, this solver is embedded in the MKM software library (cf. paragraph ), which also offers motion synchronization and blending. Hence, the solver can be used after motion blending is performed, taking into account the priorities associated with actions.
The methods presented above were also used to make a character jump at various heights using a single captured motion, contrary to approaches based on dynamic simulation or motion graphs. In this approach, general mechanical laws are used to predict the new center of mass trajectory (during the contact and the aerial phases) that is required to reach the new maximum jump height. This approach should be extended to deal with more complex and varied motions for which dynamics cannot be neglected.
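For reference, the underlying mechanics are elementary: during the aerial phase the center of mass is ballistic, so reaching a new apex height h' above the take-off point requires the vertical take-off velocity v' below, which in turn fixes the duration t' of the aerial phase (g is the gravitational acceleration); the contact phase is then re-planned to produce this take-off velocity:

v' = \sqrt{2 g h'}, \qquad t' = \frac{2 v'}{g}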
Navigation inside virtual environments has a key role in the behavioural animation of a virtual human. This process is continuously used for several sorts of interactions (moving to take something, to watch something, etc.). Navigation and path planning rely on a suitable representation of the 3D database, in a form enabling path planning and collision avoidance with static obstacles. But a suitable representation of the geometry is not sufficient, as part of the behaviour is related to the semantics of the environment. Most of the time, 3D environments are modeled using well-known 3D modelling tools such as 3DS Max, Maya and others. Such environments are neither informed nor organized well enough to be directly usable in the field of behavioral animation. That is why we propose a model of 2D½ spatial subdivision, enabling navigation and path planning on non-flat surfaces while describing the semantics of the different zones.
Instead of designing a dedicated tool that constrains designers, we propose to label the 3D objects with their name and type in order to inform 3D environments. This information is used as a key to access a typed database, enabling the extraction of semantic information related to the object.
In order to handle navigation and path planning on non-flat surfaces (stairs, etc.), we propose a 2D½ spatial subdivision scheme. Starting from the 3D database (cf. fig. (a)), two maps are created:
The 2D½ map (cf. (b)) is an exact decomposition of the environment into convex cells. Borders of those cells correspond to a change of slope, a step, a bottleneck, a change of semantic type, or a 3D object name. This map is used to handle low-level navigation: determining the height associated with a footprint and computing visibility information. It also links cells to semantic information by keeping identifiers related to the objects of the 3D database. This way, virtual humans can easily access the semantic information related to the environment they evolve in.
The 2D map (cf. (c)) is a simplification of the 2D½ map. In this map, borders of convex cells represent a change of semantic type, a step, or an identified bottleneck. This map simplifies the previous one by merging cells with similar semantics, thus reducing the number of cells used during path planning.
Semantic information is stored inside an object-oriented database (we intensively use the notions of classes and inheritance). This database contains two types of information: semantic information related to the environment, and archetype descriptions. The semantic information associates object types with their related information. The archetype database contains a hierarchical description of the types of agents navigating inside the virtual world (pedestrians, cars, etc.). In order to correlate archetypes with their respective navigation zones, a relational system is provided. It associates archetypes with types of zones in order to specify their navigation behavior (preference, cost, etc.). As types of zones and archetypes are described using inheritance, the instantiation of relations also uses this property, providing a concise and generic description. For example, if humanoids can navigate on sidewalks and if a crosswalk inherits from sidewalk, a relation between humanoid and sidewalk will automatically take the crosswalk into account during path planning and navigation. Thanks to this information, we are able to generate agent-oriented path planning graphs used to create more realistic navigation behaviors. This information is also useful to focus agent attention on relevant zones, i.e. the zones they navigate in. An inheritance-based lookup of this kind is sketched below.
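The sketch below (hypothetical types, not the actual database API) illustrates the inheritance-based resolution: if no relation is declared for a zone type, its parent types are searched, so a relation declared for sidewalk automatically applies to crosswalk.

#include <map>
#include <optional>
#include <string>
#include <utility>

struct ZoneType {
    std::string name;
    const ZoneType* parent;  // nullptr for root types
};

using RelationKey = std::pair<std::string, std::string>;  // (archetype, zone)

std::optional<double> navigationCost(
    const std::map<RelationKey, double>& relations,
    const std::string& archetype, const ZoneType* zone)
{
    // Walk up the zone type hierarchy until a relation is found.
    for (const ZoneType* z = zone; z != nullptr; z = z->parent) {
        auto it = relations.find({archetype, z->name});
        if (it != relations.end()) return it->second;
    }
    return std::nullopt;  // this archetype cannot navigate this zone type
}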
Thanks to this model, semantic information can easily be associated with the geometry of the environment. The spatial subdivision process, by keeping this information and organizing the geometry, enables a rapid integration of virtual humans inside complex and structured environments containing the information necessary to handle realistic navigation. Currently, this system is used to handle navigation, but future work will focus on creating a relation between the environment and the BIIO model (cf. paragraph ), in order to provide a full framework enabling fast integration of behavioral simulations in informed environments.
Crowd simulation is an emergent problem nowadays, because of growing interest from industry and certain organisations. Many studies have been performed, mostly using macroscopic models or lightweight microscopic ones (such as particle-based models). In our model, we propose to simulate crowd motion as the combination of a multitude of individual behaviours, and to analyse the results in order to extrapolate a levels-of-service classification.
This study is carried out within the framework of an industrial thesis in collaboration with AREP, Aménagement Recherche pour les Pôles d'Echange, a subsidiary of the SNCF, Société Nationale des Chemins de Fer. The goal of this study is to validate train station architectural plans with respect to the movements of people inside the station. That is made possible by a crowd simulation tool allowing data extrapolation, which will then be synthesized into levels of service.
The concept of Levels Of Service was first defined by J.J. Fruin in 1971, and has been reused by many researchers. But all of them based their classification on only two discriminating factors: density and flow of people. Such a classification seems well suited to security studies, but suffers from a lack of information for more thorough ones. What we propose to take into account with our levels of service, in addition to the classical factors, is:
Information accessibility to a pedestrian inside a given place, to help him find his way for example;
Required effort to carry out a certain number of necessary tasks, the smallest effort being the best;
More specific quality factors, like average waiting time.
The last point to be approached is the fact that a level of service is not evaluated on an overall basis for the studied place, but locally at each zone of interest.
The first point to be addressed for the simulation of autonomous agents is the description of their navigation environment. Our model is based on a spatial subdivision (Fig. .a) introduced by F. Lamarche, which produces a set of convex cells by using a constrained Delaunay triangulation. The first step, called informed subdivision (Fig. .b), computes a topological representation of the spatial subdivision by naming cells according to their number of connectivity relations: dead end for one relation, corridor for two, and crossroad for three or more. Then, a topological abstraction is performed twice. A grouping algorithm is first applied to the cells of the informed subdivision to produce groups (Fig. .c). Then, the same algorithm is applied to groups to produce more conceptual zones (Fig. .d).
This process results in a three-level hierarchical graph, which is enhanced with some preprocessing, such as potential visibility sets (Fig. ), and regular grids linked to each group in order to evaluate local densities. The naming step can be sketched as follows.
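A minimal sketch of the naming step (assumed data layout; the real algorithm works on the adjacency structure of the constrained triangulation):

#include <cstddef>

enum class CellKind { DeadEnd, Corridor, Crossroad };

// Label a cell from its number of connectivity relations.
CellKind classifyCell(std::size_t connectivityCount)
{
    if (connectivityCount <= 1) return CellKind::DeadEnd;   // one neighbour
    if (connectivityCount == 2) return CellKind::Corridor;  // two neighbours
    return CellKind::Crossroad;                             // three or more
}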
In order to improve the realism of the simulation of moving entities, our model maintains a topological knowledge for each agent. This knowledge restricts the agent's ability to globally access stored data about the environment. The initial knowledge of the agent can vary from nothing to a perfectly known environment. The knowledge is updated during the simulation by an observation process, using precalculated potential visibility sets. Then, when an agent needs some information about an unobservable part of its environment, it can refer to its topological knowledge.
The next task necessary to enable navigation inside an environment is path planning. We propose to take advantage of the hierarchical topological abstraction to perform path planning by parts. The path evaluation is first performed entirely on the most conceptual layer (zones), then locally on the first abstraction layer to connect the current group to the second zone of the path, and finally on the informed subdivision layer to connect the current cell to the second group of the path (Fig. ).
The same A* algorithm is used in all three cases, with a multi-criteria heuristic characterising the path weight, and only the known environment is taken into account. This heuristic accounts for travelling distance, local population densities and flows of people, relative and absolute direction changes, and finally passage widths. Moreover, the heuristic parameters can be changed dynamically to reflect changes in the entity's path planning behaviour. Finally, the path planning algorithm is reactive to events sent by the observation process or by the rational procedure. These events result in a partial or full path re-evaluation.
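A possible form of the multi-criteria edge cost is sketched below (hypothetical weights and field names; the actual heuristic may combine the criteria differently). The weights are plain parameters, which is what makes the run-time changes of planning behaviour mentioned above straightforward.

struct EdgeInfo {
    double length;           // travelling distance (m)
    double density;          // local population density
    double opposingFlow;     // flow of people against the travel direction
    double directionChange;  // absolute angle variation (rad)
    double width;            // passage width (m)
};

struct HeuristicWeights {
    double wLength = 1.0, wDensity = 2.0, wFlow = 1.5,
           wTurn = 0.5, wNarrow = 1.0;  // tunable at run time
};

double edgeCost(const EdgeInfo& e, const HeuristicWeights& w)
{
    double narrowPenalty = (e.width > 0.0) ? 1.0 / e.width : 1e9;
    return w.wLength  * e.length
         + w.wDensity * e.density * e.length
         + w.wFlow    * e.opposingFlow * e.length
         + w.wTurn    * e.directionChange
         + w.wNarrow  * narrowPenalty;
}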
The interaction with objects and other actors has been addressed during the engineering internship of Laurent Millet-Lacombe, in collaboration with AREP. The model we propose is close to the ecological theory of J.J. Gibson, describing interactions as affordances linked directly to the objects with which the interaction is possible. This model takes the form of a platform called BIIO: Behavioural Interactive and Introspective Objects. What we call objects are physical objects like a chair or a door, but also agents representing virtual people. BIIO makes it possible to attach interactive behaviours to objects in a hierarchical way: each object inherits the properties of its parent(s), including interaction potentials. Moreover, the objects have strong introspective capacities which make it possible to retrieve all of their properties. The interactions are classified into two categories. First, using interactions are available to only one actor at a time, which requires managing a waiting queue for the object. Second, observation interactions are available to many actors at the same time. An interaction is composed of four parts: a rational precondition, which is a boolean expression relative to the actor and the type of the object to interact with; a local precondition, which is a boolean expression relative to the actor and the object; an effect, which may affect either or both of the actor and the object; and finally, a duration, which is relative to the actor and the object.
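The four-part structure can be captured by a small sketch (hypothetical types; the actual BIIO platform is richer):

#include <functional>
#include <string>

struct Actor { /* agent state */ };
struct Object { /* object state */ };

struct Interaction {
    bool usingKind;  // true: one actor at a time (waiting queue); false: observation
    // Rational precondition: depends on the actor and the *type* of object.
    std::function<bool(const Actor&, const std::string& objectType)> rational;
    // Local precondition: depends on the actor and the concrete object.
    std::function<bool(const Actor&, const Object&)> local;
    // Effect: may modify the actor, the object, or both.
    std::function<void(Actor&, Object&)> effect;
    // Duration (seconds), relative to the actor and the object.
    std::function<double(const Actor&, const Object&)> duration;
};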
The rational behaviour consists in linking the basic interactions provided by BIIO in order to perform goal-oriented behavioural planning. The first task consists in the creation of a behavioural graph whose root is the goal state and whose nodes are the basic interactions leading to that goal. Then, a path must be found in this graph to select each interaction according to the actor's knowledge and the interactions' evaluated costs. All of these processes are part of our future work.
BCOOL stands for Behavioral and Cognitive Object Oriented Language. This language is dedicated to the description of the cognitive part of an autonomous agent. It provides object oriented paradigms in order to describe reusable cognitive components. It focuses the description on the world and the interactions it provides to agents. A stable action selection mechanism uses this representation of the world to select, in real time, a consistent action in order to fulfill a given goal.
In the field of behavioral animation, the simulation of virtual humans is a central topic. Usually, the architecture is separated into three layers: the movement layer (motion capture and inverse kinematics), the reactive layer (behaviors) and the cognitive layer. The role of the cognitive layer is to manipulate an abstraction of the world in order to automatically select appropriate actions to achieve a given goal. BCOOL is dedicated to the description of the cognitive world of the agents. Inspired by Gibson's theory of affordances, it focuses on the description of the environment and the opportunities it provides, in a form allowing goal-oriented action selection. A stable action selection algorithm uses this description to generate actions in order to achieve a given goal.
The language provides object-oriented paradigms to describe the cognitive representation of the objects populating an environment. The notion of class is used to describe different typologies of objects, and the notion of inheritance allows the description to be specialized. Objects are described through properties and interactions similar to methods in object-oriented languages. Polymorphism is exploited to redefine interactions and specialize them through the inheritance hierarchy. Relations between object instances are also provided. Relations and properties are boolean facts describing the world. A specific operator enabling incomplete knowledge management has been added: it enables reasoning on the knowledge of the truth value of a fact in the same way as on the fact itself. The description of actions uses preconditions and effects to allow planning, and is also informed with C++ code describing an effective action called once the action is selected. Thanks to this property, the cognitive process can easily be connected to the reactive model in charge of the realization of selected actions. Thanks to the knowledge operator, perceptive and effective actions are described in the same way; thus, perceptive actions can be selected to acquire necessary information during the planning process.
Once the abstract world is described, a second language is used to describe a world populated with instances of cognitive objects. This description is used to generate a database describing the world, the relations, and the actions that can be performed by agents. This database is then exploited by the action selection mechanism to select, in real time, actions in order to fulfill a given goal. The mechanism is able to handle three types of goal:
Avoidance goal: those goals are used to specify facts that should never become true as a consequence of an agent action.
Realization goal: those goals are used to specify facts that should become true.
Maintain goal: those goals are used to specify facts that should always remain true inside the environment.
Once the goals are provided, the action selection mechanism selects actions and calls their associated C++ code to run the associated reactive behaviours. Actions are selected incrementally in order to take into account all perceived modifications of the world in the next selection. This way, the mechanism is both goal-oriented and reactive. The sketch below illustrates these notions.
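The sketch (hypothetical names, not the BCOOL syntax) illustrates the three goal types during an incremental selection step: a candidate action is rejected if its predicted effect violates an avoidance or maintain goal, and is useful if it makes a realization goal true.

#include <functional>
#include <string>
#include <vector>

using WorldState = std::vector<bool>;  // boolean facts describing the world

enum class GoalKind { Avoidance, Realization, Maintain };
struct Goal { GoalKind kind; int factIndex; };

struct Action {
    std::string name;
    std::function<bool(const WorldState&)> precondition;
    std::function<WorldState(const WorldState&)> effect;  // predicted effect
};

bool respectsGoals(const WorldState& next, const std::vector<Goal>& goals)
{
    for (const Goal& g : goals) {
        if (g.kind == GoalKind::Avoidance && next[g.factIndex]) return false;
        if (g.kind == GoalKind::Maintain && !next[g.factIndex]) return false;
    }
    return true;
}

bool helpsRealization(const WorldState& next, const std::vector<Goal>& goals)
{
    for (const Goal& g : goals)
        if (g.kind == GoalKind::Realization && next[g.factIndex]) return true;
    return false;
}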
BCOOL provides a high-level framework focusing on the description of reusable cognitive components while providing an easy connection with the reactive model. The incremental generation of actions makes it possible to handle the dynamics of the world by taking into account all perceived modifications during the action selection phase. Its aim is to provide a generic, real-time framework respecting the dynamic constraints imposed by behavioural animation.
We propose the STARFISH (Synoptic-objects for Tracking Actions Received From Interactive Surfaces and Humanoids) architecture that uses Synoptic Objects to allow real-time object manipulation by autonomous agents in an informed environment. We define a minimal set of primitive Basic Actions which are used to build Complex Actions. We then assign these actions to Interactive Surfaces which are the parts of an object's geometry that are concerned by the action. The agent then uses these Interactive Surfaces to get the data specific to the object when it wants to manipulate it and to adapt its behavior accordingly.
STARFISH stands for Synoptic-objects for Tracking Actions Received From Interactive Surfaces and Humanoids. It is a new system which allows the easy definition of interactions between autonomous agents and the Synoptic Objects in the environment.
Synoptic Objects are objects designed to offer the autonomous agent a summary, or synopsis, of the interactions they afford. When an agent queries such an object, it knows what actions it can perform, where it should position itself, where it should place its hands, what state the object is in, whether it is allowed to perform the action, etc. All these indications are given through the use of STARFISH Actions and Interactive Surfaces. An autonomous agent is itself considered as a Synoptic Object, which allows interaction between agents (such as shaking hands) without any additional special considerations.
STARFISH Actions consist of a group of simple atomic actions, the Basic Actions, which are the building blocks used to create Complex Actions. Through the Basic Actions described in Figure , we can build more complex and varied actions. These Basic Actions are inspired by the Conceptual Dependency Theory (CDT), which seeks to represent complex natural language input using simple basic action verbs. We took CDT and used it the other way around: to generate complex behaviors using simple basic actions as building blocks. Note that for the remainder of this section, whenever we refer to a Basic Action, its name will be typeset in bold face to differentiate it from the corresponding verb.
For example, the Open Door Complex Action (illustrated in figure (a)) can easily be decomposed into its Basic Actions:
Transfer self to the door.
Move arm towards knob.
Grasp knob.
Move hand to turn knob.
Displace door into the open position.
un-Grasp to let go of the knob.
These Complex Actions are managed using state machines through the HPTS++ architecture (cf. paragraph ). A very simplified model of the Open Door Complex Action can be seen in figure (b); a sketch of such a flat state machine follows.
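As a rough illustration (flat and sequential, whereas the HPTS++ version handles failures and concurrency), the Open Door Complex Action could be encoded as follows, one state per Basic Action:

enum class OpenDoorState {
    TransferToDoor, MoveArmToKnob, GraspKnob,
    MoveHandToTurnKnob, DisplaceDoor, UnGraspKnob, Done
};

// Advance to the next Basic Action once the current one has succeeded.
OpenDoorState nextState(OpenDoorState s)
{
    switch (s) {
        case OpenDoorState::TransferToDoor:     return OpenDoorState::MoveArmToKnob;
        case OpenDoorState::MoveArmToKnob:      return OpenDoorState::GraspKnob;
        case OpenDoorState::GraspKnob:          return OpenDoorState::MoveHandToTurnKnob;
        case OpenDoorState::MoveHandToTurnKnob: return OpenDoorState::DisplaceDoor;
        case OpenDoorState::DisplaceDoor:       return OpenDoorState::UnGraspKnob;
        default:                                return OpenDoorState::Done;
    }
}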
Interactive Surfaces are then used to model and describe the affordances of an object. They are generally parts of the object's geometry that act as hotspots when a certain action is to be accomplished, such as where to place the agent's hand when Grasping the doorknob, or where to stand in order to open that door (Figure ). Each Synoptic Object can have as many Interactive Surfaces as needed, depending on how much interaction it offers. Furthermore, many Interactive Surfaces of a single object can be associated with the same Basic Action, and the choice of the Interactive Surface is determined by the Complex Action the agent is doing. For example, in order to carry a suitcase, the agent has to Grab it by the handle, but in order to open the same suitcase, the agent has to Grab it by the lock.
Since Interactive Surfaces and STARFISH Actions are two completely independent concepts, we do not have to redefine both when creating a new Synoptic Object. When an object with a new shape but the same functionality has to be introduced into the simulation, it is only necessary to model its appropriate Interactive Surfaces, without having to modify the associated behaviors.
We realized part of a virtual training environment. The context of our project can be found in the sections ``industrial contracts'' and ``GVT''. First, we designed a general pattern to obtain interactive objects and a generic interaction process: the STORM model. Second, we focused on the definition of what has to be done in this virtual environment, using our interaction mechanism; this is the role of the LORA language. Finally, our last contribution is an authoring tool, based on STORM and LORA, which is able to generate scenarios by demonstration. Our objectives were to create generic designs, reusable objects and behaviors, and complex scenarios.
We wanted to obtain a generic treatment of interactions between behavioral objects. A classical problem of interaction between objects is the question of where to put the definition of the interaction. One solution is to put the definition of the interaction in the object itself; we can mention the work of Kallmann, with what he called ``smart objects'', or the work on Synoptic Objects developed by Badawi in our team. We thought that it would be interesting to have information about the interaction in the objects, but not to have the definition of the interaction fully described in a particular object. This definition has to be located somewhere between the objects, with parts of the definition distributed among the objects concerned by the interaction. These parts are named ``capacities''. The definition of the interaction located between the objects is named a ``relation''. The combination of capacities gives a set of interaction possibilities between objects endowed with interaction capacities. The relation is finally responsible for the interaction process, using the capacities of those objects. This work led to the filing of a patent. For example, let us consider only two objects, a screw and a nut, and a screwing interaction between them. According to our model, the screw will have a male-screwing capacity (length, thread pitch, the male state, etc.) and the nut will have a female-screwing capacity (thread pitch, a boolean which indicates if there is already a screw on it, the female state, etc.). We will also have a ``screwing relation'' (cf. figure ). This relation contains the definition of the screwing interaction: a male screw can be screwed into a female screw when the sizes match, when the thread pitches match, etc. This relation offers the possibility to screw the screw into the nut, and gives a certain state to this set of two objects: when it is screwed, we cannot manipulate the screw independently, and when we move the nut, it moves the screw. We can notice that the ``screwing relation'', the ``female-screwing capacity'' and the ``male-screwing capacity'' can be reused whenever we need to define screwing interactions between two objects. This example is sketched below.
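A hedged sketch of this example in C++ (hypothetical classes; the real STORM model is richer) shows the division of roles: the capacities carry the per-object data, while the relation holds the interaction logic and checks compatibility.

struct MaleScrewingCapacity {
    double length, threadPitch;
    bool screwed = false;        // the male state
};

struct FemaleScrewingCapacity {
    double threadPitch;
    bool occupied = false;       // a screw is already engaged
};

struct ScrewingRelation {
    // The relation, not the objects, decides if the interaction is possible.
    static bool compatible(const MaleScrewingCapacity& m,
                           const FemaleScrewingCapacity& f)
    {
        return !m.screwed && !f.occupied && m.threadPitch == f.threadPitch;
    }

    static void screwTogether(MaleScrewingCapacity& m, FemaleScrewingCapacity& f)
    {
        // Once screwed, the pair moves as a single rigid set.
        if (compatible(m, f)) { m.screwed = true; f.occupied = true; }
    }
};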
Based on the preceding model of interactions, we wanted the ability to express a complex sequence of interactions; our goal was to define the reference sequence of actions for a student in the training environment. We therefore created a new scenario language named LORA. This language has two main aspects: it can be written directly, as one expects of a language, but it is also a graphical language. It inherits from different graphical languages such as Grafcet and Statecharts, and also from languages like HPTS for the hierarchical state machine aspect. Our language consists of steps and links between steps. Each step describes an action, which can be: an atomic action (in the previous example, ``screwing''), a calculation action which uses internal variables, a conditional action (its evaluation leads to two possible exits), or a hierarchical action-step (a scenario can have steps which contain a sub-scenario; such steps describe a global action locally). This language is interpreted and dynamic. Files are loaded and represented in memory in a virtual machine for scenarios. The memory of this machine can be modified dynamically: the memory representation of the scenario is interpreted as the scene evolves, and we can edit the scenario at any time in the virtual machine. After actions are executed in the environment, the virtual machine interprets the next steps in its own memory. This work led to the filing of a patent.
The creation of a virtual training environment is a complex process, associated with high financial and temporal costs. The STORM and LORA models are totally generic; the main aim is the reusability of objects and behaviors. As a consequence, the GVT project is oriented towards the design of a platform for the creation of such virtual environments. Our first tool in this platform is an authoring tool for the creation of scenarios by demonstration, based on LORA and STORM. The scenario author performs actions in the 3D scene in GVT; since those interactions are represented with STORM, we can directly create LORA actions. With this tool, the author is now able to show the sequence of actions to perform, and the scenario is directly generated (cf. figure ). At the same time, the author can also work on the recorded sequence to adapt it to what he wants: a parallel sequence, a hierarchical action, a conditional branch, etc. This work led to the filing of a patent.
The GVT-Giat Virtual Training project (INRIA, Giat-Industries and ENIB) is a very challenging one. In this project, we introduce advanced VR technology in order to produce highly customizable VR applications dedicated to industrial training. GVT is based on OpenMASK, the VR platform of the Siames team (INRIA), and AReVi (the ENIB-Li2 VR platform). All of our developments are fully reusable in industrial training, and are not tied to any particular industrial equipment or procedure. We focus our activity on the following points:
design of truly reactive 3D objects with embedded behaviors: the STORM model.
design of a high-level specification language, LORA, to describe complex human activity (virtual activity in relation with the real activity).
design of an authoring tool, based on STORM and LORA, which creates scenarios by demonstration.
For these three points, more information can be found in the "new results" section. Our partner ENIB addresses the pedagogical point of view of the training. The main goal of this overall project is to produce a real application in order to validate all the new concepts we introduce. This application has already been shown at three events: Eurosatory, Perf-RV, and Laval-Virtual . The GVT project has led to the filing of 5 French patents (the latest one is ) and 1 European patent . More information on the product can be found in the "software" section.
Within this contract, which follows the VTHD contract, we are working to make our OpenMASK distributed kernel more tolerant of network problems. Thanks to this new kernel, we can enable a weak synchronization between the different processes involved in a collaborative simulation. We can then visualize the differences between the simulated objects located on different sites, making the end-users aware of the network problems. Our aim is to provide tools to evaluate the capabilities of the VTHD++ network for rapid rerouting and dynamic provisioning.
Here our aim is to use our collaborative OpenMASK kernel to create multi-site and multi-user 3D collaborative applications on top of the VTHD++ network.
The end-users of CVEs need to be aware of problems due to the network. These problems, such as latency or temporary breakdowns, can make their view of the world inconsistent with the views of the other end-users.
This is the reason why we offer the possibility to visualize the differences between referentials and mirrors: to make a user located on the same network node as a simulated object aware of the fact that the other users may perceive this object in a different way (at a different location, for example), because of the latency introduced when the system updates the mirrors of a referential. This kind of awareness should allow an end-user to perceive fluctuations of the network latency, and it should make it possible to validate the QoS obtained with the dynamic provisioning service, since it shows the instantaneous QoS provided by the network.
We also want to allow the use of CVEs with OpenMASK even during network breakdowns, thanks to our new modified kernel. We then want to make the users aware of that kind of problem, and inform them that collaborative work is still possible, within some limitations, while waiting for the network to come back up. This should make it possible to validate the correct behavior of the rapid rerouting service offered by VTHD++.
We have also improved the distributed OpenMASK kernel to take into account the particularities of collaboration between very distant sites linked by high-speed networks. Our software tools were first tested on a local network; the new distributed kernel and some of these new tools have already been deployed on VTHD++, between Rennes and Grenoble, allowing us to perform collaborative interactions with haptic feedback between two remote haptic devices: a Virtuose in Rennes and a Spidar in Grenoble. Such a demonstration was shown at the last VTHD++ meeting in Rennes, and a video of this demonstration has been used for the final VTHD++ review.
The aim of the ROBEA project "ECOVIA" was to study human perception and integration of visual and haptic information. Our results are intended to improve computer-human interaction in robotics during tele-operations or in virtual reality systems.
The ECOVIA project was planned to study human perception and visuo-haptic integration, and to identify potential robotic applications. ECOVIA was planned for two years and began in October 2003. It was a collaboration between five partners: two INRIA projects (i3D and SIAMES), CEA LIST (French Commission for Atomic Energy), Collège de France (LPPA), and University of Paris 5 (LCD).
This research was part of a complex project for the simulation and implementation of fulfilling Virtual Environments (VE) and for the application of sensory integration in a robotic context.
In this framework, it was of great interest to study the perception of integrated visual and haptic information. The comprehension and modelling of multimodal, and more specifically visuo-haptic, integration have long been debated. Ernst and Banks recently proposed (Nature, 2002) a statistical model of visuo-haptic integration based on the maximum-likelihood estimator (MLE) of the environmental property. In their model, each sensory channel contributes to the perception in inverse proportion to its variance (noise). Within ECOVIA, we first planned to test this model, and second to study and, if possible, model other aspects of visuo-haptic integration.
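For reference, the MLE model can be summarized as follows: if \hat{s}_v and \hat{s}_h are the unimodal visual and haptic estimates, with variances \sigma_v^2 and \sigma_h^2, the integrated percept weights each channel by its reliability (inverse variance), and its variance is lower than either unimodal variance:

\hat{s} = w_v \hat{s}_v + w_h \hat{s}_h, \qquad
w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_h^2}, \quad w_h = 1 - w_v,
\qquad
\sigma^2 = \frac{\sigma_v^2 \, \sigma_h^2}{\sigma_v^2 + \sigma_h^2}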
The research activity was divided into 5 research actions. Action 1 tested the correctness of the model proposed by Ernst and Banks in a different visuo-haptic environment. Action 2 focused on the influence of bimodal information (visual + haptic) on the elaboration and use of internal models. In the third action (Action 3), we studied the possibility for the modality weighting (as proposed by Ernst and Banks) to be related to other parameters than the noise of the signal. The fourth action (Action 4) proposed a physiological analysis of the visuo-haptic integration. Last, the fifth action (Action 5) provided the ECOVIA project with application perspectives. This action was first focused on the identification of potential applications of our fundamental results. Then we developed one particular application, in the field of robotics.
The main objective of the TCAN pre-project ``AMELIE'' was to carry out a pioneering research program integrating cognitive issues in the ergonomic design of Virtual Environments. The expected results should improve existing virtual reality devices and their associated interaction techniques.
The AMELIE pre-project was planned for one year and began in October 2004. It was a collaboration between four partners: SIAMES, CEA LIST (French Commission for Atomic Energy), Ecole des Mines de Paris (CAOR), and University of Paris 5 (LEI). The general objective of AMELIE was to carry out a pioneering research program integrating cognitive issues in the ergonomic design of Virtual Environments (VE). AMELIE focused on three main problems: (1) the immersion of the user and its impact on the cognitive processes involved when achieving a task in VR; (2) the influence of the means of interaction (notably the virtual display of the user, i.e. his/her ``avatar'') on the cognitive processes; (3) the building and use of an assistive model of the user, internal to the VE, taking into account the behaviour and the potential mistakes of the user. Potential applications of AMELIE mainly concerned the improvement of existing VR devices and of their associated interaction techniques (i.e. the software issues), in industrial simulations such as virtual verifications of assembly/disassembly operations.
The ROBEA (CNRS Interdisciplinary Research Program) project entitled "Bayesian Models for Motion Generation" is a partnership with the Cybermove and Evasion research projects of the Gravir Lab in Grenoble. The aim of this program is to study how Bayesian models can be used to teach an autonomous agent its behaviors, instead of specifying all the probability distributions by hand. This requires being able to measure, at each instant, the sensory and motor variables of the controlled character. The first year was mainly devoted to the integration of the Bayesian programming and interactive natural scenery modules developed by our partners inside OpenMASK. In the second year, we developed the urban application that will be used to study the learning by example of a pedestrian navigating in a virtual city. In this third year, we have studied how Bayesian programming can be used to learn a behavior by example, and we have carried out some navigation experiments.
Parameters used to learn the navigation task of a character consist of its speed and of the distance and orientation of the next cell boundary on the path to be followed. Each parameter is described by a set of possible discrete values. The speed vector magnitude is linearly discretized into five values [0..4] over the interval from 0 to 2 m.s^-1. The speed vector orientation is discretized into three values: -1 if the angle is greater than π/24, 1 if the angle is lower than -π/24, and 0 otherwise. The orientation of the target point is discretized into nine values [-4..4] based on the cubic function x^3 over the interval [-1..1], while its distance is discretized into five values [0..4] over the interval from 0 to 40, based on the quadratic function x^2 over the interval [0..1].
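The discretization can be sketched as follows (the rounding conventions and the exact use of the cubic and quadratic spacing functions are assumptions on our part, not the original code):

#include <algorithm>
#include <cmath>

const double PI = 3.14159265358979323846;

// Speed magnitude: five values [0..4], linear over [0, 2] m/s.
int discretizeSpeed(double v)
{
    return std::clamp(static_cast<int>(v / 2.0 * 5.0), 0, 4);
}

// Speed orientation: -1 above pi/24, 1 below -pi/24, 0 otherwise.
int discretizeSpeedOrientation(double angle)
{
    if (angle >  PI / 24.0) return -1;
    if (angle < -PI / 24.0) return  1;
    return 0;
}

// Target orientation: nine values [-4..4]; cubic spacing over [-1, 1]
// gives finer resolution for small (near straight-ahead) angles.
int discretizeTargetOrientation(double normalizedAngle)
{
    double x = std::clamp(normalizedAngle, -1.0, 1.0);
    return static_cast<int>(std::round(std::cbrt(x) * 4.0));
}

// Target distance: five values [0..4]; quadratic spacing over [0, 40] m
// gives finer resolution for short distances.
int discretizeTargetDistance(double d)
{
    double x = std::clamp(d / 40.0, 0.0, 1.0);
    return std::clamp(static_cast<int>(std::sqrt(x) * 5.0), 0, 4);
}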
Salome2, RNTL project: the Siames project is involved in this RNTL project with twenty other partners (Open Cascade, Esi Software, Cedrat, Principia R&D, Mensi, Goset, Eads CCR, Renault, Bureau Veritas, CEA/Den, EDF R&D, CEA/List, Cstb, Brgm, Ifp, L3S, Inpg/Leg, Armines, Lan Paris6, Lma Pau). The SALOME 2 project aims to improve the distribution of digital simulation software developed in France and considered a reference in its application domains. It provides this software with a generic, user-friendly, and efficient user interface which reduces the cost and delays of performing studies. It is a means of facilitating the linkage of codes, the reuse of portions of code, and the interoperability between simulation codes and CAD modeling software. Our interest in this project is the coupling of advanced Virtual Reality research with the CAD environment.
ODL OpenMASK: OpenMASK ( http://www.openmask.org) (Open Modular Animation and Simulation Kit) is an experimental VR software platform (see section ) still under development. In order to promote the Open Source deployment of this solution, we pay attention to the manpower dedicated to the evolution of the software platform. Funded by INRIA (under the INRIA ODL program), a software developer is in charge of the future evolutions of this software.
CNRS-ATIP project: locomotion of extinct hominids. This collaboration involves four laboratories:
The laboratory IRISA UMR 6074, Siames project
The laboratory of Physiology and Biomechanics of Muscular Exercise (Rennes 2 university)
The laboratory of Dynamic of human evolution, UPR 2174
The laboratory of anthropology of populations of the past, UMR 5809
The interest of this project lies in the pluridisciplinarity involved. We want to understand bipedal locomotion and to be able to model and simulate this locomotion for extinct hominids. The skeleton is in rather good condition, and its study allowed us to understand the functional surfaces involved in the gait motion. A three-dimensional model of this skeleton is now available. The study of the articulations led us to propose a model of the articulated chain, including links and joints. We have developed a motion retargeting method based on morphological and postural parameters. This helps develop an understanding of bipedal gait and provides paleoanthropologists with a tool for testing locomotion hypotheses.
OpenMASK (Open Modular Animation and Simulation Kit) is the federative platform for research developments in the Siames team. It was born from the need to integrate the different research activities of the SIAMES lab in a real-time and distributed Virtual Reality and Simulation environment. First known as GASP, it became OpenMASK in 2002. In order to reinforce collaboration with industrial actors, Vincent Quester-Séméon's work consists, first, of improving the already existing communication tools of the lab and creating new ones if need be. Second, he had to investigate how we could create an OpenMASK user club composed of different actors of the industrial world, and study what kind of economic model could be adopted to make this club beneficial.
Internet website improvement at www.openmask.org : a new Flash website has been created, replacing the old one. The goal was to make it more attractive than it was. The chosen technology was Flash MX, in order to make the site visually attractive and to be able to improve interaction with it within the next few weeks.
Presentation papers improvement: the already existing papers we could distribute about OpenMASK had to be improved in order to make them more attractive. These papers have been distributed at showcases such as SIGGRAPH, Los Angeles, summer 2005.
A presentation CD-ROM is currently being created with Yann Jehanneuf. It presents the OpenMASK lab from a technical and historical point of view, focusing mainly on the MKM and HPTS++ technologies. It will certainly contain a part about the future OpenMASK user club.
Vincent Quester-Séméon has contributed to the organization of three different showcases. He was also present at all three, presenting OpenMASK or MKM technologies. The goal of such showcases was obviously to try to create new contacts and, at the least, to show SIAMES lab technologies.
The first one was « Les Journées INRIA rencontres industries » in 2005 in Versailles, with Alain Chauffaut and Jean-Marie Houssay. There, we presented the Virtuose (force-feedback device) haptic demo during one day.
The second one was « Le Carrefour de l'image » on Reunion Island (in a town named Le Port) in March 2005. With Stéphane Donikian, we mainly presented MKM through different demos.
The last one was SIGGRAPH, in August 2005, in Los Angeles, USA. During this showcase we presented the OpenMASK and MKM technologies.
Training: Vincent Quester-Séméon's goal at IRISA was also to be able to provide training to anyone interested. To this end, all training tools have been improved or created: the tutorial has been completely revised and improved to create a more appropriate learning and teaching tool, and new examples have been added to it.
Since September we have been working harder on the project to create a user club. Step by step, the club idea is becoming sharper. First we met some people from the CCI (Chambre de Commerce et d'Industrie de Rennes) who gave us some ideas for creating this club; then we decided on the structure of such a project. Today, a proper presentation of this club is being written. The club subscription form will come soon.
Ahad Yari Rad is a PhD student in Fine Arts at the University of Rennes 2. His research subject concerns the aesthetics of Virtual Reality. He is performing the practical part of his PhD in the SIAMES project. He is currently developing a Virtual Museum of Contemporary Photography. This museum offers the user the ability to navigate and interact with pairs of photographs. Through his or her own body movements and right-arm gestures, the user is able to zoom in and out, to pan left and right, and to step-by-step erase one photograph to make the second one appear (cf. left picture of the figure ).
We are a member of the core group of INTUITION: Virtual Reality and Virtual Environments Applications for Future Workspaces, a Network of Excellence involving more than 68 European partners from 15 different countries. This project belongs to the joint call IST-NMP of the FP6 program.
INTUITION's major objective is to bring together leading experts and key actors across all major areas of VR understanding, development, testing and application in Europe, including industrial representatives and key research institutes and universities, in order to overcome fragmentation and promote VE establishment within product and process design. To this end, a number of activities will be carried out in order to establish a common view of the current status of VR technology, its open issues, and future trends.
Thus the INTUITION Network aims are:
Systematically acquire and cluster knowledge on VR concepts, methodologies and guidelines, to provide a thorough picture of the state of the art and provide a reference point for future project development;
Perform an initial review of existing and emerging VR systems and VE applications, and establish a framework of relevant problems and limitations to be overcome;
Identify user requirements and wishes and also new promising application fields for VR technologies.
Anatole Lécuyer is the leader of the "Haptic Interaction" Working Group (WG 2.10). The main objective of the Haptic Interaction Working Group is to federate strongly the major European actors in Haptics and to promote their joint activity. The objectives of the first 18 months of the WG were to provide feedback concerning the Haptic Interaction field and to exploit knowledge emanating from Cluster "Integrating and structuring activities" regarding the Haptic Interaction domain.
Key achievements:
First meeting of the WG following the WorldHaptics conference in Pisa on March 21st, 2005.
Representative of INTUITION Haptic WG at WorldHaptics Conference (March 05)
First version of the Terms of Reference of the Haptic WG (August 2005)
The Terms of Reference of the INTUITION Working Group on Haptic Interaction provide the description of the WG, its relevant area, its research needs and the research topics to be addressed. An international dimension is given to the research area by briefly describing related international activities relevant to the WG and by positioning Europe on a worldwide scale. In addition, the state of the art of technology in the field of haptic interaction is described.
Bruno Arnaldi is the leader of the "Towards a sustainable network" Work Package (WP 1.15). Its ultimate goal is the design and realisation of a permanent Organisation which will replace the NoE after the period of EC financial support. The scope of this Organisation, as currently perceived by the INTUITION members, is to take over the Network's successful ERA and to oversee a deep integration of the European competencies in VR. The major activity of this Work Package is the establishment and operation of the Network Business Office.
Key achievements:
First version of the deliverable D1_15.1 document sent to NMC (May 2005)
Final version of D1_15.1 (July 2005)
This report D1_15.1 presents the initial political and strategic orientation of INTUITION, including the sustainability of R&D activities on Virtual Reality in Europe. It is a living document, since the political and strategic orientation of INTUITION will be continuously updated based on the (scientific, commercial and market) developments of the VR field, both in Europe and worldwide.
The scope of this report is to introduce a clear vision of the INTUITION strategy concerning fundamental research and industrial development, and to identify the tools and structures needed to implement this strategy.
We have been the representative of the INTUITION Haptic, Sustainability, and Integration of Resources WGs at the General Assembly in Stuttgart (September 2005).
Intuition's consortium members are: ICCS, Alenia, CEA, CERTH, COAT-Basel, CRF, INRIA Rhône-Alpes, FhG-IAO, UNOT, USAL, VTEC, VTT, ART, ALA, ARMINES, BARCO, CSR SRL, CLARTE, CNRS (5 laboratories), CS, Dassault Aviation, EADS, ENIB, EPFL, EDAG, EDF, ESA, ICS-FORTH, FHW, FTRD, FhG-IPK, FhG-FIT, LABEIN, HUT, ICIDO, INRS, IDG, MPITuebingen, UTBv, NNC, ONDIM, OVIDIUS, PERCRO, PUE, RTT, SNCF, SpaceApps, ETH, TUM, TECNATOM, TILS, TVP - S.A., TGS Europe, TNO, UPM, UCY, UNIGE, UMA, UniPatras, UoS, Twente, IR-UVEG, UoW, Plzen.
Member of the programme committee of several conferences: Eurographics Symposium on Rendering 2005, Grafite 2005, Pacific Graphics 2005 (K. Bouatouch).
Member of the Editorial Board of the Visual Computer Journal (K. Bouatouch).
Responsible of a collaboration with the computer graphics group of the University of Central Florida (K. Bouatouch).
External examiner of a PhD, University of Bristol (K. Bouatouch).
S. Donikian has been a member this year of the program committees of the following international conferences: ACM AAMAS'05, ACM/Eurographics SCA'05, CASA'05, AFRIGRAPH'05, V-Crowds'05, SIBGRAPI'05. He is also a reviewer for international journals such as IEEE Transactions on Visualization and Computer Graphics and Graphical Models.
Member of the core group of the RNTL Salome2 project, to develop relations between Computer Aided Design and Virtual Reality: G. Dumont.
Participation in the thematic pluri-disciplinary network on micro-robotics, RTP number 44 of the CNRS STIC department: G. Dumont.
Participation in the specific action AS151 (RTP 7) of CNRS STIC: "Virtual Human: towards a very realistic real time synthetic human": G. Dumont.
Member of the national committee of RTP 15 of CNRS: "Interfaces and Virtual Reality": Anatole Lécuyer
Member of the national core group of AS 131 (RTP 15) of CNRS : "Haptic Interface and Haptic Information" : Anatole Lécuyer
Member of the CCSTIC (National committee for STIC): B. Arnaldi.
Member of ``Comité d'Orientation du RNTL'': B. Arnaldi.
Active member of AFIG (French Association for Computer Graphics) - treasurer of the association: S. Donikian.
Co-animator of the French working group on Animation and Simulation: S. Donikian.
Member of ``comité scientifique du Programme Interdisciplinaire de Recherche du CNRS, ROBEA'' (Robotique et Entités Artificielles): S. Donikian.
Member of ``comité d'évaluation du RIAM'': S. Donikian.
Expertise for national and European agencies of academic and industrial projects in the fields of computer graphics and computer games: S. Donikian.
Member of the national core group of the thematic pluri-disciplinary network Virtual Reality and Computer Graphics, RTP number 7 of CNRS STIC department: B. Arnaldi.
Member of the national core group of the thematic pluri-disciplinary network Virtual Reality and Interfaces, RTP number 15 of CNRS STIC department: A. Lécuyer.
Member of the national core group of AS 30 (RTP 7) of CNRS: "Virtual Reality and Cognition": B. Arnaldi, S. Donikian and T. Duval
Co-chair of the specific action AS 151 (RTP 7) of CNRS: "Virtual Human: towards a very realistic real time synthetic human": S. Donikian
Member of the Francophone Association about Human-Machine Interaction (AFIHM): Thierry Duval.
Thierry Duval is reviewer of the RIHM journal.
Thierry Duval has been reviewer for two RIAM projects.
Leader of the Working Group on ``Haptic Interaction'', within the INTUITION European Network of Excellence (A. Lécuyer).
Reviewer for IEEE Trans. on Neur. Sys. & Rehab. Eng., ACM/IEEE ISMAR 2003, EGVE 2004, MMVIS 2004, IEEE VR 2005, VRIC 2005, M2VIS'05 (A. Lécuyer).
Member of International Program Committee of EGVE 2004, MMVIS 2004, VRIC 2005, M2VIS'05 (A. Lécuyer).
Scientific expert for the French National Association for Research (ANR) and the Dutch Research Foundation (NWO) (A. Lécuyer).
Member of ACM SIGGRAPH and Eurographics: F. Multon
F. Multon was a reviewer for Gesture Workshop 2000 and 2005, whose best papers appeared in Lecture Notes in Computer Science.
F. Multon was co-organizer of the first ``Images de Synthèses et Sports'' symposium, in Cannes in 2005, which gathered scientists in computer graphics and sports as well as sports trainers.
Member of AFIG (French Association for Computer Graphics): F. Lamarche
Reviewer of Eurographics 2005 : F. Lamarche
Co-responsible for the Master of Computer Science (K. Bouatouch).
Responsible for a course of the Master of Computer Science, Ifsic: Ray Tracing and Volumetric Visualization (K. Bouatouch and B. Arnaldi).
Director of the DIIC degree, an engineering degree in computer science and communication (K. Bouatouch).
Master MN-RV (Master of Numerical Models and Virtual Reality, University of Maine, Laval, France): Physical models for virtual reality (G. Dumont).
Mechanical Agrégation course: mechanical science, plasticity, finite element method... ENS Cachan (G. Dumont).
Master MN-RV (Master of Numerical Models and Virtual Reality, University of Maine, Laval, France): Haptic Perception and Computer Haptics (A. Lécuyer).
Master of Computer Science, Ifsic: Real-Time Motion (B. Arnaldi).
Master of Computer Science, Calais: Computer Animation (S. Donikian).
Responsible for the Software Engineering Speciality (GL) of the Master of Computer Science CCI, Ifsic (T. Duval).
DIIC LSI & INC, Master of Computer Science GL and Mitic, Ifsic: Man-Machine Interfaces and Design of Interactive Applications (T. Duval).
Master of Computer Science CCI, Ifsic: Computer Graphics (T. Duval).
Master of Computer Science: Spatial Reasoning (S. Donikian).
Biomechanics, L1, L2, M1 STAPS (University Rennes 2): kinematics, kinetics, dynamics, human performance, biomechanics (F. Multon).
Computer science in motion analysis, M1 STAPS (University Rennes 2): algorithmics, computer animation, virtual reality, biomechanics, motion capture (F. Multon).
Analysis, modelling and simulation of human motion, M2 research STAPS (University Rennes 2): biomechanics, motion capture, numerical methods, simulation, dynamics (F. Multon).
Motion capture of synthetic human-like figures, M2 research Computer Science (joint course, University of South Brittany UBS and University of West Brittany UBO): motion capture, computer animation, motion analysis, retargeting, constraint-based animation (F. Multon).
Master MITIC (IFSIC): Modelling, Animation and Rendering (S. Donikian and F. Lamarche).
DIIC (IFSIC): Algorithmics and Programming (F. Lamarche).
DIIC INC (IFSIC): Animation (F. Lamarche).
DIIC INC (IFSIC): Image Synthesis (K. Bouatouch and F. Lamarche).
Licence 2 (IFSIC): Reactive system design (F. Lamarche).