The MIRAGES project's research activity is oriented towards the development of model-based approaches with feedback. This is the way MIRAGES has chosen to develop Computer Vision/Computer Graphics collaboration techniques since 1984, when this approach was adopted by the members of the project. Its activity concentrates on the manipulation of 3D objects in image sequences, with an application domain oriented towards new services (related to those 3D objects) that will appear in future communication networks, as well as new products for object customization. Our research activity is carried out in two directions:

We are interested in the determination of properties such as structure, movement and photometry of 3D objects (especially human beings) in image sequences. Our approach differs from traditional methods in that we mainly use feedback model-based methods to guide the analysis. The main problems we are handling are:

3D object tracking in image sequences when the geometry of the object is known but its movement has to be determined. Rigid, articulated and deformable objects are all considered, particularly human body and face tracking.

Interactive, semi-automatic and automatic camera calibration: here, we use 3D object models (very often generic models) as calibration tools. Calibration is necessary for 3D model specification.

Automatic and semi-automatic object model specification: starting from a generic model of an object (for example a human body or a face) and an image sequence, this task consists in constructing the specific model which will be used for tracking in image sequences (see above).

Inverse rendering (the Computer Graphics denomination corresponding to photometric analysis in Computer Vision): we have at our disposal input images of a scene and a 3D geometric model of the scene. The aim is to compute the photometry of the part of the scene visible in those images in such a way that the digital synthesis of the scene model corresponds faithfully to the input images. The main application of this specific research is the creation of advanced tools for audio-visual productions (advertising, films, ...).

Collaboration is carried out with AXIATEC (a French company interested in product customization) on face modelling and animation. The Golf-STREAM contract (RIAM funding) started in June 2002 in collaboration with the French production company Symah Vision. The task was to study complete 3D human body tracking, and more specifically the tracking of professional golfers from image sequences, in order to allow them to improve their technique and also to enrich journalists' comments on television broadcasts of professional golf tournaments. This task was not entirely accomplished during the period of the contract, but we plan to continue this research with our own funding. We hope to extend this work to other types of applications such as video-surveillance, tele-conferencing and video games, while mixing real and synthetic images.

The second research direction is the creation and deformation of soft objects, particularly nonlinear and hysteretic ones. Presently, our research activity is limited to the 3D simulation of garments. In this research area, we are mainly studying:

The control of the mechanical properties of soft 2D structures (like textile materials) modelled by mass/spring systems.

The automatic construction of garments from 2D patterns around a human body modelled as an articulated mechanical system.

Potential applications are the realistic synthesis of 3D garments on 3D numerical mannequins, aimed at the future e-commerce of garments for the textile industry. We put great effort into building a strong consortium in order to submit a proposal to the ANR, and finally obtained substantial funding for this application.

A 3D model-based approach is applied to human body tracking for production and post-production applications.

We are currently developing a 3D tracking system using video cameras, able to perform outdoor motion capture without the use of markers. A generic 3D human body is used as input; body adjustments are then made using the images. After positioning the puppet on the first frame, the texture is learnt and a tracker has to retrieve the position of the puppet automatically in the rest of the image sequence.

The 3D reconstruction of a realistic face model from a small set of images is one of the greatest challenges in the field of audio-visual post-production and special effects generation. Previously manual, tracking is becoming more and more automated. We are working on faces, which are one of the most difficult entities to track because of their complex deformations. In order to perform high quality tracking, we developed a technique to model a 3D mesh of a face from a small set of images, which may be taken from a video sequence. This model, first constructed in a neutral expression, is then animated for tracking purposes.

Usual geometric modeling is not sufficient to simulate cloth: an in-depth knowledge of fabric behaviour is required. Relations between stress and strain have to be dealt with that are not defined by ordinary differential equations. Since analytical solutions are not available, numerical models are used; we have chosen mass/spring systems.

These models have to be fed with correct parameters. The Kawabata Evaluation System is the de facto industry standard for fabric measurement (limited to warp/weft textile materials): it applies typical deformations to cloth samples and evaluates the resulting stress, which provides digital data. Our textile models are now able to simulate precisely the Kawabata curves of real textiles for tension, shear and bending.
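As an illustration of how mass/spring parameters can be matched to measured curves, here is a minimal sketch (the function names and the linear least-squares fit are our own; the real system reproduces the full nonlinear and hysteretic Kawabata curves, not a single linear spring):

```python
import math

def spring_force(p_a, p_b, rest_len, stiffness):
    """Hooke force exerted on mass A by the spring linking A and B."""
    dx = [b - a for a, b in zip(p_a, p_b)]
    length = math.sqrt(sum(c * c for c in dx))
    # Force magnitude proportional to elongation, directed from A toward B.
    scale = stiffness * (length - rest_len) / length
    return [scale * c for c in dx]

def fit_stiffness(strains, stresses):
    """Least-squares fit of a linear spring constant to measured
    (strain, stress) samples, e.g. taken from a Kawabata tension curve."""
    num = sum(e * s for e, s in zip(strains, stresses))
    den = sum(e * e for e in strains)
    return num / den
```

A fitted stiffness can then be fed back into `spring_force` so that the simulated strip reproduces the measured tension samples.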

Once a numerical model of cloth is stated, we have to produce complete garments. The technique we use comes directly from the textile industry: we use the same 2D patterns as those actually manipulated by garment designers, which incorporate sewing information. We have to construct the garment automatically from this data. It appears to be a very complex problem, not solved in the literature (and indeed almost never tackled!). Our goal is then to compute the evolution of this complete mass/spring system with respect to time; it relies on the integration of the fundamental equation of dynamics. Several families of algorithms exist, whose adequacy to the underlying problem varies both in terms of stability and computation time. Checking the validity of our approach is also a very strong challenge that we have begun to undertake.
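The families of integration algorithms mentioned above differ mainly in how they advance the fundamental equation of dynamics. As a minimal illustration (our own sketch, not the project's code), a semi-implicit Euler step updates the velocity first and then the position with the new velocity, which is more stable than plain explicit Euler at the same cost:

```python
def step_semi_implicit(pos, vel, force, mass, dt):
    """One semi-implicit Euler step: update velocity first, then
    position with the *new* velocity (better stability than explicit)."""
    vel = vel + dt * force / mass
    pos = pos + dt * vel
    return pos, vel

def simulate_hanging_mass(steps=2000, dt=0.001):
    # A 1 kg mass hanging from a damped spring (k = 100 N/m, rest 1 m)
    # under gravity; it should settle near rest + m*g/k = 1.0981 m.
    k, rest, m, g, damping = 100.0, 1.0, 1.0, 9.81, 5.0
    pos, vel = rest, 0.0
    for _ in range(steps):
        force = -k * (pos - rest) - damping * vel + m * g
        pos, vel = step_semi_implicit(pos, vel, force, m, dt)
    return pos
```

Implicit schemes allow much larger time steps for stiff cloth springs, at the price of solving a linear system per step; this trade-off is exactly the stability versus computation time issue mentioned above.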

Handling contact phenomena between pieces of cloth and the characters that wear them is the other big challenge: large amounts of data have to be processed to accurately detect possible collisions, and a realistic model has to be used once an actual collision has been detected.

Emilion is the cloth simulation system first developed by J. Denise during his Ph.D. It was then extended by D. Reversat to allow draping on animated characters (see ). An automatic prepositioning technique by T. Le Thanh and A. Gagalowicz (see ) was recently added. Thanks to these new results, garment simulation is becoming truly affordable.

This software computes the evolution over time of garments represented by mass-spring systems. Both explicit and implicit integration schemes are available. Hysteretic behavior is taken into account in order to reach a high degree of realism. Contact and inverse kinematics are introduced through the constraint handling method initiated by Baraff. An attractive feature is the spatial partitioning scheme that makes collision detection a low-cost task, even in large scenes (less than 15% of computation time in typical simulations).
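The spatial partitioning idea behind the low-cost collision detection can be sketched with a uniform-grid broad phase (illustrative only; Emilion's actual partitioning scheme may differ). Each particle is hashed to a grid cell, and candidate pairs are only collected from the same or neighbouring cells:

```python
from collections import defaultdict

def broad_phase_pairs(points, cell_size):
    """Uniform-grid broad phase: hash every point to its cell and only
    report pairs sharing a cell or lying in neighbouring cells."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(i)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
    return pairs
```

Only the pairs surviving this broad phase need the expensive exact triangle-level collision test, which is why the cost stays low even in large scenes.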

Emilion is written in C++ and makes heavy use of Generic Programming concepts (whose most famous incarnation is the STL, the Standard Template Library). Tcl was chosen as its scripting language to ease both feeding it with external data and simulating Kawabata experiments.

We are targeting the problem of tracking in 3D a human body captured with several video cameras, in order to reduce the cost of post-production by making it *almost* automatic. The first problem is to produce a realistic 3D model of the person to track. The idea is to propose a reference position, called a stanza position, to be performed off-line by the person to track. We take pictures of the person in that position and then construct the person's 3D shape from that set of images. We then automatically incorporate a 3D skeleton inside this reference shape.

We continued the software development of the GolfStream project, which consists in precise and accurate markerless tracking of a golf swing movement. We investigated possible improvements of the former results, in order to start from a better basis. To do so, we first focused on the 3D generic human model used by the team. Using the 3D modeling software program available in MIRAGES, we decided to improve the generic model by manipulating it by hand. We managed to give a more precise and more regular shape to the envelope of the body, without the artefacts present in the previous model. Figure shows the former generic model in a) and the model we came up with in b) and c). We also improved the position of the 3D skeleton inside this envelope. As both were validated medically, this more accurate and somewhat richer 3D model (more polygons, and with the knowledge of the 3D skeleton joint positions) reduced some "border effects" we had with the previous one. Once again, automatically positioned 3D skeletons are visible in figure .

However, the complexity of the new generic model led to other issues, particularly in computing time. As most of the algorithms have O(N²) complexity, computing times started to become too long, even for the first steps of the global process. We then decided to improve some algorithms by pooling the latest results from VIP3D, another project led in MIRAGES, dealing with face tracking. This common work allowed us to find good improvements from one project to the other and reciprocally, with, for instance, a reduction from 2 minutes to 2 seconds for some computations in the GolfStream software program.

Our target is to construct realistic animation from given video frame sequences. As input, we have captured video of a golf action from six viewpoints (6 cameras); as output, we should produce a 3D animation which mimics exactly what is shown in the video. The reconstruction of a 3D specific human model is obtained by deforming the mesh of the 3D generic model so that it projects correctly on the images of the specific player in the stanza position.

The second problem is to bring the specific 3D human model from the stanza position to the initial position of the animation to track (here, the swing initialisation of a golfer). We decided to study manually which degrees of freedom to act on in the 3D skeleton, as well as which type of skinning function would allow us to move the golfer from the stanza position to the swing initialisation one.

An effective method is presented in "Sweep-based Human Deformation" [Dae-Eun Hyun et al.], which constructs a series of ellipses along the human skeleton as control silhouettes of a human mesh. Current human animation research mainly uses mesh-weight algorithms and control-silhouette-based algorithms, along with a few other approaches. Control-silhouette-based algorithms are more robust and flexible than the others, so we follow this basic idea to implement our mesh deformation method. The ellipse silhouette is pre-designed in Maya and exported. We then attach the vertices of the mesh to the ellipse silhouette. During animation, when we input skeleton motion information, the ellipse silhouette bound to the skeleton undergoes the corresponding transformation and then controls the mesh deformation.

We also pre-compute projected coordinates of vertices, saved as texture coordinates. However, we also need to consider texture merging problems, because no single frame image can provide a full texture mapping. Six frame images have to be composed in order to capture the relevant information. This introduces lighting problems between the different images, which still remain to be solved.

We noticed that directly attaching ellipse silhouettes to the skeleton brings many control problems and interpolation errors, so we introduced Hermite spline control. Generally speaking, the closer the ellipse silhouette is to the mesh, the better the final animation effect will be. The centres of the ellipses therefore cannot always be attached to the skeleton but must have an offset. We match each segment plane of the human mesh, perpendicular to the skeleton, with the ellipses of closest size. We then obtain a set of ellipse centres for the sampled segment planes. Based on the Hermite algorithm, we constructed approximating curves, instead of the skeleton, to provide the ellipse centre information for the control of the silhouette. Figure shows the result of the 3D reconstruction of a specific golfer in the stanza position, projected on the views of the 6 cameras. Figure shows the result of the computation of the texture coordinates extracted from each of the 6 views, which allows us to perform the inverse mapping from each of the 6 images onto the 3D model. Figure compares the reprojection of the textured 3D mannequin on each of the 6 camera views with the real images of the golfer. The 3D model is displaced by hand to allow a better comparison with the real image; of course, this 3D positioning is only approximately correct, as it is performed by hand. It is nevertheless sufficient to check the quality of the 3D reconstruction and of the inverse mapping. This 3D textured model is then transformed to the swing initialisation position, which allows us to check both the set of degrees of freedom activated and the skinning function. The check can be evaluated by comparing the real golfer images with the ones reconstructed using the degrees of freedom and the skinning function. The synthetic golfer and the real one are, once again, positioned side by side for an easier comparison (see Figure ).
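The curves guiding the ellipse centres follow the standard cubic Hermite basis; a minimal sketch of one spline segment (our own helper, interpolating between two centre points with given tangents):

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between points p0 and p1 with
    tangents m0 and m1, for a parameter t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # basis weight for p0
    h10 = t**3 - 2 * t**2 + t       # basis weight for m0
    h01 = -2 * t**3 + 3 * t**2      # basis weight for p1
    h11 = t**3 - t**2               # basis weight for m1
    return [h00 * a + h10 * c + h01 * b + h11 * d
            for a, b, c, d in zip(p0, p1, m0, m1)]
```

Chaining such segments through the fitted ellipse centres yields a smooth curve that replaces the skeleton as the support of the control silhouettes.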

A feasibility study of semi-automatic pre-positioning for human motion tracking for post-production was performed. The pre-positioning operation is the second step in the human motion tracking algorithm produced by the MIRAGES project, placed between the 3D model creation and the model-based tracking. It is performed only once, for the first frames of the video streams captured by a set of synchronized video cameras whose intrinsic and extrinsic parameters have been estimated beforehand. The operation is usually done by a human operator with a 3D modeling tool and is usually very time consuming. This study aims at making it easier, although it still needs some assistance from operators, who require no particular skills.

The implemented algorithm is based on silhouettes in the 2D pictures and on the 3D surface and 3D wire-frame skeleton models that have already been customized for the target person; it outputs pre-positioned 3D models adjusted to all of the first-frame pictures from the point of view of silhouettes, because our target application is rotoscopy. The implemented algorithm can be summarized as follows. First, operators mark possible regions of each joint and silhouettes of each body part on the pictures (see Figure ). Next, the program calculates the possible space of each joint as a visual hull of the marked regions. Then, a huge number of possible parameter sets (joint positions and rotations around links) are generated using quasi-random sequences, under the conditions that they are consistent with the marked regions and that every bone length is preserved. They are evaluated in a branch-and-bound framework by the difference between reprojected contours and the marked silhouettes, and finally the best one is chosen among a certain number of parameter sets. The adjustment is performed hierarchically from the leaves to the root, that is, from the joints belonging to the arms, legs and head to the pelvis.
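The quasi-random generation of candidate parameter sets can be illustrated with a Halton sequence (a common low-discrepancy choice; the report does not specify which sequence is used, so the choice and the helper names here are assumptions):

```python
def halton(index, base):
    """The index-th element of the Halton low-discrepancy sequence."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def candidate_poses(n, bounds):
    """Generate n quasi-random parameter sets inside per-axis bounds,
    e.g. bounds = [(lo, hi), ...] for each joint rotation angle."""
    primes = [2, 3, 5, 7, 11, 13][:len(bounds)]
    return [[lo + halton(i + 1, p) * (hi - lo)
             for (lo, hi), p in zip(bounds, primes)]
            for i in range(n)]
```

Unlike pseudo-random sampling, the low-discrepancy points cover the feasible joint space evenly, which suits the branch-and-bound evaluation described above.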

The point of this algorithm is that operators do not need to mark a possible region or a silhouette point if they are not confident, and they do not need to care about consistency between the pictures when they mark those items either. The order of adjustment, from leaves to the root, is another feature of this algorithm; because of it, operators do not need to mark any possible regions for the joints belonging to the spine, which are difficult to find from appearances, at the cost of concentrating the adjustment error on the pelvis joint (see Fig. ).

Many experiments led to the following results. (1) The quality of the results strongly depends on the quality of the customized models. (2) If there is enough computation time, silhouettes can be adjusted well, but it is difficult to find the correct rotation angles without additional restrictions. (3) Total computation time is still long, although GPU acceleration is employed in some parts of the implementation. As a result, this algorithm will, for now, be more useful for roughly adjusting models with fewer marks and fewer parameter sets before precise manual operations than for making finishing touches.

A model-based analysis-by-synthesis framework for human motion tracking was proposed (see Fig. ). Prior to running this framework, a textured 3D puppet model is built; it is then used for tracking the human motion. At present, rigid skin deformation is used. Our method does not require image background segmentation or skeletonisation, which are very sensitive to noise.

Once the 3D puppet is pre-positioned, the computing system synthesizes the necessary motion by manipulating the skeleton of the model puppet with the necessary degrees of freedom. The skeleton in turn drives the skin deformation. Finally, the deformation synthesizes an image that is compared with the real input image. The differences between the synthesized and the real image are fed back, and the process iterates until it finds the best result (minimum error). This human movement tracking framework uses a simulated annealing algorithm, with a rigid skin deformation function. For testing, we concentrate on tracking the motion of the arms, since they are highly articulated and appear relatively small in the images.
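The minimization driving this analysis-by-synthesis loop is simulated annealing. A generic sketch of such a minimizer (our own illustrative version, applied here to an arbitrary error function in place of the synthesized-versus-real image difference):

```python
import math, random

def simulated_annealing(error, initial, step=0.1, t0=1.0, cooling=0.995,
                        iterations=3000, seed=0):
    """Minimise error(params): perturb the current parameters, always
    accept improvements, and accept degradations with probability
    exp(-delta / T), where T decreases geometrically."""
    rng = random.Random(seed)
    current = list(initial)
    cur_err = error(current)
    best, best_err = list(current), cur_err
    temp = t0
    for _ in range(iterations):
        cand = [p + rng.gauss(0.0, step) for p in current]
        e = error(cand)
        if e < cur_err or rng.random() < math.exp(-(e - cur_err) / temp):
            current, cur_err = cand, e
            if e < best_err:
                best, best_err = list(cand), e
        temp *= cooling
    return best, best_err
```

In the tracking framework, the parameters would be the activated degrees of freedom of the skeleton and the error the pixel difference between the synthesized and real images; the random restarts tolerated at high temperature help escape local minima of that highly non-convex error.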

We have tested our framework under different scenarios. Here, we show one of our results in a cluttered environment with a changing background. The "white" and "grey" labels show the superimposed tracking of the upper arm and forearm (see Figure ). The top 5 images show the results from the camera viewpoints that we used for tracking; the bottom 5 images show the results on another camera from the same video scene, used for verification and visualization only.

The system is implemented in C++. The texture synthesis is done by rendering with the WildMagic 3D rendering engine, which utilizes GPU hardware acceleration through OpenGL.

Human faces seen as deformable objects are, of course, an important research domain of MIRAGES. For our post-production project, we had to implement software for 3D face modeling. This job was performed by a former PhD student, but implemented as a plugin for the Maya software. Because of the complexity of coding under Maya and the requirements of our new industrial partner (AXIATEC), a new stand-alone implementation was necessary to allow the evolution and improvement of our application. After the software had been entirely reimplemented as a stand-alone OpenGL application with a Qt interface, the algorithm underwent significant changes which improved its performance and precision, and it was tested on several sets of input images. The general idea of the algorithm is the following:

As input, the algorithm gets a set of images of a particular person taken from different viewpoints, and a generic face model which will then be morphologically adapted into the model of this particular person.

At the very beginning, the user creates, for each image, a camera which is then manually positioned in front of the image plane so that the projection of the generic model approximately matches the person's face in this image (see Figure ).

When all the cameras are defined, the user must manually position a set of characteristic points on the images (see Figure ) and run the adaptation and calibration loop, during which the algorithm repeatedly reconstructs the approximate 3D coordinates of the characteristic points based on the current positions of the cameras. Then, using the 2D-3D correspondences of the characteristic points, the algorithm performs an automatic camera calibration. The goal of this operation is to match the projections of the characteristic points of the generic model to the positions defined by the user. Usually, a good result is achieved after 8-10 iterations. At the end, the final deformation of the characteristic points is interpolated to the rest of the vertices of the generic 3D mesh using RBF functions. The result of this operation is shown in Figure .
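The RBF interpolation used to propagate the deformation from the characteristic points to the whole mesh can be sketched as follows (a toy Gaussian-RBF interpolator with our own naive linear solver; the actual kernel and solver used in the software may differ):

```python
import math

def _phi(a, b, sigma):
    """Gaussian radial basis function between two points."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))

def rbf_weights(centers, values, sigma=1.0):
    """Solve for weights so the interpolant passes exactly through the
    (center, value) pairs; plain Gauss elimination with pivoting."""
    n = len(centers)
    A = [[_phi(ci, cj, sigma) for cj in centers] + [values[i]]
         for i, ci in enumerate(centers)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(x, centers, weights, sigma=1.0):
    """Evaluate the interpolant at an arbitrary point x."""
    return sum(w * _phi(x, c, sigma) for w, c in zip(weights, centers))
```

In the face-modeling setting, one such interpolant per coordinate maps each mesh vertex to its displaced position, so the sparse characteristic-point deformation smoothly bends the whole generic mesh.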

Since the number of characteristic points is limited, some regions of the face can still differ from the reference model (for example the chin or the back of the head). To correct this, we perform an adaptation of the silhouettes of the 3D model. As input, the adaptation algorithm takes a set of manually-drawn Bezier curves which correspond to the 2D silhouette contour of the face in the images (see Figure ), together with the calculated projection of the 3D silhouette contour of the model on each of the cameras, and it adapts the projections of these silhouette contours to the Bezier curves. The obtained deformation is then reconstructed in 3D and interpolated to the rest of the 3D mesh. This whole process is repeated several times until all the curves are adapted. The result of the complete operation is shown in Figure .

The improvements made to this algorithm were the following:

Significant reduction of the computational time for building a silhouette contour on a 3D model (from 60 seconds to 2 seconds).

Combination of contour adaptation and re-calibration of the cameras to achieve better precision.

Addition of other parts of the face (eyes, teeth) as well as modeling of the hair volume, represented as an independent 3D mesh.

Addition of the possibility to change facial expressions interactively by simply acting on 11 to 19 sliders, depending on the complexity of the facial expression domain investigated (see Figure ).

Correction of the set of characteristic points.

Implementation of stand-alone software for calculating the error between two 3D models (needed for verifying the results of the modeling).

Low-band filtering of the model's surface.

Correction of the error occurring when characteristic points are reconstructed from anti-parallel views.

Addition of the possibility of face reconstruction using only one photo.

Improvement of the contour adaptation algorithm by decomposing the 3D contour based on vertex adjacency and by eliminating small deformations.

Having first developed the 3D face tracking algorithm as a plugin for the Maya software, we have also implemented it as a stand-alone application using only OpenGL and a Qt interface. Our tracking algorithm is based on the use of a 3D model of the person and on an analysis/synthesis method developed at MIRAGES for automatically detecting the location and facial expression of this person in a monocular image sequence. In the initialization step, the 3D model is manually positioned on the first image of the sequence, with the proper facial expression, in order to fit the face in the first image. The 3D model is textured from this first image by inverse mapping. After that, the tracking process is launched for every image of the sequence. It is based on the minimization of the error between the projected textured model and the face in the image. For the moment, the error is minimized by a simulated annealing method, which is rather slow. Another method, particle swarm optimization, has been investigated, but it showed less efficiency and precision. Our next goal is to investigate the use of a gradient descent algorithm.

Garment simulation may seem to be a completely different research domain that has nothing to do with the other activities of MIRAGES. In fact, they are intimately related. As said previously, human body modeling and tracking are the core of our research activity. Usually, the people we have to track are dressed, so that what we have to track is more a garment than a human body itself. As we use model-based approaches, we need to have a garment model at our disposal. As this is a very complex domain, it became a research domain in itself within MIRAGES. Research on cloth animation and computer-generated garments is a field of major interest in both the Computer Graphics and Textile research communities. The most common approach is to model cloth objects using mass-spring systems and to solve Newton's equation governing the system motion using implicit integration methods. Unfortunately, the results usually lack realism. So the main effort carried out was to improve realism, which is necessary both for use in the textile industry and for precise 3D human body tracking.

We want to develop a virtual dressing system which can be used easily by an end user, who can try on garments using some very simple actions. Our previous work gave a fast 3D reconstruction of 2D garments around a 3D mannequin. Even though the reconstruction was performed in a few seconds, the garments were highly deformed. We developed a multi-resolution minimization method to significantly reduce the deformation of the garments.

The shape of the virtual garment is reconstructed from a set of 2D patterns; these patterns come from a CAD system or are created by a designer. In general, a mass/spring system is used to model the mechanical behavior of cloth, and any virtual cloth modeled by a mass/spring system can be handled by our method. The input to the technique presented here is simply the output of our previous work. Such an input is visualized on the left part of figure : the garment is already positioned around the body, but its energy is very high (the garment is highly deformed compared to the final result - see Figure ). Therefore, a long computation time is required to obtain an acceptable result (a stable position of the garment).

The discrete resolution of the cloth used in our work varies from 50mm to 5mm. These resolutions are the ones used in most current physically-based simulation systems. The higher the resolution, the better the garment is modeled, but the computing time grows exponentially.

The garment is first discretized at various resolutions (from lowest to highest). Once the lowest-resolution particle system has been minimized using the conjugate gradient method, in a very short time, the next higher resolution is reconstructed from the previous one and minimized in turn. Finally, the highest resolution is minimized within a reasonable time.
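The coarse-to-fine loop can be sketched as follows (an illustrative stand-in: the real system minimizes a mass/spring energy with conjugate gradient, whereas this toy version relaxes a 1-D particle chain toward a target curve, refining the chain between levels):

```python
def refine(coarse):
    """Upsample a 1-D particle chain by inserting midpoints."""
    fine = []
    for a, b in zip(coarse, coarse[1:]):
        fine.extend([a, (a + b) / 2.0])
    fine.append(coarse[-1])
    return fine

def relax(positions, target, iterations=200, rate=0.3):
    """Stand-in for the per-level minimisation: damped relaxation of a
    smoothness-plus-attachment energy toward target(t), t in [0, 1]."""
    pos = list(positions)
    n = len(pos) - 1
    for _ in range(iterations):
        for i in range(1, n):
            smooth = pos[i - 1] + pos[i + 1] - 2 * pos[i]
            pos[i] += rate * (smooth + (target(i / n) - pos[i]))
        pos[0] += rate * (target(0.0) - pos[0])   # endpoints track the target
        pos[n] += rate * (target(1.0) - pos[n])
    return pos

def multires_minimize(levels=3, base=3, target=lambda t: t * t):
    """Minimise coarse first, refine, and re-minimise at each level."""
    pos = [0.0] * base
    for _ in range(levels):
        pos = relax(pos, target)
        pos = refine(pos)
    return relax(pos, target)
```

The payoff mirrors the garment case: the expensive fine level starts from an already near-minimal configuration inherited from the coarse levels, so it needs far fewer iterations than a cold start.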

Figure gives some computing times for our energy minimization algorithm on an INTEL 2.4 GHz workstation. Compared to the results available from our previous work (see figure ), the new optimization technique we designed reduces the simulation time drastically. For example, in the case of figure , the garment simulation formerly took roughly 50,000 seconds to reach convergence. With the new technique, the same simulation result (compare the right of figure with the right of figure ) is obtained after only roughly 5,000 seconds. On average, computing time was reduced by a factor of 10.

This is due to the fact that the garment prepositioning is now of a very different nature. Before, automatic prepositioning placed the garment around the body with very high deformations (compare figure left and right) that our simulation took time to dissipate. Now, the automatic prepositioning brings the garment around the body almost without deformation (compare the left and right of figure ), purely geometrically, and the simulation is only used for fine tuning, to bring the garment into contact with the mannequin.

The multi-resolution algorithm gives better minimization results. For these experiments, we use a human model with approximately 15,000 vertices. Our algorithm is implemented in C++ and OpenGL.

Our system controls the Kawabata characteristics for tension, shear and bending; this is required to model the mechanical behavior of the garment faithfully.

The main idea was to increase the speed while still obtaining results similar to the single-processor version. The following tasks had to be tackled:

Find a suitable parallel environment.

As the parallel version of the simulator is supposed to run both on a shared memory architecture and on a distributed memory architecture, such as the cluster at INRIA Rocquencourt, an MPI environment was chosen (mpich2).

Partition the simulation tasks

In order to increase the speed, the simulation has to be segmented into sub-parts which can be executed in parallel.

Partition of the mesh

The garment simulator is based on a triangulated mesh and a mass-spring system. This mesh had to be partitioned so that every process could have its own sub-part to simulate. The partitioning was extremely important, as we had to take care of the complex adjacency structure given by the mass-spring system. We therefore use the ParMetis library.

Partition the time-step calculation

This was the main part of our work. The most important steps that had to be taken here were:

Exchange all necessary data (especially mass positions and spring information at the partition borders) between the processes.

Rewrite the integrator (based on a modified CG method) for parallel computation.

Rewrite the load and store functions to capture and reload every simulation step if desired

Partition of the collision handling

We also tried to parallelize the collision handling for the simulator, but without much success. So we decided to detect and resolve collisions sequentially, after the parallel computation of each time step.
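The border data exchanged between processes each step can be derived directly from the springs that cross partitions. A small sketch (our own helper, not the simulator's code) computes, for each process, the set of non-local "ghost" masses it must receive before evaluating its spring forces:

```python
def ghost_masses(springs, partition):
    """For each process, the set of non-local masses it must receive
    each time step because a spring crosses the partition border.

    springs: list of (mass_a, mass_b) index pairs
    partition: dict mapping mass index -> owning process id
    """
    ghosts = {}
    for a, b in springs:
        pa, pb = partition[a], partition[b]
        if pa != pb:
            # Each side needs the other side's mass position.
            ghosts.setdefault(pa, set()).add(b)
            ghosts.setdefault(pb, set()).add(a)
    return ghosts
```

Keeping these ghost sets small is exactly why the quality of the ParMetis partitioning matters: the fewer springs cut by the partition, the less data each MPI exchange has to move.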

We were partially successful in developing a parallel version of the garment simulator. The parallel version was able to simulate the same scene of a falling piece of cloth as the sequential version, on a user-defined number of processors, with a clear increase in speed (approximately 3.5 times faster with 4 processors). Unfortunately, there were still problems in the parallel CG unit, and the simulator crashed on several occasions and scenes where the sequential version completed without error.

The goal of this work is to set up a protocol making it possible to reconstruct the real 3D shape of a fabric and to compare it with the virtual shape given by the simulator. The comparison can be done visually and by using a metric.

Validating the results provided by the simulator against data from a real experiment first requires choosing the experiment to carry out. We chose to use a digital camera and reflective markers as the only equipment needed. The reflective markers are stuck on the fabric in such a way that they form a mesh covering the whole surface of the fabric. The idea is then to take several photographs of the fabric from different angles and to reconstruct its 3D geometry from the positions of the markers. As for the experiment itself, we chose to analyse the shape of a piece of fabric laid on a rectangular table. The shape obtained is sufficiently rich to constitute a satisfying test of the realism of the simulator (the shape of the fabric at the corners, the presence of folds). Moreover, we chose to let the fabric fall on the table in a dissymmetrical manner (one side longer than the other) so that its shape is richer.

We have at our disposal N digital images of the fabric. The virtual table is horizontal (we will see that in the experiment this is not exactly the case).

Reflective markers are fixed every five centimetres on the real fabric. These markers represent feature points of the fabric, whose 3D positions will be compared with those of the corresponding points of the virtual fabric. The position of the virtual fabric is completely specified by the 3D positions of its mass points. The steps to follow to rebuild the 3D shape (or position) of the fabric can be summarized as follows:

Matching of the feature points of the generic model with their equivalents found on each of the N images.

Calibration of the system of cameras

3D reconstruction of the feature points

Deformation of the fabric model starting from the position of the feature points to find the 3D shape of fabric.

The first step is carried out by simply clicking on the points of the images which correspond to the feature points of the model.

Steps two and three can be carried out simultaneously. Indeed, in our technique, the calibration of the cameras and the 3D reconstruction of the feature points of the fabric are achieved cooperatively: several calibration/3D-reconstruction loops are used to improve, simultaneously and repeatedly, the calibration of the cameras and the 3D reconstruction of the feature points.

The mapping between the 3D feature points of the model and their projections on the images (in white) provides a first approximate calibration of the cameras. One can then reconstruct the 3D feature points of the specific model by triangulation. These new 3D points deform the generic model, which is updated with their values and thus converges to the specific model (the 3D shape represented by the N images). However, as the calibration is approximate, this 3D reconstruction is not perfect, which shows up in the fact that the feature points of the modified generic model do not project exactly onto their equivalents in the N images. One can then reiterate the procedure: a new calibration is defined by associating the new 3D points with their (fixed) projections on the N images, and these points are reconstructed in 3D again, until the points of the specific model project correctly onto their images.
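The triangulation step of such a loop can be sketched as follows. This is a generic linear (DLT) triangulation, shown for illustration only, under the assumption that each camera is described by a 3x4 projection matrix; the project's actual implementation may differ.

```python
# Hypothetical sketch of the 3D reconstruction step: given the (approximately
# calibrated) 3x4 projection matrices of the N cameras and the clicked 2D
# positions of one feature point, recover its 3D position by linear
# triangulation (DLT).
import numpy as np

def triangulate(Ps, uvs):
    """Ps: list of N (3,4) projection matrices; uvs: (N,2) 2D observations."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        # u*(P2.X) - P0.X = 0 and v*(P2.X) - P1.X = 0 for homogeneous X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean
```

Re-triangulating all feature points after each recalibration, until reprojection error stops decreasing, is exactly the loop described above.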

We saw in the preceding paragraph how the calibration/reconstruction loop makes it possible to rebuild the 3D positions of the feature points of the fabric model. But the geometrical shape of the fabric requires the reconstruction of the positions of all the 3D points of the model. The feature points are located every 5 cm on our fabric, whereas our fabric model is a grid whose step is equal to 1 cm. It is therefore necessary to interpolate the positions of the points of the model from the reconstructed 3D positions of the feature points.

We interpolate the positions of the points of the fabric model using RBFs (Radial Basis Functions), starting from the positions of the nearby feature points.
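A minimal sketch of this RBF interpolation step, densifying the 5 cm feature grid to the 1 cm model grid. The Gaussian kernel and its width parameter `eps` are our assumptions for illustration; the project may use another radial basis:

```python
# Hypothetical RBF interpolation sketch (not the project's actual code):
# given the reconstructed 3D positions of the feature points, interpolate
# the positions of all mass points of the fine model grid.
import numpy as np

def rbf_interpolate(centers, values, queries, eps=0.5):
    """Interpolate 3D positions with Gaussian RBFs.

    centers : (n, 2) rest-plane coordinates of the feature points
    values  : (n, 3) reconstructed 3D positions of those points
    queries : (m, 2) rest-plane coordinates of all model points
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * eps ** 2))

    # One weight vector per output coordinate (x, y, z).
    A = kernel(centers, centers)
    w = np.linalg.solve(A, values)          # (n, 3)
    return kernel(queries, centers) @ w     # (m, 3)
```

By construction the interpolant reproduces the reconstructed positions exactly at the feature points themselves, which matches the behaviour needed here.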

The result of the 3D reconstruction of the fabric is shown in the image. One can see that the overall shape is the same (this can easily be checked on the edges of the fabric). The reconstructed feature points (blue cubes) project exactly onto the white ones. We now have at our disposal the digital equivalent of the real cloth.

In the preceding section we showed how our technique makes it possible to recover the precise 3D shape of a fabric lying on a table from a certain number of digital images.

Having obtained the 3D shape of this fabric, we proceed in this section to the virtual simulation of the same virtual cloth laid on the same table, in order to judge the precision of the simulation provided by our software.

We need to reproduce in a rigorous way the initial conditions of the free fall of the same piece of fabric.

The fabric is thus positioned flat on the table (at an altitude z_{0}) with the same orientation as on the real table (see the figure). We decided to carry out the virtual experiment with different spatial samplings of the virtual cloth in order to evaluate their influence on the quality of the simulation results.

The output of our simulation software is a 3D animation (25 images per second) representing the free-fall movement of the fabric onto the table. We are interested in the final position reached by the fabric once its movement has stopped, and we compare the 3D shape obtained with the real one. The first criterion used is a visual comparison, which makes it possible to say, subjectively, whether the virtual fabric has a shape similar to the real one, and to locate the places where the differences are most important. This criterion is only qualitative. To get a quantitative evaluation, we then compute a distance between the virtual 3D shape and the real one.

If we denote by n_{R} the number of (feature) points of the real 3D shape and by n_{v} the number of points of the grid of the virtual 3D shape obtained by simulation (according to the discretization chosen for the simulation), we compute the error e with the equation

e = \frac{1}{n_R + n_v}\left(\sum_{i=1}^{n_R}\left\|P_R(i) - P_v(f(i))\right\| + \sum_{i=1}^{n_v}\left\|P_v(i) - P_R(g(i))\right\|\right)

where P_{R}(i) represents the 3D position of the point of index i in the real grid, P_{v}(i) the 3D position of the point of index i in the virtual grid, and the matching functions f and g make it possible to find, for a point of index i in one grid, its counterpart in the other grid.

The distance e thus gives an estimation of the average distance between each point of one grid and its counterpart on the 3D shape of the other grid.

In this paragraph we present the results obtained for the simulation of the cloth at different spatial samplings. For each sampling we visualise the feature points of the real fabric together with their virtual equivalents (blue cubes), in order to compare visually the positions of the real and virtual fabrics. We also show images representing the 3D reconstructed fabric and the synthetic fabric (both in wireframe), in order to compare the two 3D shapes. Finally, for each experiment we give the distance between the virtual 3D shape and the 3D reconstruction of the real fabric, using the criterion defined in the preceding paragraph.

The error obtained for this 5 cm sampling is:

e_{5cm} = 3.01 cm

On average, each mass point of the virtual cloth is 3 cm away from its counterpart on the real fabric. This error is numerically important, and the image of the figure confirms it. On this figure, the red grid represents the virtual fabric whereas the green grid represents the real fabric (reconstructed in 3D). On the real photograph (left-hand column), we see that at the level of the plane of the table, the virtual points (blue cubes) land exactly on the real ones (white). The error is thus almost zero in the area of the table, which implies a very important error for all the points of the fabric located outside the plane of the table. This difference is explained primarily by the sampling step used. Indeed, the analysis of the shape of the real fabric, especially in the vicinity of the corners of the table, shows that the "ears" formed by the real fabric are fine and cannot be described by a grid whose step is 5 cm. The folds of the real fabric cannot be reproduced by a set of "rigid rods" of length 5 cm. It is thus necessary to discretize the virtual fabric more finely so that it has more degrees of freedom, in particular at the corners of the table.

The error obtained for the 2 cm sampling is:

e_{2cm} = 1.42 cm

On average, each mass point of the virtual cloth is at a 1.42 cm distance from its counterpart on the real fabric. The figure confirms this decrease of the error compared to the 5 cm sampling. The fabric has more freedom to match the real shape at the level of the edges and corners of the table, and the virtual shape (red grid) approaches the real shape (green grid). The virtual fabric (in blue) is positioned exactly on the real fabric at the level of the plane of the table. On the edges, even if the overall shape of the virtual fabric is closer to the real shape than with the 5 cm sampling, important differences remain, in particular in the vicinity of the corners of the table. We therefore decided to perform the same experiment with a more refined sampling of 1 cm.

The error obtained for the 1 cm sampling is:

e_{1cm} = 0.90 cm

For this spatial sampling, each point of the virtual grid is at a 0.9 cm distance on average from its counterpart on the real grid. The figure shows that the virtual fabric is very close to the real fabric. We can check that the sides of the virtual fabric are positioned very near to the sides of the real fabric. The overall shape is very close, but some differences remain at the level of the "ears" of the cloth and on the edges of the fabric along the smallest side of the table. We therefore continued with an even more refined sampling.

The error obtained for the 0.5 cm sampling is:

e_{0.5cm} = 0.82 cm

The improvement of the error is thus less important than in the preceding cases. This is due to the fact that the overall shape of the real tablecloth (green) is almost recovered. The differences observed at the level of the "ears" for the 1 cm sampling are attenuated here. It is important to note that a cloth simulation experiment with this sampling requires a rather important computing time (2 days of computation for a fixed time step of 1e-04 s, for a total of 1.92 s of real animation, on a PC Pentium IV 3.6 GHz with 2 GB of RAM). We carried out the same experiment for an even finer sampling (0.4 cm), but the result obtained is roughly the same as with the 0.5 cm sampling, even if the error drops slightly to 0.79 cm.

These results were obtained thanks to the use of measured mechanical and viscous parameters, which also validates our simulation software. We have also shown that a 0.5 cm sampling is necessary to obtain satisfactory results. The use of measured viscosity parameters realistically models the dissipation phenomena within the fabric and between the fabric and the air; this can be checked by comparing the total simulation time necessary to bring the fabric to its final stable position with the real stabilization time.

The observation of the result of the simulation produced for a 0.5 cm sampling shows that the fabric obtained does not have exactly the same 3D shape as the real fabric. This difference is confirmed by the numerical criterion, which gives an average error on each feature point of the model (each mass) of about 8 mm. Several explanations can be given for this difference.

First, all the measured parameters of the fabric (mechanical and viscous) were obtained under specific experimental conditions (temperature and pressure), which are not the same as those of our free-fall experiment, carried out in a simple office where the atmospheric conditions are not controlled.

Also, as for any approach to modeling and simulating reality, the difference between the real and the virtual can come from the model itself. Our modeling is carried out by respecting the analysis/synthesis protocol. The only behavior of a real fabric which is not taken into account by our model is its response to a compression in the direction of the warp and the weft. For real textiles, this response is a very short resistance to the compression, followed by a bending and a deformation of the fabric in a direction perpendicular to the applied pressure (out of the plane of the fabric); it is called "buckling" (see the figure).

Unfortunately, our current model treats buckling with the same response law as tension, obtained from the KES, which is incorrect. This limitation of our model is primarily due to the absence in the literature of a mechanical system for measuring the buckling of fabrics; Choi and Ko proposed a model of buckling, but it remains purely theoretical, without any experimental validation.

Also, the residual error between the real shape of the fabric and the virtual one is primarily located at the level of the corners of the table, and more precisely in the orientation of the "ears", as the comparative image for the 0.5 cm sampling shows (especially seen from the top).

One can thus conjecture that the differences between our model and reality are due to the absence of buckling in our model. Indeed, in the current configuration of our model, the fabric cannot behave correctly under a compressive stress and thus adopt exactly the same orientation of the real "ears". On the contrary, it resists this constraint very strongly (the law in compression is the same as that in traction). The modeling of buckling could thus complete the improvement of our model and lead to simulations with a very high degree of accuracy in comparison with reality.

The validation of the results produced by our simulator, by confronting them with real data, is an essential point in our modeling activity. It makes it possible to judge precisely the real contribution of our fabric model fed with measured parameters. The acquisition of the 3D shape of a fabric is however not easy, since woven materials are very deformable.

In this work, we chose to compare a simple virtual cloth with a real one to validate our modeling and our software. A protocol for acquiring the real 3D shape of the fabric was set up. The 3D reconstruction of the real shape of the fabric, with markers on it, is done using a certain number of images of the fabric and a calibration/reconstruction algorithm using POSIT.

The virtual draping experiment was carried out using decreasing spatial samplings. It was observed that the shape of the virtual cloth converges towards that of the real one as the sampling is decreased. Indeed, the fabric gets more degrees of freedom and can thus reproduce the finest details of the real fabric (folds).

The examination of the result of the simulation produced for the smallest (0.5 cm) sampling shows that a residual error remains even with this very fine sampling step. An asymptote seems to be reached, which makes us think that the solution cannot be found by choosing an even finer sampling.

The explanation lies in the absence of a realistic modeling of buckling in our mass-spring model. A meticulous observation shows that the compression of the tension springs is primarily located at the level of the corners of the table. An appropriate modeling of buckling would probably allow a more realistic deformation of the fabric at the corners of the table and a better orientation of the "ears" formed by the fabric at these places. Buckling is probably the bottleneck on the way to a more precise and more exact simulation tool.

The behavior of textile has drawn the attention of researchers from both the manufacturing domain and the Computer Graphics domain for a long time. A realistic simulation of textile requires sufficient knowledge of the textile's response to both compressive and stretching forces. The KES (Kawabata Evaluation System) is widely used as the industry standard for textile measurement. It applies typical deformations to cloth samples and evaluates the resulting forces, which provides the proper parameters for our textile simulation system. However, it is only effective for stretching forces. An accurate model for buckling, which is caused by compressive forces, is still not available due to its unstable nature. Thus an in-depth study of the buckling behavior of textile is very necessary. We plan to embed it into Emilion for more realistic simulation results.

For rigid materials, buckling usually means "failure". The material first presents strong resistance to compressive forces, but once the force reaches a critical load, an irreversible deformation occurs: the rigid object is not able to restore its original shape even if the force is released.

For textile, the response to compressive force is very different from that of rigid materials. When a compressive force is applied, the textile shows weak resistance: it easily passes through the unstable post-buckling state and reaches a new stable state. It does not break or collapse, but forms wrinkles instead. Thus, we can assume that at any time step of the simulation, the textile is not in the unstable post-buckling state.

Choi and Ko have modified the mass-spring model, which has been used for textile simulation for a long time, to simulate buckling behavior without introducing a fictitious damping force. Besides the springs between adjacent masses, they add springs between P(i, j) and P(i±2, j), P(i, j±2), P(i±2, j±2). They call this kind of connections *interlaced* connections; these are responsible for the buckling created by compressive and bending forces.

The connections between adjacent particles are represented by a simple linear spring model. The energy between adjacent particles i and j is

E_{ij} = \frac{1}{2}\, k_s \left(\|x_{ij}\| - L\right)^2

where x_{ij} = x_{j} - x_{i}, L is the natural length of the spring, and k_{s} is the spring constant. But this energy function only accounts for tension; it cannot be applied to buckling.

They simplify this kind of connection to a beam structure. Before buckling, the shape is a straight beam of length L. Compressive loads are applied at the pinned ends of the beam, as shown in the Figure. The post-buckling equilibrium is approximated by a circular arc with constant arc length.

Based on the assumption above, they use the moment equilibrium equation (under the pinned-ends condition)

k_{b} \kappa + P y = 0

to predict the post-buckling equilibrium shape, where k_{b} is the flexural rigidity, \kappa is the curvature, P is the compressive load, and y is the deflection.

According to the property of the textile, they consider the length L as a constant in the buckling process, so the bending deformation energy can be estimated from the shape of the arc as

E_b = \int_0^L \frac{1}{2}\, k_b\, \kappa^2 \, ds
When the compressive force is applied, a shear force V and a bending moment M are produced. Ignoring the shear force V, the bending moment M has a linear relationship with the curvature \kappa: M = k_{b} \kappa. Thus, since the curvature is constant along the arc, the bending deformation energy can be calculated as

E_b = \frac{1}{2}\, M \kappa L = \frac{1}{2}\, k_b\, \kappa^2 L
Since the arc length is considered to be a constant, the curvature \kappa can be expressed implicitly as a function of the chord length through the circular-arc relation

\|x_{ij}\| = \frac{2}{\kappa}\, \sin\!\left(\frac{\kappa L}{2}\right)

so the bending energy becomes a function of the chord length \|x_{ij}\| alone.
The force vector can then be derived by differentiating this energy with respect to the particle positions, and the Jacobian matrix of the force vector is obtained by a further differentiation.
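Under the constant-arc-length assumption, the post-buckling curvature can be recovered numerically from the compressed chord length via the circular-arc chord relation. The following bisection solver is our own illustrative implementation of that relation, not code from the paper:

```python
# Hedged numeric sketch: for a circular arc of fixed length L, the chord
# satisfies chord = (2/kappa) * sin(kappa*L/2). Given the compressed chord
# length, a bisection recovers the post-buckling curvature kappa.
import math

def post_buckling_curvature(chord, L, tol=1e-10):
    """Return kappa > 0 such that chord == (2/kappa) * sin(kappa*L/2).

    Requires 0 < chord < L (the spring is compressed)."""
    assert 0.0 < chord < L
    f = lambda k: (2.0 / k) * math.sin(k * L / 2.0) - chord
    # (2/k)*sin(kL/2) = L*sinc(kL/2) decreases from L to 0 on (0, 2*pi/L),
    # so f has a single sign change on that bracket.
    lo, hi = 1e-12, 2.0 * math.pi / L
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With kappa in hand, the bending energy and force of the interlaced spring follow from the expressions above.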

The method described above achieved realistic results and allowed the use of large time steps in the simulation, which enables high performance. However, their model is based on a quad mesh, which severely limits its use. They have therefore also extended their model to triangular meshes.

The total bending energy assigned to a particle in a triangular mesh is shown in the Figure. Similarly to the quad-mesh case, the buckling behavior in triangular meshes is defined for every pair of triangles sharing an edge, shown as the red lines in the Figure.

For a particle i in the triangular mesh, the total stretching and shear energy of its surrounding area is obtained by summing, over the triangles T(x) around particle i, the areal energy of each triangle.

The definition of the curvature \kappa is the same as in the quad-mesh case. The bending deformation energy between two particles i and j is defined from the bending moment M and the areas A_{i} and A_{j} of the triangles associated with particles i and j.

The Golf-Stream project was funded by the French research agency Riam (Research and Innovation in Audiovisual and Multimedia). Its research topic is the capture of the movement of a real-life character. The funding stopped in July 2006. This is a complex problem that we solved by joining INRIA's skills with those of the Symah Vision production company. The PGA (Professional Golfers' Association) and the FFG (French Federation of Golf) were part of the project and served as experts to drive the analysis and 3D tracking of the swing of professional golfers, which we chose as an application (see section ).

MIRAGES develops a tool that makes it possible to capture the actual movements of a golfer on site. As an initial stage, static images of the player are shot from different angles and are used to reconstruct his or her skeleton and body envelope. The 3D virtual character is then superimposed over the player in action and adjusts itself to the player's body with few interactions. The software has to determine body movements (pelvis rotation, arms position, etc.) as well as the position of the player's center of gravity and the direction imposed on the ball with great precision. This method does not require the use of sensors placed on the athlete's body. Unfortunately, we could not finish this research during the funding period, but we are still pursuing it with our internal funding. We still have to work on the automatic prepositioning of the 3D puppet at the start of the swing movement and implement the complete 3D tracker. To obtain a result on a sequence, we also have to optimize the software to be much faster and usable by TV crews. Standard video cameras are able to provide up to fifty images per second. It will thus be possible to broadcast a slow-motion sequence showing the superimposed skeleton of the player, with added visualization tools that support live comments on such and such aspect of the shot, barely a few minutes after the shot. This product should be available for TV broadcasts in the future. Golf-Stream was coordinated by Symah Vision but, as this company disappeared, it continued under the coordination of MIRAGES.

This contract deals with the "cheap" realistic 3D modeling of human faces from a small set of images (even from only one picture), whatever the position of the face on the images. This research is conducted in collaboration with AXIATEC, a small company interested in product customization. It is an extension of the former work done within the framework of the VIP3D project. We still have to implement the one-image case and add eyes, teeth, tongue and hair to the reconstructed faces.

This contract is an application of our research on 3D garment simulation. The idea was to combine ATTITUDE STUDIO's expertise in the design of realistic human avatars with the expertise of MIRAGES in 3D garment simulation, in order to produce a realistic avatar (EVE SOLAL) wearing realistic garments.

Attitude Studio's computer graphists provided us with a 200-frame animation of their own virtual "star": Eve Solal. This allowed us to perform our first test with a dynamic environment on our simulator and thus to improve and validate it. Then we prepared several videos of Eve with different clothes, which were submitted to Attitude's computer graphists for their opinion (see figure ).

Hatem Charfi and André Gagalowicz have also performed their study on the characterization of fabric viscosity using the MOCAP system kindly made available by Attitude Studio (see figure ). This collaboration allowed us to improve considerably the performance of our garment simulation software (see section ).

Nadine Corrado is a fashion designer; we collaborate with her on the validation of our 3D garment simulation software. She is a potential end-user and has also brought us a lot of information about garment design.

MIRAGES recently got an important funding from the ANR (French Research Funding Agency) through the RNTL programme. The aim of this project is to design, within 3 years, a first prototype of a virtual try-on system which will allow any person to buy garments through the Internet. The client will have the possibility to choose the type and style of the garment and the kind of textile material, and to see himself/herself in 3D wearing the chosen garment. This raises very difficult and fascinating scientific problems that we will have to solve. The project is coordinated by Telmat (a company building 3D scanners for human bodies); La Redoute (the biggest garment distributor in France) will bring its expertise in the contacts and interfaces with the clients, Nadine Corrado will serve as the expert on garment design, and ENSITM (a university laboratory specialized in the mechanics of textile) will furnish physical data for the garment simulator developed by MIRAGES.

Collaboration with XID Technologies, a company located in Singapore. André Gagalowicz is a scientific advisor for the company.

A collaboration with NTU (Nanyang Technological University) in Singapore has been initiated. A PhD student, Quah Chee Kwang, is supervised by both A. Gagalowicz from MIRAGES and Seah Hock Soon from NTU. Tsuyoshi Minakawa, a researcher from the SDL laboratory of HITACHI, came to MIRAGES for one year (up to the 15th of September) in the framework of the INRIA/HITACHI collaboration.

Emmanuele Trucco from Heriot-Watt University in Edinburgh was invited from March 15th until April 15th.

André Gagalowicz was a scientific advisor at the INSA Lyon Scientific Committee.

André Gagalowicz was a scientific advisor at the University of Bordeaux III Scientific Committee.

André Gagalowicz was a member of the scientific advisory board of "Machine, Graphics and Vision" journal

André Gagalowicz was a member of the scientific advisory board of "Computer Graphics and Geometry" journal,

André Gagalowicz was a member of the scientific advisory board of the LCPC journal.

André Gagalowicz was a scientific advisor of XID Technologies, Singapore.

André Gagalowicz has taught Computer Vision in the DESS on images of the University of Bordeaux III.

André Gagalowicz was a member of the conference program committees of:

GRAPP 2006 Conference, Setubal, Portugal.

GraphiCon 2006 Conference, Novosibirsk, Russia.

ICCVG 2006 Conference, Warsaw, Poland.

André Gagalowicz was invited by IPAL in Singapore from October 24 to November 6.

André Gagalowicz has presented an invited talk or paper at:

ENS, Paris, on January 1st, January 11th, and October 11th;

GRAPP 2006 Conference, Setubal, Portugal, February 25-28; http://www.grapp.org/grapp2006/index.htm

"Journée Mathématique et Industrie", Mulhouse, April 7;

INRIA/City University of Hong Kong Workshop, Hong Kong, China, May 11-12;

"Les entretiens de l'INSEP", Vincennes, June 8;

GraphiCon 2006 Conference, Novosibirsk, Russia, July 1-5; http://ccfit.nsu.ru/graphicon2006/

ICCVG 2006 Conference, Warsaw, Poland, September 25-27; http://www.pjwstk.edu.pl/iccvg2006/

Workshop of GDR ISIS "Gestes et Images", Paris, November 10;

Futurotextiles Conference, Lille, November 23-24.

André Gagalowicz was a member of the thesis committee of Hatem Charfi, April 20.

Hatem Charfi presented a talk at the Colloquium at the University of Tuebingen, February 17 (see ).

Hatem Charfi presented a talk at a seminar at INRIA Sophia, March 10 (see ).

Maxence Mathieu and André Gagalowicz published a paper in the Digital Human Modeling for Design and Engineering Conference (Lyon) in

André Gagalowicz presented an invited paper at GRAPP 2006 in .

André Gagalowicz presented an invited paper at GraphiCon 2006 in .

André Gagalowicz presented an invited paper at ICCVG 2006 in .

André Gagalowicz presented an invited talk at ENS on January 11th (see ).

André Gagalowicz presented an invited talk at "Journée Mathématiques et Industrie" in Mulhouse on April 7th (see ).

André Gagalowicz presented a talk at the joint INRIA/City University of Hong Kong workshop, China, May 11-12 (see ).

André Gagalowicz presented an invited talk at "Les entretiens de l'INSEP", Paris, November 10th (see ).

André Gagalowicz presented an invited talk at the "Futurotextiles conference" in Lille, on November 23-24 (see ).

C. K. Quah, A. Gagalowicz and H. S. Seah have an accepted paper at the Asia-Pacific Congress on Sports Technology in 2007 (see ).

C. K. Quah, A. Gagalowicz and H. S. Seah have another accepted paper at the Asia-Pacific Congress on Sports Technology in 2007 (see ).

T. Le Thanh and A. Gagalowicz have an accepted paper for the MIRAGES 2007 conference, which will be published both in the MIRAGES 2007 proceedings and in the LNCS series of Springer Verlag (see ).

T. Le Thanh and A. Gagalowicz have an accepted paper for the VISAPP 2007 conference (see ).

T. D. Kalinkina and A. Gagalowicz have an accepted paper for the MIRAGES 2007 conference, which will be published both in the MIRAGES 2007 proceedings and in the LNCS series of Springer Verlag (see ).