The research group that we have named fluminance, from a contraction of the words “Fluid” and “Luminance”, is dedicated to the extraction of information on fluid flows from image sequences and to the development of tools for the analysis and control of these flows. The objectives of the group lie at the frontier of several important domains. The group aims at providing, on the one hand, image sequence methods devoted to the analysis and description of fluid flows and, on the other hand, physically consistent models and operational tools to extract meaningful features that characterize or describe the observed flow and enable decisions or actions. Such a twofold goal is of major interest for the inspection, analysis and monitoring of complex fluid flows, but also for the control of specific flows involved in industrial problems. To reach these goals we mainly rely on data assimilation strategies and on motion measurement techniques. From a methodological point of view, the techniques involved for image analysis are either stochastic or variational. One of the main originalities of the fluminance group is to combine cutting-edge research on these methods with the ability to conduct thorough experimental validations on prototype flows mastered in the laboratory. The scientific objectives decompose into three main themes:

**Characterization of fluid flow from images**

We aim here at providing accurate measurements and consistent analyses of complex fluid flows through image analysis techniques. The application domain ranges from industrial processes and experimental fluid mechanics to environmental and life sciences. This theme also includes the use of non-conventional imaging techniques such as Schlieren imaging, shadowgraphy and holography. The objective here is to move towards 3D dense velocity measurements.

**Coupling dynamical model and image data**

We focus here on the study, through image data, of complex and partially known fluid flows involving complex boundary conditions, multi-phase fluids, or fluid-structure interaction problems. Our credo is that image analysis can provide sufficiently fine observations at small and medium scales to construct models which, applied at medium and large scales, account accurately for a wider range of the dynamics. The image data and a sound modeling of the dynamical uncertainty at the observation scale should allow us to reconstruct the observed flow and to provide efficient dynamical models grounded in real (experimental or natural) flows. Our final goal is to move towards a 3D reconstruction of real flows, or to operate large-scale motion simulations that fit real-world flow data and incorporate an appropriate uncertainty modeling.

**Control and optimization of turbulent flows**

We are interested in active control and, more precisely, in closed-loop control. The main idea is to extract reliable image features in order to act on the flow. This approach is well known in the robot control community, where it is called visual servoing. More generally, it is a technique to control a dynamic system from image features. We plan to apply this approach to flows involved in various domains such as the environment, transport, microfluidics, industrial chemistry, pharmacy, the food industry, agriculture, etc.

Turbulent fluid flows involved in environmental or industrial applications are complex. In fluid mechanics laboratories, canonical turbulent shear flows have been studied for many years and a relatively clear picture of their underlying structure exists. However, the direct applicability of these efforts to real flows of interest, which often occur in complex geometries and in the presence of multiple non-canonical influences, like cross-shear, spanwise non-uniformity and thermal stratification, is still unknown. In addition, the turbulence can be characterized by Reynolds numbers ranging between $10^{3}$ and $10^{4}$, corresponding to a transitional regime for which the use of classical turbulence models is limited.

In this context, we have performed research studies on turbulent shear flows at low velocities by tackling crucial topics in the measurement, analysis and modeling of environmental and industrial flows in the presence of non-canonical influences. This concerns more precisely the study of the interaction between a mixing layer and a circular cylinder wake flow, the study of a wake flow with spanwise non-uniformity, the study of a mixing layer under the influence of thermal stratification, and the study of a mixing layer forced between non-uniform flows. The analysis of these flows has required the design of adequate dynamical models, using proper orthogonal decomposition and Galerkin projection. Understanding issues such as the mechanisms of heat and mass transfer involved in these shear flows provides meaningful information for the control of relevant engineering flows and the design of new technologies. To investigate these complex flows more thoroughly, numerical and experimental tools have been designed. An immersed boundary method was proposed to represent complex geometries in Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) codes. A novel anemometer has been designed and implemented for the simultaneous measurement of velocity and temperature in air flows with a single hot-wire probe.

**Mixing layer wake interaction**

We have investigated the vortex shedding of a circular cylinder immersed in a plane turbulent mixing layer. For a centre-span Reynolds number of 7500, the wake flow splits into three regions: a high-velocity wake, a low-velocity wake and a region of interaction at the middle span of the body. A strong unsteady secondary flow is observed and explained by spanwise base pressure gradients. Unexpected features are found for the formation length and the base pressure along the span of the cylinder. On the high-velocity side, where the local Reynolds number is the highest, the formation length is longest. Based on the formation length measurements, it was shown that, as a function of the centre-span Reynolds number, the wake flow behaves as a circular cylinder in uniform flow. Three cells of constant frequency with adjacent dislocations are observed. For each cell, a shedding mode was suggested. The relation of the secondary flow to the frequencies was examined. All the observations were analyzed by analogical reasoning with other flows. This pointed out the action of the secondary flow on the high-velocity side, regarded as a wake interference mechanism.

**Low order complex flow modeling**

We have proposed improvements to the construction of low order dynamical systems (LODS) for incompressible turbulent external flows. The reduced model is obtained by means of a Proper Orthogonal Decomposition (POD) basis extracted through a truncated singular value decomposition of the flow auto-correlation matrix built from noisy PIV experimental velocity measurements. The POD modes are then used to formulate a reduced dynamical system that contains the main features of the flow. This low order dynamical system is obtained through a Galerkin projection of the Navier-Stokes equations on the POD basis. Usually, the resulting system of ordinary differential equations presents stability problems due to mode truncation and numerical uncertainties, especially when working on experimental data. The technique we proposed relies on an optimal control approach to estimate the dynamical system coefficients and its initial condition. This allows us to recover a reliable and stable spatio-temporal reconstruction of the large scales of the flow. The technique has been assessed on the near wake behind a cylinder observed through very noisy PIV measurements. It has also been evaluated for configurations involving a rotating cylinder.
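
The POD extraction step described above can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data, not the group's actual pipeline, and all variable names are ours:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one (noisy) PIV velocity
# field flattened into a vector, each row one grid point / component.
rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50
snapshots = rng.standard_normal((n_points, n_snapshots))

# POD modes describe fluctuations about the mean flow, so center first.
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# Truncated SVD: left singular vectors are the spatial POD modes,
# singular values rank them by captured energy.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
n_modes = 5
modes = U[:, :n_modes]                        # spatial POD modes
coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]  # temporal coefficients

# Low-order reconstruction from the retained modes; a Galerkin projection
# of the Navier-Stokes equations onto `modes` would then yield the ODE
# system governing the temporal coefficients.
reconstruction = mean_flow + modes @ coeffs
energy_captured = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
```

On real PIV data the snapshots are strongly correlated, so a few modes typically capture most of the energy; here the data is white noise, so `energy_captured` stays modest.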

The complex 3D dynamical behavior resulting from the interaction between a plane mixing layer and the wake of a cylinder has also been investigated using a POD representation, applied to data from two synchronized 2D PIV systems (dual-plane PIV). This approach allowed us to construct a 3D-POD representation. An analysis of the correlations shows different length scales in the regions dominated by wake-like structures and by shear-layer-type structures. In order to characterize the particular organization in the plane of symmetry, a Galerkin projection from a slice POD has been performed. This led to a low-dimensional dynamical system that allowed the analysis of the relationship between the dominant frequencies. This study led to a reconstruction of the dominant periodic motion suspected from previous studies. This work allowed us to make a link between the three-dimensional organization and the secondary unsteady motion from the low-velocity side to the high-velocity side of the mixing layer, appearing in this highly 3D flow configuration.

**Direct and Large Eddy simulations of complex flows**

During the PhD of P. Parnaudeau, we have proposed a direct forcing method better suited to the use of compact finite difference schemes in Direct Numerical Simulation. The new forcing creates inside the body an artificial flow preserving the no-slip condition at the surface while reducing the step-like change of the velocity derivatives across the immersed boundary. This modification led to improved results, both qualitatively and quantitatively, for conventional and complex flow geometries.

Three-dimensional direct numerical simulations have been performed for vortex shedding behind cylinders. We focused in particular on cases for which the body diameter and the incoming flow involved spanwise linear non-uniformity. Four configurations were considered: the shear flow, the tapered cylinder and their combinations, which gave rise to the adverse and aiding cases. In contrast with the observations of other investigators, these computations highlighted distinct vortical features between the shear case and the tapered case. In addition, it was observed that the shear case and the adverse case (respectively, the tapered and aiding cases) yielded similarities in flow topology. This phenomenon was explained by the spanwise variations of the ratio of mean velocity to cylinder diameter, $U/D$, which seemed to govern these flows. Indeed, it was observed that large spanwise variations of $U/D$ seemed to enhance three-dimensionality, through the appearance of vortex adhesions and dislocations. Spanwise cellular patterns of vortex shedding were identified. Their modifications in cell size, junction position and number were correlated with the variation of $U/D$. On the lee side of the obstacle, a wavy secondary motion was identified. An induced secondary flow due to the bending of Karman vortices in the vicinity of vortex adhesions and dislocations was suggested to explain this result.

**LES and experimental wake flow database**

We contributed to the study of the flow over a circular cylinder at Reynolds number $Re = 3900$. Although this classical flow is widely documented in the literature, especially for this precise Reynolds number, which leads to a subcritical flow regime, there is no consensus about the turbulence statistics immediately behind the obstacle. This flow has been studied both numerically, with Large Eddy Simulation, and experimentally, with Hot-Wire Anemometry and Particle Image Velocimetry. The numerical simulation has been performed using high-order schemes and the specific Immersed Boundary Method previously mentioned. We focused on turbulence statistics and power spectra in the near wake, up to 10 diameters. Statistical estimation is shown to need large integration times, increasing the computational cost and leading to an uncertainty of about 10% for most flow characteristics considered in this study. The present numerical and experimental results are found to be in good agreement with previous Large Eddy Simulation data. Our study has exhibited significant differences compared with the experimental data found in the literature. The obtained results attenuate the previous numerical-experimental controversy for this type of flow.

**Simultaneous velocity-temperature measurements in turbulent flows**

We have worked on the design of a novel anemometer for the simultaneous measurement of velocity and temperature in airflows with a single hot-wire probe. The principle of periodically varying the overheat ratio of the wire has been selected and applied through a tunable electronic chain. Specific methods were developed for the calibration procedure and the signal processing. The accuracy of the measurements was assessed by means of Monte-Carlo simulations. Accurate results were provided for two types of turbulent non-isothermal flows: a coaxial heated jet and a low-speed thermal mixing flow. The particular interest of synchronizing the two measurements was emphasized during the PhD thesis of T. Ndoye.

A new dynamic calibration technique has been developed for hot-wire probes. The technique permits, within a short time, the combined velocity, temperature and direction calibration of single and multiple hot-wire probes. The calibration and measurement uncertainties were modeled, simulated and controlled, in order to reduce their estimated values.
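
A common single-wire calibration model is King's law, $E^2 = A + B\,U^n$. The sketch below fits it to synthetic data by scanning the exponent and solving for $A, B$ by linear least squares; the constants and the procedure are illustrative assumptions, not the calibration method of the anemometer described above:

```python
import numpy as np

# Synthetic calibration data following King's law E^2 = A + B*U^n
# (A_true, B_true, n_true are the quantities a real calibration estimates).
rng = np.random.default_rng(1)
A_true, B_true, n_true = 1.2, 0.9, 0.45
U = np.linspace(0.5, 10.0, 25)                # calibration velocities [m/s]
E2 = A_true + B_true * U ** n_true
E2 += rng.normal(scale=0.005, size=U.size)    # measurement noise

# For each candidate exponent n, A and B follow from linear least squares;
# keep the exponent that minimizes the residual.
best = None
for n in np.linspace(0.3, 0.6, 301):
    X = np.column_stack([np.ones_like(U), U ** n])
    (A, B), *_ = np.linalg.lstsq(X, E2, rcond=None)
    r = ((X @ np.array([A, B]) - E2) ** 2).sum()
    if best is None or r < best[0]:
        best = (r, A, B, n)

_, A_fit, B_fit, n_fit = best

def voltage_to_velocity(E2_meas):
    # Invert the fitted law to turn a bridge-voltage reading into velocity.
    return ((E2_meas - A_fit) / B_fit) ** (1.0 / n_fit)
```

Inverting the fitted law then converts a bridge-voltage reading into a velocity estimate.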

Flow visualization has long been a powerful tool to depict and understand flow properties. Efforts to develop high-quality flow visualization techniques date back over a century. The analysis of the recorded images initially consisted of a qualitative interpretation of the streak lines, leading to an overall global insight into the flow properties but lacking quantitative details on important parameters such as velocity fields or turbulence intensities. Point measurement tools such as hot-wire probes or Laser Doppler Velocimetry have typically provided these details. As these probes give information only at the point where they are placed, simultaneous evaluations at different points require deploying a very large number of probes, and the evaluation of an unsteady field (most flows are unsteady) is almost unachievable with them.

In an effort to overcome the limitations of these probes, Particle Image Velocimetry (PIV), a non-intrusive diagnostic technique, has been developed over the last two decades. The PIV technique obtains velocity fields by seeding the flow with tracers (e.g. dye, smoke, particles) and observing the motion of these tracers. In computer vision, the estimation of the projection of the apparent motion of a 3D scene onto the image plane, referred to in the literature as optical flow, has been an intensive subject of research since the 80's and the seminal work of B. Horn and B. Schunck. Unlike dense optical-flow estimators, correlation-based PIV approaches supply only sparse velocity fields. These methods have been demonstrated to be robust and to provide accurate measurements for flows seeded with particles. However, these restrictions and their inherently discrete, local nature limit their use and prevent any evolution of these techniques towards methods supplying physically consistent results and small-scale velocity measurements. They also rule out the use of scalar images, exploited in numerous situations to visualize flows (images showing the diffusion of a scalar such as dye, pollutant, light refraction index, fluorescein, ...). In contrast, variational techniques make it possible, within a well-established mathematical framework, to estimate spatially continuous velocity fields, which should more properly allow the measurement of smaller motion scales. As these methods are defined through systems of PDEs, they quite naturally allow the inclusion, as constraints, of the kinematic and dynamical laws governing the observed fluid flows. Besides, within this framework it is also much easier to define the estimation of characteristic features on the basis of a physically grounded data model that describes the relation linking the observed luminance function and some state variables of the observed flow.
This route has been demonstrated to be much more robust on scalar images. Several studies in this vein have strengthened our skills in this domain. All the following approaches have been formulated either within a statistical Markov Random Field model or within a variational framework. For a thorough description of these approaches, see the associated publications.
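
The optical-flow idea mentioned above can be illustrated with a minimal Horn and Schunck-style estimator: a brightness-constancy data term plus a first-order smoothness term, solved by Jacobi-like iterations. This is a textbook sketch (periodic boundaries, no multiresolution), not one of the fluid-dedicated estimators developed by the group:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Dense optical flow between images I1 and I2 (Horn-Schunck sketch):
    minimizes (Ix*u + Iy*v + It)^2 + alpha^2 * |grad u|^2 + alpha^2 * |grad v|^2."""
    Ix = np.gradient(I1, axis=1)   # spatial intensity gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal intensity difference
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_avg(f):  # 4-neighbour average, periodic boundaries
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_avg, v_avg = local_avg(u), local_avg(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

For a smooth pattern translated by one pixel, the recovered `u` approaches 1 in textured regions; the regularization weight `alpha` controls the smoothness of the recovered field.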

**ICE data model and div-curl regularization** This fluid motion estimator has been developed during the PhD thesis of Thomas Corpetti. It is constructed on a data model derived from the Integration of the Continuity Equation (ICE data model) and includes a second-order regularization scheme that preserves blobs of divergence and curl. Intensive evaluations of this estimator on flow prototypes mastered in the laboratory have shown that it reaches the same order of accuracy as the best PIV techniques, but with an increased information density. This ability to obtain dense flow fields allowed us to estimate proper vorticity or divergence maps without resorting to additional post-processing interpolation schemes.

**Schlieren image velocimetry** We have addressed the problem of estimating the motion of fluid flows visualized with the Schlieren technique. Such an experimental visualization system is well known in fluid mechanics: it enables the visualization of unseeded flows. This technique permits the capture of phenomena that are impossible to visualize with particle seeding, such as natural convection, phonation flow or breath flow, and allows the setting up of large-scale experiments. Since the resulting images exhibit very low intensity contrasts, classical motion estimation methods based on the brightness constancy assumption (correlation-based approaches, optical-flow methods) are completely inefficient. The global energy function we have defined for Schlieren images is composed of i) a specific data model accounting for the fact that the observed luminance is related to the gradient of the fluid density, and ii) a specific constrained div-curl regularization term. To our knowledge, no other motion estimator allows the accurate estimation of dense velocity fields on Schlieren images.

**Low order fluid motion estimator** During the PhD of Anne Cuzol, we have worked on the definition of a low-dimensional fluid motion estimator. This estimator is based on the Helmholtz decomposition, which consists in representing the velocity field as the sum of a divergence-free component and a curl-free one. In order to provide a low-dimensional solution, both components are approximated using a discretization of the vorticity (curl of the velocity vector) and divergence maps through regularized Dirac measures. The resulting so-called solenoidal (resp. irrotational) field is then represented by a linear combination of basis functions obtained as a convolution product of the Green kernel gradient and the vorticity map (resp. the divergence map). The coefficient values and the basis function parameters are obtained by minimizing a function formed by an integrated version of the mass conservation principle of fluid mechanics.
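
The basis functions described above can be made concrete with a small sketch: the solenoidal part of a field induced by a set of Gaussian-mollified point vortices, i.e. a discretized, regularized Biot-Savart law. The kernel choice and all names are ours, for illustration only:

```python
import numpy as np

def solenoidal_field(points, centers, strengths, eps=0.1):
    """2D velocity induced at `points` by vortex particles located at
    `centers` with circulations `strengths` (Gaussian-mollified kernel)."""
    dx = points[:, None, 0] - centers[None, :, 0]
    dy = points[:, None, 1] - centers[None, :, 1]
    r2 = dx ** 2 + dy ** 2
    # Mollified 2D Biot-Savart kernel: (1 - exp(-r^2/eps^2)) / (2*pi*r^2);
    # the mollification removes the point-vortex singularity at r = 0.
    k = (1.0 - np.exp(-r2 / eps ** 2)) / (2.0 * np.pi * np.maximum(r2, 1e-12))
    u = -(k * dy) @ strengths   # orthogonal gradient of the Green kernel
    v = (k * dx) @ strengths
    return np.stack([u, v], axis=1)
```

A single particle of circulation $2\pi$ at the origin induces a unit tangential velocity at distance 1, matching the classical point-vortex solution away from the smoothed core.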

**Potential function estimation and mimetic finite differences** In collaboration with the University of Mannheim and the group of C. Schnoerr, we have studied a direct estimation approach for the flow potential functions (respectively the *stream* function and the *velocity* potential) from two consecutive images. The estimation has been defined on the basis of a high-order regularization scheme and has been implemented through mimetic finite difference methods. With these approaches the discretization preserves the basic relationships of continuous vector analysis. Compared to previous discretization schemes based on auxiliary div-curl variables, the considered technique appeared to be numerically much more stable and led to improved accuracy.

**2D and 3D atmospheric motion layer estimation** In this study, we have explored the problem of estimating the mesoscale dynamics of atmospheric layers from satellite image sequences. Due to the intrinsically sparse three-dimensional nature of clouds and to the large occluded zones caused by the successive overlapping of cloud layers, the estimation of accurate layered dense motion fields is an intricate issue. Relying on a physically sound vertical decomposition of the atmosphere into layers, we have proposed two dense motion estimators for the extraction of multi-layer horizontal (2D) and 3D wind fields. These estimators are expressed as the minimization of a global function that includes a data-driven term and a spatio-temporal smoothness term. A robust data term relying on a shallow-water mass conservation model has been proposed to fit the sparse observations related to each layer. In the 3D case, the layers are interconnected through a term modeling mass exchanges at the layer boundaries.

A novel spatio-temporal regularizer derived from the shallow-water momentum conservation model has been considered to enforce the temporal consistency of the solution along time. These constraints are combined with a robust second-order regularizer preserving the divergent and vortical structures of the flow. Besides, a two-level motion estimation scheme has been set up to overcome the limitations of the multiresolution incremental estimation scheme when capturing the dynamics of fine mesoscale structures. This alternative approach relies on the combination of correlation and optical-flow observations. An exhaustive evaluation of the novel method has first been performed on a scalar image sequence generated by Direct Numerical Simulation of a turbulent two-dimensional flow. Based on qualitative experimental comparisons, the method has also been assessed on a Meteosat infrared image sequence.

Classical motion estimation techniques usually proceed on pairs of successive images and do not enforce temporal consistency. This often induces an estimation drift, which is essentially due to the fact that motion estimation is formulated as a local process in time. No adequate physical dynamics law, or conservation law, related to the observed flow is taken into account over long time intervals by the usual motion estimators. The estimation of an unknown state variable trajectory on the basis of specified dynamical laws and some incomplete and noisy measurements of the variable of interest can be conducted either through optimal control techniques or through stochastic filtering approaches. These two frameworks have their own advantages and deficiencies. We rely on both approaches, as appropriate.

**Stochastic filtering for fluid motion tracking** We have proposed a recursive Bayesian filter for tracking the velocity fields of fluid flows. The filter combines an Itô diffusion process, associated with a 2D vorticity-velocity formulation of the Navier-Stokes equations, with discrete image error reconstruction measurements. In contrast to usual filters designed for visual tracking problems, our filter combines a continuous law for the description of the vorticity evolution with discrete image measurements. We resort to a Monte-Carlo approximation based on particle filtering. The designed tracker provides a robust and consistent estimation of instantaneous motion fields along the whole image sequence. In order to handle a state space of reasonable dimension for the stochastic filtering problem, the motion field is represented as a combination of adapted basis functions. The basis functions are derived from a mollification of the Biot-Savart integral and a discretization of the vorticity and divergence maps of the fluid vector field. The output of such a tracking is a set of motion fields along the whole time range of the image sequence. As the time discretization is much finer than the frame rate, the method provides consistent motion interpolation between consecutive frames. In order to further reduce the dimensionality of the associated state space when facing a large number of motion basis functions, we have explored a new dimension reduction approach based on dynamical systems theory. The study of the stable and unstable directions of the continuous dynamics enables the construction of an adaptive dimension reduction procedure, which consists in sampling only in the unstable directions, while the stable ones are treated deterministically.

When the likelihood of the measurement can be modeled as a Gaussian law, we have also investigated the use of so-called ensemble Kalman filtering for fluid tracking problems. This kind of filter, introduced for the analysis of geophysical fluids, is based on the Kalman filter update equation. Nevertheless, unlike the traditional Kalman filtering setting, the covariances of the estimation errors, required to compute the Kalman gain, rely on an ensemble of forecasts. Such a process gives rise to a Monte-Carlo approximation for a family of stochastic nonlinear filters that can handle state spaces of large dimension. We have recently proposed an extension of this technique that combines sequential importance sampling and the propagation law of the ensemble Kalman filter. This technique leads to an ensemble Kalman filter with improved efficiency. It appears to be a generalization of the optimal importance sampling strategy we proposed in the context of partially conditional Gaussian trackers.
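
The analysis step of such an ensemble filter is compact enough to sketch. The version below assumes a linear observation operator and uses perturbed observations, which is one common variant among several, not necessarily the exact scheme used by the group:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_std, rng):
    """One ensemble Kalman analysis step: the error covariances in the
    Kalman gain are replaced by sample covariances over the forecast
    ensemble (columns of `ensemble`)."""
    n_state, n_members = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # observed anomalies
    R = obs_std ** 2 * np.eye(len(y))                     # observation noise cov.
    # Kalman gain from ensemble (cross-)covariances.
    P_xy = X @ HA.T / (n_members - 1)
    P_yy = HA @ HA.T / (n_members - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)
    # Perturbed observations: one independent draw per ensemble member.
    Y = y[:, None] + rng.normal(scale=obs_std, size=(len(y), n_members))
    return ensemble + K @ (Y - HX)
```

With many members and a precise observation of one state component, the posterior ensemble mean moves close to the observed value while weakly correlated components are left essentially untouched.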

**Variational assimilation technique**

We investigated the use of a variational framework for the tracking, from image sequences, of features belonging to high-dimensional spaces. This framework relies on optimal control principles, as developed in environmental sciences to analyze geophysical flows. Within the PhD of Nicolas Papadakis, we first devised a data assimilation technique for the tracking of closed curves and their associated motion fields. The proposed approach enables a continuous tracking along an image sequence of both a deformable curve and its associated velocity field. Such an approach has been formalized through the minimization of a global spatio-temporal continuous cost functional with respect to a set of variables representing the curve and its related motion field. The resulting minimization sequence consists in a forward integration of an evolution law followed by a backward integration of an adjoint evolution model. The latter PDE includes a term related to the discrepancy between the state variables' evolution law and discrete noisy measurements of the system. The closed curves are represented through implicit surface modeling, whereas the motion is described either by a vector field or through vorticity and divergence maps, according to the type of targeted application. The efficiency of the approach has been demonstrated on two types of image sequences showing deformable objects and fluid motions.

More recently, assimilation techniques for the direct estimation of atmospheric wind fields from pressure images have been proposed. These techniques rely on a brightness variation model of the intensity function and no longer rely on motion measurements provided by external motion estimators. The resulting estimator allows us to recover accurate fluid motion fields and enables the tracking of dense vorticity maps along an image sequence.

Nowadays, visual servoing is a widely used technique in robot control. It consists in using data provided by a vision sensor to control the motions of a robot. Various sensors can be considered, such as perspective cameras, omnidirectional cameras, 2D ultrasound probes or even virtual cameras. In fact, this technique is historically embedded in the larger domain of sensor-based control, so that sensors other than vision sensors can be properly used. On the other hand, this approach was first dedicated to the control of robot arms. Today, much more complex systems can be considered, like humanoid robots, cars, submarines, airships, helicopters and aircraft. Therefore, visual servoing is now seen as a powerful approach to control the state of dynamic systems.

Classically, to achieve a visual servoing task, a set of visual features $\mathbf{s}$ has to be selected from visual measurements extracted from the image. A control law is then designed so that these visual features reach a desired value $\mathbf{s}^{*}$ related to the desired state of the system. The control principle is thus to regulate to zero the error vector $\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}$. To build the control law, the knowledge of the so-called *interaction matrix* $\mathbf{L}_{\mathbf{s}}$ is usually required. This matrix links the time variation of $\mathbf{s}$ to the camera instantaneous velocity $\mathbf{v}$:

$$\dot{\mathbf{s}} = \mathbf{L}_{\mathbf{s}}\,\mathbf{v} + \frac{\partial \mathbf{s}}{\partial t},$$

where the term $\partial \mathbf{s}/\partial t$ describes the non-stationary behavior of $\mathbf{s}$. Typically, if we try to ensure an exponential decoupled decrease of the error signal and if we consider the camera velocity as the input of the robot controller, the control law writes as follows:

$$\mathbf{v} = -\lambda\,\widehat{\mathbf{L}_{\mathbf{s}}}^{+}\,\mathbf{e} - \widehat{\mathbf{L}_{\mathbf{s}}}^{+}\,\widehat{\frac{\partial \mathbf{s}}{\partial t}},$$

with $\lambda$ a proportional gain that has to be tuned to minimize the time-to-convergence, $\widehat{\mathbf{L}_{\mathbf{s}}}^{+}$ the pseudo-inverse of a model or an approximation of $\mathbf{L}_{\mathbf{s}}$, and $\widehat{\partial \mathbf{s}/\partial t}$ an estimation of $\partial \mathbf{s}/\partial t$.

The behavior of the closed-loop system is then obtained from the relations above by expressing the time variation of the error:

$$\dot{\mathbf{e}} = -\lambda\,\mathbf{L}_{\mathbf{s}}\widehat{\mathbf{L}_{\mathbf{s}}}^{+}\,\mathbf{e} - \mathbf{L}_{\mathbf{s}}\widehat{\mathbf{L}_{\mathbf{s}}}^{+}\,\widehat{\frac{\partial \mathbf{s}}{\partial t}} + \frac{\partial \mathbf{s}}{\partial t}.$$

As can be seen, visual servoing explicitly relies on the choice of the visual features and then on the related interaction matrix; that is the key point of this approach. Indeed, this choice must be performed very carefully. Especially, an isomorphism between the camera pose and the visual features is required to ensure that the convergence of the control law will lead to the desired state of the system. An optimal choice would result in finding visual features leading to a diagonal and constant interaction matrix and, consequently, to a linear decoupled system for which the control problem is well known. Thereafter, the isomorphism as well as the global stability would be guaranteed. In addition, since the interaction matrix would present no more nonlinearities, a suitable robot trajectory would be ensured.
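
As a toy numerical illustration of this kind of control law: assuming a constant, perfectly known interaction matrix (an idealization real systems never satisfy), applying a velocity proportional to the pseudo-inverse of the matrix times the error makes the feature error decay exponentially. All matrix values and gains below are arbitrary:

```python
import numpy as np

# Assumed (constant, known) interaction matrix: 2 features, 3 velocity inputs.
L = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, -0.1]])
lam = 0.5                          # proportional gain
e = np.array([0.4, -0.3])          # initial feature error s - s*
dt = 0.05                          # integration step

for _ in range(200):
    v = -lam * np.linalg.pinv(L) @ e   # control input (camera velocity)
    e = e + dt * (L @ v)               # closed-loop error dynamics e_dot = L v
```

Because `L` here has full row rank, `L @ pinv(L)` is the identity, so each step multiplies the error by `(1 - lam * dt)` and the error is regulated close to zero.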

However, finding such visual features is a very complex problem, and it is still an open issue. Basically, this problem consists in building the visual features from the nonlinear visual measurements so that the associated interaction matrix becomes diagonal and constant or, at least, as simple as possible.

On the other hand, robust extraction, matching (between the initial and desired measurements) and real-time spatio-temporal tracking (between successive measurements) have to be ensured, but this has proved to be a complex task, as testified by the abundant literature on the subject. Nevertheless, this image processing step is, to date, a necessary one, and it is often considered one of the bottlenecks of the expansion of visual servoing. That is why more and more non-geometric visual measurements are being proposed.

Sparse representation methods aim at finding representations of a signal with a small number of components taken from an over-complete dictionary of elementary functions or vectors. Sparse representations are of interest in a number of applications in physics and signal processing. In particular, they provide a simple characterization of certain families of signals encountered in practice. For example, smooth signals can be shown to have a sparse representation in over-complete Fourier or wavelet dictionaries. More recently, it has been emphasized that the solutions of certain differential equations (*e.g.*, the diffusion or transport equations) have a sparse representation in dictionaries made up of curvelets.

Finding the sparse representation of a signal typically requires solving an under-determined system of equations under the constraint that the solution is composed of the minimum number of non-zero elements. Unfortunately, this problem is known to be NP-hard, and sub-optimal procedures have to be devised to find practical solutions. Among the various algorithms that find approximate solutions, let us mention for example the matching pursuit, orthogonal matching pursuit and basis pursuit algorithms.
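
Orthogonal matching pursuit, mentioned above, is short enough to sketch in full: greedily select the dictionary atom most correlated with the current residual, then re-fit all selected atoms by least squares:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: approximate y as a combination of at
    most `n_nonzero` columns (atoms) of the dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients of all selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

On an orthonormal dictionary the greedy selection is exact, so a 3-sparse signal is recovered perfectly; over-complete dictionaries only enjoy such guarantees under incoherence conditions.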

Choosing appropriate models and fixing hyper-parameters is a tricky and often hidden process in optic-flow estimation. Most of the motion estimators proposed so far generally rely on successive trials and an empirical strategy for fixing the hyper-parameter values and choosing an adequate model. Besides its computational inefficiency, this strategy may produce catastrophic estimates without any relevant feedback for the end-user, especially when motions are difficult to apprehend, as for instance with complex deformations or non-conventional imagery. Imposing hard values on these parameters may also yield poor results when the lighting conditions or the underlying motions differ from those the system has been calibrated with. At the extreme, the estimate may be either too smooth or, at the opposite, exhibit spurious strong motion discontinuities.

Bayesian model selection offers an attractive solution to
this problem. The Bayesian paradigm implicitly requires the
definition of several competing observation and prior
*probabilistic* models. The observation model relates
the motion of the physical system to the spatial and temporal
variations of the image intensity. The prior models define
the spatio-temporal constraints that the motion has to satisfy. Considering these competing models, the Bayesian
theory provides methodologies to select the best model under an objective performance criterion (minimum probability of error, minimum mean square error, etc.). Moreover, due to the
generality of this problem, numerous algorithms and
approximations exist in the literature to implement efficient
and effective practical solutions: Monte-Carlo integration,
mean-field and Laplace approximations, EM algorithm,
graphical models, etc.
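To illustrate the principle on a deliberately simple case, the following sketch selects between two competing Gaussian prior models by comparing their marginal likelihoods (evidence). The scalar model and the variance values are illustrative assumptions, not the group's actual observation or prior models:

```python
import numpy as np

def log_evidence(y, prior_var, noise_var):
    """Log marginal likelihood of i.i.d. data under y_i = x_i + n_i with
    x_i ~ N(0, prior_var) and n_i ~ N(0, noise_var): marginally
    y_i ~ N(0, prior_var + noise_var)."""
    s = prior_var + noise_var
    return float(-0.5 * np.sum(np.log(2 * np.pi * s) + y**2 / s))

# Data drawn under the tight prior; the evidence should prefer it.
rng = np.random.default_rng(1)
noise_var = 0.1
y = rng.normal(0.0, np.sqrt(1.0 + noise_var), size=500)
models = {"tight prior (var=1)": 1.0, "loose prior (var=25)": 25.0}
best = max(models, key=lambda name: log_evidence(y, models[name], noise_var))
```

In this conjugate Gaussian setting the evidence is available in closed form; in the motion estimation problems above it must instead be approximated (Laplace, mean-field, Monte-Carlo integration, etc.).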

By designing new approaches for the analysis of fluid-image sequences, the fluminance group aims at contributing to several application domains of great interest for the community, in which the analysis of complex fluid flows plays a central role. The group focuses mainly on two broad application domains:

Environmental sciences;

Experimental fluid mechanics and industrial flows.

The first application domain concerns all the sciences that aim at observing the biosphere's evolution, such as meteorology, climatology or oceanography, but also remote sensing studies for the monitoring of meteorological events or the consequences of human activities. For all these domains, image analysis is a practical and unique tool to *observe, detect, measure, characterize or analyze* the evolution of physical parameters over a large domain. The design of generic image processing techniques for all these domains might offer practical software tools to measure precisely the evolution of fluid flows for weather forecasting or climatology studies. It might also offer possibilities of close surveillance of human and natural activities in sensitive areas such as forests, river edges and valleys, in order to monitor pollution, floods or fires. The needs in terms of local weather forecasting, risk prevention, or local climate change assessment are becoming crucial for our future. At a more local scale, image sensors may also be of major utility to analyze precisely the effect of air curtains for safe packaging in the agro-industrial sector.

In the domain of **experimental fluid mechanics**, the visualization of fluid flows plays a major role, especially for turbulence studies, since high-frequency imaging has recently become widely available. Together with the analysis of turbulence at different scales, one of the major goals currently pursued by many scientists and engineers consists in studying the ability to manipulate a flow to induce a desired change. This is of huge technological importance to enhance or inhibit mixing in shear flows, improve energetic efficiency or control the physical effects of strain and stresses. This is for instance of particular interest for:

military applications, for example to limit the infra-red signatures of fighter aircraft;

aeronautics and transportation, to limit fuel consumption by controlling drag and lift effects of turbulence and boundary layer behavior;

industrial applications, for example to monitor flowing, melting, mixing or swelling of processed materials, or preserve manufactured products from contamination by airborne pollutants, or in industrial chemistry to increase chemical reactions by acting on turbulence phenomena.


This code computes a dense motion field from two consecutive images. The estimator is expressed as a global energy function minimization. The code enables the choice of different data models and different regularization functionals depending on the targeted application. A generic motion estimator for video sequences or a dedicated motion estimator for fluid flows can be specified. This estimator also allows the users to specify additional correlation-based matching measurements. It also enables the inclusion of a temporal smoothing prior relying on a velocity-vorticity formulation of the Navier-Stokes equation for fluid motion analysis applications. The different variants of this code correspond to research studies that have been published in IEEE Transactions on Pattern Analysis and Machine Intelligence, Experiments in Fluids, IEEE Transactions on Image Processing, and IEEE Transactions on Geoscience and Remote Sensing. The binary of this code can be freely downloaded on the fluid web site http://

This software estimates a stack of 2D horizontal wind fields corresponding to the mesoscale dynamics of atmospheric pressure layers. This estimator is formulated as the minimization of a global energy function. It relies on a vertical decomposition of the atmosphere into pressure layers. This estimator uses pressure data, cloud classification maps and cloud-top pressure maps (or infra-red images). All these images are routinely supplied by the EUMETSAT consortium, which handles the Meteosat and MSG satellite data distribution. The energy function relies on a data model built from the integration of mass conservation on each layer. The estimator also includes a simplified and filtered shallow-water dynamical model as a temporal smoother and a second-order div-curl spatial regularizer. The estimator may also incorporate correlation-based vector fields as additional observations. These correlation vectors are also routinely provided by the EUMETSAT consortium. This code corresponds to research studies published in IEEE Transactions on Geoscience and Remote Sensing. It can be freely downloaded on the fluid web site http://

This software extends the previous 2D version. It allows (for the first time to our knowledge) the recovery of 3D wind fields from satellite image sequences. As with the previous techniques, the atmosphere is decomposed into a stack of pressure layers. The estimation also relies on pressure data, cloud classification maps and cloud-top pressure maps. In order to recover the missing 3D velocity information, physical knowledge on 3D mass exchanges between layers has been introduced in the data model. The corresponding data model appears to be a generalization of the previous data model constructed from a vertical integration of the continuity equation. This research study has recently been accepted for publication in IEEE Transactions on Geoscience and Remote Sensing. A detailed description of the technique can be found in an Inria research report. The binary of this code can be freely downloaded on the fluid web site http://

This code enables the estimation of a low-order representation of a fluid motion field from two consecutive images. The fluid motion representation is obtained using a discretization of the vorticity and divergence maps through regularized Dirac measures. The irrotational and solenoidal components of the motion field are expressed as linear combinations of basis functions obtained through the Biot-Savart law. The coefficient values and the basis function parameters are obtained as the minimizer of a functional relying on an intensity variation model obtained from an integrated version of the mass conservation principle of fluid mechanics. Different versions of this estimator are available. The code, which includes a Matlab user interface, can be downloaded on the fluid web site http://

[In collaboration with G. Artana and P. Minini (Univ. Bueno Aires)]

We have addressed the problem of estimating the motion of fluid flows visualized with the Schlieren technique. Such an experimental visualization system, well known in fluid mechanics, enables the visualization of unseeded flows. It thus allows the capture of phenomena which are impossible to visualize with particle seeding such as natural convection, phonation flow, breath flow, as well as the visualization of large scale structures. Since the resulting images exhibit very low intensity contrasts, classical motion estimation methods based on the brightness constancy assumption (correlation-based approaches, optical flow methods) are inefficient. In order to extract motion fields from these specific images, we have introduced a new energy function composed of i) a specific data model accounting for the fact that the observed luminance is related to the gradient of the fluid density, and ii) a specific constrained div-curl regularization term. The minimization of this energy provides what we believe to be the only existing motion estimator that works properly on Schlieren images.

We have proposed a new multiscale PIV method based on turbulent kinetic energy decay. The technique is based on scaling power laws describing the statistical structure of turbulence. A spatial regularization constrains the solution to behave through scales as a self-similar process via the second-order structure function and a given power law. The real parameters of the power law, corresponding to the distribution of the turbulent kinetic energy decay, have been estimated from a simple hot-wire measurement. The method has been assessed in a turbulent wake flow and in grid turbulence through comparisons with HWA measurements and other PIV approaches. Results have indicated that the present method is superior because it accounts for the whole dynamic range involved in the flows.
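The second-order structure function underlying this kind of regularization can be estimated directly from a velocity signal. The sketch below recovers the known scaling exponent of a synthetic Brownian signal, a toy stand-in for real turbulence data rather than the method described above:

```python
import numpy as np

def structure_function(u, seps):
    """Second-order structure function S2(r) = <(u(x + r) - u(x))^2>
    estimated over all available pairs for each separation r."""
    return np.array([np.mean((u[r:] - u[:-r])**2) for r in seps])

rng = np.random.default_rng(2)
# Brownian motion: its increments scale as S2(r) ~ r^1 (Hurst exponent 1/2).
u = np.cumsum(rng.standard_normal(200000))
seps = np.array([1, 2, 4, 8, 16, 32, 64])
s2 = structure_function(u, seps)
# Fit the power-law exponent zeta_2 in S2(r) ~ r^zeta_2 by log-log regression.
zeta2, _ = np.polyfit(np.log(seps), np.log(s2), 1)
```

In the regularizer above, such a power law is imposed as a constraint across scales rather than merely measured a posteriori.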

We have proposed a novel collaborative motion estimation scheme dedicated to the measurement of velocity in experimental fluid flows through image sequences. The proposed technique satisfies the Navier-Stokes equations and combines the robustness of correlation techniques with the high density of global variational methods. It can be considered either as a reinforcement of fluid-dedicated optical flow methods towards robustness, or as an enhancement of correlation approaches towards dense information. This results in a physics-based technique that is robust with respect to noise and outliers, while providing a dense motion field. The method has been applied on synthetic images and on real experiments in turbulent flows, carried out to allow a thorough comparison with state-of-the-art variational and correlation methods. This work has recently been accepted for publication in the journal Experiments in Fluids.

Our work focuses on the design of new tools for the problem of 3D reconstruction of a turbulent flow motion. This task includes both the study of physically sound models of the observations and the fluid motion, and the design of low-complexity and accurate estimation algorithms. On the one hand, state-of-the-art methodologies such as sparse representations will be investigated for the characterization of the observation and fluid motion models. Sparse representations are well suited to the representation of signals with very few coefficients and therefore offer advantages in terms of computational and storage complexity. On the other hand, the estimation problem will be placed into a probabilistic Bayesian framework. This will allow the use of state-of-the-art inference tools to effectively exploit the strong time-dependence of the fluid motion. In particular, we will investigate the use of the ensemble Kalman filter to devise low-complexity sequential estimation algorithms.

One thesis started on this topic in October 2010, and two new postdocs will study different aspects of these problems.

We have worked on a stochastic interpretation of the motion estimation problem. The usual optical flow constraint equation (which assumes a conservation of the luminance along time), embedded for instance within a Lucas-Kanade estimator, can indeed be seen as the minimization of the variance of a stochastic process under some strong constraints (the luminance as a function of a stochastic process is a martingale with isotropic diffusion). This constraint can be relaxed by imposing weaker assumptions on the luminance function and by introducing anisotropic intensity-based uncertainty assumptions. The amplitudes of these uncertainties are jointly computed with the unknown velocity at each point of the image grid. Different estimators have been designed depending on the various hypotheses assumed for the luminance function. The substitution of our new observation terms into a simple Lucas-Kanade estimator significantly improves the quality of the results (on the basis of optical flow datasets with ground truth available on the web) and leads to a local motion estimator without any parameters. We believe that the definition of a dense estimator based on a similar idea is very promising.
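For reference, the classical Lucas-Kanade estimator that these stochastic observation terms extend can be sketched as a windowed least-squares problem on the optical flow constraint. This is a minimal sketch on a synthetic translating blob; it omits the uncertainty terms discussed above:

```python
import numpy as np

def lucas_kanade(I0, I1, y, x, win=7):
    """Local Lucas-Kanade estimate of the displacement (u, v) at pixel
    (y, x): least-squares solution of Ix*u + Iy*v + It = 0 over a window."""
    Iy, Ix = np.gradient(I0)          # spatial gradients (rows = y axis)
    It = I1 - I0                      # temporal gradient
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: a smooth Gaussian blob translated by one pixel in x.
yy, xx = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / 50.0)
I1 = np.exp(-((xx - 33.0)**2 + (yy - 32.0)**2) / 50.0)
u, v = lucas_kanade(I0, I1, 32, 28)   # probe a point with non-zero gradient
```

The stochastic reformulation replaces the fixed isotropic assumptions implicit in this least-squares system with estimated anisotropic uncertainties.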

Based on scaling laws describing the statistical structure of turbulent motion across scales, we propose a multiscale and non-parametric regularizer for optic-flow estimation. Regularization is achieved by constraining motion increments to behave through scales as the most likely self-similar process given some image data. In a first level of inference, the hard-constrained minimization problem is optimally solved by taking advantage of Lagrangian duality. It results in a collection of first-order regularizers acting at different scales. The optimal regularization hyper-parameters at the different scales are obtained by solving the dual problem, while in a second inference level, the data-model error variance is obtained using a marginalized maximum likelihood estimation framework. In a third level of inference, the most likely self-similar model given the data is optimally selected by maximization of the Bayesian evidence. The motion estimator's accuracy is first evaluated on a synthetic image sequence of simulated bi-dimensional turbulence and then on a real meteorological image sequence. Results obtained with the proposed physics-based approach exceed the best state-of-the-art results. This work has been published in 2009 in several conferences (including the International Conference on Computer Vision (ICCV) 2009, Turbulent Mixing and Beyond (TMB) 2009 and the International Geoscience and Remote Sensing Symposium (IGARSS) 2009). It is currently under review for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), in the meteorological journal Tellus, Series A, and in Experiments in Fluids.

In this work, we have devised proper scale-space representations for the multiscale motion estimation of turbulent flows from images. Decomposing motion on a wavelet basis provides an efficient framework to estimate motion from images while constraining the solution to possess a given regularity and a divergence-free property. In this perspective, we have expanded the motion field on a basis of divergence-free vectorial functions based on bi-orthogonal wavelets. By truncating the expansion, we efficiently solved the optic-flow estimation problem at large scales. To deal with small-scale motion estimation, and therefore to face the well-known optic-flow aperture problem, motion is regularized using high-order regularizers defined simply by varying the number of vanishing moments of the wavelets. Moreover, fractal behaviors of the motion field increments have also been introduced as prior models for regularization. Indeed, decomposing motion on a wavelet basis provides an efficient way to constrain the solution to be self-similar. This is achieved by modeling the decay of the wavelet coefficients through scales.

Bayes' rule provides a convenient framework for motion estimation from image sequences. We rely on a hierarchical modeling linking the image intensity function variable, the motion field variable, hyper-parameters composed of the likelihood and prior model inverse variances and of robust parameters, and finally the observation and prior models. The variable dependence can thus be expressed as a 4-level hierarchy. Applying Bayes' rule to this hierarchy, we obtain three levels of inference, which enable us to obtain, by marginalizing out intermediate variables, a direct dependence of the variable of interest on the image intensity function. Thus, the estimates of the regularization parameters, of the robust parameters associated with semi-quadratic norms of a family of M-estimators, and of the observation and prior models are inferred in a maximum likelihood sense while jointly maximizing the motion field's a posteriori probability. The quality of the method is demonstrated on synthetic and real two-dimensional turbulent flows and on several computer vision scenes of the Middlebury database. This work is currently under review for publication in IEEE Transactions on Image Processing.

We have also worked on new generic algorithms to solve Bayesian estimation problems. We are currently developing a new technique for model selection based on the minimization of the free energy of the system variables. We also study the properties of a new iterative algorithm for the estimation of model hyper-parameters in scenarios with hidden data.

**Contributions:** We have pursued the study of efficient sparse decomposition algorithms. In particular, we have addressed the problem of finding good sparse representations within a probabilistic framework. First, we have shown that one of the standard formulations of this problem, the Lagrangian formulation, can be interpreted as a limit case of a maximum a posteriori (MAP) problem involving Bernoulli-Gaussian variables. Then, we have proposed different tractable implementations of this MAP problem and explained some well-known pursuit algorithms (MP, OMP, StOMP, CoSaMP and SP) as particular cases of the proposed algorithms. Experiments conducted on synthetic data show a good general behavior of the proposed methods. A paper presenting this novel sparse representation method has been presented at the EUSIPCO 2010 conference.

Exploiting this probabilistic framework further, we have then considered the design of *soft* pursuit algorithms. In particular, instead of making hard decisions on the support of the sparse representation and the amplitudes of the non-zero coefficients, our soft procedures iteratively update *probabilities* on these values. The proposed algorithms are designed within the framework of mean-field approximations and resort to the so-called variational Bayes EM algorithm to implement an efficient minimization of a Kullback-Leibler criterion. This contribution has been presented at the international conference ICASSP 2010.

We have investigated a recursive Bayesian filter for tracking velocity fields of fluid flows. The filter combines an Ito diffusion process associated with a 2D vorticity-velocity formulation of the Navier-Stokes equation and discrete image-based reconstruction error measurements. In contrast to usual filters designed for visual tracking problems, our filter combines a continuous law for the description of the vorticity evolution with discrete image measurements. We resort to a Monte-Carlo approximation based on particle filtering. The designed tracker provides a robust and consistent estimation of instantaneous motion fields along the whole image sequence.
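The Monte-Carlo approximation at work here is the bootstrap particle filter. The following sketch applies the predict/weight/resample cycle to a deliberately simple 1D random-walk state rather than a vorticity field:

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(obs, n_part=2000, dyn_std=0.3, obs_std=0.5):
    """Bootstrap particle filter for a 1D random-walk state observed in
    Gaussian noise: predict, weight by likelihood, then resample."""
    particles = rng.normal(0.0, 1.0, n_part)
    estimates = []
    for y in obs:
        # Prediction: propagate each particle through the stochastic dynamics.
        particles = particles + rng.normal(0.0, dyn_std, n_part)
        # Correction: weight particles by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resampling: draw particles proportionally to their weights.
        particles = rng.choice(particles, size=n_part, p=w)
    return np.array(estimates)

# Track a slowly drifting true state from noisy observations.
truth = np.cumsum(rng.normal(0.0, 0.3, 100))
obs = truth + rng.normal(0.0, 0.5, 100)
est = particle_filter(obs)
```

In the actual tracker, the random-walk dynamics are replaced by the stochastic vorticity evolution and the scalar likelihood by an image reconstruction error.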

When the likelihood of the measurement can be modeled as a Gaussian law, we have also investigated the use of the so-called ensemble Kalman filter for fluid tracking problems. This kind of filter, introduced for the analysis of geophysical fluids, is based on the Kalman filter update equation. Nevertheless, unlike the traditional Kalman filtering setting, the covariances of the estimation errors, required to compute the so-called Kalman gain, rely on an ensemble of forecasts. Such a process gives rise to a Monte Carlo approximation for a family of non-linear stochastic filters enabling the handling of state spaces of large dimension. We have recently proposed an extension of this technique that combines sequential importance sampling and the propagation law of an ensemble Kalman filter. This technique leads to an ensemble Kalman filter with improved efficiency. We are currently investigating the introduction of a nonlinear direct image measurement operator within this ensemble Kalman scheme. This modification of the filter provides very promising results on 2D numerical and experimental flows. We are currently assessing its application to oceanic satellite images for the recovery of ocean streams. We are also studying the impact on the stochastic dynamics of turbulent noise defined as self-similar Gaussian random fields, and the introduction of multiscale motion measurements within an incremental ensemble analysis scheme.
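The ensemble Kalman analysis step, in which the Kalman gain is built from ensemble covariances rather than propagated covariance matrices, can be sketched as follows. This is the stochastic variant with perturbed observations, on a toy 3-state system rather than a fluid state:

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_analysis(ensemble, y, H, obs_std):
    """Stochastic ensemble Kalman filter analysis step: the Kalman gain is
    built from ensemble anomalies, and each member assimilates a perturbed
    observation to keep the analysis spread statistically consistent."""
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # obs-space anomalies
    # Ensemble estimates of P H^T and H P H^T + R.
    PHt = X @ HXp.T / (N - 1)
    S = HXp @ HXp.T / (N - 1) + (obs_std ** 2) * np.eye(H.shape[0])
    K = PHt @ np.linalg.inv(S)
    # Perturbed observations, one per ensemble member.
    Y = y[:, None] + rng.normal(0.0, obs_std, (H.shape[0], N))
    return ensemble + K @ (Y - HX)

# Toy example: 3-state system, only the first component is observed.
truth = np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0.0, 0.0]])
ens = truth[:, None] + rng.normal(0.0, 1.0, (3, 500))
y = H @ truth + rng.normal(0.0, 0.1, 1)
ens_a = enkf_analysis(ens, y, H, obs_std=0.1)
```

The observed component's spread collapses towards the observation accuracy, while unobserved components are corrected only through the ensemble cross-covariances.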

This work focuses on the tracking and analysis of convective cloud systems from Meteosat Second Generation images. The highly deformable nature of convective clouds, the complexity of the physical processes involved, but also the partially hidden measurements available from image data make a direct use of conventional image analysis techniques difficult for tasks of detection, tracking and characterization. We face these issues using variational data assimilation tools. Such techniques enable the estimation of an unknown state function according to a given dynamical model and to noisy and incomplete measurements. The system state we use in this study to represent the cloud systems is composed of two nested curves corresponding to the exterior frontiers of the clouds and to the coldest interior parts (the heart) of the convective clouds. Since no reliable simple dynamical model exists for such phenomena at the image grid scale, the dynamics on which we rely has been directly defined from image-based motion measurements and takes into account an uncertainty modeling of the curve dynamics along time. In addition to this assimilation technique, we also investigate how each cell of the recovered cloud system can be labeled and associated to characteristic parameters (birth or death time, mean temperature, velocity, growth, etc.) of great interest for meteorologists.

The complexity of the laws of dynamics governing 3D atmospheric flows, associated with incomplete and noisy observations, makes the recovery of atmospheric dynamics from satellite image sequences very difficult. We addressed the challenging problem of estimating physically sound and time-consistent horizontal motion fields at various atmospheric depths for a whole image sequence. Based on a vertical decomposition of the atmosphere, we proposed a dynamically consistent atmospheric motion estimator relying on a multi-layer dynamic model. This estimator is based on an optimal control scheme with uncertainty terms (weak-constraint variational data assimilation) and is applied to noisy and incomplete pressure difference observations derived from satellite images. The dynamic model is a simplified vorticity-divergence form of a multi-layer shallow-water model. Average horizontal motion fields are estimated for each layer. The performance of the proposed technique has been assessed using synthetic examples and real-world meteorological satellite image sequences. In particular, it is shown that the estimator makes it possible to exploit the finest spatio-temporal image structures and succeeds in characterizing motion at the small spatial scales of the image grid. This work has recently been published in the journal Tellus Series A: Dynamic Meteorology and Oceanography.

One of the possibilities to neglect the influence of some degrees of freedom over the main characteristics of a flow consists in representing it as a sum of K orthonormal spatial basis functions weighted with temporal coefficients. To determine the basis functions of this expansion, one of the usual approaches relies on the Karhunen-Loeve decomposition (referred to as proper orthogonal decomposition – POD – in the fluid mechanics domain). In practice, the spatial basis functions, also called modes, are the eigenvectors of an empirical auto-correlation matrix built from “snapshots” of the considered physical process.
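In practice the modes are conveniently obtained from an SVD of the mean-subtracted snapshot matrix rather than by forming the auto-correlation matrix explicitly. A minimal sketch on synthetic snapshots (the two spatial structures and their random temporal weights are illustrative assumptions):

```python
import numpy as np

def pod_modes(snapshots, k):
    """POD via SVD of the snapshot matrix (columns = mean-subtracted
    snapshots): the left singular vectors are the spatial modes, ordered
    by the fraction of fluctuation energy they capture."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    return U[:, :k], energy[:k], mean

# Synthetic snapshots: two spatial structures with random temporal weights,
# the first one carrying most of the energy.
rng = np.random.default_rng(5)
x = np.linspace(0, 2 * np.pi, 128)
phi1, phi2 = np.sin(x), np.sin(2 * x)
a = rng.standard_normal((2, 200)) * np.array([[3.0], [1.0]])
snaps = np.outer(phi1, a[0]) + np.outer(phi2, a[1])
modes, energy, _ = pod_modes(snaps, 2)
```

The first recovered mode aligns (up to sign) with the dominant structure, and its energy fraction reflects the 9:1 variance ratio of the temporal weights.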

In 2010, we have focused on the case where one does not have direct access to snapshots of the considered physical process. Instead, the POD has to be built from partial and noisy observations of the physical process. Instances of such scenarios include situations where real instantaneous vector-field snapshots are estimated from a sequence of *images*. We have been working on several approaches dealing with this new paradigm. A first approach consists in extending standard penalized motion-estimation algorithms to the case where the sought velocity field is constrained to span a low-dimensional subspace. Giving a probabilistic interpretation to this problem, we have designed novel optimization procedures in the framework of a maximum a posteriori estimation problem. We are currently working on an EM-algorithm implementation of this approach.

In a second approach, we considered the design of the POD as the solution of a minimum mean square estimation problem based on the distribution of the (unknown) velocity field given a sequence of images. This alternative formulation allowed us to explicitly take the uncertainty on the velocity field into account in our optimization process. We are currently working on several practical implementations of this problem, relying on Monte-Carlo integration and Krylov subspaces.

In a third axis, we have studied two variational data assimilation techniques for the estimation of low-order dynamical models for fluid flows. Both methods are built from optimal control recipes and rely on a POD representation associated with a Galerkin projection of the Navier-Stokes equations. The proposed techniques differ in the control variables they involve. The first one introduces a weak dynamical model defined only up to an additional time-dependent uncertainty function, whereas the second one handles a strong dynamical constraint in which the coefficients of the dynamical system constitute the control variables. Both choices correspond to different approximations of the relation between the reduced basis on which the motion field is expressed and the basis components that have been neglected in the reduced-order model construction. The techniques have been assessed on numerical data and in real experimental conditions with noisy Image Velocimetry data.
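The POD-Galerkin construction can be illustrated on a linear toy system: the full dynamics are projected onto the leading POD modes of a snapshot set, and the reduced model is integrated in place of the full one. This is a sketch under simplifying assumptions; the actual work deals with the nonlinear Navier-Stokes equations and control terms:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 50, 3

# Full-order stable linear dynamics x' = A x (A symmetric, negative definite).
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(-np.linspace(0.1, 5.0, n)) @ Q.T

# Snapshots of one full-order trajectory (explicit Euler integration).
dt, steps = 0.01, 400
x = Q[:, 0] + 0.5 * Q[:, 1]
snaps = []
for _ in range(steps):
    snaps.append(x)
    x = x + dt * (A @ x)
S = np.array(snaps).T

# POD basis and Galerkin projection: reduced dynamics a' = (Phi^T A Phi) a.
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :k]
Ar = Phi.T @ A @ Phi

# Integrate the reduced model and lift it back to the full state space.
a = Phi.T @ S[:, 0]
for _ in range(steps - 1):
    a = a + dt * (Ar @ a)
x_rom = Phi @ a
err = np.linalg.norm(x_rom - S[:, -1]) / np.linalg.norm(S[:, 0])
```

Because the trajectory here lies exactly in a low-dimensional invariant subspace, the reduced model reproduces the full one; the assimilation techniques above address precisely the error terms that appear when this is no longer true.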

This work aims at investigating the use of optimal control techniques for the coupling of Large Eddy Simulation (LES) techniques and 2D image data. The aim is to reconstruct a 3D flow from a set of simultaneous time-resolved 2D image sequences visualizing the flow on a set of 2D planes illuminated with laser sheets. This approach will be experimented on shear layer flows and on wake flows generated in the wind tunnel of the Cemagref Rennes. Within this study, we also wish to explore techniques that will allow enriching large-scale dynamical models by the introduction of uncertainty terms or through the definition of subgrid models from the image data. This research theme is related to the issue of turbulence characterization from image sequences. Instead of using predefined turbulence models, we aim here at tuning from the data the values of the coefficients involved in traditional LES subgrid models or, as a longer-term goal, at learning empirical subgrid models directly from image data. An accurate modeling of this term is essential for Large Eddy Simulation, as it models all the unresolved motion scales and their interactions with the large scales.

First tests have been conducted with two-dimensional Direct Numerical Simulations (DNS) of a mixing layer coupled with noisy observations. By controlling the initial condition of the system, the proposed method recovers the unknown state function with good accuracy.

We have proposed a filtering methodology for the visual tracking of closed curves. In contrast to previous works in the literature on this issue, we consider here a curve dynamical model based on a continuous-time evolution law with different noise models. This led us to define three different stochastic differential equations that capture the uncertainty relative to curve motions. This new approach provides a natural understanding of classical level-set dynamics in terms of such uncertainties. These evolution laws have been combined with various color and motion measurements to define probabilistic state space models whose associated Bayesian filters can be handled with particle filters. This ongoing work will be continued with extensive curve tracking experiments and extended to the tracking of other very high dimensional entities such as vector fields and surfaces.

We have addressed the analysis and modelling of non-canonical turbulent mixing layers between a uniform flow and a shear flow. From a parametric study based on bidimensional direct numerical simulations, two mixing layer configurations between a uniform flow and a shear flow have been selected. These two configurations share the same shear flow but have a different uniform flow.

The shear flow was obtained with curved gauze. However, the theoretical shear parameter predicted by the literature differs from the value obtained in experiments. In order to study these discrepancies, the flow through a gauze was studied by particle image velocimetry. This allowed the general modeling of the uniform flow through curved wire gauze, leading to linear mean velocity profiles. From a hot-wire anemometry study of the two flow configurations, it was observed that one flow behaves like a mixing layer whereas the other yields a wake behaviour. The mixing layer shows an increasing turbulent kinetic energy along its longitudinal development, while the wake exhibits an asymmetry. These results have recently been submitted to a journal and a conference.

A novel anemometer has been designed for the simultaneous measurement of velocity and temperature in airflows with a single hot-wire probe. The principle of periodically varying the overheat ratio of the wire has been selected and applied through a tunable electronic chain. Specific methods have been developed for the calibration procedure and the signal processing. The accuracy of the measurements has been assessed by means of Monte-Carlo simulations. The description of this new technique has been published in Measurement Science and Technology in 2010.

A new dynamical calibration technique has been developed for hot-wire probes. The technique permits, in a short time range, the combined velocity, temperature and direction calibration of single and multiple hot-wire probes. The calibration and measurement uncertainties were modeled, simulated and controlled in order to reduce their estimated values. A patent is currently under consideration to protect this new technique. Results obtained with the dynamic calibration have been submitted to a journal.

[In collaboration with G. Artana and P. Minini (Univ. Bueno Aires)]

Selecting directly from images the most likely scaling motion priors enables the recovery of physical quantities related to the energy flux and the flow regularity. Such measurements are of major interest for turbulence studies. In particular, determining the energy flux across scales and characterizing intermittency is very important to assess the relevance of the statistical models proposed for atmospheric turbulence. Although the measurement of flux and atmospheric flow regularity has already been obtained previously using in situ data, it required an important measurement campaign lasting several years, based on sensors placed on airplanes. Therefore, the motion estimation technique described above represents an attractive tool, since it enables the direct estimation of these quantities from a couple of images. A paper concerning an atmospheric turbulence study using Meteosat Second Generation (MSG) images is currently under revision for publication in a journal. Experimental studies of three-dimensional turbulence behind a grid or in the wake of a cylinder have also been published at the conference on Particle Image Velocimetry (PIV) 2009. New experiments for the assessment of turbulence statistical models are currently ongoing in collaboration with the laboratory of fluid mechanics and turbulence scientists in Argentina. They focus on two-dimensional turbulence of soap films visualized with a Schlieren imagery system. The goal of this work is to validate experimentally the theoretical model predicting non-intermittent inverse energy cascades in pure two-dimensional flows.

We focus here on the particular case of the 2D plane Poiseuille flow, i.e. a flow between two infinite parallel planes. The most usual way to control this flow is boundary transpiration: suction and blowing of fluid with zero net mass flow are performed along a wall. Our goal is to control this flow so as to stabilize it at a high Reynolds number ( ), since it is well known that this flow is unstable at such Reynolds number values. Usually, the streamwise shear stress component at a point on the wall is used as the output of the system in order to control it in a closed-loop fashion. We have proposed a vision-based approach to control this flow (see section ), based on optical flow techniques. Theoretical proofs have been presented to show the improvements provided by the vision-based approach over the commonly used output-feedback LQG control based on shear stress measurements, in terms of both state vector estimation and flow control. Indeed, the limitations of output-feedback LQG concern the initialization of the estimator and measurement noise. The initialization issue does not arise in the vision-based approach. In addition, the vision-based approach has been shown to be robust to measurement noise when a large number of flow measurements is available, which is the case in real practical situations. Visual servoing thus proves to be a major improvement over traditional approaches based on observers. This work has recently been submitted to conferences.
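For reference, the output-feedback LQG baseline can be sketched on a toy unstable discrete-time linear system (hypothetical matrices, standing in for a reduced-order flow model, not the actual Poiseuille dynamics): an LQR gain computed from a Riccati equation is combined, via the separation principle, with a steady-state Kalman estimator fed by a single scalar shear-stress-like measurement.

```python
import numpy as np

def dare(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point iteration."""
    P = Q.copy()
    for _ in range(iters):
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
            R + B.T @ P @ B, B.T @ P @ A)
    return P

# Toy unstable plant x_{k+1} = A x_k + B u_k, y_k = C x_k + noise
# (hypothetical values for illustration only).
A = np.array([[1.1, 0.2], [0.0, 0.9]])     # one pole outside the unit circle
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # single shear-stress-like sensor
Q, R = np.eye(2), np.eye(1)
W, V = 0.01 * np.eye(2), 0.01 * np.eye(1)  # process / measurement noise cov.

P = dare(A, B, Q, R)                       # LQR Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
S = dare(A.T, C.T, W, V)                   # dual Riccati for the filter
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

rng = np.random.default_rng(1)
x, xhat = np.array([1.0, -1.0]), np.zeros(2)
for _ in range(200):
    u = -K @ xhat                          # control from the estimated state
    y = C @ x + rng.normal(0, 0.1, 1)
    x = A @ x + B @ u + rng.normal(0, 0.1, 2)
    xpred = A @ xhat + B @ u
    xhat = xpred + L @ (y - C @ xpred)     # Kalman correction
```

The two limitations discussed above appear directly in this sketch: the estimate `xhat` starts far from the true state, and the correction relies on a single noisy scalar `y`; a vision-based scheme replaces that scalar by a dense field of flow measurements.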

This work concerns the Ph.D. of Xuan-Quy Dao and can be seen as an extension of the work carried out by Roméo Tatsambon. The core of this work will concern nonlinear control to increase the stability domain of the closed-loop system. We will also consider the more complex case where the desired state of the flow is not perfectly known. In this case, we will turn the problem into an optimal control problem where a given criterion has to be minimized under the constraint that the state equations of the closed-loop system are satisfied. This criterion may depend on the particular application; however, we will consider generic criteria such as the drag and the lift. An important effort will thus be necessary to express these criteria in the visual feature space. The vorticity will also be considered; since this quantity is simple to express, the corresponding control law will be simple to derive. Of course, in both cases, with or without knowledge of the desired visual features, and according to the chosen visual features, the stability domain of the closed-loop system will be studied.

Most computer vision applications rely on low-level algorithms such as matching, feature tracking, or optical flow computation. Such approaches are based on a relation linking the luminance of a physical point at time *t* to its luminance at a later time *t + dt*, due to the relative motion of the observer with respect to the scene or to other events such as illumination changes. Nevertheless, because of the complexity of this relation, the aforementioned algorithms are most often based on the temporal luminance constancy hypothesis, leading to the well-known *optical flow constraint equation*. However, it is well known that this constraint can easily be violated. In our previous paper, we revisited this constraint and provided a new optical flow constraint. It has been validated in the context of 3D target tracking by visual servoing. Our goal is now to validate it in the context of optical flow estimation, where more complex motions and illumination changes occur.
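To make the classical baseline concrete, the following minimal sketch solves the standard optical flow constraint equation in the least-squares sense over a window (the textbook Lucas-Kanade scheme under brightness constancy; this is the constraint whose violations motivate our work, not the new constraint itself):

```python
import numpy as np

def lucas_kanade(I0, I1, x, y, win=9):
    """Solve the optical flow constraint equation Ix*u + Iy*v + It = 0 in the
    least-squares sense over a window centred at (x, y)."""
    Iy, Ix = np.gradient(I0)        # spatial luminance derivatives
    It = I1 - I0                    # temporal luminance derivative
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth Gaussian blob translated by one pixel along x,
# so brightness constancy holds exactly and the estimate should be near (1, 0).
yy, xx = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
I1 = np.roll(I0, 1, axis=1)
u, v = lucas_kanade(I0, I1, 32, 32)
```

On real sequences with illumination changes, the residual of this least-squares system grows precisely because the constancy hypothesis breaks down, which is the situation the revisited constraint aims to handle.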

*duration 36 months.*

This contract takes place in the context of the joint Microsoft-Inria research laboratory. It involves two other Inria project-teams, Willow and Lear. The project is composed of three fairly disjoint sub-projects. Fluminance contributes to two of them: (i) Mining dynamical remote data with applications in computational ecology and environmental science; (ii) Mining TV broadcasts with applications to sociology. In the first sub-project, we aim at combining various low and mid-level video analysis tools (shot detection, camera motion characterization, visual tracking, object recognition, human action recognition) for the analysis and annotation of human actions and interactions in video segments to assist -and provide data for- studies of consumer trends in commercials, political event coverage in newscasts, and class- and gender-related behavior patterns in situation comedies, for example. In the second one, we aim at designing new tools for the detection of salient changes in multi-temporal satellite images (with application to assessment of natural damages, consequences of climate modifications, and changes caused by human action in urban or natural environments), and, in the longer term, for the detection, identification and tracking of dynamic meteorological events, with application to risk assessment and weather forecasting.

*duration 36 months.*

This ANR project entitled “Spatio-temporal Analysis of deformable structures in Meteosat Second Generation images” aims at developing methods for the analysis of deformable structures in meteorological images. More precisely, within this project we will focus on two meteorological phenomena: convective cells and sea breeze circulation. The first type of cloud system is responsible for dangerous meteorological events such as strong showers, and its monitoring is thus very important. Sea breezes deeply influence the climate of coastal regions, and understanding the daily and seasonal evolution of sea breeze fronts is of great importance for local weather forecasting. The goal of this project will be to propose tools based on appropriate physical evolution laws for the tracking and analysis of these events. This project involves computer vision scientists from different groups, climatologists and meteorologists.

*duration 36 months.*

This project aims at studying, both theoretically and experimentally, the phenomena involved in vocal fold oscillations, towards the elaboration of high-performance vocal production models for application in voice and speech technology. It gathers partners from Brazil (PUCRS, UFF), Argentina (UBA) and France (LIMSI, ICP, IRISA).

*duration 36 months.*

The purpose of this project is to further study ensemble methods, and to develop their use for both assimilation of observations and prediction. Among the specific questions to be studied are the theory of Particle Filters and Ensemble Kalman Filters, the possibility of taking temporal correlation into account in ensemble assimilation, the precise assessment of what can and cannot be achieved in ensemble prediction, and the objective validation of ensemble methods.
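As a concrete illustration of one of the methods under study, here is a minimal stochastic Ensemble Kalman Filter cycle on a toy linear rotation model (hypothetical model and values for illustration only, not one of the project's assimilation configurations):

```python
import numpy as np

def enkf_step(ens, y, H, obs_std, model, rng):
    """One stochastic-EnKF cycle: forecast each member, then update the
    ensemble with perturbed observations."""
    ens = np.array([model(x) for x in ens])             # forecast step
    X = ens - ens.mean(axis=0)                          # state anomalies
    Y = ens @ H.T                                       # predicted observations
    Yp = Y - Y.mean(axis=0)
    Pyy = Yp.T @ Yp / (len(ens) - 1) + obs_std**2 * np.eye(H.shape[0])
    Pxy = X.T @ Yp / (len(ens) - 1)
    K = Pxy @ np.linalg.inv(Pyy)                        # ensemble Kalman gain
    perturbed = y + rng.normal(0, obs_std, (len(ens), H.shape[0]))
    return ens + (perturbed - Y) @ K.T                  # analysis ensemble

# Toy example: track a slowly rotating 2D state from noisy observations
# of its first component only.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = np.array([[1.0, 0.0]])
rng = np.random.default_rng(0)
truth = np.array([1.0, 0.0])
ens = rng.normal(0, 1, (50, 2))                         # initial ensemble
for _ in range(100):
    truth = A @ truth
    y = H @ truth + rng.normal(0, 0.1, 1)
    ens = enkf_step(ens, y, H, 0.1, lambda x: A @ x, rng)
```

The sample covariances `Pxy` and `Pyy` play the role of the exact Kalman covariances; how well they do so in high dimension, and how to account for temporal correlation, are among the questions the project addresses.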

The partners of this project are Laboratoire de Météorologie Dynamique/ENS (leader), Météo-France and three INRIA groups (ALEA, ASPI, FLUMINANCE).

*duration 36 months.*

Changing scale is a well-known topic in physics (geophysics, fluid mechanics and turbulence, theoretical and statistical physics, mechanics, porous media, etc.). It has led to the creation of powerful and sophisticated mathematical tools: renormalization, homogenization, etc. These ideas are also used in numerical analysis (the so-called multigrid approach) for solving partial differential equations efficiently. Data assimilation in geophysics is a set of methods that allow numerical models in large state spaces to be combined optimally with large datasets of observations. At the confluence of these two topics, the goal of this project is to study how to embed the change-of-scales issue (a multiscale point of view) into the framework of geophysical data assimilation, which is a largely unexplored subject.

The partners of this 3-year project are the CEREA/CLIME INRIA group (leader), the LSCE/CEA, and the INRIA groups MOISE and FLUMINANCE.

*duration 48 months.*

The project Geo-FLUIDS focuses on the specification of tools to analyse geophysical fluid flows from image sequences. Geo-FLUIDS aims at providing image-based methods using physically consistent models to extract meaningful features describing the observed flow and to unveil the dynamical properties of this flow. The main targeted application domains concern Oceanography and Meteorology. The project consortium gathers the INRIA research groups FLUMINANCE (leader), CLIME, IPSO and MOISE, together with the “Laboratoire de Météorologie Dynamique” group at ENS Paris, the IFREMER-CERSAT group in Brest, and the METEOFRANCE GMAP group in Toulouse.

The Fluminance group is involved in the French GDR network “Structure de la Turbulence et Mélange”.

The Fluminance group participates in the French GDR network “Contrôle Des Décollements”.


The HURACAN associated team is centered on the analysis and the control of fluid flows from image sequences. The research objectives of this team are organized into two distinct work axes. The first one aims at defining and studying visual servoing techniques for fluid flow control. In addition to the definition of efficient visual servoing schemes, this axis of work gathers research issues related to fluid flow velocity measurement from images and to flow excitation through plasma actuators. The second research axis focuses on the coupling between large-scale representations of geophysical flows and image data. More precisely, it aims at studying means to define, directly from the image sequences, the small-scale terms of the dynamics. This research axis includes the study of the coupling of models and data defined at different scales, problems of multiscale velocity estimation respecting turbulence phenomenological laws, and issues of experimental validation.

*Editorial boards of journals*

E. Mémin is Associate Editor for the Image and Vision Computing Journal.

*Technical program committees of conferences*

E. Mémin: TPC member of SSVM'11.

P. Héas: TPC member of ECCV'10.

Anne Cuzol: TPC member of RFIA'10.

*Ph.D. reviewing*

E. Mémin: Benoit Combes (IRISA, Université de Rennes I), Souopgui (LJK, Université J. Fourier Grenoble), Florent Brunet (LASMEA, Université Blaise Pascal, Clermont Ferrand)

D. Heitz: T.D. Nguyen (INSA Rennes).

*Project reviewing, consultancy, administrative
responsibilities*

C. Collewet is a member of the Cemagref evaluation committee.

C. Collewet is a member of the ecotechnologies department committee.

Master STI “Signal, Telecommunications, Images”, University of Rennes 1 (P. Héas: statistical image analysis).

ESIR, Numerical methods and optimization (C. Herzet)

INSA Rennes, 2nd year Mechanical Engineering (D. Heitz: Fluid Mechanics)

Master Recherche “Informatique”, University of Rennes 1 (E. Mémin: Motion analysis).

DIIC INC, IFSIC, University of Rennes 1 (E. Mémin: Motion analysis)

P. Héas gave an invited talk at the general physics seminar of ENS Lyon on “Turbulence statistical and dynamical priors for inverse modeling of motion in image sequences”.

E. Mémin was invited to give a talk on “Fluid flows analysis from image sequences” at LSIIT, Strasbourg.