Ariana is a joint project-team of INRIA, CNRS, and the University of Nice-Sophia Antipolis, the latter two via the Computer Science, Signals and Systems Laboratory (I3S, UMR 6070) in Sophia Antipolis. It was created in 1998.

The Ariana project-team aims to provide image processing tools that aid the solution of inverse problems arising in a wide range of concrete applications, mainly in Earth observation and cartography (for example, cartographic updating, land management, and agriculture), while at the same time advancing the state of the art in the image processing methods used to construct those tools. Certain applications in biological and medical imaging are also addressed, using the same tools as in remote sensing.

Two joint patents were filed in October 2009 by Galderma and INRIA (Ariana) on hyperspectral imaging for skin pigmentation classification.

Josiane Zerubia was interviewed in July 2009 for a series on IEEE Fellows in Earthzine (an IEEE Foundation publication):
http://

Ting Peng, an Ariana PhD student from 2005 to 2008, was awarded one of the five 2008 IEEE Geoscience and Remote Sensing Society European Best PhD prizes for her work on `New Higher-Order Active Contour Models, Shape Priors, and Multiscale Analysis and their Application to Road Network Extraction from Very High Resolution Satellite Images'. Hers was a joint PhD with LIAMA in Beijing, China. Her thesis advisors were Ian Jermyn and Josiane Zerubia from Ariana, and Veronique Prinet and Baogang Hu from LIAMA. The thesis was partially funded by Thales Alenia Space.

Following a Bayesian methodology as far as possible, probabilistic models are used within the Ariana project-team, as elsewhere, for two purposes: to describe the class of images to be expected from any given scene, and to describe prior knowledge about the scene in the absence of the data. The models used fall into the following three classes.

Markov random fields were introduced to image processing in the 1980s, and were quickly applied to the full range of inverse problems in computer vision. They owe their popularity to their flexible and intuitive nature, which makes them an ideal modelling tool, and to the existence of standard, easy-to-implement algorithms for their solution. In the Ariana project-team, attention is focused on their use in image modelling, in particular of textures; on the development of improved prior models for segmentation; and on lightening the heavy computational load traditionally associated with these techniques, in particular via the study of varieties of hierarchical random fields.

The development of wavelets as an alternative to the pixel and Fourier bases has had a big impact on image processing due to their spatial and frequency localization, and the sparse nature of many types of image data when expressed in these bases. In particular, wavelet bases have opened up many possibilities for probabilistic modelling due to the existence of not one but two natural correlation structures, intra- and inter-scale, leading to adaptive wavelet packet models and tree models respectively. In Ariana, attention is focused on the use of tree models for denoising and deconvolution; adaptive wavelet packet models for texture description; and on the use of complex wavelets for their improved translation invariance and directional selectivity.

One of the grand challenges of computer vision and image processing is the expression and use of prior geometric information. For satellite and aerial imagery, this problem has become increasingly important, as the increasing resolution of the data makes it necessary to model geometric structures that were hitherto invisible. One of the most promising approaches to the inclusion of this type of information is stochastic geometry, which is a new and important line of research in the Ariana project-team. Instead of defining probabilities for different types of image, probabilities are defined for configurations of an indeterminate number of interacting, parameterized objects located in the image. Such probability distributions are called `marked point processes'. For instance, two examples developed in Ariana use interacting cuboids of varying length, width, height, and orientation to model buildings; and interacting line segments of varying length and orientation to model road and other networks.
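As an illustrative sketch of this idea (the disc representation, the names, and the weight `beta` below are our own choices, not a model from the project), a configuration of marked objects can be scored by a Gibbs energy combining a per-object data term with a pairwise overlap penalty:

```python
import math

# Illustrative marked point process: a configuration is a list of discs,
# each with a position (x, y) and a radius mark r.  The (unnormalised)
# Gibbs energy combines a per-object data term with a hard-core-style
# penalty for each overlapping pair.

def overlap(a, b):
    dx, dy = a["x"] - b["x"], a["y"] - b["y"]
    return math.hypot(dx, dy) < a["r"] + b["r"]

def energy(config, data_term, beta=10.0):
    e = sum(data_term(obj) for obj in config)
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            if overlap(config[i], config[j]):
                e += beta          # penalize overlapping objects
    return e
```

A sampler (e.g. RJMCMC or birth-and-death dynamics) would then propose additions, deletions, and perturbations of objects and accept them according to this energy.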

The use of variational models for the regularization of inverse problems in image processing is long-established. Attention in Ariana is focused on the theoretical study of these models and their associated algorithms, and in particular on the Γ-convergence of sequences of functionals and on projection algorithms. Recent research concerns the definition of, and computation in, a function space containing oscillatory patterns, a sort of dual space to BV, which captures the geometry of the image. These variational methods are applied to a variety of problems, for example image decomposition.

In addition to the regularization of inverse problems, variational methods are much used in the modelling of boundaries in images using contours. In Ariana, attention is focused on the use of such models for image segmentation, in particular texture segmentation; on the theoretical study of the models and their associated algorithms, in particular level set methods; and on the incorporation of prior geometric information concerning the regions sought using higher-order active contour energies.

Wavelets are important to variational approaches in two ways. They enter theoretically, through the study of Besov spaces, and they enter practically, in models of texture for segmentation, and in the denoising of the oscillatory parts of images.

One of the most important problems studied in the Ariana project-team is how to estimate the parameters that appear in the models. For probabilistic models, the problem is easily framed, but is not necessarily easy to solve, particularly in the case when it is necessary to extract simultaneously from the data both the information of interest and the parameters. For variational models, there are few methods available, and the problem is consequently more difficult.

These are perhaps the most basic of the applications with which Ariana is concerned, and two of the most studied problems in image processing. Yet progress can still be made in these problems by improving the prior image models used, for example, by using hidden Markov trees of complex wavelets or by decomposing the image into several components. Ariana is also interested in blind deconvolution.

Many applications call for the image domain to be split into pieces, each piece corresponding to some entity in the scene, for example, forest or urban area, and in many cases for these pieces to be assigned the appropriate label. These problems too are long-studied, but there is much progress to be made, in particular in the use of prior geometric information.

As the resolution of remote sensing imagery increases, so the full complexity of the scene comes to the fore. What was once a texture is now revealed to be, for example, an arrangement of individual houses, a road network, or a number of separate trees. Many new applications are created by the availability of this data, but efficient harvesting of the information requires new techniques.

Earth observation and cartography is not solely concerned with 2D images. One important problem is the construction of 3D digital elevation models (DEMs) from high-resolution stereo images produced by satellites or aerial surveys. Synthetic aperture radar (SAR) imagery also carries elevation information, and allows the production of more accurate DEMs thanks to interferometry techniques, for example.

Every day, vast quantities of data are accumulated in remote sensing data repositories, and intelligent access to this data is becoming increasingly problematic. Recently, the problem of retrieval from large unstructured remote sensing image databases has begun to be studied within the project.

The software MAD v1.0 was transferred to Galderma R&D Sophia Antipolis.

The software PHASEBAR V1.0 was transferred to the French Space Agency (CNES).

The software PHASEBAR V1.0 was deposited with the APP in April 2009. It performs line network (in particular, road network) extraction from VHR satellite images, and offers automatic setting of its parameters.

The software MAD v1.0 was deposited with the APP in September 2009. It deals with the classification of skin hyper-pigmentation using multi-spectral images.

The software packages SARDecoder SinglePol v1.0 and SARDecoder DualPol v1.0 were deposited with the APP in October 2009. They classify water, wet, and dry regions in high-resolution single- and dual-polarized Synthetic Aperture Radar (SAR) images. They were developed in collaboration with G. Moser and S. Serpico from the University of Genoa, Italy, and V. Krylov from Moscow State University, Russia.

Two patents were filed jointly with Galderma in October 2009: “Procédé et dispositif d'analyse d'images hyper-spectrales” (B09-3904FR) and “Dispositif et procédé de compensation de relief d'images hyper-spectrales” (B09-3905FR).

In this work, we are developing a C++ library for manipulating marked point processes for image analysis. We have developed routines for models based on discs and ellipses. These models include our previous work on tree detection and flamingo counting. Various kernels for use in defining an RJMCMC scheme to optimize the models are included in the library. We are currently addressing models based on segments and rectangles.

This work was performed in collaboration with Mr. Tamas Blaskovics and Professor Zoltan Kato of the University of Szeged, Hungary.

The phase field higher-order active contour framework for shape modelling developed in Ariana lends itself to a probabilistic interpretation, the phase field energies being taken as the Gibbs energies of a Markov random field (MRF). This opens the way to parameter and model estimation, stochastic algorithms, and much else. Starting from the continuum phase field energy, both the domain and codomain of the phase field must be discretized. Taking advantage of the fact that the phase field assumes the values ±1 except near region boundaries, the codomain can be discretized to binary values. When coupled with spatial discretization on a lattice, a binary Markov random field is produced; standard techniques, such as Gibbs sampling and simulated annealing, can then be applied. Approximate relations between the parameters of the phase field model and those of the binary field are also available, meaning, in particular, that parameter ranges leading to particular shapes being stable, deduced from stability analyses of the continuum model, can be carried over to the MRF.
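A binary MRF of this kind can be sampled with a standard Gibbs sampler. The following minimal sketch assumes an Ising-like binary energy with coupling J and external field h; these are illustrative names, not the phase-field parameters themselves:

```python
import numpy as np

# Gibbs sampler for a binary (+/-1) MRF on a lattice, of the kind obtained
# by discretizing a phase field.  Energy (illustrative parameterization):
#   E(u) = -J * sum_<i,j> u_i u_j - sum_i h_i u_i
# One sweep resamples every site from its local conditional distribution.

def gibbs_sweep(u, J, h, T, rng):
    H, W = u.shape
    for i in range(H):
        for j in range(W):
            s = 0.0                       # sum over the 4-neighbourhood
            if i > 0:     s += u[i - 1, j]
            if i < H - 1: s += u[i + 1, j]
            if j > 0:     s += u[i, j - 1]
            if j < W - 1: s += u[i, j + 1]
            dE = 2.0 * (J * s + h[i, j])  # E(u_ij = -1) - E(u_ij = +1)
            p_plus = 1.0 / (1.0 + np.exp(-dE / T))
            u[i, j] = 1 if rng.random() < p_plus else -1
    return u
```

Repeated sweeps at fixed temperature T sample the Gibbs distribution; decreasing T over sweeps gives simulated annealing.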

Cluster algorithms were compared to the Gibbs sampling used in the work described above. The standard versions of these algorithms do not apply to frustrated models, so instead a generalized clustering algorithm was tested. Surprisingly, this algorithm performed badly, converging very slowly and tending to cluster the whole image. Thus, although theoretically sound, in practice this algorithm is not useful for the MRF models used here. The development of efficient sampling algorithms that better incorporate the correlation structure is the next step.

For energy minimization, simulated annealing and quadratic pseudo-boolean optimization (QPBO) were compared. Frustration means that graph cuts cannot find the global energy minimum; QPBO is an alternative for general binary energies. It labels a subset of lattice nodes, this subset being guaranteed to form part of the global minimum. If the number of unlabelled nodes is small, their labels can be found, *e.g.* by simulated annealing. QPBO worked well for the MRFs used here, but its memory requirements render it impractical for large images; on the other hand, it was faster than simulated annealing. The quality of the results did not vary much between these algorithms, and was similar to that obtained using gradient descent with the phase field model.
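For reference, the simulated-annealing baseline can be sketched as single-flip Metropolis moves under a geometric cooling schedule (the schedule and its constants are illustrative, not those used in the experiments):

```python
import numpy as np

# Single-flip simulated annealing for a generic binary (+/-1) energy.
# The geometric cooling schedule and its constants are illustrative.

def simulated_annealing(energy, u0, rng, T0=2.0, Tmin=1e-3, alpha=0.95):
    u, T = u0.copy(), T0
    e = energy(u)
    while T > Tmin:
        for _ in range(len(u)):
            i = rng.integers(len(u))
            u[i] = -u[i]                 # propose flipping one site
            e_new = energy(u)
            if e_new <= e or rng.random() < np.exp((e - e_new) / T):
                e = e_new                # accept
            else:
                u[i] = -u[i]             # reject: undo the flip
        T *= alpha
    return u, e
```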

One interesting observation is that, in common with algorithms for other NP-hard problems, QPBO's performance undergoes a phase transition as a function of the degree of frustration relative to the external field strength. This is true even when the external field is constant, and when the zero-field ground state is constant. This means that QPBO is of no use in finding the ground states of frustrated systems. This will be studied in detail in future work.

This work was performed in collaboration with Professors Anuj Srivastava and Eric Klassen of Florida State University, USA. It was partly funded by INRIA Associated Team SHAPES.

The `shape space' approach to curves and surfaces embedded in R^n constructs a `shape space' S from a space of parameterized geometrical objects C by quotienting by a group G of `shape-preserving' transformations. An equivariant Riemannian metric g on C is pushed down to S and used to measure shape similarity. Two key questions in this framework are: what should g be, and what coordinates on C simplify the form of g as much as possible?

Regarding g, there is now a body of work that uses a one-parameter family of metrics G_c on C, known as the `elastic metric'. This metric, which measures both changes in curve orientation and stretching, is suitable for many applications. In previous work on the case n = 2, one value of c was singled out as special because it was amenable to analytical treatment. This does not generalize to n > 2, however, and in this work we have elucidated the reason as part of a general answer to the second question. The results are as follows. For n = 2 and all c ≠ 1, the Riemannian curvature of C is zero except at the origin, where it is singular. When c = 1, the singularity disappears and C is isometric to a flat space. For n > 2, the situation is quite different: for c ≠ 1, the curvature of C is nowhere zero, and is again singular at the origin. This explains the failure of the n = 2 treatment to generalize to higher dimensions. On the other hand, for c = 1, C is isometric to a flat space for all n. The choice c = 1 thus leads to simplified calculations for any n, and has been applied, for example, to the extraction of shapes from 2D point clouds, and to modelling the shapes of protein backbones. The figure shows an example of clustering of curves using this metric.
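The kind of simplification obtained at c = 1 can be illustrated with the closely related square-root velocity representation, under which an elastic-type metric between curves in R^n reduces to a flat L² distance. This is a sketch only: it ignores reparameterization and is not necessarily the exact parameterization used above.

```python
import numpy as np

# Square-root velocity sketch: q = c' / sqrt(|c'|).  In these coordinates
# an elastic-type metric between curves in R^n becomes the flat L2 metric
# (reparameterization invariance is ignored in this sketch).

def srvf(curve):
    v = np.gradient(curve, axis=0)                    # discrete velocity
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))

def flat_distance(c1, c2):
    q1, q2 = srvf(c1), srvf(c2)
    return np.sqrt(np.mean(np.sum((q1 - q2) ** 2, axis=1)))
```

In flat coordinates, geodesics are straight lines, so distances and means of curve collections (as needed for clustering) reduce to elementary L² computations.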

For surfaces, the situation is more difficult because of the complexity of Diff(S^2). The generalization of the orientation term in the elastic metric is straightforward, but for the stretching term there are several possibilities. One involves using the unique one-parameter family of ultralocal metrics on the space of Riemannian metrics on a manifold. The next question is which coordinates to use to express this metric.

This study was supported by INRIA Associated Team ODESSA. It was conducted in collaboration with Serguei Komech, IIPT Moscow (Russian Academy of Sciences).

In this work, we address shape classification. A shape S is a convex bounded set in the plane. We consider a basic descriptor D_0(S), defined as the ratio of the volume of the ε-neighbourhood of the shape to the volume of the shape itself. The initial shape is then transformed by a map, parameterized by an angle θ, which extends the shape along the direction θ by a given factor and contracts it along the orthogonal direction by a corresponding factor. We thus obtain a function D(S, θ) for our descriptor (see the figure). A rotation of the shape is equivalent to a shift of this curve, and a reflection of the shape leads to a reflection of the curve. To obtain invariance with respect to rotation and reflection, we therefore compare descriptor curves using a metric that minimizes their distance over all shifts and reflections.

We have tested this metric for shape retrieval using the Kimia database. The results are convincing for discriminating complex shapes.
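On discrete data, the basic neighbourhood-volume descriptor can be sketched as follows for a binary mask; this is a brute-force illustration, and the names `d0` and `eps` are our notation:

```python
import numpy as np

# Brute-force neighbourhood-volume descriptor on a binary mask:
# the area of the eps-neighbourhood of S divided by the area of S.

def d0(mask, eps):
    pts = np.argwhere(mask)                       # pixels belonging to S
    grid = np.argwhere(np.ones_like(mask))        # all pixel centres
    # distance from every pixel to its nearest shape pixel
    d = np.min(np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2),
               axis=1)
    return np.count_nonzero(d <= eps) / np.count_nonzero(mask)
```

Evaluating this ratio after anisotropic stretches over a range of directions yields the descriptor curve to which the shift- and reflection-invariant comparison is applied.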

This Ph.D. was funded by an MESR grant and also supported by the INRIA/FSU Associated Team `SHAPES':
http://

Object detection from optical satellite and aerial images is one of the most important tasks in remote sensing image analysis. The problem arises in many applications, both civilian and military. Nowadays the resolution of aerial images ranges from several tens of centimetres down to several centimetres. At these resolutions, the geometry of objects is clearly visible, and needs to be taken into account in analysing the images.

Stochastic marked point processes (MPP) are known for their ability to include this type of information. Previously, MPP models have been successfully used for object extraction from images of lower resolution, where objects have simplified geometries and were thus represented using simple shapes, *e.g.* discs, ellipses, or rectangles.

This internship was funded by an AAP INRA INRIA contract. It was conducted in collaboration with Pierre Couteron and Christophe Proisy from UMR AMAP/IRD, Montpellier.

In this work, we evaluated the performance of marked point processes for detecting individual trees in highly dense tropical forests, using multiple birth-and-death dynamics. The model was previously defined for evaluating poplar plantations. It consists of a prior avoiding overlapping objects and a data term based on a radiometric distance between the pixels inside a detected crown and the pixels surrounding it. We have evaluated two kinds of data: panchromatic IKONOS images and LIDAR data. The two sets of results are consistent: with both panchromatic and LIDAR data, detection is accurate for the tallest trees defining the canopy, while performance decreases for shorter trees. We are currently working on a fusion process to use the information from both sensors simultaneously. In addition, we evaluated the results with respect to ground truth.
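A radiometric data term of this kind can be sketched as a contrast between a candidate crown disc and its surrounding ring; the statistic below (difference of means normalised by a pooled deviation) is an illustrative choice, not the exact distance used in the model:

```python
import numpy as np

# Illustrative radiometric data term for a candidate tree crown: contrast
# between the pixels inside a disc of radius r and those in a ring around
# it, normalised by a pooled deviation.

def crown_contrast(img, cx, cy, r, ring=2.0):
    yy, xx = np.indices(img.shape)
    d = np.hypot(yy - cy, xx - cx)
    inside = img[d <= r]                       # candidate crown pixels
    around = img[(d > r) & (d <= r + ring)]    # surrounding ring pixels
    denom = np.sqrt(inside.var() + around.var()) + 1e-9
    return (inside.mean() - around.mean()) / denom
```

A birth-and-death sampler would favour discs with high contrast and remove discs whose contrast falls below a threshold.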

This work is being performed in collaboration with Dr. Michel Gauthier-Clerc from La Tour du Valat:
http://

This work addresses the problem of 3D object extraction from hand-held camera images, so we face all the associated effects: object occlusion, shadows, depth, etc. We propose to solve the problem in an object-based framework, *i.e.* using Markov marked point processes (MMPP). We are applying our model first to almost rigid objects, in this case penguins, running tests on images of penguin colonies provided by our ecologist collaborators. The MMPP is based on defining a configuration space of our objects of interest, to which is attached a Gibbs energy function; minimizing this energy function corresponds to detecting the objects of interest. We use the recently developed multiple birth-and-death dynamics for the energy minimization because of its convergence speed. Our approach is based on simulating images of the 3D scene containing our objects (using OpenGL), which by projection give images that we compare with the original image. We have developed two new data terms based on histogram and correlogram distances. Due to the computational complexity of dealing with 3D objects, we are investigating a parallel algorithm based on graph theory, so that we can take advantage of multicore architectures. The projection parameters are a key element of the image simulation step, and have to be estimated from a single image. The figure shows some preliminary results on simulated images. This work will be used in the future to detect and count penguins in Antarctica.

By `camera pose estimation' we mean the problem of determining the position and orientation of a camera with respect to the coordinate frame of the imaged scene. When acquiring this information from external instruments is too expensive for the application of interest, or is simply impossible because the picture is already taken, one must resort to computer vision techniques and use as best as possible the visual data available.

When the images contain artifacts like buildings or other non-natural structures, classical approaches find straight lines, known angles, orthogonalities, or reference points, and use them to invert partially or totally the perspective distortion. Images of natural scenes do not offer such references, and other approaches need to be investigated.

We assume a set of features to be distributed on a planar surface (the world plane) as a Poisson point process, and their positions in the image plane to be known. We then propose an algorithm to recover the pose of the camera in the case of two degrees of freedom (slant angle and distance from the ground). The algorithm is based on the observation that the cell areas of the Voronoi tessellation generated by the points in the image plane provide a reliable sampling of the Jacobian determinant of the perspective transformation, up to a scaling factor: the density of points in the world plane, which we require as input. In the process, we develop a transformation of our input data (the areas of the Voronoi cells) so that they show almost constant variance between locations, and we find analytically a correcting factor that considerably reduces the bias of our estimates. Intensive synthetic simulations show that with a few hundred random points the errors in angle and distance are no more than a few percent. This work will be used in the future to detect and count penguins in Antarctica.

This study was supported by INRIA Associated Team ODESSA and an ECONET project. It was conducted in collaboration with P. Lukashevich, A. Krauchonak, and B. Zalesky from the UIIP in Minsk, E. Zhizhina from the IITP in Moscow, and J.D. Durou from IRIT in Toulouse.

In this project, we aim to reconstruct buildings in 3D from one or several aerial or high resolution satellite images. The main idea is to avoid solving the so-called inverse problem: we simulate configurations of buildings and test them against the data. The generation of configurations is performed using multiple birth-and-death dynamics. A Gibbs point process is defined, including prior information about building configurations. To define the data term, the building configuration is projected into the data plane(s), using models of shading and shadows; this projection is performed using OpenGL for fast 2D rendering of the scene. The data term is based on the consistency of the shadows in the image with the configuration's projection in the image plane, whereas the prior penalizes building overlaps. The preliminary results are encouraging. The next steps consist of refining the data model by embedding information about gradients, and improving the convergence speed by defining proper birth maps for generating new buildings.

The generation of 3D representations of urban environments from aerial and satellite data is a topic of growing interest in image processing and computer vision. Such representations are useful in fields including urban planning, wireless communications, disaster recovery, navigation aids, and computer games. Laser scans have become more popular than multiview aerial/satellite images thanks to the accuracy of their measurements and the decreasing cost of their acquisition. In particular, full-waveform topographic LIDAR constitutes a new kind of laser technology providing valuable information for urban scene analysis.

We study new stochastic models for analysing urban areas from optical and LIDAR data. We aim to construct concrete solutions to both urban object classification (*i.e.* detecting buildings, vegetation, etc.) and the 3D reconstruction of these objects. Probabilistic tools are well adapted to handling such urban objects, which may differ significantly in terms of complexity, diversity, and density within the same scene. In particular, jump-diffusion based samplers offer interesting perspectives for modelling complex interactions between the various urban objects.

This internship was funded by a French Space Agency (CNES) contract.

In order to extract object networks from high resolution remote sensing images, we previously defined a stochastic marked point process model. This model represents a random set of objects identified jointly by their positions in the image and their geometric characteristics. It includes a prior energy term that penalizes overlapping objects (a hard-core process) and a data energy term that quantifies the fit of the process objects to the image. To adapt the proposed model to any image, we introduce an energy weight that must be adjusted according to the processed image. We therefore studied methods for estimating this energy weight parameter. A method based on the SEM (Stochastic Expectation-Maximization) algorithm was proposed, the process likelihood being approximated by the pseudo-likelihood. This method has shown good performance on a simple point process model in which the objects are circular.

In our work, we generalize this estimation procedure to more general geometrical shape models. We introduced an ellipse process to extract tree crowns and flamingos; simulations of the proposed algorithm showed the relevance of the adopted model. We then introduced a rectangle process for extracting building footprints and tents. In addition, we introduced a more complex prior imposing rectangle alignment to improve the accuracy of the results (the buildings of major cities are usually aligned). The figures show the extraction results for different object shapes.

This work was done in collaboration with Dr. Giuseppe Scarpa and Prof. Gianni Poggi from the University of Naples.

The ideas for the work introduced here arise from a former image modelling framework based on the *Hierarchical Multiple Markov Chain* (H-MMC) model, and its related unsupervised algorithm for hierarchical texture-based segmentation, named *Texture Fragmentation and Reconstruction* (TFR). The TFR algorithm aims to detect the different textures in an image using a fragmentation and reconstruction approach: in the first stages of the algorithm, the image is over-segmented to provide a set of elementary regions, whose connected components are described using spectral and contextual features and clustered to form basic texture patterns. Later on, the resulting regions are sequentially merged, according to a suitable metric, to compose texture patterns at higher scales and contextually retrieve the underlying hierarchical image structure.

The TFR algorithm mainly uses spectral features to provide a fine segmentation and to describe interactions among image regions. Its major drawbacks appear in real-world applications where the focus on spectral properties strongly limits the completeness of the scene description: this is the case, in particular, for satellite/aerial images containing a significant quantity of man-made structures (buildings, roads, fields, etc.).

Based on this observation, a new study has begun to include geometrical features in the TFR framework, while keeping the complexity of the overall technique low. The key idea is to provide a geometrical characterization of image elements by means of regular elementary shapes such as curves, rectangles, and ellipses. To this end, we resorted to a simplified marked point process framework that applies to segmentation maps rather than dealing directly with image data.

In this first stage of the work, a fast algorithm for feature extraction based on elementary shapes has been developed. Starting from a segmentation map, each connected component is independently fitted to an elementary shape by applying a marked point process with a simple data likelihood function that weighs the trade-off between the uniformity of the interior labels and the diversity of the exterior labels. To limit the complexity of the process, a prior morphological analysis is performed on the initial map: the skeletons of the connected components are extracted and used to estimate the principal dimensions and orientation of the reference shape. Its centre is also estimated by computing an approximation of the geodesic centre of the original shape. This information is then used to reduce the range over which the marks need to be defined, thus speeding up the process.
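The estimation of a component's centre, orientation, and principal dimensions can be sketched from second-order moments. This is a simplification: the algorithm above uses skeletons and geodesic centres, whereas this illustration uses moments only, and the length formula assumes a roughly rectangular component:

```python
import numpy as np

# Moment-based estimate of a component's centre, orientation, and
# principal dimensions, usable to initialise the marks of a reference
# shape.  The variance of a uniform segment of length L is L^2 / 12,
# hence L = 2 * sqrt(3 * variance).

def shape_marks(mask):
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    centre = pts.mean(axis=0)
    cov = np.cov((pts - centre).T)
    evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
    theta = np.arctan2(evecs[1, 1], evecs[0, 1])    # major-axis direction
    dims = 2.0 * np.sqrt(3.0 * evals[::-1])         # (length, width)
    return centre, theta, dims
```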

This research was partially funded by the French Space Agency (CNES). The data were provided by CNES/DLR (TerraSAR-X images) and the Italian Space Agency (COSMO-SkyMed images).

The classification of high-resolution SAR images represents a challenging problem, whose complexity is due not only to the usual presence of speckle (typical of all SAR data) but especially to the impact of various ground materials and structures on the data statistics. Unlike coarser resolution SAR, high-resolution sensors provide a spatially much more detailed view of the observed scene and allow the effects of different ground materials to be observed. This requires the development of novel classification methods able jointly to model the statistics of such spatially heterogeneous data and effectively exploit the related contextual information.

In this work, we proposed a contextual method that combines the Markov random field (MRF) approach to Bayesian image classification with a finite mixture technique for probability density function (pdf) estimation. We employed the `dictionary-based stochastic expectation maximization' (DSEM) scheme to model the statistics of single polarizations, and then combined the estimated marginals into joint distributions by means of copulas. The novelty of this approach lies in using the finite mixture technique together with copulas for the likelihood term in the MRF classifier, in order to model the spatial-contextual information associated with heterogeneous high-resolution SAR.
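The copula step can be sketched as follows: two marginal pdfs (here illustrative gamma distributions standing in for DSEM mixture estimates) are combined into a joint pdf via a Gaussian copula with correlation rho:

```python
import numpy as np
from scipy.stats import norm, gamma

# Gaussian copula density c(u, v) for correlation rho, and a joint pdf
# built from two arbitrary marginals via Sklar's theorem:
#   p(z1, z2) = c(F1(z1), F2(z2)) * p1(z1) * p2(z2)

def gaussian_copula_density(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)
    det = 1.0 - rho ** 2
    expo = -(rho ** 2 * (x ** 2 + y ** 2) - 2.0 * rho * x * y) / (2.0 * det)
    return np.exp(expo) / np.sqrt(det)

def joint_pdf(z1, z2, marg1, marg2, rho):
    c = gaussian_copula_density(marg1.cdf(z1), marg2.cdf(z2), rho)
    return c * marg1.pdf(z1) * marg2.pdf(z2)
```

With rho = 0 the copula density is identically 1 and the joint pdf factorizes into the product of the marginals, as expected.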

Experimental results with TerraSAR-X and COSMO/SkyMed data showed high accuracy (above 90% correct classification), thus validating both the effectiveness of DSEM when used to model the statistics of TerraSAR-X data, and the accuracy of the proposed MRF-DSEM technique in the classification of high-resolution SAR satellite imagery.

This work was done in collaboration with Dr. Gabriele Moser and Prof. Sebastiano Serpico from DIBE, University of Genoa.

Current and future Earth observation (EO) missions, such as COSMO/SkyMed, Pléiades, and TerraSAR-X, have huge potential as precious information sources for monitoring human settlements and related infrastructures, thanks to their very high resolution (around 1m) and short revisit time (as little as 12–24h). In practice, in order to exploit effectively this potential, powerful processing techniques are needed. This work focuses on the development of advanced image processing and analysis methods, based on pattern recognition and stochastic modelling, as an aid to multi-risk monitoring of infrastructures and urban areas.

First, special attention is devoted to SAR data (VHR COSMO/SkyMed imagery), and to modelling the statistics of high-resolution SAR images. Accurately modelling these statistics is a critical task for most SAR image processing problems, for example denoising, feature extraction, and classification. The modelling is based on the development of probability density estimation techniques. Finite mixtures will be used to model the combination of the probability density components related to distinct materials, while dictionary-based and generalized Gaussian methods will be adopted as flexible models for the statistics of each component. When dealing with images of urban and infrastructure areas, particular attention will be devoted to modelling separately the impulsive components related to point scatterers in the scene and the absolutely continuous components corresponding to distributed scattering entities. These methods were originally developed in the context of medium-resolution SAR, and were recently adapted to and validated on COSMO/SkyMed images. Compared to lower-resolution data, VHR images of urban areas or human settlements provide a spatially much more detailed view of the observed scene. The different materials present in the imaged area (*e.g.* roofs, concrete, water, grass) therefore affect the backscattering coefficient more strongly, yielding more complicated statistics, resulting from the mixture of many unknown components. Novel image classification methods, integrated with the probability density estimation techniques, will then be proposed to map the ground areas affected by a given event (*e.g.* flooded areas or burnt forest areas).
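As a toy illustration of the finite-mixture idea, the following sketch fits a two-component mixture to synthetic amplitude data by expectation-maximization. Plain Gaussian components are used for simplicity; the dictionary-based and generalized Gaussian component models actually used in this work are not reproduced here.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Expectation-maximization for a two-component 1D Gaussian mixture
    (a simplified stand-in for the finite-mixture density estimation)."""
    mu = np.percentile(x, [25, 75]).astype(float)  # deterministic init
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.stack([
            pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

# Synthetic "two-material" amplitudes: two overlapping populations.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.2, 3000), rng.normal(3.0, 0.5, 2000)])
pi, mu, sigma = em_two_gaussians(x)
```

The recovered means should lie close to the two population means (here roughly 1.0 and 3.0), and the mixture weights close to the 3:2 sample proportions.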

Second, SAR data will be used jointly with optical data in order to analyse and effectively exploit the complementary properties of these two remote-sensing data types. Specifically, an approach based on Markov random fields (MRFs) will be used because of its ability to fuse data. In addition, a separate data-fusion approach will be exploited in order to develop multiscale classification methods that extract multiscale features from the available input image(s) (*e.g.* via wavelet transforms) and fuse them in the classification process, so as to achieve both robustness to noise at coarser scales and sensitivity to spatial details at finer scales.
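The multiscale feature extraction step can be illustrated with a minimal Haar-wavelet sketch; the particular wavelet, number of levels, and energy features below are assumptions for illustration, not the method used in the project.

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform: returns the coarse approximation
    and the three detail sub-bands (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4.0
    horiz  = (a + b - c - d) / 4.0
    vert   = (a - b + c - d) / 4.0
    diag   = (a - b - c + d) / 4.0
    return approx, (horiz, vert, diag)

def multiscale_features(img, levels=2):
    """Stack per-pixel detail energies at several scales; such per-scale
    features could then be fused by a classifier."""
    feats = []
    cur = img
    for _ in range(levels):
        cur, (h, v, d) = haar_level(cur)
        energy = h**2 + v**2 + d**2
        # Upsample back to the original grid so features align per pixel.
        rep = img.shape[0] // energy.shape[0]
        feats.append(np.kron(energy, np.ones((rep, rep))))
    return np.stack(feats, axis=-1)

img = np.add.outer(np.arange(16.0), np.arange(16.0))  # toy 16x16 "image"
F = multiscale_features(img, levels=2)
```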

All the proposed methods will be extensively validated via experiments on real COSMO/SkyMed and multi-sensor images. Experimental results will be assessed both qualitatively and quantitatively.

This work is funded by a PACA Region grant in collaboration with Thales Alenia Space, and by INRIA from a contract with the French Space Agency (CNES).

Previous work in the project developed phase field higher-order active contour models for network regions (regions in the image domain consisting of narrow branches joining together at junctions), and applied those models to the extraction of road networks from medium and very high resolution images. These networks are `undirected': the flow in them proceeds in both directions. Many of the networks that appear in applications (*e.g.* river networks in remote sensing, vascular networks in medical imaging) are, however, directed. Each network branch has a `flow direction', and each junction therefore has `incoming' and `outgoing' branches. The existence of such a flow typically changes the geometry of the network, because the flow is often in some sense conserved.

In this work, we have extended our earlier models of undirected networks. The phase field still represents the region corresponding to the network, and still interacts nonlocally so as to favour network configurations. The novel element is a tangent vector field v representing the `flow' through the network branches. The vector field is coupled to the phase field in such a way that it is strongly encouraged to be zero outside the region; to have unit magnitude inside the region; to have zero divergence; and, more weakly, to be smooth. The transition from unit magnitude inside the region to zero magnitude outside, coupled with small divergence, encourages the vector field to be parallel to the region boundary. Both the divergence and smoothness constraints then tend to propagate this parallelism to the interior of network branches. Small divergence and parallelism, coupled with the constraint on the magnitude, aid the prolongation of network branches; allow a larger range of stable widths; control the rate of change of width along a branch; and encourage asymmetric junctions for which the total incoming branch width equals the total outgoing branch width. The figure shows that the directed model outperforms the undirected model for both synthetic and real images.
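The divergence constraint on the flow field can be made concrete with a small numerical sketch (a simplified discrete setting, not the phase field implementation itself): a unit-magnitude flow along a constant-width branch is divergence-free, while a branch that terminates abruptly produces a divergence response at its end point, which the model penalizes.

```python
import numpy as np

def divergence(vx, vy, h=1.0):
    """Discrete divergence of a 2D vector field via central differences."""
    return np.gradient(vx, h, axis=1) + np.gradient(vy, h, axis=0)

n = 64
y, x = np.mgrid[0:n, 0:n]
inside = np.abs(y - n // 2) < 5            # a narrow horizontal branch
vy = np.zeros((n, n))

# Unit flow along a constant-width branch: divergence-free everywhere.
vx = inside.astype(float)
div = divergence(vx, vy)

# The same branch terminating abruptly at mid-image: flow conservation is
# violated at the end point, and the divergence responds there.
vx2 = (inside & (x < n // 2)).astype(float)
div2 = divergence(vx2, vy)
```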

The research of Mikael Carlavan is supported by the ANR under the project `ANR Detectfine' (Laboratory I3S, CNRS/UNSA, and INRIA).

Confocal microscopy is an increasingly popular technique for the 3D imaging of biological specimens. However, the images acquired with this technique are degraded by blur and Poisson noise. Several deconvolution methods have been proposed to reduce these degradations, including the Richardson-Lucy iterative algorithm regularized using a total variation (TV) prior. This gives good results in image restoration, but does not allow the retrieval of fine, oriented details (including textures) in the specimens. For several years, wavelet transforms have been used in image processing to restore the fine details of complex scenes. Recently, several authors have proposed improving the directional resolution of classical wavelets by using a wavelet transform with two trees. We propose to use a recent extension of these dual-tree transforms to the complex case, and to extend the iterative Richardson-Lucy algorithm to this prior. Results on real data are shown in the figure.
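A minimal 1D sketch of the Richardson-Lucy iteration, with an optional total-variation damping term, is given below. It only illustrates the update structure; the actual work operates on 3D confocal stacks with a complex dual-tree wavelet prior, which is not reproduced here.

```python
import numpy as np

def richardson_lucy(obs, psf, n_iter=50, tv_lambda=0.0, eps=1e-12):
    """Richardson-Lucy deconvolution (1D sketch). With tv_lambda > 0 the
    multiplicative update is damped by a total-variation term (simplified
    1D version of the TV-regularized scheme)."""
    est = np.full_like(obs, obs.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode='same')
        ratio = obs / np.maximum(conv, eps)
        correction = np.convolve(ratio, psf_flip, mode='same')
        if tv_lambda > 0:
            # 1D analogue of div(grad u / |grad u|) damping (TV prior)
            grad = np.gradient(est)
            tv_div = np.gradient(grad / np.maximum(np.abs(grad), eps))
            est = est * correction / np.maximum(1 - tv_lambda * tv_div, eps)
        else:
            est = est * correction
    return est

# Toy example: two peaks blurred by a Gaussian PSF, with Poisson noise.
rng = np.random.default_rng(0)
truth = np.zeros(128); truth[40] = 100.0; truth[80] = 60.0
t = np.arange(-7, 8)
psf = np.exp(-t**2 / 8.0); psf /= psf.sum()
obs = rng.poisson(np.convolve(truth, psf, mode='same')).astype(float)
est = richardson_lucy(obs, psf, n_iter=100)
```

After the iterations, the estimate is sharper than the observation, with the dominant peak near its true location.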

This Ph.D. was funded by a CORDI Fellowship and is part of the P2R Franco-Israeli project (2005-2009) [
http://

Three-dimensional (3D) fluorescence microscopy through optical sectioning is a very powerful technique for visualizing biological specimens. In optical sectioning, the microscope objective is focused at different depths of the observed sample. However, the observation process is never perfect, and there is blur, noise, and aberration in the 3D images. Classical approaches that attempt to restore these images assume that the underlying degradation process is known. However, in fluorescence microscopy, the degradation is often specimen-dependent and varies with the imaging conditions. Blind restoration approaches tackle this much more difficult and realistic situation, where the degradation is unknown. Blind deconvolution is an ill-posed, underdetermined problem. For thin specimen imaging, an alternate minimization (AM) approach has been proposed within a Bayesian framework, restoring the lost frequencies beyond the diffraction limit by using regularization on the object and a constraint on the point spread function (PSF). Furthermore, new methods are proposed to learn the free parameters, such as the regularization parameter, which is conventionally set by hand-tuning.
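The alternate-minimization structure can be sketched in 1D as follows. This is a simplified stand-in (plain multiplicative Richardson-Lucy-type updates with non-negativity and sum-to-one constraints on the PSF), not the Bayesian AM method itself.

```python
import numpy as np

def blind_rl(obs, psf_size=15, n_outer=15, n_inner=5, eps=1e-12):
    """Blind deconvolution sketch (1D): alternate multiplicative updates
    of the object and of the PSF, with the PSF constrained to be
    non-negative and to sum to one."""
    est = np.full_like(obs, obs.mean())
    psf = np.ones(psf_size) / psf_size      # flat initial PSF guess
    c = psf_size // 2
    for _ in range(n_outer):
        for _ in range(n_inner):            # object update, PSF fixed
            conv = np.convolve(est, psf, mode='same')
            est = est * np.convolve(obs / np.maximum(conv, eps),
                                    psf[::-1], mode='same')
        for _ in range(n_inner):            # PSF update, object fixed
            conv = np.convolve(est, psf, mode='same')
            ratio = obs / np.maximum(conv, eps)
            corr = np.correlate(ratio, est, mode='full')
            taps = corr[len(est) - 1 - c: len(est) + c]  # PSF-support lags
            psf = np.maximum(psf * taps, 0)
            psf = psf / psf.sum()           # enforce sum-to-one constraint
    return est, psf

rng = np.random.default_rng(0)
truth = np.zeros(128); truth[50] = 200.0; truth[70] = 120.0
t = np.arange(-7, 8)
true_psf = np.exp(-t**2 / 6.0); true_psf /= true_psf.sum()
obs = rng.poisson(np.convolve(truth, true_psf, mode='same')).astype(float)
est, psf_hat = blind_rl(obs)
```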

When imaging deeper sections of the specimen, the approximation of an aberration-free imaging process, as was assumed previously, is no longer good. This is because the refractive index mismatch between the specimen and the immersion medium of the objective lens becomes significant with depth under the cover slip. An additional difference in the optical path is introduced in the emerging wave front of the light due to this difference in index, and the phase aberrations of this wave front are significant. The spherically aberrated (SA) PSF in this case becomes dependent on the axially varying depth. We use geometrical optics to model the refracted wave front phase, and hence the aberrated PSF, and we show for some simulated images that an object's location and its original intensity distribution can be recovered from the observed intensities (*cf.* figure).

This work was done in collaboration with Gilles Aubert from the J. A. Dieudonné Laboratory.

Detecting filaments, *i.e.* open curves in two- and three-dimensional images, is a crucial task in image analysis. In biological images (see figure), a filament may represent a cell membrane whose visibility is compromised by the presence of other structures, such as isolated points, or by noise. Moreover, due to microscope effects, these curves can appear blurred. From these images, biologists wish to obtain an image as close as possible to the original one: an image with high intensity on the thin filaments and zero elsewhere. This is a segmentation problem that cannot be solved by classical detection methods based on the gradient operator, such as active contours or total variation models, since in general filaments in dimension 2 are not closed curves. In particular, the initial image I_{0}, before taking the convolution, is not a special bounded variation (SBV) function, but rather a Radon measure concentrated on lines. This crucial difference makes these singularities hard to detect by means of variational methods. We propose to look at the divergence operator of a suitable vector field as a candidate detector. We make an initial predetection by using the gradient of the solution of a classical Neumann problem with data I, the observed image. We obtain an initial vector field U_{0}, whose divergence copies the singularities we want to detect, but which is still far from being the original image. We retrieve the original filaments by minimizing an energy involving the total variation of the divergence, which allows singularities on thin structures.
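The predetection step can be illustrated with a simplified 2D sketch, in which the divergence of a smoothed gradient field (a crude stand-in for the gradient of the Neumann-problem solution) concentrates on a thin filament; the filament geometry and noise level below are invented for illustration.

```python
import numpy as np

def smooth(img, sigma=1.5):
    """Separable Gaussian smoothing (stand-in for the microscope blur and
    for the regularity of the Neumann-problem solution)."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2)); g /= g.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, out)

# A thin filament (one-pixel-wide line) plus noise.
n = 64
img = np.zeros((n, n))
img[n // 2, 10:54] = 1.0                       # the filament
rng = np.random.default_rng(0)
obs = smooth(img) + 0.01 * rng.standard_normal((n, n))

# Predetection field U0 = gradient of a smoothed potential; its divergence
# (here the Laplacian of the smoothed data) concentrates on the line.
gy, gx = np.gradient(smooth(obs))
div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
row_response = np.abs(div).sum(axis=1)         # strongest on the filament row
```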

This study was partially supported by ANR project Micro-Réseaux. It was performed in collaboration with IMFT (Franck Plouraboué and Hakim El Boustani).

We consider tomographic volumes of the brain vascular network and show that the vascular network exhibits different properties in tumours and in healthy tissue. We compute a tumour segmentation based on these properties. We first compute the skeleton of the binarized vascular network. We then compute the watershed associated with the opposite of a distance transform of this skeleton (see figure). By rejecting the null hypothesis, we have shown that the sizes of the resulting regions in healthy tissue and in tumours are statistically significantly different. In addition, we have shown that the distribution of sizes follows a Pareto distribution. Based on these results, we have defined a binary Markov random field on a graph, where each node represents a given region, and the edges are defined by the connectivity between regions. The data term is defined using region size, whereas a prior term imposes spatial homogeneity of the solution. Optimization, based on a simulated annealing scheme, then provides a segmentation of the tumour area.
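The Pareto modelling of region sizes can be sketched as follows, using the standard maximum-likelihood estimator of the shape parameter; the shape values used for the synthetic data are illustrative, not the measured ones.

```python
import numpy as np

def pareto_mle(sizes, xm=None):
    """Maximum-likelihood estimate of the Pareto shape parameter alpha
    for sizes >= xm (the scale xm defaults to the smallest observation)."""
    sizes = np.asarray(sizes, dtype=float)
    if xm is None:
        xm = sizes.min()
    alpha = len(sizes) / np.sum(np.log(sizes / xm))
    return alpha, xm

# Synthetic region sizes: a healthy-like and a tumour-like population with
# different Pareto shapes (illustrative values only).
rng = np.random.default_rng(0)
healthy = (rng.pareto(3.0, 5000) + 1.0) * 2.0   # xm = 2, alpha = 3
tumour  = (rng.pareto(1.5, 5000) + 1.0) * 2.0   # xm = 2, alpha = 1.5
a_h, _ = pareto_mle(healthy)
a_t, _ = pareto_mle(tumour)
```

The estimated shapes separate the two populations, which is what a per-region data term can exploit.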

This work was partially funded by a contract with Galderma R&D Sophia Antipolis [
http://

The analysis of images of the skin is important for dermatologists to evaluate precisely the evolution of a disease and the efficacy of a treatment. Usually, dermatologists use colour images or select a few bands of interest in multi-spectral images. In this work, we use the whole spectrum of bands in multi-spectral images in order to quantify skin hyper-pigmentation. We compare two types of methods: classification using a support vector machine (SVM), and source separation.

In the literature on multi- and hyper-spectral image classification, use of an SVM is often associated with a data reduction step to avoid the Hughes phenomenon. We show that using data reduction with projection pursuit before SVM classification improves the results for skin hyper-pigmentation. The projection pursuit is computed using the Kullback-Leibler divergence. Moreover, a pre-processing spectral analysis step is applied to partition the spectrum into groups.

For the source separation approach, we use independent component analysis. As shown in the figure, SVM in combination with projection pursuit, and independent component analysis, are the methods which give the best results for skin analysis. The main difference between these two semi-automatic methods is the impact of the operator on the classification. The skin images contained some artefacts due to body shape and lighting in some areas. A method was therefore also introduced to compensate for these effects by subtracting an appropriate near infra-red channel. This work resulted in two joint patent deposits (Galderma/INRIA).
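A toy sketch of a Kullback-Leibler-based band-ranking criterion is given below, as a crude stand-in for the projection pursuit step; the band structure, histogram binning, and class signal are invented for illustration.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler divergence between two histograms."""
    p = p / p.sum(); q = q / q.sum()
    p = np.maximum(p, eps); q = np.maximum(q, eps)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def rank_bands(cube, mask, bins=32):
    """Rank spectral bands by how well they separate the two pixel classes
    (mask = hyper-pigmented vs normal), via a KL-divergence criterion."""
    scores = []
    for b in range(cube.shape[-1]):
        band = cube[..., b]
        lo, hi = band.min(), band.max()
        h1, _ = np.histogram(band[mask], bins=bins, range=(lo, hi))
        h2, _ = np.histogram(band[~mask], bins=bins, range=(lo, hi))
        scores.append(sym_kl(h1.astype(float), h2.astype(float)))
    return np.argsort(scores)[::-1]          # most discriminative first

# Toy multispectral cube: band 2 separates the classes, the others do not.
rng = np.random.default_rng(0)
cube = rng.normal(0.5, 0.1, size=(32, 32, 4))
mask = np.zeros((32, 32), dtype=bool); mask[8:24, 8:24] = True
cube[..., 2][mask] += 0.4                    # class signal in band 2 only
order = rank_bands(cube, mask)
```

The most discriminative band is ranked first; the top-ranked bands could then be fed to the SVM classifier.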

Classification of hyperspectral images of the skin. Contract #3999.

Parameter estimation for marked point processes for object extraction from high resolution satellite images. Contract #2150 part 1.

Higher-order active contours with application to the extraction of networks (roads and rivers) from high resolution satellite images. Contract #2150, part 2.

Modelling of high resolution SAR image statistics. Contract to be numbered at the end of December 2009.

In collaboration with the Creative research group in the I3S Lab of CNRS/UNS (Marc Antonini). Financial support for the Ph.D. of Mikael Carlavan on "Optimisation of the Compression-Restoration chain for satellite images".

Airborne devices for survey and detection. In collaboration with ATE (PI), Dronexplorer, Nexvision, and Coreti. This project has been labelled by the "Pôle Pegase". Accepted in July 2009, it should begin on December 30th, 2009.

Semi-automatic methods for forestry cartography using aerial and high resolution satellite images. Contract #1467.

In collaboration with Thales Alenia Space (Cannes).

In collaboration with Pierre Couteron, Christophe Proisy and Nicolas Bardet from UMR AMAP IRD/INRA (Montpellier).

In collaboration with the J. A. Dieudonné Laboratory of CNRS/UNS (Gilles Aubert, Luis Almeida, David Chiron, Laurence Guillot), the Pasteur Institute (Jean-Christophe Olivo-Marin), and SAGEM DS Argenteuil (Yann Le Guilloux, Daniel Duclos).

In collaboration with Jean-François Aujol from CMLA, ENS Cachan.

In collaboration with IMFT (F. Plouraboue (PI), R. Guibert), CERCO (C. Fonta), and ESRF (P. Cloetens, G. LeDuc, R. Serduc).

In collaboration with the Pasteur Institute (Jean-Christophe Olivo-Marin, Vannary Meas-Yedid), the MIPS laboratory of Université de Haute Alsace (Alain Dieterlen, Bruno Colicchio), the LIGM of Université Paris-Est (Caroline Chaux, Jean-Christophe Pesquet, Hugues Talbot), and INRA Sophia-Antipolis (Gilbert Engler). This project has been labelled by the “pôle Optitec” and the “pôle BioValley”. Web site:
http://

The Ariana project-team was a participant in the European Union Sixth Framework Network of Excellence MUSCLE (Multimedia Understanding through Semantics, Computation and Learning),
contract FP6-507752, in collaboration with 41 other participants around Europe, including four other INRIA project-teams. Web site:
http://

In collaboration with the Pasteur Institute (Jean-Christophe Olivo-Marin [PI]), the Weizmann Institute (Zvi Kam) and the Technion (Arie Feuer [PI]). Web site:
http://

In collaboration with the Vision Group of Florida State University (A. Srivastava (PI), E. Klassen, A. Barbu, and J. Su). Web site:
http://

In collaboration with the Dobrushin Laboratory of the Institute for Information and Transmission Problems of the Russian Academy of Science, Moscow (E. Zhizhina (PI), E. Pechersky, R.
Minlos, S. Komech), the Image Processing and Pattern Recognition Laboratory of the United Institute of Informatics Problems of the National Academy of Science of Belarus, Minsk (B. Zalesky
(PI), P. Lukaskevich, A. Krauchonak), and IRIT, Toulouse (J.D. Durou). Web site:
http://

The members of the Ariana project-team participated actively in GdR ISIS and GdR MSPCV.

Members of the Ariana project-team participated in the Fête de la Science at INRIA in November, and gave talks to 10 to 14 year-old teenagers at the College des Muriers in Cannes.

The Ariana project-team organized numerous seminars in image processing during 2009. Eighteen researchers were invited from the following countries: Austria, Belgium, China, France, Ireland, Italy, Portugal, Spain, and the USA. For more information, see the Ariana project-team web site.

Members of the Ariana project-team participated actively in the visits to INRIA Sophia Antipolis of students from the Grandes Écoles (ENPC, ISAE/Sup'Aéro); helped students of the Classes Préparatoires with TIPE in France; and gave information on remote sensing image processing to high school students in Mauritius.

Csaba Benedek gave seminars at MTA SZTAKI, Budapest, Hungary, in June. He presented his work at the WACV'09 conference, in Snowbird, Utah, USA, in December.

Laure Blanc-Féraud gave a lesson at the summer school on `Inverse Problems in Signal and Image Processing' in Peyresq, France, in 2009. She attended IEEE IGARSS'09, and presented a poster and chaired a session at GRETSI'09.

Mikael Carlavan gave a talk at the conference IGARSS'09, Cape Town, South Africa in July. He also gave a talk at the conference GRETSI'09, Dijon, France in September.

Xavier Descombes visited the Dobrushin Laboratory (IITP, Russian Academy of Science) for two weeks in July. He attended and gave presentations at the EMMCVPR conference in Bonn in August and at the GRETSI conference in Dijon in September. He was an invited speaker in the 5th International Workshop on `Data - Algorithms - Decision Making' in Pilsen in November. He organised several meetings for the Associated Team ODESSA and participated in ANR Micro-Réseaux project meetings. He took part in meetings with the French Space Agency (CNES) in October and December and with Galderma in June and November.

Aymen El Ghoul gave seminars at ENSA Tunis and at Sup'Com, Tunis, in January and April respectively. He presented a paper at the conference EUSIPCO'09 in Glasgow, UK, in August.

Neismon Fahe gave a talk and demo at the INRIA Sophia `Software Days' in March, and made a presentation at the IFN in Nogent-sur-Vernisson in May.

Daniele Graziani presented a paper at the conference ISBI 09 in Boston in June, and gave a talk at the European Conference on Elliptic and Parabolic Problems at Gaeta, Italy, in May.

Ahmed Gamal-Eldin attended a summer school entitled `Stochastic geometry, spatial statistics and random fields', at the Söllerhaus in Hirschegg, Austria. It was organized by The Institute of Stochastics of Ulm University in cooperation with Lomonosov Moscow State University.

Giovanni Gherdovich attended the CNRS summer school `Conception de nouveaux outils mathématiques pour l'analyse d'images et la vision par ordinateur' in Figeac in June. At this meeting he gave a poster presentation. In October he gave a talk at the Maths Department of the University of Nice Sophia-Antipolis.

Ian Jermyn gave a seminar in the School of Systems Engineering at Reading University, UK, in March. He visited the Statistics Department of Florida State University, USA, in March/April. He gave a seminar in the Department of Mathematical Engineering at the Catholic University of Louvain, Belgium, and another in the Laboratoire Jacques-Louis Lions of the Université Pierre et Marie Curie and CNRS, both in April. He visited the Computer Science Department of Nagoya City University and gave a seminar, and then attended ICCV'09 and presented a paper at the NORDIA'09 workshop, in Japan in September/October. He visited the French Space Agency (CNES) in Toulouse in October.

Vladimir Krylov presented a paper at the conference SPIE Electronic Imaging'09, San Jose, California, USA in January and gave a talk at the French Space Agency (CNES) in Toulouse in October.

Maria Kulikova gave a talk at Florida State University (FSU) in February. In April, she presented her work in a `Cross Seminar' at INRIA. She took part in the meetings of the Associated Team ODESSA with Prof. E. Zhizhina from the Institute of Information Transmission Problems (IITP), in July at IITP in Moscow. In December, she gave a talk at the conference SITIS, in Marrakech, Morocco.

Praveen Pankajakshan gave a talk at the `Journée Restauration CCT TSI' at CNES, Toulouse, in January. In April, he presented his work in a `Cross Seminar' at INRIA. He gave a talk at the Pasteur Institute in Paris, in June, and he had a poster at the International Symposium on Biomedical Imaging (ISBI) in Boston, USA, in July.

Josiane Zerubia gave talks at CS and Astrium in Toulouse in January, and also visited Telecom Paris Tech. In February, she gave a talk at TESA in Toulouse and
participated in the CNES `Research and Technology Day' in Labege. In March, she participated in the IGN `Research Days' in Saint Mandé, and presented the work of Ariana to the `Comité de
Pilotage' of I3S/INRIA. In May, she gave a talk at Vega Tech in Toulouse, and attended events for the 20^{th} anniversary of ERCIM and the 50^{th} anniversary of SFPT (held at CNAM), both in Paris. In June, she visited Galderma in Sophia Antipolis. In August, she visited the GIPSA Lab at INP Grenoble. In September, she gave a plenary talk at the ISPRS CMRT'09 Workshop in Paris (talk available at www.

C. Benedek was a reviewer for the journals IEEE TIP, IEEE TGRS, IEEE TCSVT, PRL, IJIST, and the conference ICIP09.

Laure Blanc-Féraud is an associate editor of the “Revue Traitement du Signal”. She also reviews papers for IEEE TIP, IEEE TSP, and JOSA; for the conferences IEEE ICIP, ICASSP, and ISBI, as an associate member of the IEEE BISP TC; and for the conference ACVIS. She reviews proposals for the ANR Programme Blanc, and PEPS projects for CNRS. She reviewed a proposal for a four-year research grant for laboratories at Antwerp University, Belgium. She was a reviewer for two HDR theses and one Ph.D. thesis, and a member of one HDR committee and two Ph.D. committees.

Xavier Descombes was a reviewer for GRETSI'09 and RFIA'09. He was a regular reviewer for IEEE TIP, IEEE TPAMI, IEEE TGRS, Traitement du Signal, and IJRS. He was a reviewer for the programme CIBLE of the Région Rhône-Alpes and for the ANR Programme Blanc. He reviewed two Ph.D. dissertations and took part in a third Ph.D. committee.

A. El Ghoul was a reviewer for the journal MATCOM.

R. Gaetano was a reviewer for the journals IEEE TIP and Elsevier Signal Processing.

F. Lafarge was a reviewer for the journals IEEE TPAMI, IEEE TSP, IJCV, and JPRS.

Daniele Graziani was a reviewer for the journal JMIV.

Ian Jermyn performed a book review for Springer, and was a regular reviewer for IEEE TSP, IEEE TIP, IEEE TPAMI, and Image and Vision Computing. He was a reviewer for the conferences EMMCVPR'09 and ICCV'09. He participated in four Ph.D. committees.

P. Pankajakshan was a reviewer for the journals Elsevier DSP, IET Electronic Letters, and BioTechniques.

Josiane Zerubia was president of one Ph.D. committee, a reviewer for another, and a committee member for two more. She was a regular reviewer for IEEE TGRS, GRS Letters, and SFPT (Revue Française de Photogrammétrie et de Télédétection). She was a reviewer or a programme committee member for GRETSI'09, ACVIS'09, ICASSP'09, ICCV'09, ISBI'09, ICIP'09, EMMCVPR'09 and SPIE-ISPRS'09 (`Image and Signal Processing for Remote Sensing').

Laure Blanc-Féraud is a member of the Steering Committee of GdR ISIS. She is part of the Administrative Council of GRETSI, and is a member of its board. She is a member of the evaluation committee of the ANR DEFI programme and an associate member of the IEEE BISP technical committee. She is a member of CNECA 3 (the equivalent of the CNU for the Ministry of Agriculture). She is a permanent member of the Organizing Committee of the GRETSI Peyresq annual Summer School.

Xavier Descombes is a member of the scientific committee of the `Pôle de compétitivité Optitec', and a member of the strategic committee of PopSud. He is computer systems coordinator for the Ariana project. He is PI of an ECONET project and of the Associated Team ODESSA.

Ian Jermyn is a member of the Comité de Suivi Doctoral at INRIA Sophia Antipolis, and of the International Relations Working Group of the Scientific and Technological Orientation Council of INRIA. He is co-computer systems coordinator for the Ariana project. He was a co-guest editor of the IEEE TPAMI Special Issue on Shape Analysis.

Josiane Zerubia is an IEEE Fellow. She is a member of the Biological Image and Signal Processing Technical Committee and of the Image, Video and Multidimensional Signal
Processing Technical Committee of the IEEE Signal Processing Society. She is an Associate Editor of the collection `Foundations and Trends in Signal Processing' [
http://

Laure Blanc-Féraud gave a course at Poly'Tech Nice-Sophia Antipolis (UNS) in the M2 Informatique (VIMM) and Électronique (TNS) programmes (17h CM). She also taught in the Biocomp Masters programme at Poly'Tech Nice-Sophia Antipolis (UNS) (12h CM).

Xavier Descombes taught `Image analysis' (10h) at Poly'Tech Nice-Sophia, and `Image processing' and `Advanced techniques in space imagery' (20h) at ISAE/SUPAERO.

Giovanni Gherdovich was a teaching assistant for `Image/Compression Project' (16h) and `Introduction to Programming' (36h) at Poly'Tech Nice-Sophia Antipolis.

Ian Jermyn taught `Image analysis' (10h) at Poly'Tech Nice-Sophia Antipolis, and `Advanced techniques in space imagery' (5h) at ISAE/SUPAERO.

Maria Kulikova was a teaching assistant for “Document Creation with (X)HTML/XML/XSLT/LaTeX/Office” for first-year students at École Poly'Tech Nice-Sophia Antipolis (64h).

Josiane Zerubia was director of the module `Deconvolution and denoising in confocal microscopy' for the Masters 2 course BioComp at the University of Nice-Sophia Antipolis (24h, of which 12h taught). She was director of the course `Advanced techniques for space imagery' at ISAE/SUPAERO (40h, of which 20h taught). She also taught 3h of the Masters 1 course on image processing at ENS Lyon/UNS, for which she is responsible for 12 hours.

Mikael Carlavan: `Optimization of the compression/restoration chain for satellite images', University of Nice-Sophia Antipolis. Defence expected in 2012.

Aymen El Ghoul: `Phase fields for the extraction of networks from remote sensing images', University of Nice-Sophia Antipolis. Defence expected in 2010.

Ahmed Gamal-Eldin: `Marked point process models of 3D objects: an application to the counting of King Penguins', University of Nice-Sophia Antipolis. Defence expected in 2011.

Athanasis Georgantas: `Global reconstruction of urban scenes', EDITE, Telecom Paris Tech. Defence expected in 2012.

Giovanni Gherdovich: `Diffusion and birth and death dynamics for the optimization of marked point processes: an application to the counting of King Penguins', University of Nice-Sophia Antipolis. Defence expected in 2011.

Sylvain Prigent: `The contribution of multi- and hyperspectral imaging to skin pigmentation evaluation', University of Nice-Sophia Antipolis. Defence expected in 2012.

Aurélie Voisin: `Development of advanced image-processing and analysis methods as a support to multi-risk monitoring of infrastructures and urban areas', University of Nice-Sophia Antipolis. Defence expected in 2012.

Jia Zhou: 'The contribution of object recognition from forest canopy images to the construction of an allometric theory of the structure of trees and of natural, heterogeneous forests', University of Montpellier 2. Defence expected in 2012.

Alexis Baudour: `Segmentation and deconvolution of 3D images', University of Nice-Sophia Antipolis. Defended on May 18, 2009.

Maria S. Kulikova: `Shape recognition for scene analysis', University of Nice-Sophia Antipolis. Defended on December 16, 2009.

Praveen Pankajakshan: `Blind biological image deconvolution', University of Nice-Sophia Antipolis. Defended on December 15, 2009.
