<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN" "http://www.w3.org/2002/04/xhtml-math-svg/xhtml-math-svg.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=utf-8"/>
    <title>Project-Team:WILLOW</title>
    <link rel="stylesheet" href="../static/css/raweb.css" type="text/css"/>
    <meta name="description" content="Overall Objectives - Statement"/>
    <meta name="dc.title" content="Overall Objectives - Statement"/>
    <meta name="dc.subject" content=""/>
    <meta name="dc.publisher" content="INRIA"/>
    <meta name="dc.date" content="(SCHEME=ISO8601) 2016-01"/>
    <meta name="dc.type" content="Report"/>
    <meta name="dc.language" content="(SCHEME=ISO639-1) en"/>
    <meta name="projet" content="WILLOW"/>
    <script type="text/javascript" src="https://raweb.inria.fr/rapportsactivite/RA2016/static/MathJax/MathJax.js?config=TeX-MML-AM_CHTML">
      <!--MathJax-->
    </script>
  </head>
  <body>
    <div class="tdmdiv">
      <div class="logo">
        <a href="http://www.inria.fr">
          <img style="vertical-align:bottom; border:none" src="../static/img/icons/logo_INRIA-coul.jpg" alt="Inria"/>
        </a>
      </div>
      <div class="TdmEntry">
        <div class="tdmentete">
          <a href="uid0.html">Project-Team Willow</a>
        </div>
        <span>
          <a href="uid1.html">Members</a>
        </span>
      </div>
      <div class="TdmEntry">Overall Objectives<ul><li class="tdmActPage"><a href="./uid3.html">Statement</a></li></ul></div>
      <div class="TdmEntry">Research Program<ul><li><a href="uid5.html">3D object and scene modeling, analysis, and retrieval</a></li><li><a href="uid7.html">Category-level object and scene recognition</a></li><li><a href="uid8.html">Image restoration, manipulation and enhancement</a></li><li><a href="uid9.html">Human activity capture and classification</a></li></ul></div>
      <div class="TdmEntry">Application Domains<ul><li><a href="uid13.html">Introduction</a></li><li><a href="uid14.html">Quantitative image analysis in science and humanities</a></li><li><a href="uid15.html">Video Annotation, Interpretation, and Retrieval</a></li></ul></div>
      <div class="TdmEntry">
        <a href="./uid17.html">Highlights of the Year</a>
      </div>
      <div class="TdmEntry">New Software and Platforms<ul><li><a href="uid21.html">NetVLAD: CNN architecture for weakly supervised place recognition</a></li><li><a href="uid22.html">Unsupervised learning from narrated instruction videos</a></li><li><a href="uid23.html">ContextLocNet: Context-aware deep network models for weakly supervised localization</a></li><li><a href="uid24.html">Long-term Temporal Convolutions for Action Recognition</a></li></ul></div>
      <div class="TdmEntry">New Results<ul><li><a href="uid26.html">3D object and scene modeling, analysis, and retrieval</a></li><li><a href="uid37.html">Category-level object and scene recognition</a></li><li><a href="uid45.html">Image restoration, manipulation and enhancement</a></li><li><a href="uid48.html">Human activity capture and classification</a></li></ul></div>
      <div class="TdmEntry">Bilateral Contracts and Grants with Industry<ul><li><a href="uid59.html">Facebook AI Research Paris: Weakly-supervised interpretation of image and video data (Inria)</a></li><li><a href="uid60.html">Google: Learning to annotate videos from movie scripts (Inria)</a></li><li><a href="uid61.html">Google: Structured learning from video and natural language (Inria)</a></li><li><a href="uid62.html">MSR-Inria joint lab: Image and video mining for science and humanities (Inria)</a></li></ul></div>
      <div class="TdmEntry">Partnerships and Cooperations<ul><li><a href="uid64.html">National Initiatives</a></li><li><a href="uid66.html">European Initiatives</a></li><li><a href="uid70.html">International Initiatives</a></li><li><a href="uid73.html">International Research Visitors</a></li></ul></div>
      <div class="TdmEntry">Dissemination<ul><li><a href="uid77.html">Promoting Scientific Activities</a></li><li><a href="uid144.html">Teaching - Supervision - Juries</a></li><li><a href="uid177.html">Popularization</a></li></ul></div>
      <div class="TdmEntry">
        <div>Bibliography</div>
      </div>
      <div class="TdmEntry">
        <ul>
          <li>
            <a id="tdmbibentyear" href="bibliography.html">Publications of the year</a>
          </li>
        </ul>
      </div>
    </div>
    <div id="main">
      <div class="mainentete">
        <div id="head_agauche">
          <small><a href="http://www.inria.fr">Inria</a> | <a href="../index.html">Raweb 2016</a> | <a href="http://www.inria.fr/en/teams/willow">Presentation of the Project-Team WILLOW</a> | <a href="http://www.di.ens.fr/willow">WILLOW Web Site</a></small>
        </div>
        <div id="head_adroite">
          <table class="qrcode">
            <tr>
              <td>
                <a href="willow.xml">
                  <img style="vertical-align:bottom; border:none" alt="XML" src="../static/img/icons/xml_motif.png"/>
                </a>
              </td>
              <td>
                <a href="willow.pdf">
                  <img style="vertical-align:bottom; border:none" alt="PDF" src="IMG/qrcode-willow-pdf.png"/>
                </a>
              </td>
              <td>
                <a href="../willow/willow.epub">
                  <img style="vertical-align:bottom; border:none" alt="e-pub" src="IMG/qrcode-willow-epub.png"/>
                </a>
              </td>
            </tr>
            <tr>
              <td/>
              <td>PDF</td>
              <td>e-Pub</td>
            </tr>
          </table>
        </div>
      </div>
      <!--FIN du corps du module-->
      <br/>
      <div class="bottomNavigation">
        <div class="tail_aucentre">
          <a href="./uid1.html" accesskey="P"><img style="vertical-align:bottom; border:none" alt="previous" src="../static/img/icons/previous_motif.jpg"/> Previous | </a>
          <a href="./uid0.html" accesskey="U"><img style="vertical-align:bottom; border:none" alt="up" src="../static/img/icons/up_motif.jpg"/> Home</a>
          <a href="./uid5.html" accesskey="N"> | Next <img style="vertical-align:bottom; border:none" alt="next" src="../static/img/icons/next_motif.jpg"/></a>
        </div>
        <br/>
      </div>
      <div id="textepage">
        <!--DEBUT2 du corps du module-->
        <h2>Section: Overall Objectives</h2>
        <h3 class="titre3">Statement</h3>
        <p>Object recognition —or, in a broader sense, scene understanding—
is the ultimate scientific challenge of computer vision: After 40
years of research, robustly identifying the familiar objects (chair,
person, pet), scene categories (beach, forest, office), and activity
patterns (conversation, dance, picnic) depicted in family pictures,
news segments, or feature films is still beyond the capabilities
of today's vision systems. On the other hand, truly successful object
recognition and scene understanding technology will have a broad
impact in application domains as varied as defense, entertainment,
health care, human-computer interaction, image retrieval and data
mining, industrial and personal robotics, manufacturing, scientific
image analysis, surveillance and security, and transportation.</p>
        <p>Despite the limitations of today's scene understanding technology,
tremendous progress has been made over the past ten years, due
in part to the formulation of object recognition as a statistical
pattern-matching problem. The emphasis is generally on the features
defining the patterns and on the algorithms used to learn and
recognize them, rather than on the representation of object, scene,
and activity categories, or on the integrated interpretation of the
various scene elements. WILLOW complements this approach with an
ambitious research program explicitly addressing the representational
issues involved in object recognition and, more generally, scene
understanding.</p>
        <p>Concretely, our objective is to develop geometric, physical, and
statistical models for all components of the image interpretation
process, including illumination, materials, objects, scenes, and human
activities. These models will be used to tackle fundamental
scientific challenges such as three-dimensional (3D) object and scene
modeling, analysis, and retrieval; human activity capture and
classification; and category-level object and scene recognition. They
will also support applications with high scientific, societal, and/or
economic impact in domains such as quantitative image analysis in
science and humanities; film post-production and special effects; and
video annotation, interpretation, and retrieval.
Machine learning is a key part of our effort, with a balance of
practical work in support of computer vision applications and
methodological research aimed at developing effective algorithms and
architectures.</p>
        <p>WILLOW was created in 2007: It was recognized as an Inria team in
January 2007 and as an official project-team in June 2007. WILLOW
is a joint research team between Inria Paris-Rocquencourt, École
Normale Supérieure (ENS), and the Centre National de la Recherche
Scientifique (CNRS).</p>
        <p>This year we hired two new PhD students: Antoine Miech (Inria) and Ignacio Rocco (Inria).
Alexei Efros (Professor, UC Berkeley, USA) visited WILLOW in May and June together with his postdoc Phillip Isola and PhD student Richard Zhang.
John Canny (Professor, UC Berkeley, USA) visited WILLOW within the framework of Inria's International Chair program.</p>
      </div>
      <!--FIN du corps du module-->
      <br/>
      <div class="bottomNavigation">
        <div class="tail_aucentre">
          <a href="./uid1.html" accesskey="P"><img style="vertical-align:bottom; border:none" alt="previous" src="../static/img/icons/previous_motif.jpg"/> Previous | </a>
          <a href="./uid0.html" accesskey="U"><img style="vertical-align:bottom; border:none" alt="up" src="../static/img/icons/up_motif.jpg"/> Home</a>
          <a href="./uid5.html" accesskey="N"> | Next <img style="vertical-align:bottom; border:none" alt="next" src="../static/img/icons/next_motif.jpg"/></a>
        </div>
        <br/>
      </div>
    </div>
  </body>
</html>
