Project Team Pulsar

Section: New Results

SUP Software Platform

Participants: Julien Gueytat, Leonardo Rocha, Daniel Zullo, François Brémond.

SUP is a software platform developed by the PULSAR team, written in C and C++, for generating activity recognition systems. These systems should be able to perceive, analyze, interpret and understand a 3D dynamic scene observed through a network of sensors.

These activity recognition systems combine algorithms developed by Pulsar members with state-of-the-art computer vision libraries. SUP is disseminated for use in real-world applications requiring high throughput.

SUP is designed as a framework in which several computer vision workflows can be implemented. Currently, the workflow is static for a given application, but our goal is to make it dynamic. A given workflow is the composition of several plugins, each of them implementing an algorithmic step of the video processing chain (e.g. image segmentation, object classification, etc.). The design of SUP allows the selected plugins to be executed at run-time.
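As an illustration of this design, the following C++ sketch shows what such a plugin-based chain could look like. The names (SupPlugin, Segmentation, Classification, run_workflow) and the interface are hypothetical and only meant to convey the idea of composing algorithmic steps that are selected and executed at run-time; they do not correspond to the actual SUP API.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical illustration of a plugin-based processing chain;
    // names and interfaces do not correspond to the actual SUP API.

    struct Frame { /* image data, detected objects, events, ... */ };

    // Each plugin implements one algorithmic step of the chain.
    class SupPlugin {
    public:
        virtual ~SupPlugin() = default;
        virtual std::string name() const = 0;
        virtual void process(Frame& frame) = 0;   // enrich the frame in place
    };

    class Segmentation : public SupPlugin {
    public:
        std::string name() const override { return "segmentation"; }
        void process(Frame&) override { /* foreground/background separation */ }
    };

    class Classification : public SupPlugin {
    public:
        std::string name() const override { return "classification"; }
        void process(Frame&) override { /* label the detected blobs */ }
    };

    // A workflow is simply the ordered composition of the selected plugins,
    // executed at run-time on every frame.
    void run_workflow(const std::vector<std::unique_ptr<SupPlugin>>& workflow,
                      Frame& frame) {
        for (const auto& plugin : workflow) {
            std::cout << "running " << plugin->name() << '\n';
            plugin->process(frame);
        }
    }

    int main() {
        std::vector<std::unique_ptr<SupPlugin>> workflow;
        workflow.push_back(std::make_unique<Segmentation>());
        workflow.push_back(std::make_unique<Classification>());

        Frame frame;
        run_workflow(workflow, frame);   // static workflow for a given application
        return 0;
    }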

During 2011, several tasks were accomplished:

  • A stable packaged release has been made available

  • 3D simulation from a scenario description has been implemented

  • Existing algorithms have been improved in performance and accuracy

  • The Kinect sensor has been added to the supported hardware

Several plugins are available:

  • Two plugins are wrappers around industrial implementations of video processing algorithms (made available by Keeneo). They allow a quick deployment of a video processing chain encompassing image acquisition, segmentation, blob construction, classification and short-term tracking. These algorithms are robust and efficient, but some of them can lack accuracy.

  • Several implementations by Pulsar team members, covering the following fields:

    1. Image acquisition from different types of image files and camera video streams.

    2. Segmentation with shadow removal.

    3. Two classifiers, one based on postures and one on people detection.

    4. Four frame-to-frame trackers (a minimal sketch of the overlap-based one is given after this list), based respectively on:

      1. simple tracking by overlap,

      2. neural networks,

      3. tracking of feature points,

      4. tracking specialized for people in a crowd.

    5. Three scenario recognizers: a generic algorithm allowing probabilities to be expressed on the recognized events; a second one focusing on the recognition of events based on postures; and a third one (see section Extendable Event Recognition algorithm: SED in this document) that takes the complete ontology of the domain as a parameter (e.g. the definition of objects of interest, scenario models, etc.).

    6. 3D animation generation, which produces a virtual 3D animation from the information provided by the different plugins of the processing chain together with the 3D contextual environment.

    7. 3D simulation from a description, which generates a virtual 3D animation from a text file describing the scenario.
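As referenced in item 4 above, the following is a minimal sketch of what a frame-to-frame tracker based on overlap could look like. The data types, the matching rule and the absence of any handling of conflicting matches are simplifying assumptions for illustration and do not reflect the actual SUP implementation.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Minimal sketch of frame-to-frame tracking by bounding-box overlap.
    // Types and matching rule are illustrative, not those used in SUP.

    struct Box { int x, y, w, h; };

    // Area of the intersection of two axis-aligned boxes.
    int overlap(const Box& a, const Box& b) {
        int ix = std::max(0, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
        int iy = std::max(0, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
        return ix * iy;
    }

    struct Track { int id; Box box; };

    // Associate each new detection with the previous track it overlaps most;
    // unmatched detections start new tracks. (A real tracker would also avoid
    // assigning the same previous track to several detections.)
    std::vector<Track> update(const std::vector<Track>& previous,
                              const std::vector<Box>& detections,
                              int& next_id) {
        std::vector<Track> current;
        for (const Box& det : detections) {
            int best = -1, best_area = 0;
            for (size_t i = 0; i < previous.size(); ++i) {
                int area = overlap(previous[i].box, det);
                if (area > best_area) { best_area = area; best = static_cast<int>(i); }
            }
            int id = (best >= 0) ? previous[best].id : next_id++;
            current.push_back({id, det});
        }
        return current;
    }

    int main() {
        int next_id = 0;
        std::vector<Track> tracks;                               // empty at the first frame
        tracks = update(tracks, {{10, 10, 40, 80}}, next_id);    // frame 1
        tracks = update(tracks, {{14, 12, 40, 80}}, next_id);    // frame 2: same person
        std::printf("track id in frame 2: %d\n", tracks[0].id);  // expected: 0
        return 0;
    }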

From a software engineering point of view, the goal is to obtain a flexible, dynamically reconfigurable platform, so that the generated scene understanding systems are autonomous and can adapt to changing environments.

SUP relies on DTK, a generic platform developed by the DREAM service at INRIA Research Center Sophia-Antipolis Méditerranée.

The purpose of DTK is to provide a software infrastructure allowing a new system to be generated by composing plugins, each plugin being an algorithmic step of the whole processing chain. SUP is oriented towards helping developers build activity recognition systems and describe their own scenarios dedicated to specific applications. By relying on the DTK software infrastructure, it becomes possible:

  • To simplify the exchange of algorithms between the different INRIA teams using the DTK.

  • To use the facilities already provided by the DTK for quickly composing existing plugins. Currently a Python interface is operational, and we plan to take advantage of the graphical composer to quickly prototype new workflows, or reconfigure existing ones, for the experiments conducted by the team.

In order to be confident in the results obtained with the SUP platform, a significant effort is devoted to checking:

  • The correct behavior of the platform from a software engineering point of view, i.e. that the functionality of the SUP software is correctly provided and is not broken by modifications.

  • The quality of the algorithm results, using an evaluation tool (see ViSEvAl in this document) which compares and assesses the results obtained by the algorithms against ground truth for several reference videos (a minimal sketch of such a comparison is given at the end of this section).

Both kinds of test are performed on a daily basis and on several hardware/software architectures.
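As mentioned above, the following is a minimal sketch of the kind of comparison such a qualitative evaluation performs, assuming bounding-box ground truth and an intersection-over-union matching criterion; the actual metrics and formats used by ViSEvAl may differ.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Illustrative evaluation against ground truth, assuming bounding-box
    // annotations and an intersection-over-union criterion. The actual metrics
    // used by ViSEvAl may differ; this only conveys the principle.

    struct Box { double x, y, w, h; };

    double iou(const Box& a, const Box& b) {
        double ix = std::max(0.0, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
        double iy = std::max(0.0, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
        double inter = ix * iy;
        return inter / (a.w * a.h + b.w * b.h - inter);
    }

    // Fraction of ground-truth boxes matched by at least one detection (IoU >= 0.5).
    double frame_recall(const std::vector<Box>& ground_truth,
                        const std::vector<Box>& detections) {
        if (ground_truth.empty()) return 1.0;
        int matched = 0;
        for (const Box& gt : ground_truth)
            for (const Box& det : detections)
                if (iou(gt, det) >= 0.5) { ++matched; break; }
        return static_cast<double>(matched) / ground_truth.size();
    }

    int main() {
        std::vector<Box> gt  = {{10, 10, 40, 80}, {200, 50, 30, 70}};
        std::vector<Box> det = {{12, 11, 40, 78}};
        std::printf("recall on this frame: %.2f\n", frame_recall(gt, det)); // 0.50
        return 0;
    }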