

Section: New Software and Platforms

Tools for robot learning, control and perception

CARROMAN

Functional Description

This software implements a control architecture for the Meka humanoid robot. It integrates the Stanford Whole Body Control framework into the M3 architecture provided with the Meka robot, and exposes clear, easy-to-use interfaces through the URBI scripting language. The software provides a modular library of control modes and basic skills for manipulating objects and detecting objects and humans, which other research projects can reuse, extend and enhance. An example would be locating a cylindrical object on a table using stereo vision and grasping it using position and force control.

  • Contact: David Filliat
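
The manipulation example above can be pictured as chaining modular skills behind a common interface. The sketch below (in Python, with made-up skill names and values; the actual software uses URBI scripts on top of the M3/Whole Body Control stack) only illustrates the idea:

```python
# Hypothetical sketch of the modular skill library: each skill exposes a
# uniform interface so higher-level scripts can chain perception and control.
class Skill:
    def run(self, state):
        raise NotImplementedError

class LocateObject(Skill):
    """Stand-in for stereo-vision detection of a cylindrical object."""
    def run(self, state):
        state["target"] = (0.4, 0.1, 0.8)  # made-up 3-D position (m)
        return state

class GraspObject(Skill):
    """Stand-in for position/force-controlled grasping of the target."""
    def run(self, state):
        state["grasped"] = state.get("target") is not None
        return state

def execute(skills, state=None):
    """Chain skills, passing a shared state dict between them."""
    state = state or {}
    for skill in skills:
        state = skill.run(state)
    return state

result = execute([LocateObject(), GraspObject()])
```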

DMP-BBO

Black-Box Optimization for Dynamic Movement Primitives

Functional Description

The DMP-BBO Matlab library is a direct consequence of the insight that black-box optimization outperforms reinforcement learning when using policies represented as Dynamic Movement Primitives. It implements several variants of the PIBB algorithm for direct policy search. The dmp_bbo C++ library (https://github.com/stulp/dmpbbo) has been extended to include the "unified model for regression", see Section 7.2.3. The implementations of several of the function approximators have been made real-time compatible.
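
The policy-improvement loop behind PIBB can be sketched as follows. This is a generic illustration with a toy cost in place of DMP rollouts; the function and parameter names are ours, not the library's API:

```python
import numpy as np

def pibb(cost, theta0, n_updates=50, n_samples=10, sigma=0.5, decay=0.95, h=10.0):
    """Minimal PIBB-style black-box policy improvement (a sketch): sample
    perturbed parameter vectors, weight them by exponentiated normalized
    cost, and move the mean toward the weighted average."""
    theta = np.asarray(theta0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(n_updates):
        samples = theta + rng.normal(0.0, sigma, size=(n_samples, theta.size))
        costs = np.array([cost(s) for s in samples])
        # Normalize costs to [0, 1] before exponentiation (avoids scale issues).
        norm = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
        weights = np.exp(-h * norm)
        weights /= weights.sum()
        theta = weights @ samples     # reward-weighted averaging
        sigma *= decay                # anneal exploration
    return theta

# Toy episode cost: squared distance of the parameters to a known target.
target = np.array([1.0, -2.0, 0.5])
theta_opt = pibb(lambda th: float(np.sum((th - target) ** 2)), np.zeros(3))
```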

DyNAMoS

Functional Description

This simulation software comes in the form of a Python module and allows a user to define and simulate complex neural architectures while exploiting the parallelism inherent to modern multi-core processors. A special focus lies on on-line learning, i.e., processing inputs one by one, in contrast to batch processing of whole databases at a time.

  • Participants: Alexander Gepperth and Mathieu Lefort

  • Contact: Mathieu Lefort
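
The on-line, one-input-at-a-time learning style can be illustrated with a minimal stand-in learner. The running-average prototype classifier below is our own toy example, not DyNAMoS code:

```python
# Sketch of on-line learning: the model is updated after every single input,
# rather than after a pass over a whole database.
class OnlinePrototypeClassifier:
    def __init__(self):
        self.proto = {}   # class label -> running mean of inputs seen so far
        self.count = {}

    def update(self, x, label):
        """Incorporate one (input, label) pair immediately (incremental mean)."""
        n = self.count.get(label, 0) + 1
        old = self.proto.get(label, [0.0] * len(x))
        self.proto[label] = [o + (xi - o) / n for o, xi in zip(old, x)]
        self.count[label] = n

    def predict(self, x):
        """Assign the label of the nearest class prototype."""
        def dist(label):
            return sum((a - b) ** 2 for a, b in zip(self.proto[label], x))
        return min(self.proto, key=dist)

clf = OnlinePrototypeClassifier()
for x, y in [([0.0, 0.1], "a"), ([1.0, 0.9], "b"), ([0.1, 0.0], "a")]:
    clf.update(x, y)   # one by one, no batch pass
```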

Multimodal Concept Learning with Non-negative Matrix Factorization

Functional Description

The Python code provides a minimal set of tools and associated libraries to reproduce the experiments in [98], together with the choreography datasets. The code is primarily intended for reproducing the multimodal learning experiment mentioned above. It has already been reused in several experiments by other members of the team and is expected to play an important role in further collaborations. The public availability of the code should also encourage further experimentation by other scientists with data from other domains, thus increasing both the impact of the aforementioned publication and the understanding of the algorithm's behavior.
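
The core factorization can be sketched with plain multiplicative updates. This is a generic illustration of non-negative matrix factorization, not the team's code; in the actual multimodal experiments, the columns of V would concatenate features from several modalities:

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H with
    all factors kept non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy data admitting an exact rank-2 non-negative factorization.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
W, H = nmf(V, k=2)
```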

Explorers

Functional Description

The Explorers framework is aimed at creating, testing and comparing autonomous exploration strategies for sensorimotor spaces in robots. The framework is largely strategy-agnostic: it is designed to express motor babbling, goal babbling and intrinsically motivated exploration algorithms, among others. It can also express strategies that feature transfer learning, such as the reuse algorithm.
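
The contrast between motor babbling and goal babbling can be sketched on a toy one-dimensional sensorimotor system. The mapping, names and parameters below are illustrative only, not the Explorers API:

```python
import random

def f(m):
    """Toy sensorimotor mapping motor -> sensor; effects cluster near 0."""
    return m ** 3

def motor_babbling(n, rng):
    """Sample motor commands uniformly; effects are whatever f gives."""
    return [f(rng.uniform(-1.0, 1.0)) for _ in range(n)]

def goal_babbling(n, rng, memory):
    """Sample *goals* uniformly in sensor space, then execute the remembered
    motor command whose effect is closest to the goal (nearest-neighbor
    inverse model), plus a little motor noise."""
    effects = []
    for _ in range(n):
        goal = rng.uniform(-1.0, 1.0)
        m_near = min(memory, key=lambda mx: abs(f(mx) - goal))
        m = m_near + rng.gauss(0.0, 0.05)
        memory.append(m)
        effects.append(f(m))
    return effects

rng = random.Random(0)
mb = motor_babbling(200, rng)
gb = goal_babbling(200, rng, memory=[-1.0, 0.0, 1.0])
```

On this toy system, goal babbling reaches the hard-to-hit extremes of the sensor space far more often than uniform motor babbling, which is the usual argument for goal-directed exploration.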

Of 3-D point cloud

Functional Description

This software scans the 3-D point cloud of a scene to find objects and match them against a database of known objects. The process consists of three stages: a segmentation step finds the objects in the point cloud, a feature extraction step computes discriminating properties of each object, and a classification stage recognizes the objects.

  • Participants: David Filliat, Alexander Gepperth and Louis-Charles Caron

  • Contact: Alexander Gepperth
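
The three-stage pipeline can be sketched on toy data. The thresholds, features and tiny object database below are illustrative only, not the software's actual methods:

```python
import math

def segment(points, radius=0.5):
    """Segmentation: greedy Euclidean clustering — a point closer than
    `radius` to any member of a cluster joins that cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) < radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def features(cluster):
    """Feature extraction: here just point count and bounding-box height."""
    zs = [p[2] for p in cluster]
    return (len(cluster), max(zs) - min(zs))

def classify(feat, database):
    """Classification: nearest known object in feature space."""
    def dist(name):
        f = database[name]
        return (f[0] - feat[0]) ** 2 + (f[1] - feat[1]) ** 2
    return min(database, key=dist)

database = {"cup": (3, 0.1), "bottle": (3, 0.3)}
scene = [(0, 0, 0.0), (0.05, 0, 0.1), (0, 0.05, 0.3),     # tall object
         (2, 2, 0.0), (2.05, 2, 0.05), (2, 2.05, 0.1)]    # short object
labels = [classify(features(c), database) for c in segment(scene)]
```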

OptiTrack

Functional Description

This Python library allows you to connect to an OptiTrack motion-capture system from NaturalPoint, which tracks 3-D markers efficiently and robustly. With this library, you can connect to the Motive software used by the OptiTrack and retrieve the 3-D position and orientation of all your tracked markers directly from Python.

  • Participant: Pierre Rouanet

  • Contact: Pierre Rouanet
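
Typical usage can be sketched as follows. The class and method names here are hypothetical stand-ins, not the library's real API; in practice the data would be decoded from frames streamed by Motive over the network:

```python
from collections import namedtuple

# Each tracked object yields a 3-D position and an orientation quaternion.
Pose = namedtuple("Pose", ["position", "quaternion"])

class FakeOptiTrackClient:
    """Hypothetical stand-in for a networked client (illustration only)."""
    def __init__(self, addr):
        self.addr = addr

    def tracked_objects(self):
        # A real client would decode streamed motion-capture frames here.
        return {"marker_1": Pose((0.1, 0.2, 1.5), (0.0, 0.0, 0.0, 1.0))}

client = FakeOptiTrackClient(("192.168.0.10", 3883))
pose = client.tracked_objects()["marker_1"]
```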

PEDDETECT

Functional Description

PEDDETECT implements real-time person detection in indoor or outdoor environments. It can grab image data directly from one or several USB cameras, as well as from pre-recorded video streams. It detects multiple persons in 800x600 color images at frame rates above 15 Hz, depending on available GPU power. In addition, it classifies the pose of each detected person into one of four categories: "seen from the front", "seen from the back", "facing left" and "facing right". The software uses advanced feature computation and nonlinear SVM techniques, accelerated through the CUDA interface to GPU programming to achieve high frame rates. It was developed in the context of an ongoing collaboration with Honda Research Institute USA, Inc.

  • Participant: Alexander Gepperth

  • Contact: Alexander Gepperth
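
The underlying sliding-window detection idea can be sketched with a linear scoring model. The real software uses richer features and a nonlinear SVM on the GPU; the toy "model" below is ours:

```python
import numpy as np

def sliding_windows(h, w, win=(8, 8), step=4):
    """Yield top-left corners of all windows fitting in an h x w image."""
    for y in range(0, h - win[0] + 1, step):
        for x in range(0, w - win[1] + 1, step):
            yield y, x

def detect(image, weights, bias, win=(8, 8), step=4, thresh=0.0):
    """Score each window with a linear model; keep windows above threshold."""
    hits = []
    for y, x in sliding_windows(*image.shape, win=win, step=step):
        patch = image[y:y + win[0], x:x + win[1]].ravel()
        if float(patch @ weights) + bias > thresh:
            hits.append((y, x))
    return hits

# Toy image: one bright 8x8 blob at (4, 8); "detector" = mean brightness.
img = np.zeros((16, 24))
img[4:12, 8:16] = 1.0
w = np.full(64, 1.0 / 64)
hits = detect(img, w, bias=0.0, thresh=0.9)
```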

pyStreamPlayer

Functional Description

This Python software is intended to facilitate the application of machine learning algorithms by avoiding the need to work directly with an embodied agent; instead, one works with data recorded from such an agent. Assuming that non-synchronous data from multiple sensors (e.g., camera, Kinect, laser) have been recorded according to a flexible format defined by the pyStreamPlayer architecture, pyStreamPlayer can replay these data while retaining the exact temporal relations between different sensor measurements. As long as the current task does not involve the generation of actions, this software makes it possible to process sensor data as if it were coming from an agent, which is usually considerably easier. At the same time, pyStreamPlayer can replay arbitrary supplementary information, such as object annotations, as if it came from a sensor. In this way, supervision information can be stored and accessed together with sensory measurements through a unified interface.

pyStreamPlayer has been used to facilitate real-world object recognition tasks, and several of the major databases in this field (the CalTech Pedestrian database, the HRI RoadTraffic traffic objects database, the CVC person database and the KITTI traffic objects database) have been converted to the pyStreamPlayer format and now serve as a source of training and test data for learning algorithms.

  • Participant: Alexander Gepperth

  • Contact: Alexander Gepperth
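
The time-ordered replay of several independently recorded streams can be sketched as a merge over timestamped records. The record format and stream names below are illustrative, not pyStreamPlayer's actual format:

```python
import heapq

# Each stream is a time-sorted list of (timestamp, source, payload) records;
# supervision information is replayed exactly like a sensor stream.
camera = [(0.00, "camera", "frame0"), (0.10, "camera", "frame1")]
laser = [(0.02, "laser", "scan0"), (0.05, "laser", "scan1"), (0.12, "laser", "scan2")]
labels = [(0.00, "label", "person")]

def replay(*streams):
    """Yield records from all streams in global time order, preserving the
    temporal relations between sensors."""
    yield from heapq.merge(*streams, key=lambda record: record[0])

timeline = [(src, t) for t, src, _ in replay(camera, laser, labels)]
```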

Aversive++

Functional Description

Aversive++ is a C++ library that eases microcontroller programming. Its aim is to provide an interface simple enough to build complex applications with, yet optimized enough for small microcontrollers to execute them. The library is also multiplatform: it is designed to provide the same API for a simulator (named SASIAE) and for AVR-based and ARM-based microcontrollers.
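
The "one API, several backends" design can be illustrated in Python (the library itself is C++; the classes below only sketch the design idea and are not Aversive++'s API):

```python
# Application code is written once against a common interface; swapping the
# backend (simulator vs. hardware) requires no change to that code.
class Gpio:
    """Common interface every platform backend must provide."""
    def set(self, high):
        raise NotImplementedError

class SimulatedGpio(Gpio):
    """Backend for a simulator: records pin states instead of driving hardware."""
    def __init__(self):
        self.history = []

    def set(self, high):
        self.history.append(high)

def blink_once(pin):
    """Application code, written once against the common Gpio interface."""
    pin.set(True)
    pin.set(False)

pin = SimulatedGpio()
blink_once(pin)
```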