The Inria project PRIMA arrived at the end of its final four-year period on 31 December 2015. A new project team named Pervasive Interaction has been proposed for creation in 2016 to build on the results achieved by PRIMA.
The objective of Project PRIMA is to develop the scientific and technological foundations for human environments that are capable of perceiving, acting, communicating, and interacting with people in order to provide services. The construction of such environments offers a rich set of problems related to interpretation of sensor information, learning, machine understanding, dynamic composition of components and man-machine interaction. Our goal is to make progress on the theoretical foundations for perception and cognition, as well as to develop new forms of man-machine interaction, by using interactive environments as a source of example problems.
An environment is a connected volume of space. An environment is said to be “perceptive” when it is capable of recognizing and describing things, people and activities within its volume. Simple forms of application-specific perception may be constructed using a single sensor. However, to be general purpose and robust, perception must integrate information from multiple sensors and multiple modalities. Project PRIMA creates and develops machine perception techniques fusing computer vision, acoustic perception, range sensing and mechanical sensors to enable environments to perceive and understand humans and human activities.
An environment is said to be “active” when it is capable of changing its internal state. Common forms of state change include regulating ambient temperature, acoustic level and illumination. More innovative forms include context-aware presentation of information and communications, as well as services for cleaning, materials organisation and logistics. The use of multiple display surfaces coupled with location awareness offers the possibility of automatically adapting information display to fit the current activity of groups. The use of activity recognition and acoustic topic spotting offers the possibility to record a log of human to human interaction, as well as to provide relevant information without disruption. The use of steerable video projectors (with integrated visual sensing) offers the possibilities of using any surface for presentation, interaction and communication.
An environment may be considered as “interactive” when it is capable of interacting with humans using tightly coupled perception and action. Simple forms of interaction may be based on observing the manipulation of physical objects, or on visual sensing of fingers, hands or arms. Richer forms of interaction require perception and understanding of human activity and context. PRIMA has developed a novel theory for situation modeling for machine understanding of human activity, based on techniques used in Cognitive Psychology. PRIMA explores multiple forms of interaction, including projected interaction widgets, observation of manipulation of objects, fusion of acoustic and visual information, and systems that model interaction context in order to predict appropriate action and services by the environment.
For the design and integration of systems for perception of humans and their actions, PRIMA has developed:
A framework for context aware services using situation models.
Robust, view invariant techniques for computer vision using local appearance.
A distributed autonomic software architecture for multimodal perceptual systems.
The experiments in project PRIMA are oriented towards developing interactive services for smart environments. Application domains include activity monitoring services for healthy living, smart objects and services for the home, and new forms of man-machine interaction based on perception. Creating interactive services requires scientific progress on a number of fundamental problems, including:
Context Awareness, Smart Spaces
Over the last few years, the PRIMA group has pioneered the use of context aware observation of human activity in order to provide non-disruptive services. In particular, we have developed a conceptual framework for observing and modeling human activity, including human-to-human interaction, in terms of situations.
Encoding activity in situation models provides a formal representation for building systems that observe and understand human activity. Such models provide scripts of activities that tell a system what actions to expect from each individual and the appropriate behavior for the system. A situation model acts as a non-linear script for interpreting the current actions of humans, and predicting the corresponding appropriate and inappropriate actions for services. This framework organizes the observation of interaction using a hierarchy of concepts: scenario, situation, role, action and entity. Situations are organized into networks, with transition probabilities, so that possible next situations may be predicted from the current situation.
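The notion of a situation network with transition probabilities can be sketched as follows; the class, situation names and probabilities are hypothetical, for illustration only.

```python
# Sketch of a situation network: situations are nodes, transition
# probabilities label the edges, and possible next situations are
# predicted from the current one (most probable first).

class SituationNetwork:
    def __init__(self):
        self.transitions = {}  # situation -> {next_situation: probability}

    def add_transition(self, src, dst, prob):
        self.transitions.setdefault(src, {})[dst] = prob

    def predict_next(self, current):
        """Return the possible next situations, most probable first."""
        nxt = self.transitions.get(current, {})
        return sorted(nxt, key=nxt.get, reverse=True)

net = SituationNetwork()
net.add_transition("lecture", "discussion", 0.7)  # illustrative values
net.add_transition("lecture", "break", 0.3)
print(net.predict_next("lecture"))  # ['discussion', 'break']
```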
Current technology allows us to handcraft real-time systems for specific services. The current hard challenge is to create a technology to automatically learn and adapt situation models with minimal or no disruption of human activity. An important current problem for the PRIMA group is the adaptation of Machine Learning techniques for learning situation models that describe the context of human activity.
Context Aware Systems and Services require a model of how humans think and interact with each other and their environment. Relevant theories may be found in the field of cognitive science. Since the 1980s, Philip Johnson-Laird and his colleagues have developed an extensive theoretical framework for human mental models. Johnson-Laird's "situation models" provide a simple and elegant framework for predicting and explaining human abilities for spatial reasoning, game playing strategies, understanding spoken narration, understanding text and literature, social interaction and controlling behavior. While these theories are primarily used to provide models of human cognitive abilities, they are easily implemented in programmable systems.
In Johnson-Laird's Situation Models, a situation is defined as a configuration of relations over entities. Relations are formalized as N-ary predicates such as beside or above. Entities are objects, actors, or phenomena that can be reliably observed by a perceptual system. Situation models provide a structure for organizing assemblies of entities and relations into a network of situations. For cognitive scientists, such models provide a tool to explain and predict the abilities and limitations of human perception. For machine perception systems, situation models provide the foundation for assimilation, prediction and control of perception. A situation model identifies the entities and relations that are relevant to a context, allowing the perception system to focus limited computing and sensing resources. The situation model can provide default information about the identities of entities and the configuration of relations, allowing a system to continue to operate when perception systems fail or become unreliable. The network of situations provides a mechanism to predict possible changes in entities or their relations. Finally, the situation model provides an interface between perception and human centered systems and services. On the one hand, changes in situations can provide events that drive service behavior. At the same time, the situation model can provide a default description of the environment that allows human-centered services to operate asynchronously from perceptual systems.
We have developed situation models based on the notion of a script. A theatrical script provides more than dialog for actors. A script establishes abstract characters that provide actors with a space of activity for expression of emotion. It establishes a scene within which directors can layout a stage and place characters. Situation models are based on the same principle.
A script describes an activity in terms of a scene occupied by a set of actors and props. Each actor plays a role, thus defining a set of actions, including dialog, movement and emotional expressions. An audience understands the theatrical play by recognizing the roles played by characters. In a similar manner, a user service uses the situation model to understand the actions of users. However, a theatrical script is organised as a linear sequence of scenes, while human activity involves alternatives. In our approach, the situation model is not a linear sequence, but a network of possible situations, modeled as a directed graph.
Situation models are defined using roles and relations. A role is an abstract agent or object that enables an action or activity. Entities are bound to roles based on an acceptance test. This acceptance test can be seen as a form of discriminative recognition.
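A minimal sketch of role binding, assuming entities are delivered by a perceptual system as simple records and the acceptance test is an ordinary predicate. All names are illustrative.

```python
# Sketch: a role is an abstract agent or object defined by an acceptance
# test; observed entities that pass the test are bound to the role.

class Role:
    def __init__(self, name, acceptance_test):
        self.name = name
        self.accepts = acceptance_test

    def bind(self, entities):
        """Bind every entity that passes the acceptance test to this role."""
        return [e for e in entities if self.accepts(e)]

# Entities as observed by a (hypothetical) perceptual system.
entities = [
    {"id": "person-1", "speaking": True},
    {"id": "person-2", "speaking": False},
]
speaker = Role("speaker", lambda e: e["speaking"])
print([e["id"] for e in speaker.bind(entities)])  # ['person-1']
```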
There is no generic algorithm capable of robustly recognizing situations from perceptual events coming from sensors. Various approaches have been explored and evaluated; their performance is very problem- and environment-dependent. In order to use several approaches inside the same application, it is necessary to clearly separate the specification of a scenario from the implementation of the program that recognizes it, using a Model Driven Engineering approach. The transformation between a specification and its implementation must be as automatic as possible. We have explored three implementation models:
Synchronized Petri nets. The Petri net structure implements the temporal constraints of the initial context model (Allen operators). The synchronisation controls the Petri net's evolution based on perceived roles and relations. This approach has been used for the Context Aware Video Acquisition application.
Fuzzy Petri Nets. The Fuzzy Petri Net naturally expresses smooth changes of activity states (situations) from one state to another with a gradual and continuous membership function. Each fuzzy situation recognition is interpreted as new evidence for the recognition of the corresponding context. Pieces of evidence are then combined using fuzzy integrals. This approach has been used to label videos with a set of predefined scenarios (contexts).
Hidden Markov Models. This probabilistic implementation of the situation model integrates uncertainty values that can refer both to confidence values for events and to a less rigid representation of situations and situation transitions. This approach has been used to detect interaction groups and to determine who is interacting with whom, and thus which interaction groups are formed.
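The probabilistic implementation can be illustrated with a small Viterbi decoder, treating situations as hidden states and perceptual events as observations. The situations, events and probabilities below are illustrative, not values from our systems.

```python
# Sketch of the HMM view of a situation model: recover the most likely
# sequence of situations from a sequence of perceptual events.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for obs."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1])
                for ps in states
            )
            V[-1][s] = (prob, path + [s])
    return max(V[-1].values())[1]

states = ["presentation", "discussion"]
start_p = {"presentation": 0.8, "discussion": 0.2}
trans_p = {"presentation": {"presentation": 0.7, "discussion": 0.3},
           "discussion": {"presentation": 0.4, "discussion": 0.6}}
emit_p = {"presentation": {"one_speaker": 0.9, "many_speakers": 0.1},
          "discussion": {"one_speaker": 0.2, "many_speakers": 0.8}}
print(viterbi(["one_speaker", "many_speakers"],
              states, start_p, trans_p, emit_p))
# ['presentation', 'discussion']
```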
Currently situation models are constructed by hand. Our challenge is to provide a technology by which situation models may be adapted and extended by explicit and implicit interaction with the user. An important aspect of taking services to the real world is an ability to adapt and extend service behaviour to accommodate individual preferences and interaction styles. Our approach is to adapt and extend an explicit model of user activity. While such adaptation requires feedback from users, it must avoid or at least minimize disruption. We are currently exploring reinforcement learning approaches to solve this problem.
With a reinforcement learning approach, the system is rewarded and punished by user reactions to system behaviours. A simplified stereotypic interaction model ensures an initial behaviour. This prototypical model is adapted to each particular user in a way that maximizes user satisfaction. To minimize distraction, we are using an indirect reinforcement learning approach, in which user actions and consequences are logged, and this log is periodically used for off-line reinforcement learning to adapt and refine the context model.
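The indirect, off-line scheme can be sketched as follows: logged (situation, action, reward) triples are replayed to update action values, assuming a simple running-average update. All names and reward values are hypothetical.

```python
# Sketch of indirect reinforcement learning: user reactions are logged
# and replayed off-line to update action values per situation.

from collections import defaultdict

def learn_from_log(log, alpha=0.1):
    """Off-line update of action values Q[situation][action] from a log."""
    Q = defaultdict(lambda: defaultdict(float))
    for situation, action, reward in log:
        Q[situation][action] += alpha * (reward - Q[situation][action])
    return Q

def best_action(Q, situation):
    """Action with the highest learned value in the given situation."""
    actions = Q[situation]
    return max(actions, key=actions.get)

# Rewards: +1 when the user accepts the system behaviour, -1 when rejected.
log = [("watching_tv", "dim_lights", 1),
       ("watching_tv", "mute_phone", 1),
       ("watching_tv", "dim_lights", 1),
       ("watching_tv", "open_blinds", -1)]
Q = learn_from_log(log)
print(best_action(Q, "watching_tv"))  # dim_lights
```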
Adaptations to the context model can result in changes in system behaviour. If unexpected, such changes may be disturbing for the end users. To retain users' confidence, the learned system must be able to explain its actions. We are currently exploring methods that would allow a system to explain its model of interaction. Such explanation is made possible by explicitly describing context using situation models.
The PRIMA group has refined its approach to context aware observation in the development of a process for real-time production of a synchronized audio-visual stream, using multiple cameras, microphones and other information sources to observe meetings and lectures. This "context aware video acquisition system" is an automatic recording system that encompasses the roles of both the cameraman and the director. The system determines the target for each camera, and selects the most appropriate camera and microphone to record the current activity at each instant of time. Determining the most appropriate camera and microphone requires a model of the activities of the actors, and an understanding of video composition rules. The model of the activities of the actors is provided by a "situation model" as described above.
In collaboration with France Telecom, we have adapted this technology to observing social activity in domestic environments. Our goal is to demonstrate new forms of services for assisted living that provide non-intrusive access to care as well as enhance informal contact with friends and family.
Software Architecture, Service Oriented Computing, Service Composition, Service Factories, Semantic Description of Functionalities
Intelligent environments are at the confluence of multiple domains of expertise. Experimenting within intelligent environments requires combining techniques for robust, autonomous perception with methods for modeling and recognition of human activity within an inherently dynamic environment. Major software engineering and architecture challenges include the accommodation of a heterogeneous collection of devices and software, and dynamic adaptation to changes in human activity as well as in operating conditions.
The PRIMA project explores software architectures that allow systems to adapt to individual user preferences. Interoperability and reuse of system components is fundamental for such systems. Adopting a shared, common Service Oriented Architecture (SOA) has allowed specialists from a variety of subfields to work together to build novel forms of systems and services.
In a service oriented architecture, each hardware or software component is exposed to the others as a “service”. A service exposes its functionality through a well defined interface that abstracts all the implementation details and that is usually available through the network.
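The principle can be illustrated in a few lines: callers program against an interface and remain independent of the implementation behind it. The service and method names are hypothetical, for illustration only.

```python
# Sketch of the service abstraction: each component exposes its
# functionality through a well-defined interface that hides all
# implementation details from callers.

from abc import ABC, abstractmethod

class LightService(ABC):
    """Interface exposed to other services; hides the actual hardware."""
    @abstractmethod
    def set_level(self, level: float) -> None: ...
    @abstractmethod
    def get_level(self) -> float: ...

class DimmerLamp(LightService):
    """One concrete implementation behind the interface."""
    def __init__(self):
        self._level = 0.0
    def set_level(self, level):
        self._level = max(0.0, min(1.0, level))  # clamp to [0, 1]
    def get_level(self):
        return self._level

# Callers depend only on the LightService interface.
lamp: LightService = DimmerLamp()
lamp.set_level(0.8)
print(lamp.get_level())  # 0.8
```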
The most commonly known example of a service oriented architecture is the Web Services technologies, based on web standards such as HTTP and XML. Semantic Web Services propose to use knowledge representation methods such as ontologies to give semantics to service functionalities. Semantic description of services makes it possible to improve interoperability between services designed by different persons or vendors.
Taken out of the box, most SOA implementations have some “defects” preventing their adoption. Web services, due to their name, are perceived as being only for the “web” and as having a notable performance overhead. Other implementations, such as various propositions around the Java virtual machine, often require a particular programming language or are not distributed. Intelligent environments involve many specialists, and a hard constraint on the programming language can be a real barrier to SOA adoption.
The PRIMA project has developed OMiSCID, a middleware for service oriented architectures that addresses the particular problematics of intelligent environments. OMiSCID has emerged as an effective tool for unifying access to functionalities provided from the lowest abstraction level components (camera image acquisition, image processing) to abstract services such as activity modeling and personal assistant. OMiSCID has facilitated cooperation by experts from within the PRIMA project as well as in projects with external partners.
Local Appearance, Affine Invariance, Receptive Fields
A long-term grand challenge in computer vision has been to develop a descriptor for image information that can be reliably used for a wide variety of computer vision tasks. Such a descriptor must capture the information in an image in a manner that is robust to changes in the relative position of the camera, as well as in the position, pattern and spectrum of illumination.
Members of PRIMA have a long history of innovation in this area, with important results in the area of multi-resolution pyramids, scale invariant image description, appearance based object recognition and receptive field histograms published over the last 20 years. The group has most recently developed a new approach that extends scale invariant feature points for the description of elongated objects using scale invariant ridges. PRIMA has worked with ST Microelectronics to embed its multi-resolution receptive field algorithms into low-cost mobile imaging devices for video communications and mobile computing applications.
The visual appearance of a neighbourhood can be described by a local Taylor series. The coefficients of this series constitute a feature vector that compactly represents the neighbourhood appearance for indexing and matching. The set of possible local image neighbourhoods that project to the same feature vector are referred to as the "Local Jet". A key problem in computing the local jet is determining the scale at which to evaluate the image derivatives.
Lindeberg has described scale invariant features based on profiles of Gaussian derivatives across scales. In particular, the profile of the Laplacian, evaluated over a range of scales at an image point, provides a local description that is "equi-variant" to changes in scale. Equi-variance means that the feature vector translates exactly with scale and can thus be used to track, index, match and recognize structures in the presence of changes in scale.
A receptive field is a local function defined over a region of an image. We employ a set of receptive fields based on derivatives of the Gaussian function as a basis for describing local appearance. These functions resemble the receptive fields observed in the visual cortex of mammals. These receptive fields are applied to color images in which we have separated the chrominance and luminance components. Such functions are easily normalized to an intrinsic scale using the maximum of the Laplacian, and normalized in orientation using the direction of the first derivatives.
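As an illustration, the first- and second-order components of such a local jet can be computed with standard Gaussian derivative filters; the test image and scale below are arbitrary.

```python
# Sketch: a local-jet feature vector from Gaussian derivative receptive
# fields at a fixed scale sigma, using scipy's separable Gaussian filters.

import numpy as np
from scipy.ndimage import gaussian_filter

def local_jet(image, x, y, sigma):
    """First- and second-order Gaussian derivatives at (x, y), scale sigma."""
    feats = []
    # order=(dy, dx): derivative order along each image axis.
    for order in [(0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]:
        d = gaussian_filter(image.astype(float), sigma, order=order)
        feats.append(d[y, x])
    return np.array(feats)  # (Lx, Ly, Lxx, Lxy, Lyy)

img = np.zeros((64, 64))
img[:, 32:] = 1.0  # vertical step edge
jet = local_jet(img, 32, 32, sigma=2.0)
print(jet.round(3))  # Lx is large, Ly is zero on a vertical edge
```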
The local maxima in x, y and scale of the Laplacian of the image provide "natural interest points". Such natural interest points are salient points that may be robustly detected and used for matching. A problem with this approach is that the computational cost of determining the intrinsic scale at each image position can make real-time implementation unfeasible.
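A sketch of intrinsic scale selection by this principle: the scale-normalized Laplacian is evaluated over a range of scales at one position and the extremum is kept. The candidate scales and test image are illustrative.

```python
# Sketch of scale selection: evaluate sigma^2 * Laplacian over a range
# of scales at a fixed position and keep the scale of maximum response.

import numpy as np
from scipy.ndimage import gaussian_laplace

def intrinsic_scale(image, x, y, sigmas):
    """Return the sigma maximizing |sigma^2 * Laplacian| at (x, y)."""
    responses = [abs((s ** 2) * gaussian_laplace(image.astype(float), s)[y, x])
                 for s in sigmas]
    return sigmas[int(np.argmax(responses))]

# A bright disc of radius r has intrinsic scale near r / sqrt(2).
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2] = 1.0
s = intrinsic_scale(img, 32, 32, sigmas=[2, 4, 6, 8, 10])
print(s)
```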
A vector of scale- and orientation-normalized Gaussian derivatives provides a characteristic vector for matching and indexing. The oriented Gaussian derivatives can easily be synthesized using the "steerability property" of Gaussian derivatives. The problem is to determine the appropriate orientation. In earlier work, PRIMA members Colin de Verdiere, Schiele and Hall proposed normalising the local jet independently at each pixel to the direction of the first derivatives calculated at the intrinsic scale. Results for many view-invariant image recognition tasks are described in the next section.
Key results in this area include:
Fast, video-rate calculation of scale and orientation for image description with normalized chromatic receptive fields.
Robust visual features for detection, tracking and recognition of faces and emotions.
Direct computation of time to collision over the entire visual field using the rate of change of intrinsic scale.
We have achieved video-rate calculation of scale and orientation normalized Gaussian receptive fields using an O(N) pyramid algorithm. This algorithm has been used in an embedded system that provides real-time detection and recognition of faces and objects on mobile computing devices. A software package has been filed with the APP and licensed to an industrial partner.
Applications have been demonstrated for detection, tracking and recognition of faces, as well as detection of emotions and posture, at video rates on mobile devices.
Affective Computing, Perception for social interaction.
Current research on perception for interaction primarily focuses on recognition and communication of linguistic signals. However, most human-to-human interaction is non-verbal and highly dependent on social context. A technology for natural interaction requires abilities to perceive and assimilate non-verbal social signals, to understand and predict social situations, and to acquire and develop social interaction skills.
The overall goal of this research program is to provide the scientific and technological foundations for systems that observe and interact with people in a polite, socially appropriate manner. We address these objectives with research activities in three interrelated areas:
Multimodal perception for social interactions.
Learning models for context aware social interaction, and
Context aware systems and services.
Our approach to each of these areas is to draw on models and theories from the cognitive and social sciences, human factors, and software architectures to develop new theories and models for computer vision and multi-modal interaction. Results will be developed, demonstrated and evaluated through the construction of systems and services for polite, socially aware interaction in the context of smart habitats.
The first part of our work on perception for social interaction has concentrated on measuring the physiological parameters of Valence, Arousal and Dominance, using visual observation from environmental sensors as well as observation of facial expressions.
People express and feel emotions with their face. Because the face is both externally visible and the seat of emotional expression, facial expression of emotion plays a central role in social interaction between humans. Thus visual recognition of emotions from facial expressions is a core enabling technology for any effort to adapt systems for social interaction.
Constructing a technology for automatic visual recognition of emotions requires solutions to a number of hard challenges. Emotions are expressed by coordinated temporal activations of 21 different facial muscles, assisted by a number of additional muscles. Activations of these muscles are visible through subtle deformations in the surface structure of the face. Unfortunately, this facial structure can be masked by facial markings, makeup, facial hair, glasses and other obstructions. The exact facial geometry, as well as the coordinated expression of muscles, is unique to each individual. In addition, these deformations must be observed and measured under a large variety of illumination conditions as well as from a variety of observation angles. Thus the visual recognition of emotions from facial expression remains a challenging open problem in computer vision.
Despite the difficulty of this challenge, important progress has been made in the area of automatic recognition of emotions from facial expressions. The systematic cataloging of facial muscle groups as facial action units by Ekman has led a number of research groups to develop libraries of techniques for recognizing the elements of the FACS coding system. Unfortunately, experiments have revealed that this approach is very sensitive to both illumination and viewing conditions, and that the resulting activation levels are difficult to interpret as emotions. In particular, this approach requires a high-resolution image with a high signal-to-noise ratio obtained under strong ambient illumination. Such restrictions are not compatible with the mobile imaging systems used on tablet computers and mobile phones that are the target of this effort.
As an alternative to detecting activation of facial action units by tracking individual face muscles, we propose to measure physiological parameters that underlie emotions with a global approach. Most human emotions can be expressed as trajectories in a three dimensional space whose features are the physiological parameters of Pleasure-Displeasure, Arousal-Passivity and Dominance-Submission. These three physiological parameters can be measured in a variety of manners including on-body accelerometers, prosody, heart-rate, head movement and global face expression.
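Such a global approach can be sketched as nearest-prototype classification in the three-dimensional Pleasure-Arousal-Dominance space. The prototype coordinates below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: emotions as points in Pleasure-Arousal-Dominance (PAD) space;
# a measured (P, A, D) triple is mapped to its nearest emotion prototype.

import math

# Hypothetical prototype coordinates, for illustration only.
PROTOTYPES = {
    "joy":     ( 0.8,  0.5,  0.4),
    "anger":   (-0.6,  0.6,  0.3),
    "sadness": (-0.6, -0.4, -0.3),
    "fear":    (-0.6,  0.6, -0.4),
    "neutral": ( 0.0,  0.0,  0.0),
}

def nearest_emotion(p, a, d):
    """Map a measured PAD triple to the closest emotion prototype."""
    return min(PROTOTYPES,
               key=lambda e: math.dist((p, a, d), PROTOTYPES[e]))

print(nearest_emotion(0.7, 0.4, 0.3))   # joy
print(nearest_emotion(-0.1, 0.1, 0.0))  # neutral
```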
In our work, we address the recognition of social behaviours from multimodal information. These are unconscious, innate cognitive processes that are vital to human communication and interaction. Recognition of social behaviours enables anticipation and improves the quality of interaction between humans. Among social behaviours, we have focused on engagement, the expression of the intention to interact. During the engagement phase, many non-verbal signals are used to communicate the intention to engage to the partner. These include posture, gaze, spatial information, gestures, and vocal cues.
For example, within the context of frail or elderly people at home, a companion robot must also be able to detect the engagement of humans in order to adapt its responses during interaction and increase its acceptability. Classical approaches for engagement with robots use spatial information such as human position and speed, human-robot distance and the angle of arrival. Our belief is that uni-modal methods may be suitable for static displays and robots in wide open areas, but not for home environments. In an apartment, the relative spatial information of people and robot is not as discriminative as in an open space. Passing by the robot in a corridor should not lead to an engagement detection, and to possibly socially inappropriate behaviour by the robot.
In our experiments, we used a Kompai robot from Robosoft. As an alternative to wearable physiological sensors (such as the Cardiocam pulse bracelet), we integrate multimodal features using a Kinect sensor (see figure). In addition to the spatial cues from the laser telemeter, one can use new multimodal features based on person and skeleton tracking, sound localization, etc. Some of these new features are inspired by results from the cognitive science domain.
Our multimodal approach has been evaluated on a robot-centered dataset for multimodal social signal processing, recorded in a home-like environment. The evaluation on our corpus highlights its robustness and validates the use of such techniques in real environments. Experimental validation shows that the use of multimodal sensors gives better results than spatial features alone (a 50% error reduction). Our experiments also confirm earlier results: relative shoulder rotation, speed and face orientation are among the crucial features for engagement detection.
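The kind of multimodal fusion involved can be sketched as a weighted combination of spatial and body cues; the weights and threshold below are illustrative, not the values learned in our experiments.

```python
# Sketch of multimodal engagement detection: spatial cues (distance,
# speed) are fused with body cues (shoulder rotation, face orientation)
# by a simple weighted score. All weights are hypothetical.

def engagement_score(distance_m, speed_ms, shoulder_rot_deg, facing_robot):
    score = 0.0
    score += 0.3 * max(0.0, 1.0 - distance_m / 3.0)        # closer is stronger
    score += 0.2 * max(0.0, 1.0 - speed_ms / 1.5)          # slowing down
    score += 0.2 * max(0.0, 1.0 - shoulder_rot_deg / 90.0) # shoulders toward robot
    score += 0.3 * (1.0 if facing_robot else 0.0)          # face oriented to robot
    return score

def is_engaged(**cues):
    return engagement_score(**cues) > 0.6  # illustrative threshold

# Walking past in a corridor: close, but fast and not facing -> not engaged.
print(is_engaged(distance_m=1.0, speed_ms=1.4,
                 shoulder_rot_deg=80, facing_robot=False))
# Approaching slowly and facing the robot -> engaged.
print(is_engaged(distance_m=1.0, speed_ms=0.3,
                 shoulder_rot_deg=10, facing_robot=True))
```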
End users programming, smart home, smart environment
Pervasive computing promises unprecedented empowerment from the flexible and robust combination of software services with the physical world. Software researchers assimilate this promise as system autonomy, where users are conveniently kept out of the loop. Their hypothesis is that services, such as music playback and calendars, are developed by service providers and pre-assembled by software designers to form new service front-ends. Their scientific challenge is then to develop secure, multiscale, multi-layered, virtualized infrastructures that guarantee service front-end continuity. Although service continuity is desirable in many circumstances, end users, with this interpretation of ubiquitous computing, are doomed to behave as mere consumers, just like with conventional desktop computing.
Another interpretation of the promises of ubiquitous computing is the empowerment of end users with tools that allow them to create and reshape their own interactive spaces. Our hypothesis is that end users are willing to shape their own interactive spaces by coupling smart artifacts, building imaginative new functionality that was not anticipated by system designers. A number of tools and techniques have been developed to support this view, such as CAMP or iCAP.
We are investigating an End-User Programming (EUP) approach to give control back to the inhabitants. With this approach, smart home services using sensors, actuators and other services would be configured by the inhabitants themselves. Our research focuses on easy-to-use tools and languages for:
Installation and maintenance of devices and services.
Visualisation and control of the Smart Home.
Service configuration.
Detection and resolution of conflicting demands on sensors and actuators by services.
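The flavour of such end-user configuration can be sketched as simple "when ... then ..." rules coupling sensor state to actuator commands; all names are hypothetical, for illustration only.

```python
# Sketch of an end-user rule: inhabitants couple sensors and actuators
# with "when <condition> then <action>" rules over the home state.

def make_rule(when, then):
    """Return a rule that applies the action when the condition holds."""
    def rule(state):
        if when(state):
            return then(state)
        return state
    return rule

# "When the front door opens after 22:00, turn on the hall light."
rule = make_rule(
    when=lambda s: s["door_open"] and s["hour"] >= 22,
    then=lambda s: {**s, "hall_light": True},
)

state = {"door_open": True, "hour": 23, "hall_light": False}
print(rule(state)["hall_light"])  # True
```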
The paper "The Grenoble System for the Social Touch Challenge at ICMI 2015" by Viet Cuong Ta, Wafa Johal, Maxime Portaz, Eric Castelli and Dominique Vaufreydaz won the "ICMI 2015 Touch Challenge" at the ICMI 2015 conference.
On 5 June 2015, members of PRIMA have organised the inauguration of the EquipEx platform Amiqual4Home.
An Object Oriented Open-Source Middleware for Service Communication Inspection and Discovery
Participants: Patrick Reignier, Dominique Vaufreydaz, Amaury Negre
Contact: Dominique Vaufreydaz
Keywords: Middleware - Pervasive computing - Service Oriented Software (SOA)
Functional Description
OMiSCID is a lightweight middleware for dynamic integration of perceptual services in interactive environments. This middleware abstracts network communications and provides service introspection and discovery using DNS-SD (DNS-based Service Discovery). Services can declare simplex or duplex communication channels and variables. The middleware supports the low-latency, high-bandwidth communications required by interactive perceptual applications, such as audio and visual perception for interactive services, and is designed to allow independently developed perceptual components to be integrated to construct user services. Thus our system has been designed to be cross-language, cross-platform, and easy to learn.
Functional Description
The AppsGate architecture is based on the HMI Middleware developed in cooperation with the IIHM and Adele groups of the UMR Laboratoire Informatique de Grenoble (LIG). The HMI Middleware is designed to facilitate the development of end-user applications on top of the core software components described in the sections above, while ensuring service continuity and usability. The key features of the HMI Middleware include:
Integration of sensors and actuators managed by a variety of protocols, and provision of a uniform abstraction for these devices as component-oriented-services,
Integration of Web services made available on the cloud by a variety of web service providers, and provision of a uniform abstraction for these services as component-oriented-services,
Communication between the HMI middleware and client applications - typically, user interfaces for controlling and programming the smart home, that run on high-end devices such as smartphones, tablets, and TVs.
Participants: Alexandre Demeure, James Crowley, Eméric Grange, Cédric Gérard, Camille Lenoir and Kouzma Petoukhov
Contact: James Crowley, Alexandre Demeure
http://
SPOK: Simple Programming Kit for Smart Homes
Keywords: End User Development - Smart Home
Contact: James Crowley, Alexandre Demeure
SPOK is an End-User Development Environment that permits people to monitor, control, and configure smart home services and devices. SPOK provides the end-user with the following services: (1) A syntax-oriented program editor that enforces the construction of syntactically-correct programs (see sidebar on next page). (2) A program interpreter and a clock simulator to test program execution in “simulated time”. (3) Debugging aids to support the detection and correction of programming errors or system malfunctions along with a Trace Manager. (4) A dashboard to remotely control devices and programs in a centralized and uniform manner.
Compared to the state of the art, the key features of SPOK are three-fold: the expressive power of the SPOK language along with a pseudo-natural concrete syntax, dynamic adaptation to the arrival/departure of devices and services, and debugging aids.
SPOK was developed as part of the EU CATRENE APPSGATE project (CA 110) and is supported by the EquipEx AmiQual4Home, ANR-11-EQPX-00.
Participant: Remi Pincent
Contact: Remi Pincent
The DomiCube is a home-made device designed by five retired seniors as the outcome of a 3-hour focus group. It contains an accelerometer and a gyroscope, and is Bluetooth enabled. It sends events when its state changes (e.g., new orientation, top face, and battery level). The DomiCube was built in the Creativity Lab of the EquipEx AmiQual4Home, ANR-11-EQPX-00.
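The top-face events mentioned above can be derived from the accelerometer alone: at rest the accelerometer measures gravity, so the axis with the largest absolute reading is vertical. The following sketch is hypothetical (axis conventions and face labels are assumptions, not the DomiCube firmware):

```python
# Hypothetical sketch: inferring which face of the DomiCube points up
# from a 3-axis accelerometer reading (in g units). Face labels and axis
# conventions are invented for illustration.

def top_face(ax, ay, az):
    """Return the face whose outward normal is most aligned with 'up'.

    At rest the accelerometer measures gravity, so the axis with the
    largest absolute reading points along the vertical.
    """
    axes = {"x": ax, "y": ay, "z": az}
    name, value = max(axes.items(), key=lambda kv: abs(kv[1]))
    sign = "+" if value > 0 else "-"
    return sign + name

print(top_face(0.02, -0.01, 0.98))   # cube resting flat: '+z' face up
print(top_face(-0.99, 0.03, 0.05))   # cube on its side: '-x' face up
```

A state-change event would then be emitted whenever the returned face differs from the previously reported one.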
Keywords: Health - Home care
Contact: Dominique Vaufreydaz
Functional Description
Within the Pramad project, we aim to create a full affective loop between the companion robot and elderly people at home. This affective loop is necessary in the context of everyday interaction between the elderly and the companion robot. Part of this loop is making the robot express emotions in response to the emotional state of the user. To do so, we need to test our working hypotheses about the visual representation of emotions on the robot's 3D face. EmoPRAMAD is an evaluation tool designed to conduct comparative studies between human faces and 3D faces expressing a defined set of emotions.
The evaluations conducted through EmoPRAMAD cover both unimodal (facial only) and bimodal (facial/sound) conditions. The emotion set comprises four basic emotions (joy, fear, anger, sadness) and a neutral state. During experiments, the software collects several parameters in order to evaluate more than the correctness of the answers: response time, length of mouse moves, etc.
Keywords: Benchmark corpus - Health - Home Care
Contact: Dominique Vaufreydaz
Functional Description
MobileRGBD is a corpus dedicated to benchmarking low-level RGB-D algorithms on mobile platforms. We reversed the usual corpus-recording paradigm: our goal is to facilitate ground-truth annotation and reproducibility of recordings across variations in speed, trajectory and environment. To get rid of unpredictable human movement, we used dummies to play static users in the environment (see figure). The interest of dummies lies in the fact that they do not move between two recordings, so the same robot trajectory can be replayed to evaluate the performance of detection algorithms at varying speeds. This benchmark corpus is intended for the family of "low-level" RGB-D algorithms such as 3D-SLAM, body/skeleton tracking or face tracking on a mobile robot. Using this open corpus, researchers can address several questions: how does a system perform under variations in operating conditions? On a mobile robot, what is the maximum linear/angular speed an algorithm supports? Which variables impact the algorithm? What height and angle of the mounted RGB-D sensor suit a given goal (monitoring everyday life is different from searching for fallen persons on the floor)? Finally, how does an algorithm perform with regard to others?
Participants: Patrick Reignier, Dominique Vaufreydaz and James Crowley
Contact: Dominique Vaufreydaz
Online Movie Director is a networked online video-editing program. It can handle several video and audio streams over the network and resynchronize them to produce a video either for streaming or for direct video production. The system can record lectures using multiple cameras and microphones, and uses PRIMA techniques for context modelling to select the most appropriate camera and microphone based on the current situation.
Keywords: Health - Home care - Handicap
Contact: Dominique Vaufreydaz, Amaury Negre
Part of our effort in the PAL project has been devoted to developing a solution that eases the integration of software components from the project's multiple partners.
The design of the PAL middleware responds to the requirement that, within the PAL project, each partner remains responsible for maintaining 1) its software heritage, 2) its resources, 3) its competences and fields of research and expertise, 4) its current practices in terms of programming languages (C/C++, Java, Python), computing platforms (OS X, Linux, Windows, Android, etc.) and software interconnection (OSGi, OMiSCID, MPI, PVM, etc.), and 5) its particular needs and constraints.
For it to be widely accepted, the PAL middleware must be designed to be ecologic and pragmatic: ecologic in the sense that the solution does not perturb the ecology of each ecosystem; pragmatic in the sense that setting it up does not require a heavy development effort, and because PAL is required to reuse existing software solutions.
To develop PALGate we introduced a novel concept: the software gate. Unlike software components/services, which can be instantiated, a software gate is only a concept: it is defined as an ecologic and hermetic interface between different ecosystems. A software gate is characterized by the subset of functionalities it exposes to other gates, where those functionalities are provided by the software components/services of the ecosystem it belongs to. A software gate is hermetic in the sense that only a selected subset of an ecosystem's functionalities is exposed, and also because it propagates into its ecosystem only filtered information exposed by other gates. The last characteristic of a software gate is that it makes explicit to other gates the communication mechanisms it uses.
While a software gate is only conceptual, the PAL middleware is an implementation of a gate-oriented middleware. It uses ROS to support the basic communication between gates. Within PALGate, each ecosystem is associated with exactly one software gate. In practice, the PAL middleware 1) is a ROS stack containing gate definitions, 2) is a set of conventions (e.g. stack organization; package/node/topic/service names; namespaces), and 3) provides dedicated tools to ease integration and use by partners. A software gate in PAL is a ROS package containing definitions of ROS types (i.e. msg and srv types), as well as the exposed ROS communication channels (i.e. topics and RPCs).
With this architecture, each partner provides the PAL middleware with a package containing the definition of its gate. Then, in order a) to expose functionalities outside its ecosystem and b) to propagate information into its ecosystem, each partner creates ROS nodes. These nodes let each partner interface its ecosystem through ROS topics and ROS services without having to change anything in its architecture. For instance, a partner using Java and OSGi can create nodes in ROS Java that expose/register functionalities through ROS services and publish/subscribe information using ROS topics.
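The gate concept (exposing only a selected subset of functionalities, and filtering the information propagated inward) can be sketched independently of ROS. The following Python sketch is purely illustrative; none of the names correspond to the PAL middleware API:

```python
# Conceptual sketch of a software gate: only a chosen subset of an
# ecosystem's functionalities is visible to other gates, and incoming
# messages are filtered before being propagated into the ecosystem.
# All names are illustrative, not the PAL middleware API.

class Gate:
    def __init__(self, ecosystem, exposed, accepted_topics):
        self._ecosystem = ecosystem          # the partner's own components
        self._exposed = set(exposed)         # functionalities visible outside
        self._accepted = set(accepted_topics)

    def call(self, functionality, *args):
        if functionality not in self._exposed:
            raise PermissionError(functionality + " is not exposed by this gate")
        return getattr(self._ecosystem, functionality)(*args)

    def propagate(self, topic, message):
        # Hermetic in both directions: drop messages on unaccepted topics.
        if topic in self._accepted:
            self._ecosystem.on_message(topic, message)

class PartnerEcosystem:
    def __init__(self):
        self.received = []
    def detect_person(self, frame):          # exposed to other gates
        return "person" in frame
    def internal_tuning(self):               # deliberately kept internal
        pass
    def on_message(self, topic, message):
        self.received.append((topic, message))

eco = PartnerEcosystem()
gate = Gate(eco, exposed=["detect_person"], accepted_topics=["robot/pose"])
print(gate.call("detect_person", "frame with person"))  # True
gate.propagate("robot/pose", (1.0, 2.0))                # accepted, delivered
gate.propagate("debug/raw", b"...")                     # silently filtered out
```

In PALGate the same two roles (exposure and filtered propagation) are realized with ROS services and topics declared in the gate's package.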
Participants: Frédéric Devernay, Pau Gargallo and Sergi Pujades
Contact: Frédéric Devernay
Participants: Rémi Barraquand, Claudine Combe, Lukas Rummelhard, Amaury Negre, Sergi Pujades-Rocamora and James Crowley
Contact: James Crowley
Functional Description
PrimaCV is a software library for detecting, observing and tracking faces and emotions using the cameras on mobile devices. The PrimaCV library uses a scale-invariant pyramid to construct receptive-field descriptors for images. These are used by a coarse-to-fine multiscale "scanning window" face detector built as a cascade classifier trained with a highly optimised version of AdaBoost. Because the system uses coarse-to-fine search within a scale-invariant pyramid, it automatically adapts to the number of pixels and the scale of the imager. The coarse-to-fine search algorithm has been shown to provide a dramatic gain in performance over classic scanning-window detectors. The algorithm produces a probability of a face for each possible scale and position in the image. Local maxima in probability are fed to a Bayesian face tracker.
Normalized imagettes of tracked faces can be fed to procedures for estimating face orientation, recognising identity, and estimating emotion parameters.
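The coarse-to-fine idea can be illustrated on a toy 1-D response map: scan a sparse grid first, then search exhaustively only in a small window around the coarse hit. This sketch is not the PrimaCV implementation, which applies the same principle to 2-D images over a scale-invariant pyramid:

```python
# Toy illustration of coarse-to-fine search: locate the maximum of a 1-D
# response map by evaluating a sparse grid, then refining locally. Far
# fewer evaluations are needed than with an exhaustive scanning window.

def coarse_to_fine_argmax(scores, step=4):
    # Coarse pass: evaluate only every 'step'-th position.
    coarse = max(range(0, len(scores), step), key=lambda i: scores[i])
    # Fine pass: exhaustive search in a small window around the coarse hit.
    lo = max(0, coarse - step)
    hi = min(len(scores), coarse + step + 1)
    return max(range(lo, hi), key=lambda i: scores[i])

scores = [20 - abs(20 - i) for i in range(40)]   # smooth peak at i = 20
print(coarse_to_fine_argmax(scores))             # -> 20
```

For a response map of length n this costs roughly n/step + 2*step evaluations instead of n, which is the source of the performance gain noted above (assuming the response varies smoothly enough that the coarse grid does not miss the peak).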
Functional Description
Stereoscopy, Auto-calibration, Real-time video processing, Feature matching
Participants: Frédéric Devernay, Loïc Lefort, Elise Mansilla and Sergi Pujades
Contact: Frédéric Devernay
Functional Description
Inhabitants play a key role in a building's global energy consumption, but it is difficult to involve them in energy management. Our objective is to make energy consumption visible by simulating, inside a serious game, the energy impact of inhabitants' behaviours. A serious game is currently under development, coupling a 3D virtual environment with a building energy simulator. The 3D virtual environment is based on the JMonkey 3D engine. New houses can easily be imported using SweetHome 3D and Blender. The building energy simulator is EnergyPlus. The 3D engine and the energy engine are coupled using the Functional Mock-up Interface (FMI) standard, which makes it easy to switch between existing building energy simulators.
Participant: Patrick Reignier
Contact: Patrick Reignier
Participants: Dominique Vaufreydaz and Eméric Grange
Contact: Dominique Vaufreydaz
SmartServoFramework is a multiplatform C++ framework used to drive "smart servo" devices such as Dynamixel or HerkuleX actuators. The framework, developed by members of the PRIMA team, supports Linux (and most Unix systems), Mac OS X and Windows, and can run on a Raspberry Pi or other similar boards. It can be used with any Dynamixel or HerkuleX device. Dynamixel devices from Robotis and HerkuleX devices from Dongbu Robot are high-performance networked actuators for robots, available in a wide range of sizes and strengths. They have adjustable torque, speed and angle limits, and provide feedback such as position, load, voltage and temperature.
The domain of service robots is growing fast and has become the focus of many researchers and industrial actors alike. Application areas are extremely broad, from logistics to handicap assistance. A large proportion of such robots are expected to share humans' living space and thus must be endowed with navigation capabilities that exceed the standard requirements of autonomous navigation, such as motion safety. In a human-populated environment, optimality no longer boils down to minimizing resources such as time or distance traveled: the robot's motion must abide by social rules and be appropriate.
Most of the approaches proposed so far rely upon the definition of so-called social spaces, i.e. regions in the environment that, for different reasons, persons consider psychologically theirs. Such social spaces are primarily characterized using either the position of the person, e.g. the “Personal Space”, or the activity the person is currently engaged in, e.g. the “Interaction Space” and “Activity Space”. The most common approach is then to define costmaps over such social spaces: the higher the cost, the less desirable it is for the robot to be at the corresponding position. The costmaps are ultimately used for motion planning and navigation purposes.
While improving upon standard “non-social” navigation methods, this type of approach intrinsically ignores the correlations between interactions as well as the influence of the robot on those interactions. It thus fails to capture several important features of social navigation, such as the distraction and surprise caused to the surrounding individuals. To overcome those limits, we propose using the psychological concept of attention, which plays a central role when humans navigate around each other. This concept brings a new degree of control over the motion of the robot, namely the invasive and distracting character of the robot's motion, which has so far proven hard to tackle with conventional tools such as social spaces. Besides yielding appropriate motion, attention-based navigation enables interaction through motion by predicting the quantity of attention the human will give to the robot.
Building upon a computational model of attention proposed earlier in , we have developed the novel concept of the attention field. The attention field is straightforward to define: it is a measure of the amount of attention that a given person would allocate to the robot, should the robot be in a given position/state. It is a mapping from the state space of the robot to the amount of attention the person would allocate to it.
In 2015, we developed a variant of the well-known differential evolution algorithm that optimizes continuous trajectories under multiple constraints. The performance of our approach is now being compared with trajectories obtained by relying only on social spaces. Besides the traditional qualitative approaches to evaluating the discomfort caused by robot motion, we are working on defining more quantitative measures that would enable us to further validate our approach.
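The core operators of the classic DE/rand/1/bin scheme underlying such a variant can be sketched as follows. The actual trajectory optimizer adds a trajectory encoding and constraint handling, which are omitted here; the toy objective and all parameter values are illustrative:

```python
import random

# Minimal sketch of differential evolution (DE/rand/1/bin): mutation by
# scaled difference of population members, binomial crossover, and greedy
# selection. Shown on a toy sphere objective, not a robot trajectory.

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct members other than the target.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation: donor = a + F * (b - c)
            donor = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Binomial crossover between target and donor.
            jrand = rng.randrange(dim)
            trial = [donor[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the trial if it is no worse.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 3)
print(sphere(best))   # close to 0
```

For constrained trajectories, `f` is typically replaced by a cost that penalizes constraint violations, which is one common way to extend the scheme above.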
As part of the CATRENE project AppsGate, we have developed SPOK, an End User Development Environment that enables inhabitants to control and program their smart homes via a web interface. The current version of SPOK includes an editor for entering programs in a pseudo-natural language, and an interpreter. A multi-syntax editor as well as additional services such as a debugger and a simulator are expected for the second version.
A multi-syntax editor will allow users to build syntactically correct programs using the syntax that is most appropriate to them, or a combination of syntaxes. These syntaxes include pseudo-natural language (i.e. a constrained natural language) and a graphical iconic syntax (as exemplified by Scratch [Maloney et al. 2010]). The interaction techniques used to enter programs may be menu-based or free typing, as well as programming by demonstration in the physical home or by way of the simulator. The simulator is the dual digital representation of the real home. It is also intended to serve as a debugger for testing and correcting end-user programs.
Whatever syntax is used by end-users, programs are translated into abstract syntax trees whose leaves reference services provided by the Core HMI and/or the Extended HMI Middleware. The interpreter executes end-user programs, using the corresponding abstract syntax trees as input.
In order to support a dynamically extensible grammar as well as to provide end-users with feedforward at the editor's user interface, the grammar used by the editor is split into two parts: the root grammar and the device-specific grammars. The root grammar specifies the generic structures of an end-user program: loops, conditions, etc. The device-specific grammars are kept separate from the root grammar so that the final grammar can be built dynamically to match what is currently installed and detected by the AppsGate server. Each device type brings with it its own events, statuses and actions. These grammatical elements are injected into the root grammar when generating the parser and when compiling end-user programs.
The language used by end-users to express their programs is a pseudo-natural language following the rule-based programming paradigm. The left-hand side of a rule is composed of events and conditions, and the right-hand side specifies the actions to be taken when the left-hand side is or becomes true. A program may include several rules that can be executed either in parallel or sequentially. Once entered, programs are translated into abstract syntax trees that the interpreter takes as input. SPOK is implemented as a mix of OSGi and ApAM components, where ApAM is itself a middleware that runs on top of OSGi.
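The event-condition-action structure of such rules can be illustrated with a minimal interpreter. The rules below are plain Python data rather than SPOK's pseudo-natural syntax, and all device names are invented for the example:

```python
# Minimal sketch of an event-condition-action rule interpreter: a rule
# fires when its triggering event arrives and its condition over the home
# state holds. This illustrates the rule structure only, not SPOK itself.

class Rule:
    def __init__(self, event, condition, actions):
        self.event = event          # triggering event name
        self.condition = condition  # predicate over the home state
        self.actions = actions      # list of (device, command) pairs

def dispatch(rules, event, state, execute):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(state):
            for device, command in rule.actions:
                execute(device, command)

# "WHEN motion is detected IF luminosity is low THEN turn the hall lamp on"
log = []
rules = [Rule("motion_detected",
              lambda s: s["luminosity"] < 100,
              [("hall_lamp", "on")])]
dispatch(rules, "motion_detected", {"luminosity": 40},
         lambda dev, cmd: log.append((dev, cmd)))
print(log)   # [('hall_lamp', 'on')]
```

In SPOK, the same structure is produced by parsing the pseudo-natural sentence into an abstract syntax tree, whose leaves reference the actual devices discovered at runtime.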
Reducing housing energy costs is a major challenge of the 21st century. In the near future, the main issue for building construction is thermal insulation, but in the longer term the issues are those of renewable energy (solar, wind, etc.) and smart buildings. A home automation system basically consists of household appliances linked via a communication network allowing interactions for control purposes. Thanks to this network, a load-management mechanism can be implemented: this is called distributed control. An optimal home energy management system remains a goal to aim for, because many aspects are still not fully addressed. Most energy systems respect only the energy needs; they do not tackle user needs or satisfaction. Energy systems also fall short when it comes to the dynamicity of the environment (the system's ability to adapt). The problem is similar for the existing HMIs (human-machine interfaces) of these home automation systems, where only experts can understand the data coming from the sensors and, most importantly, the energy plan produced by the management system (how? and why?). The goal of this study is to propose a house energy model that can both be used to predict, at some level, the evolution of energy consumption and be understood by the end user. The house energy model is based on Fuzzy Cognitive Maps representing cause-effect relations. It is first designed by an expert and then automatically tuned to a particular house using machine learning approaches. Preliminary experiments were carried out this year using the Predis datasets.
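One inference step of a Fuzzy Cognitive Map can be written compactly: each concept's next activation is a squashed weighted sum of the influences of the other concepts. The concepts and weights below are invented for illustration and are not the Predis model:

```python
import math

# Sketch of one inference step of a simple Fuzzy Cognitive Map: concept
# activations are updated from the weighted influence of the others via a
# sigmoid squashing function. Concepts and weights are invented examples;
# some FCM variants also add a self-memory term, omitted here.

def fcm_step(activations, weights,
             squash=lambda x: 1 / (1 + math.exp(-x))):
    n = len(activations)
    # weights[j][i] is the causal influence of concept j on concept i
    return [squash(sum(weights[j][i] * activations[j] for j in range(n)))
            for i in range(n)]

# concepts: 0 = heating power, 1 = indoor temperature, 2 = energy cost
weights = [[0.0, 0.8, 0.6],    # heating raises temperature and cost
           [0.0, 0.0, 0.0],
           [-0.4, 0.0, 0.0]]   # high cost discourages heating
state = [1.0, 0.2, 0.1]
for _ in range(5):             # iterate until activations stabilize
    state = fcm_step(state, weights)
print([round(v, 2) for v in state])
```

Tuning such a map to a particular house amounts to learning the entries of the weight matrix from recorded data, which is where the machine learning approaches mentioned above come in.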
Modern mobile devices, such as smart phones and tablets, combine a rich set of sensors and internet connectivity with embedded computational power and memory. The PRIMA group has recently demonstrated that it is possible to construct embedded software that uses the full suite of mobile sensors to recognise activities and learn the daily routines of users.
A first proof of concept has recently been constructed using recognition of places and activities. The system was trained by having student volunteers carry a cell phone running a data acquisition program that recorded signals from the accelerometer, gyroscope, ambient sound, ambient light, cell towers, WiFi, Bluetooth, and GPS-based geolocalisation. The data were labeled by the students with ground truth about transportation modes, places, and activities, and then used to learn recognition routines. Recognition of places, activities, and transportation modes was used to construct probabilistic models of daily routines using PRIMA's situation modelling techniques, previously demonstrated in constructing situation-aware services. The system was demonstrated by constructing a Twitter bot (a robot that publishes on Twitter) that published information about volunteers during their daily activity.
A professional-quality software system named CAM (Context Aware Manager) is currently under construction and will be licensed to the PRIMA startup Situ8ed for use in context-aware mobile services.
In mixed reality, real objects can be used to interact with virtual objects. However, unlike in the real world, real objects do not encounter any opposing reaction force when pushing against virtual objects. The lack of reaction force during manipulation prevents users from perceiving the mass of virtual objects. Although this could be addressed by equipping real objects with force-feedback devices, such a solution remains complex and impractical. In this work, we present a technique to produce an illusion of mass without any active force-feedback mechanism. This is achieved by simulating the effects of this reaction force in a purely visual way. A first study demonstrates that our technique indeed allows users to differentiate light virtual objects from heavy virtual objects. In addition, it shows that the illusion is immediately effective, with no prior training. In a second study, we measure the lowest mass difference, i.e. the just-noticeable difference (JND), that can be perceived with this technique. The effectiveness and ease of implementation of our solution provides an opportunity to enhance mixed reality interaction at no additional cost.
"Pseudo-haptic feedback" is a technique aiming to simulate haptic sensations without active haptic feedback devices. Peudo-haptic techniques have been used to simulate various haptic feedbacks such as stiffness, torques, and mass. In the framework of Jingtao Chen PhD thesis, a novel pseudo-haptic experiment has been set up. The aim of this experiment is to study the EMG signals during a pseudo-haptic task. A stiffness discrimination task similar to the one published in Lecuyer's PhD thesis has been chosen. The experimental set-up has been developed, as well as the software controlling the experiment. Pre-tests are under way. They will be followed by the tests with subjects.
PRIMA has worked with Schneider Electric on embedded image analysis algorithms for a new generation of far-infrared visual sensors. The objective is to develop an integrated visual sensor with very low power consumption. Such systems can be used to estimate temperature in different parts of a room, as well as to provide information about human presence and human activity.
PRIMA is working with Orange Labs on techniques for observing activity and learning routines in a smart home. Activity is observed by monitoring the use of electrical appliances and communication media (telephone, television, Internet). Activities are described using Bayesian situation modelling techniques demonstrated in earlier projects. A log of daily activities is used to discover daily routines expressed as temporal sequences of contexts, where each context is expressed as a network of situations.
Experiments will be performed using the LovelyLoft Smart home living lab that has been constructed as part of the EquipEx Amiqual4home.
The partners are G-SCOP, LIG (Prima, IIHM), CEA Liten, PACTE, Vesta Systems and Elithis.
The project focuses on bringing solutions to building actors for upcoming challenges in energy management in residential buildings. Many technical solutions have been proposed so far, but without sufficiently considering the actors as key participants. It is generally assumed that energy management can be done by measurement and computation alone, with few contributions from the actors. The project explores a new paradigm: a user-centric energy management system, where user needs and tacit knowledge drive the search for solutions, which are computed using a flexible energy model of the living areas. The system is personified by energy consultants with which building actors, such as building owners, building managers, technical operators, but also occupants, can interact in order to co-define energy strategies, benefiting from both assets: the tacit knowledge of human actors, and the measurement and computation capabilities of machines. Putting actors in the loop, i.e. making energy not only visible but also controllable, is the step needed before large-scale deployment of energy management solutions. The project proposes to develop, for all actors, interactive energy consultants: energy-management decision-aid systems embedding models to support the decision-making process. MIRROR (interactive monitoring), WHAT-IF (interactive quantitative simulation), EXPLAIN (interactive qualitative simulation), SUGGEST-AND-ADJUST (interactive management) and RECOMMEND (interactive diagnosis) functionalities will be developed.
CEEGE is a multidisciplinary scientific research project conducted by the Inria PRIMA team in cooperation with the Dept of Cognitive Neuroscience at the University of Bielefeld. The primary impacts will be improved scientific understanding in the disciplines of Computer Science and Cognitive Neuro-Science. The aim of this project is to experimentally evaluate and compare current theories for mental modeling for problem solving and attention, as well as to refine and evaluate techniques for observing the physiological reactions of humans to situation that inspire pleasure, displeasure, arousal, dominance and fear.
In this project, we will observe the visual attention, physiological responses and mental states of subjects with different levels of expertise solving classic chess problems and participating in chess matches. We will observe chess players using eye-tracking, sustained and instantaneous facial expressions (micro-expressions), skin conductivity, blood flow (BVP), respiration, posture and other information extracted from audio-visual recordings and sensor readings of players. We will use the recorded information to estimate the mental constructs with which the players understand the game situation. Information from visual attention as well as physiological reactions will be used to determine and model the degree to which a player understands the game situation in terms of abstract configurations of chess pieces. This will provide a structured environment for the experimental evaluation of current theories of mental modelling and emotional response during problem solving and social interaction.
The project is organised in three phases. During the first phase, we will observe individual players of different levels of chess expertise solving known chess problems. We will correlate scan-paths from eye tracking and other information about visual attention with established configurations of pieces and known solutions to chess problems. This will allow us to construct a labeled corpus of chess play that can be used to evaluate competing techniques for estimating mental models and physiological responses. In a second phase, we will observe the attention and facial expressions of pairs of players of different levels of chess ability during game play. In particular, we will seek to annotate and segment recordings with respect to the difficulty of the game situation as well as situations that elicit particularly strong physiological reactions. In the final phase, we will use these recordings to evaluate the effectiveness of competing techniques for mental modelling and observation of emotions in terms of their ability to predict the chess abilities of players, game outcomes, individual moves and player self-reports. Results of our work will be published in scientific conferences and journals concerned with cognitive science and cognitive neuroscience as well as computer vision, multi-modal interaction, affective computing and pervasive computing. Possible applications include the construction of systems that can monitor the cognitive abilities and emotional reactions of users of interactive systems to provide assistance that is appropriate but not excessive, companion systems that can aid with active healthy ageing, and tutoring systems that can assist users in developing skills in a variety of domains including chess.
Ambient Intelligence, Equipment d'Excellence, Investissement d'Avenir
The AmiQual4Home Innovation Factory is an open research facility for innovation and experimentation with human-centered services based on the large-scale deployment of interconnected digital devices capable of perception, action, interaction and communication. The Innovation Factory is composed of a collection of workshops for the rapid creation of prototypes, surrounded by a collection of living labs and supported by an industrial innovation and transfer service. Creation of the Innovation Factory has been made possible by a 2.14 million euro grant from the French national programme "Investissement d'avenir", together with substantial contributions of resources by Grenoble INP, Univ Joseph Fourier, UPMF, CNRS, Schneider Electric and the commune of Montbonnot. The objective is to provide the academic and industrial communities with an open platform to enable research on the design, integration and evaluation of systems and services for smart habitats.
The AmiQual4Home Innovation Factory is a unique combination of three different innovation instruments: (1) Workshops for rapid prototyping of devices that embed perception, action, interaction and communication in ordinary objects based on the MIT FabLab model, (2) Facilities for real-world test and evaluation of devices and services organised as open Living Labs, (3) Resources for assisting students, researchers, entrepreneurs and industrial partners in creating new economic activities. The proposed research facility will enable scientific research on these problems while also enabling design and evaluation of new forms of products and services with local industry.
The core of the AmiQual4Home Innovation Factory is a Creativity Lab composed of a collection of five workshops for the rapid prototyping of devices that integrate perception, action, interaction and communications into ordinary objects. The Creativity Lab is surrounded by a collection of six Living Labs for experimentation and evaluation in real world conditions. The combination of fabrication facilities and living labs will enable students, researchers, engineers, and entrepreneurs to experiment in co-creation and evaluation. The AmiQual4Home Innovation Factory will also include an innovation and transfer service to enable students, researchers and local entrepreneurs to create and grow new commercial activities based on the confluence of digital technologies with ordinary objects. The AmiQual4Home Innovation Factory will also provide an infrastructure for participation in education, innovation and research activities of the European Institute of Technology (EIT) KIC ICTLabs.
The AmiQual4Home Innovation Factory enables a unique new form of coordinated ICT-SHS research that is not currently possible in France, by bringing together expertise from ICT and SHS to better understand human and social behaviour and to develop and evaluate novel systems and services for societal challenges. The confrontation of solutions from these different disciplines in a set of application domains (energy, comfort, cost of living, mobility, well-being) is expected to lead to the emergence of a common, generic foundation for Ambient Intelligence that can then be applied to other domains and locations. The initial multidisciplinary consortium will progressively develop interdisciplinary expertise with new concepts, theories, tools and methods for Ambient Intelligence.
The potential impact of such a technology, commonly referred to as "Ambient Intelligence", has been documented by the working groups of the French Ministry of Research (MESR) as well as the SNRI (Stratégie Nationale de la Recherche et de l'Innovation).
The Amiqual4Home Innovation Factory has been constructed with the Atelier Numerique technology incubator across the street from the Inria Grenoble Rhone-Alpes Research Center in Montbonnot. The workshops, storage space and multi-functional workspace occupy 300 square meters on the ground floor. The LovelyLoft smart home technologies living lab occupies the apartment on the ground and first floors formerly occupied by the building guardian. The entire building has been equipped with an extensive suite of sensors and an open building management system, and is currently used as the smart energy living lab.
IRT Silver Economy is a multi-year collaboration between the PRIMA team of Inria, Université Grenoble Alpes, CEA LETI and Schneider Electric to develop smart devices and services for healthy ageing. The project is funded by the IRT Nanoelec and began during the final months of 2015.
Within this project, Inria PRIMA and Schneider Electric have begun development of a smart long-wave (LW) infrared imaging sensor for fall detection. The target system builds on an embedded integrated sensor system constructed by PRIMA with Schneider Electric in 2014 and 2015.
Pramad is a collaborative project on the Plateforme Robotique d'Assistance et de Maintien à Domicile (robotic platform for home assistance and support). There are seven partners:
R&D/industry: Orange Labs (project leader) and Covéa Tech (insurance company),
Small companies: Interaction Games (game designer; Wizardbox, the original partner, was bought by Interaction Games) and Robosoft (robot).
Academic labs: Inria/PRIMA, ISIR (Paris VI) and Hôpital Broca (Paris).
The objectives of this project are to design and evaluate robot companion technologies to support frail people at home. Working with its partners, PRIMA addresses the following research topics:
social interaction,
robotic assistance,
serious games for frailty evaluation and cognitive stimulation.
Anne Spalanzani is involved in the ICeiRA lab (in cooperation with the CNRS laboratories LAAS and ISIR, and Taiwan). The laboratory is hosted by National Taiwan University, is supported for 5 years (2013-2018), and its collaborative research focuses on human-centered robotics.
Sampen is an associate team managed by Anne Spalanzani and Ren Luo (NTU Taipei) that involves other Inria researchers (David Daney from Inria Bordeaux and Marie Babel from Lagadic-Rennes). In the scope of this associate team, Anne Spalanzani gave a seminar at NTU in May 2015. Vishnu Narayanan (PhD student co-directed by Anne Spalanzani and Marie Babel) and Aurélien Mallein (PhD student supervised by David Daney) spent 3 months at the ICeiRA Lab (Taipei) to work respectively on navigation following conventions and localization using heterogeneous sensors.
Duration: June 2012 - June 2015
Coordinator: ST Microelectronics
Other partners: Pace, Technicolor, NXP, Myriad France SAS, 4MOD Technology, HI-IBERIA Ingenieria y Proyectos, ADD Semiconductor, Video Stream Network, SoftKinetic, Optrima, Fraunhofer, Vsonix, Evalan, University UJF/LIG, and Institut Telecom.
The PRIMA project team has worked with 15 other partners to develop a new generation of set-top box for smart home applications. In close collaboration with ST Microelectronics and Immotronics, PRIMA has developed the core middleware components for plug-and-play integration of smart home devices for distributed smart home services, as well as interactive tools for End User Development of smart home services.
AppsGate has developed an open platform to provide integrated home applications to the consumer mass market. The set-top box is the primary point of entry into the digital home for television services, including cable TV, satellite TV, and IPTV. AppsGate will transform the set-top box into a residential gateway capable of delivering multiple services to the home, including video, voice and data. PRIMA is involved in designing End User Development tools dedicated to the Smart Home.
Thierry Fraichard
Date: May 2014 - June 2015
Institution: Bar Ilan University (BIU), Israel
Sabine Coquillart has served as general co-chair for IEEE VR 2015, Arles, France
Sabine Coquillart will be general co-chair for IEEE VR 2016, Greenville, USA
Sabine Coquillart chairs the steering committee for the EUROGRAPHICS Working Group on Virtual Environments (EGVE).
Sabine Coquillart serves as a member of the steering committee for the ICAT conference (International Conference on Artificial Reality and Telexistence).
Sabine Coquillart has served as a member of the steering committee for the 3DCVE Workshop on 3D Collaborative Virtual Environments, Arles, France, March 2015.
Anne Spalanzani served on the organizing committee as Financial chair for IEEE ARSO 2015, Lyon, France
James L. Crowley chaired the Best Paper award committee for ACM ICMI 2015
Sabine Coquillart has served as a member of the program committee for GRAPP'2015 (International Conference on Computer Graphics Theory and Applications), Berlin, Germany, March 2015.
Sabine Coquillart has served as a member of the program committee for VRST 2015 - 21st ACM Symposium on Virtual Reality Software and Technology, Beijing, China, Nov. 2015.
Sabine Coquillart has served as a member of the program committee for ICAT-EGVE 2015 - 25th International Conference on Artificial Reality and Telexistence (ICAT 2015) and the 20th Eurographics Symposium on Virtual Environments (EGVE 2015) - Kyoto, Japan, 2015.
James L. Crowley will serve on the program committee for Ubicomp 2016
Dominique Vaufreydaz served on the program committee of the 10th IEEE International Workshop on Multimedia Technologies for E-Learning (MTEL2015).
Dominique Vaufreydaz served on the program committee for the 7th International Conference on Knowledge and Systems Engineering (KSE 2015).
Dominique Vaufreydaz served on the program committee for UBICOMM2015
James L. Crowley served as a reviewer for IEEE CVPR 2015.
James L. Crowley served as a reviewer for ICCV 2015.
James L. Crowley served as a reviewer for ACM ICMI 2015.
James L. Crowley served as a reviewer for IEEE ICRA 2015.
James L. Crowley served as a reviewer for AMI 2015.
James L. Crowley served as a reviewer for Interact 2015.
Dominique Vaufreydaz served as a reviewer for the International Journal On Advances in Internet Technology, vol. 8, no. 3-4, 2015.
Dominique Vaufreydaz served as a reviewer for IROS 2015.
Anne Spalanzani served as a reviewer for IROS 2015.
Anne Spalanzani served as a reviewer for Social Robotics.
Sabine Coquillart is a member of the Editorial Board of the journal Advances in Human-Computer Interaction.
Sabine Coquillart is a member of the Scientific Committee of the Journal of Virtual Reality and Broadcasting.
Sabine Coquillart is a member of the Advisory/Editorial Board for the International Journal of Computer Graphics SERSC.
Sabine Coquillart is a member of the Editorial Board of PeerJ Computer Science, an open access journal in Computer Science.
Sabine Coquillart is a member of the Editorial Board (Computer Science) of The Scientific World Journal.
Sabine Coquillart is Review Editor for the Frontiers in Virtual Environments journal.
Anne Spalanzani is a guest editor for a special issue on Assistive and Rehabilitation Robotics of the journal Autonomous Robots.
James L. Crowley has reviewed papers for IEEE Pervasive Computing.
Dominique Vaufreydaz served as reviewer for the journal Autonomous Robots
Dominique Vaufreydaz served as reviewer for the journal Robotics and Autonomous Systems
James L. Crowley, An Ecological Approach to Smart Home Systems, Joint CNRS-ALSTOM International Workshop: From Industry 4.0 to Smart Cities, CNRS, 1 rue Michel Ange, 26-27 Nov 2015.
James L. Crowley, Context Aware Mobile Assistant, Smart City and Mobility Innovations, organised by Inria in San Francisco, 11 May 2015.
Sabine Coquillart is elected member of the EUROGRAPHICS Executive Committee.
Sabine Coquillart is member of the EUROGRAPHICS Working Group and Workshop board.
James L. Crowley served on the selection committee for the Institut Universitaire de France (IUF).
James L. Crowley reviewed proposals for the H2020 FET program and the H2020 PHC program.
Dominique Vaufreydaz served as an evaluator for the ANR (French National Research Agency).
James L. Crowley has been elected member of the Conseil d'Administration of the COMUE Université Grenoble Alpes
James L. Crowley has been elected member of the Conseil du Laboratoire of the Laboratoire Informatique de Grenoble.
James L. Crowley serves on the Administrative Office (Bureau) for the Laboratoire Informatique de Grenoble.
James L. Crowley serves on the Commission d'Habilitation à Diriger des Recherches (HDR) for the Informatics and Mathematics Pole of the Université Grenoble Alpes.
James L. Crowley serves on the Comité Scientifique (COS) of the Inria Grenoble Rhône-Alpes Research Center.
James L. Crowley serves on the Steering Committee of the Inovallée TARMAC technology incubator.
James L. Crowley is director of the Amiqual4Home Equipment of Excellence (EquipEx).
James Crowley is Director of the Master of Science in Informatics at Grenoble.
Master: Computer Vision, 24h eqTD, M2 year, Master of Science in Informatics at Grenoble.
Master: Intelligent Systems, 54h eqTD, ENSIMAG.
Master : Sabine Coquillart teaches a course on Virtual Reality and 3D User Interfaces for the GVR Master 2R, 2013-2014.
Master : Sabine Coquillart teaches a one day course on 3D User Interfaces and Augmented Reality for the Numerical Modeling and Virtual Reality Master 2 in Laval, 2013-2014.
Master: Thierry Fraichard, Introduction to Perception and Robotics, 23h eqTD, M1 MOSIG, Univ. of Grenoble, France.
Master: Thierry Fraichard, Motion in Dynamic Workspaces, 2h eqTD, Computer Science Master, Bar Ilan Univ., Israel.
Co-responsibility of the Graphic, Vision and Robotics track of the international MOSIG Master programme.
Patrick Reignier has been elected member of the Conseil Académique du Pôle Math-STIC of the Université Grenoble Alpes.
Patrick Reignier supervises the industrial part of the "formation en apprentissage" (apprenticeship program) of the ENSIMAG engineering school.
Master: Patrick Reignier, Projet Génie Logiciel, 55h eqTD, M1, ENSIMAG/Grenoble INP, France.
Master: Patrick Reignier, Développement d'applications communicantes, 18h eqTD, M2, ENSIMAG/Grenoble INP, France.
Master: Patrick Reignier, Introduction aux applications réparties, 18h eqTD, M2, ENSIMAG/Grenoble INP, France.
Master: Patrick Reignier, Programmation Internet, 18h eqTD, M1, ENSIMAG/Grenoble INP, France.
Master: Patrick Reignier, Algorithmique, 50h eqTD, M1, ENSIMAG/Grenoble INP, France.
Licence: Patrick Reignier, Projet C, 20h eqTD, L3, ENSIMAG/Grenoble INP, France.
Varun Jain, Visual Perception of Emotions, Doctoral Thesis of the Université Grenoble-Alpes, defended 30 March 2015. Thesis director: James L. Crowley.
Sergi Pujades-Rocamora, Modèles de caméras et algorithmes pour la création de contenu vidéo 3D, Doctoral Thesis of the Université Grenoble-Alpes, defended 14 October 2015. Thesis directors: Rémi Ronfard (HDR) and Frédéric Devernay.
Anne Spalanzani defended her HDR "Contribution à la navigation autonome en environnement dynamique et humain" in June 2015.
Etienne Balit, Multimodalité et interaction sociale, Doctoral student of University of Grenoble-Alpes, started October 2013, Patrick Reignier (Professor), Dominique Vaufreydaz.
Viet Cuong Ta, Multiple-user localization in large-scale public spaces, Doctoral student of University of Grenoble-Alpes, started October 2013, Eric Castelli (HDR, MICA laboratory, Hanoi, Vietnam), Dominique Vaufreydaz.
Remi Paulin, Human-Robot Motion, University of Grenoble-Alpes, started in October 2013, Thierry Fraichard (HDR).
Jingtao Chen, "Pseudo-haptic feedback without active haptic feedback devices", University of Grenoble-Alpes, started in October 2014, co-directed by S. Coquillart.
Grégoire Nieto, « Lightfield Remote Vision », University of Grenoble-Alpes, Started in January 2015, co-directed by F. Devernay and J. L. Crowley
Romain Bregier, « 3D Bin Picking », University of Grenoble-Alpes, CIFRE doctoral student with Siléane, Started in October 2014, Co-directed by F. Devernay and J.L. Crowley
Alberto Quintero Delgado, University of Grenoble-Alpes, started in October 2014, directed by F. Devernay.
Julien Cumin, « Fouille de données de l’habitat connecté », University of Grenoble-Alpes, CIFRE doctoral student with Orange Labs, Started in October 2015, Directed by James L. Crowley.
Sandra Nabil, « Video Panoramic 3D », University of Grenoble-Alpes, Started in October 2015, Co-directed by F. Devernay and J.L. Crowley
James L Crowley served on the 2015 Selection committee for the IUF Senior awards.
Dominique Vaufreydaz served on the recruiting juries for ATER and MdC positions at the Université Grenoble Alpes.
James L. Crowley served on the doctoral jury of Arsène Fansi Tchango, doctoral thesis of the University of Nancy, defended 4 Dec 2015.
Dominique Vaufreydaz served as examiner on the thesis jury of Wafa Johal, "Du Style pour des Robots Compagnons : Vers la Plasticité en Interaction Homme-Robot" (On style for companion robots: towards plasticity in human-robot interaction), on 30 Oct 2015.