Created in January 2010, the MINT team is a collaboration between INRIA Lille and the LIFL and L2EP labs of Lille 1 University and CNRS (Centre National de la Recherche Scientifique).
The MINT team focuses on gestural interaction, i.e. the use of gesture for human-computer interaction (HCI). The New Oxford American Dictionary defines gesture as a movement of part of the body, especially a hand or the head, to express an idea or meaning. In the particular context of HCI, we are more specifically interested in movements that a computing system can sense and respond to. A gesture can thus be seen as a function of time into a set of sensed dimensions that might include but are not limited to positional information (the pressure exerted on a contact surface being an example of a non-positional dimension).
Simple pointing gestures have long been supported by interactive graphics systems, and the advent of robust and affordable sensing technologies has somewhat broadened the use of gestures. Swiping, rotating and pinching gestures are now commonly supported on touch-sensitive devices, for example. Yet the expressive power of the available gestures remains limited. The increasing diversity and complexity of computer-supported activities calls for more powerful gestural interactions. Our goal is to foster the emergence of these new interactions and to further broaden the use of gesture by supporting more complex operations. We are developing the scientific and technical foundations required to facilitate the design, implementation and evaluation of these interactions. Our interests include (see section 3 Scientific Foundations for a short structured description):
Gestures captured using held, worn or touched objects (e.g. a mouse, a glove or a touchscreen) or contactless perceptual technologies (e.g. computer vision);
Computational representations of these gestures;
Methods for characterizing and recognizing them;
Transfer functions used for non-isometric object manipulations;
Feedback mechanisms, and more particularly haptic ones;
Engineering tools to facilitate the implementation of gestural interaction techniques;
Evaluation methods to assess their usability.
G. Casiez, N. Roussel, R. Vanbelleghem and F. Giraud received the best paper award at IHM 2010, the French-speaking conference on Human-Computer Interaction, for their paper on the Surfpad pointing facilitation technique.
About 150 people participated in FITG10 at Euratechnologies Lille, the forum on tactile and gestural interaction organized in June by N. Roussel and D. Marchal in cooperation with DigiPort.
We proposed and validated a Master's degree specialty called "IVI", composed of courses in the fields of Interaction, Vision and Image. Géry Casiez is the head of this specialty, and most of the teaching is done by people from the MINT and SHAMAN groups in collaboration with the LAGIS (vision and control lab, Lille 1 University). The first class will graduate in June 2011.
Quan Xu will defend his Ph.D. dissertation, titled "Contribution à l'étude et au développement de techniques de gestion de fenêtres" ("Contribution to the study and development of window management techniques"), on December 15th, 2010.
The scientific approach that we follow considers user interfaces as a means, not an end: our focus is not on interfaces, but on interaction considered as a phenomenon between a person and a computing system. We observe this phenomenon in order to understand it, i.e. describe it and possibly explain it, and we look for ways to significantly improve it. HCI borrows its methods from various disciplines, including Computer Science, Psychology, Ethnography and Design. Participatory design methods can help determine users' problems and needs and generate new ideas, for example. Rapid and iterative prototyping techniques make it possible to decide between alternative solutions. Controlled studies based on experimental or quasi-experimental designs can then be used to evaluate the chosen solutions. One of the main difficulties of HCI research is the doubly changing nature of the studied phenomenon: people can both adapt to the system and at the same time adapt it for their own specific purposes. As these purposes are usually difficult to anticipate, we regularly create new versions of the systems we develop to take into account new theoretical and empirical knowledge. We also seek to integrate this knowledge in theoretical frameworks and software tools to disseminate it.
Whatever the interface, the user provides the application with curves defined over time. These curves constitute a gesture (mostly positional information, though it may also include pressure). Depending on the input hardware, such a gesture may be continuous (e.g. with a data glove) or not (e.g. with a multi-touch screen). A user gesture can be multi-variate (several fingers captured at the same time and combined into a single gesture, possibly involving two hands, or more in the context of co-located collaboration); at a higher level, we would like it to be structured in time from simple elements in order to create specific command combinations.
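Concretely, such a gesture can be represented as a set of time-stamped trajectories, one per captured contact. The sketch below is only an illustrative minimal representation; the class and field names are our own, not those of any team software:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sample:
    t: float                           # timestamp in seconds
    x: float                           # positional information
    y: float
    pressure: Optional[float] = None   # a non-positional dimension, if sensed

@dataclass
class Gesture:
    # one trajectory per captured contact (finger, hand, user, ...)
    trajectories: List[List[Sample]] = field(default_factory=list)

# a two-finger pinch sketched as two short trajectories combined into one gesture
pinch = Gesture(trajectories=[
    [Sample(0.00, 100, 200, 0.5), Sample(0.05, 110, 200, 0.6)],
    [Sample(0.00, 300, 200, 0.5), Sample(0.05, 290, 200, 0.6)],
])
```

Multi-variate gestures (two hands, several users) simply add trajectories; structuring them in time then amounts to segmenting and grouping these lists.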
One of the scientific foundations of the research project is an algorithmic and numerical study of gesture, which we organize around three topics:
clustering, which takes the intrinsic structure of gestures into account (multi-finger, multi-hand and multi-user aspects) as a lower-level processing step before further use of the gesture by the application;
recognition, which extracts semantics from a gesture that can then be used for application control (as command input). Within this topic we consider multi-finger gestures, two-handed gestures and gestures for collaboration, on which, to our knowledge, very little work has been done so far. In contrast, for single gestures (i.e. one single point moving over time in a continuous manner), numerous studies have been proposed in the literature and, interestingly, they span several communities: HMMs and Dynamic Time Warping are well-known methods in the computer vision and handwriting recognition communities. In the computer graphics community, statistical classification using geometric descriptors has previously been used; in the Human-Computer Interaction community, simple (and easy to implement) methods have been proposed that provide a very good compromise between technical complexity and practical efficiency.
mapping to the application, which studies how to link gesture inputs to the application. This ranges from the transfer functions classically involved in pointing tasks to the question of how to link gesture analysis and recognition to the algorithmics of the application content, with specific reference examples.
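To illustrate the recognition topic above, here is a textbook Dynamic Time Warping (DTW) distance between two single-point trajectories; it is a generic sketch of the well-known method mentioned above, not the team's actual recognizer, and the test strokes are made up:

```python
import math

def dtw(a, b):
    """DTW distance between two trajectories given as lists of (x, y) points."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            # extend the cheapest of the three admissible alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

template = [(0, 0), (1, 1), (2, 2)]
candidate = [(0, 0), (0.9, 1.1), (2, 2)]   # same shape, slightly noisy
other = [(0, 0), (1, -1), (2, -2)]         # mirrored stroke
# the candidate is closer to the template than the mirrored stroke
```

A recognizer of this family classifies a gesture by taking the template with minimal DTW distance, usually after resampling and normalizing the strokes.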
We ground this activity in the expertise in numerical algorithms previously acquired by team members in the physical simulation community (within which we think that aspects such as the evaluation of elastic deformation energies, the simulation of rigid bodies composed of unstructured particles, or constraint-based animation will bring interesting and novel insights to the HCI community).
Our scientific approach to the design and control of haptic devices focuses on the interaction forces between the user and the device. We seek to control these forces as precisely as possible, which leads to designs different from those of systems that control the deformation instead. The research is carried out in three steps:
identification: we measure the forces that occur during the exploration of a real object, for example a surface for tactile purposes. We then analyze the recordings to deduce the key components of the interaction forces from the user's point of view.
design: we propose new designs of haptic devices based on our knowledge of the key components of the interaction forces. For example, coupling tactile and kinesthetic feedback is a promising design for achieving a good simulation of actual surfaces. Our goal is to find designs that lead to compact systems which can stand next to a computer in a desktop environment.
control: we have to supply the device with the proper electrical conditions for it to accurately output the desired forces.
The term desktop system refers here to the combination of a window system handling low-level graphics and input with a window manager and a set of applications that share a distinctive look and feel. It applies not only to desktop PCs but also to any other device or combination of devices supporting graphical interaction with multiple applications. Interaction with these systems currently relies on a small number of interaction primitives such as text input, pointing and activation, as well as a few other basic gestures. This limited set of primitives is one reason these systems are simple to use. There is, however, a cost: most simple combinations being already in use, few remain to trigger and control innovative techniques that could facilitate task switching or data management, for example. Desktop systems are in dire need of additional interaction primitives, including gestural ones.
Ambient intelligence (AmI) refers to the concept of being surrounded by intelligent systems embedded in everyday objects. Envisioned AmI environments are aware of human presence, adapt to users' needs and are capable of responding to indications of desire, possibly engaging in intelligent dialogue. Ambient intelligence should be unobtrusive: interaction should be relaxing and enjoyable, and should not involve a steep learning curve. Gestural interaction is definitely relevant in this context.
We collaborate with the HOPALE foundation, which specializes in rehabilitation, to study new virtual-reality-based systems that could improve the rehabilitation process. We collaborate on such case studies within the ANR TecSan Reactive project, for rehabilitation after cerebrovascular accidents. Our work targets specific, ad-hoc interaction hardware as well as the analysis of interaction gestures. We hope that providing evaluation methods for interaction gesture efficiency will help physicians evaluate patient needs and make the rehabilitation process more accurate; such numerical tools could also make it possible to compare protocols or to evaluate patient recovery. Within the SHIVA Interreg 2-seas project, we work on providing virtual-reality-based tools for sculpting. The project targets this activity for rehabilitation and for young children with disabilities. It gathers four partners (INRIA Lille as leader, the Hopale Foundation, the University of Bournemouth, and the Victoria school in Poole). In this project, we plan to build tools for virtual sculpting through adapted interfaces and to propose a set of exercises involving the user's cognitive skills (assembly, object reproduction, etc.).
The group also works in another application area: serious games and 3D interactive applications for e-business, within the context of the PICOM competitiveness cluster. We started a collaboration with Idées-3com two years ago.
The heart of the MINT project is interaction gesture; it aims at making the relation between application and user more intimate through the production of tools and methods that allow applications to use more information from user gestures. At first sight there seem to be very strong differences in fields, tools and vocabulary between science and art; even basic intellectual schemes are classically thought to be different. Yet a closer look needs to be taken. Over time, art has become more and more involved in the relation between people and content; relational art is one example.
In indirect pointing situations, such as when using a mouse, control-display gain functions determine the movement of the displayed pointer from the movement of the pointing device. Although every graphical interactive environment implements several of these functions, they have received little attention and little is actually known about their impact. A first step in evaluating their impact is to determine the transfer functions actually used on modern operating systems. We developed an apparatus and a method to retrieve the gain functions of different operating systems: we first presented a method, based on specific hardware, for measuring the gain function used on an arbitrary system; using this hardware, we described the gain functions used by current operating systems and then analyzed the differences between those functions. The results show important differences between the retrieved curves.
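To make the notion concrete, the sketch below applies a speed-dependent gain to device displacements. The sigmoid shape and every constant are illustrative assumptions of ours, not the curves retrieved from any actual operating system:

```python
import math

def gain(speed_mps, g_min=1.0, g_max=8.0, v_mid=0.1, slope=30.0):
    """Illustrative sigmoid control-display gain: low gain at slow speeds
    (for precision), high gain at fast speeds (for reach)."""
    return g_min + (g_max - g_min) / (1.0 + math.exp(-slope * (speed_mps - v_mid)))

def pointer_delta(dx_m, dy_m, dt_s):
    """Map one device displacement (metres, over dt_s seconds) to a display
    displacement using the speed-dependent gain."""
    speed = math.hypot(dx_m, dy_m) / dt_s
    g = gain(speed)
    return g * dx_m, g * dy_m

slow = pointer_delta(0.0001, 0.0, 0.01)  # 1 cm/s: gain close to g_min
fast = pointer_delta(0.005, 0.0, 0.01)   # 50 cm/s: gain close to g_max
```

Measuring a real system's transfer function amounts to feeding it controlled device displacements and recording the resulting pointer displacements, which is what the hardware apparatus described above makes possible.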
STIMTAC is a tactile device based on a squeeze-film air bearing. Its operating principle consists in vibrating a plate at several tens of kilohertz at very low amplitude: an air gap results from the fast expansion and contraction of the air trapped between the vibrating plate and the finger. Previous designs suffered from several disadvantages. First, the bulk of the system was not compatible with the space requirements of a desktop environment. This was due to the optical sensor locating the position of the fingertip on the plate: this solution was the best in terms of resolution and precision, but it needed large optical paths and thus increased the overall size of the device. Second, the power consumption of the tactile plate was prohibitive. This large amount of power increased the plate's temperature, hurting users. Moreover, an additional power supply was needed because a simple USB port could not provide that much power to the plate.
This is why a study was carried out to optimize the tactile plate's design, in collaboration with the University of Ghent. New plate dimensions were determined in order to increase the deformation-to-voltage ratio. For that purpose, an analytical model and a finite element model were developed to find a global optimum for the tactile plate. The optimization method used was space mapping, which refines the optimal solution given by the optimized analytical model using finite element analysis. The resulting plate cut the voltage requirement down to 6 volts, instead of the 15 volts required by the original tactile plate.
We completed this work with a study of the power consumption of the tactile plate. We pointed out that a saturation effect increases the power required at a given vibration amplitude. After modelling this effect, we proposed a design rule to avoid saturation below the rated vibration amplitude, and we found a design that cut power losses down to 0.5 W. This design allows the device to be powered from a lightweight power source, or directly from a USB port for example.
Surfpad is a pointing facilitation technique that operates in the tactile domain by taking advantage of the ability of a particular touchpad, the STIMTAC, to alter its coefficient of friction. We report on two experiments comparing it to the semantic pointing technique and to constant control-display gain, with and without distractor targets. Our results clearly show the limits of traditional target-aware gain adaptation in the latter case, and the benefits of our tactile approach in both cases. Surfpad can lead to a performance improvement of up to 21% compared to unassisted pointing at small targets with no distractor. It is also robust to high distractor densities, keeping an average performance improvement of nearly 10%, while semantic pointing can degrade performance by up to 100%.
UIMarks is a system that lets users specify on-screen targets and associated actions by means of a graphical marking language. It supplements traditional pointing by providing an alternative mode in which users can quickly activate these marks. Associated actions can range from basic pointing facilitation to complex sequences possibly involving user interaction: one can leave a mark on a palette to make it more reachable, but the mark can also be configured to wait for a click and then automatically move the pointer back to its original location, for example. The system has been implemented on two different platforms, Metisse and OS X. We compared it to traditional pointing on a set of elementary and composite tasks in an abstract setting. Although pure pointing was not improved, the programmable automation supported by the system proved very effective. This work was done in collaboration with Olivier Chapuis, from INRIA's IN-SITU team. A video illustrating the system is available from http://
We propose Push-and-Pull Switching, a window switching technique that uses window overlapping to implicitly define groups. Push-and-Pull Switching enables switching between groups and re-stacking the focused window to any position to change its group membership. The technique was evaluated in an experiment which found that Push-and-Pull Switching improves switching performance by more than 50% compared to other switching techniques in different scenarios. A longitudinal user study indicates that participants invoked this switching technique 15% of the time on single-monitor displays and that they found it easy to understand and use.
Clicking is a key feature that any interactive input system needs to provide. In the case of 3D input devices, this feature is often difficult to provide (e.g. vision-based or tracking systems for free-hand interaction do not natively provide any button). In this work, we show that it is actually possible to build an application that provides two classical interaction tasks (selection, and pick-and-release) without any button-like feature. Buttonless clicking is based on trajectory and kinematic gesture analysis. In a preliminary study we illustrate the principle of the method; we then detail an algorithm to discriminate selection, pick and release tasks using kinematic criteria. We present a controlled experiment that validates our method with an average success rate of 90.1% across all conditions.
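The summary above does not specify the actual kinematic criteria, so the sketch below only illustrates the general idea with a hypothetical dwell-based rule: a "click" is detected when the tracked point moves slower than a threshold for a minimum duration. The thresholds and the rule itself are our assumptions, not the paper's algorithm:

```python
def detect_clicks(samples, speed_thresh=0.05, dwell=0.2):
    """samples: list of (t, x, y, z) tuples. Return the timestamps at which a
    click is detected, i.e. the point has been moving slower than
    speed_thresh (units/s) for at least `dwell` seconds."""
    clicks, slow_since = [], None
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speed = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5 / dt
        if speed < speed_thresh:
            slow_since = slow_since if slow_since is not None else t0
            if t1 - slow_since >= dwell:
                clicks.append(t1)
                slow_since = None  # re-arm after a detection
        else:
            slow_since = None
    return clicks

# a hand moving for 0.45 s, then hovering still: one click should be detected
traj = [(i * 0.05, 0.1 * i, 0, 0) for i in range(10)] + \
       [(0.5 + i * 0.05, 1.0, 0, 0) for i in range(1, 8)]
```

Discriminating pick from release would additionally require state (is an object currently held?) and possibly richer criteria such as deceleration profiles.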
Multi-touch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multi-touch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multi-touch displays. Using that taxonomy, we introduce DS3 (Depth-Separated Screen Space), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space. Results showed that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 being at least 22% faster.
Multi-touch displays represent a promising technology for the display and manipulation of 3D data. To fully exploit their capabilities, appropriate interaction techniques must be designed. In this work, we explore the design of free 3D positioning techniques for multi-touch displays that exploit the additional degrees of freedom provided by this technology. Our contribution is two-fold: first we present an interaction technique that extends the standard four-viewport technique found in commercial CAD applications, and second we introduce a technique designed to allow free 3D positioning with a single view of the scene. The two techniques were evaluated in a preliminary experiment. The first results lead us to conclude that the two techniques are equivalent in terms of performance, showing that the Z-technique provides a real alternative to the status quo viewport technique.
Idées-3Com is a start-up specialized in Web 3D content creation for e-shopping and advertising. Several collaborations on gestural 3D interactions and enhanced 3D metaphors are currently under way. Idées-3Com is one of the industrial partners of the ANR TecSan Reactive project. We have a joint I-lab (which was, to our knowledge, the very first accepted at INRIA), started in September 2009, in which we have defined a three-year joint working program. We work on using multi-touch interaction with 3D content in one of Idées-3Com's application domains (virtual room planning). We also maintain a software activity on the company's platform (they use the VRML-based BS Contact software) in order to make it evolve properly (inclusion of basic physical simulation, collision detection, possible evolution to other architectures).
During 2010, we helped Maxence Dislaire create his start-up on multi-touch applications for shops. We provided consulting on finger-based interaction (contract: 10 Keuros). See http://
This technology development activity aims at proposing a low-cost free-hand interaction system that can be used flexibly in arbitrary configurations, building on previous team results (started October 2009, ends November 2011). During the first year of this ADT, a collaboration was set up with the Le Fresnoy (National Studio of Contemporary Art) school (http://
(ANR 2006 - RNTL - Partners: INSA Rennes, INRIA (MINT, I3D), CNRS (LaBRI, Mouvement et Perception), ESIEA, FT R&D, CEA-LIST, VIRTOOLS, HAPTION, CLARTE, RENAULT, THALES, SOGITEC)
This project aims to propose a software platform for collaborative work, studying it from the point of view of a human interacting in collaboration inside a 3D environment. Part@ge studies collaboration from a multi-criteria point of view in order to propose several innovative solutions: usages associated with collaboration in a 3D environment, technical infrastructures supporting collaboration, and tools to spread 3D collaborative activities.
(ANR 2008/2011 - Partners: INRIA MINT (ALCOVE), CEA-LIST, Idées-3com). 147.7 Keuros for MINT.
The Reactive project addresses rehabilitation for patients who have suffered a cerebrovascular accident (CVA). It aims at proposing new VR-based rehabilitation tools to improve patients' involvement in their own rehabilitation, by proposing attractive training exercises, and to increase the transfer of recovered skills from exercises to real-life situations.
(ANR 2009/2012 - Partners: INRIA (MINT, IPARLA), IMMERSION, Cap Sciences). 162.2 Keuros for MINT.
The InSTInCT project focuses on the design, development, and evaluation of new simple and efficient touch-based interfaces, with the goal of bringing widespread visibility to new generations of interactive 3D applications, aimed in particular at general public audiences. The Alcove team is largely involved in the project.
(Interreg 2-seas, started Sept. 2010 - Partners: MINT research team (lead, L. Grisoni), Hopale Foundation, University of Bournemouth, Victoria school in Poole).
The SHIVA project (Sculpting and Health: between Interaction and Virtual Art) aims at providing virtual-reality-based tools for sculpting, targeted at virtual rehabilitation and children with disabilities. Sculpting was traditionally used in medical contexts but was abandoned because of its practical cost and hygiene concerns. In this project, we plan to build tools for virtual sculpting through adapted interfaces and to propose a set of exercises involving the user's cognitive skills (assembly, object reproduction, boolean operations, etc.).
Journal editorial board:
G. Casiez: Journal d'Interaction Personne-Système (AFIHM)
N. Roussel: Journal d'Interaction Personne-Système (AFIHM), co-editor in chief
Journal reviewing:
G. Casiez: International Journal of Human-Computer Studies
L. Grisoni: J. of Comp. Aided Design, IEEE Trans. On Vis. and Comp. Graphics
N. Roussel: ACM Transactions on Computer-Human Interaction
Conference organization:
N. Roussel & D. Marchal (co-organizers): FITG10, Forum on tactile and gestural interaction. Organized in cooperation with DigiPort, this forum consisted of three co-located events: an academic workshop, a showroom and a BarCamp (June 10-12, about 150 participants overall)
B. Semail (member of the organizing committee): VPPC'2010, the IEEE Vehicle Power and Propulsion Conference
Program committees:
L. Grisoni: VRIPHYS 2010, Computer Animation and Social Agents (CASA) 2010
Conference reviewing:
G. Casiez: ACM CHI, ACM UIST, IHM, ITS, 3DUI, VRIC
C. Chaillou: Edutainment 2010, VRIPHYS 2010
N. Roussel: ACM CHI, ACM UIST, ACM CSCW, ACM EICS, IHC
Evaluation committees and invited expertise:
G. Casiez: member of the hiring committee for assistant professor position in CS at the University of Lille I (3 positions in 2010)
C. Chaillou: reviewer for the ANR “Contenus et interactions” & TecSan programs; president of the hiring committee for CR INRIA Lille Nord Europe; member of the hiring committee for INRIA DR; member of the hiring committee for one professor position (computer science, Univ. of Lille 1, 2010).
S. Degrande: ANR “Programme Blanc International” reviewer (2010)
L. Grisoni: participation in hiring committees for assistant professor positions (University of Toulouse 2010, Calais 2010) and a "chair" (Visual Studies, Lille 3, 2010)
N. Roussel: member of the AERES evaluation committee for UMR STMS (IRCAM, Paris, Mar 2010) and L3I (La Rochelle, Dec 2010); reviewer for the Aquitaine region, the Digiteo research cluster, and the ANR “Jeunes chercheurs” and “Contenus et interactions” programs
PhD and habilitation committees(not including advised or co-advised PhDs and local habilitations):
G. Casiez: Olivier Bau (Paris-Sud University, June 2010, jury member)
C. Chaillou: Jérémy Bluteau (Université de Grenoble, June 2010)
N. Roussel: Ines Di Loreto (Università degli Studi di Milano, Mar 2010, reviewer), Anne Roudaut (Telecom ParisTech, Feb 2010, jury member)
L. Grisoni: Nadine Couture (HDR University Bordeaux I, Dec 2010, reviewer), Adeline Pihuit (University Grenoble, Nov 2010, reviewer), Jérôme Baril (University Bordeaux I, Jan 2010, reviewer)
B. Semail: Christine Prelle (UT Compiègne, reviewer)
Scientific associations:
N. Roussel: AFIHM (French-speaking HCI association) Executive Committee member since Sept 2009, and secretary since Sept 2010
L. Grisoni: member of the Association Eurographics France administrative committee from 2004 to 2008; member of the University Lille 1 computer science lab (LIFL) council since 2008
B. Semail: member of the IEEE and the EPE association
Demos and meetings:
Journées publiques Part@ge: in the framework of the national ANR Part@ge project, two “public events” were organized to share the project's findings with industrial companies and leading figures. During the second event, in 2010, we gave a scientific presentation of our work on a Virtual City Planner.
Euratechnologies: for the inauguration of the INRIA demo space at Euratechnologies in Lille, we created one of the permanent demonstrations. It combines a large screen with a multi-touch table; using this hardware, we proposed a virtual shopping experience in the context of the starting collaboration with the Idées-3com company.
Le Fresnoy, Panorama exhibition: we collaborated with artists of Le Fresnoy (national studio of contemporary art) on two interactive art installations, RoadSide Attractions (Samuel Degrande, Laurent Grisoni) and Pharmakon (Laurent Grisoni, Damien Marchal, ADT INRIA GINA), for the Panorama exhibition that took place from 4 June to 21 July 2010. These two installations propose advanced interaction between "users" and "computer", targeting artistic expression.
Invited talks:
C. Chaillou: "Taking customers into new worlds: interaction, images", International Conference on Development and Application Systems, May 27-29, 2010, Suceava, Romania
C. Chaillou: "3D Interaction: from mouse to gestures", Transdigital Seminar 2010, June 24-25, 2010, Tourcoing, Le Fresnoy
C. Chaillou :
Haibo Wang: "Vers un suivi en temps réel de la position de la tête et du visage" ("Towards real-time tracking of head and face position"), defended on September 24th, 2010
Cédric Syllebranque: "Estimation de propriétés mécaniques d'objets complexes à partir de séquences d'images" ("Estimation of mechanical properties of complex objects from image sequences"), defended on March 5th, 2010
Quan Xu: "Contribution à l'étude et au développement de techniques de gestion de fenêtres" ("Contribution to the study and development of window management techniques"), defended on December 15th, 2010
Master in Computer Science (Lille 1 University):
G. Casiez: HCI, multi-touch interaction, Virtual Reality and Interaction, gesture recognition techniques (DTW and HMM)
L. Grisoni: computer graphics (animation, geometric modeling)
F. Giraud: fundamentals of piezoelectricity
Engineering schools (Polytech'Lille & Telecom Lille 1):
G. Casiez: HCI, computer graphics, VR, haptics
L. Grisoni: advanced computer graphics, data compression and security (DCT, wavelets, watermarking, cryptology)
C. Chaillou: hardware and computer architecture
P. Plénacoste: HCI - Ergonomics
B. Lemaire-Semail: electromagnetism, piezoelectric control
Bachelor in Computer Science (Lille 1 University, Licence):
F. Aubert: 3D Programming, Introduction to Computer Graphics, Introduction to Programming
G. Casiez: Introduction to programming
P. Plénacoste: HCI - Ergonomics.
F. Giraud: Modeling and Control of electrical devices, Introduction to electrical engineering