The goal of the MErLIn project is to contribute to the improvement of the Ergonomic Quality of Interactive Software. Two sub-goals contribute to that general goal:
Acquire, through empirical studies, new ergonomic knowledge on the usability of interactive software;
Study and improve ergonomic design and evaluation methods, thereby contributing to the overall improvement of technical systems by providing software designers with sound methodological elements that help incorporate user-centered concerns into the design process life cycle. This involves both increasing available knowledge on such processes and defining new methods or complementing existing ones.
Considering interactive computing systems for human use, i.e., the ergonomic optimization of interactive software, requires progress both on fundamental knowledge and on methods in HCI (Human-Computer Interaction) and Ergonomics. The scientific contributions of the MErLIn project include scientific literature on user and task modeling, empirical studies, design and evaluation methods, and ergonomic recommendations, as well as software (e.g., mock-ups, test prototypes, tools supporting design and evaluation methods). These contributions aim at disseminating current ergonomic results, knowledge, and know-how to the national and international scientific community, but also at feeding standardization efforts and technology transfer through industrial contracts, collaborations, and consulting activities.
Currently, the MErLIn project investigates two main research directions:
The study, design, assessment, and set-up of ergonomic methods for designing and evaluating interactive software. This corresponds to the need for integrating available ergonomic results into the computer systems life cycle. The main current topics relate to task-based and criteria-based methods.
The study of usability issues raised by ``new'' computer applications: new user populations, new application domains, new forms of interaction (often new technology raises new usability problems). This corresponds to the need for acquiring novel ergonomic results on innovative computer systems, and to further increase current knowledge on usability. The main current topics relate to multimodal interactions, and virtual reality.
The scientific domains characterizing the activities of the MErLIn project are essentially Ergonomics, particularly Software Ergonomics, and HCI. Four definitions apply to the research activities of the MErLIn project:
Ergonomics
Ergonomics contributes to the design and evaluation of tasks, jobs, products, environments, and systems in order to make them compatible with the needs, abilities, and limitations of people.
Derived from the Greek ergon (work) and nomos (laws) to denote the science of work, ergonomics is a systems-oriented discipline which now extends across all aspects of human activity.
Domains of specialization within the discipline of ergonomics are broadly the following:
Physical ergonomics is concerned with human anatomical, anthropometric, physiological, and biomechanical characteristics as they relate to physical activity (relevant topics include working postures, materials handling, repetitive movements, work related musculoskeletal disorders, workplace layout, safety, and health).
Cognitive ergonomics is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system (relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system design).
Organizational ergonomics is concerned with the optimization of sociotechnical systems, including their organizational structures, policies, and processes (relevant topics include communication, crew resource management, work design, design of working times, teamwork, participatory design, community ergonomics, cooperative work, new work paradigms, virtual organizations, telework, and quality management).
Software Ergonomics (or HCI Ergonomics) inherits the main characteristics of ergonomics. It is a science that contributes the knowledge necessary for software design, and more generally for computer-based environments, with the overall perspective of human security and well-being, but also of effectiveness, efficiency, and productivity, for instance by facilitating users' tasks, limiting learning time, and reducing errors and the cost of errors. Software Ergonomics focuses on the improvement of human-computer interactions mainly in terms of cognition, as the main human activity involved in software interactions is mental. However, as novel interaction techniques (e.g., multimodality) and novel environments (e.g., Virtual Reality) arise, some aspects of physical ergonomics are starting to be considered as well.
HCI
In addition, since the MErLIn project aims at optimizing the Ergonomic Quality of Interactive Software, the following definition applies as well.
Ergonomic Quality of Interactive Software covers all software aspects that influence the users' task completion. It therefore covers usability in the widest sense, or ease of use, i.e., the extent to which users can easily reach their interaction goals (which usually refers to presentation and dialogue aspects of the interaction: modes, interface features, dialogue, etc.), but also what is sometimes called utility, i.e., the extent to which users can reach their task goals (which usually refers to functional aspects of the interaction: functions, objects, data, etc.). From a software engineering perspective (e.g., architecture models), Ergonomic Quality thus covers not only the classical presentation, dialogue control, and application interface aspects, but also those application kernel aspects that influence the users reaching their goals.
National and International Context
At the international level, HCI ergonomics has been very dynamic for a number of years: young researchers, strong involvement of industrial partners and academia, many job offers, major conferences with large audiences, and many scientific journals. Most renowned universities and large software companies have HCI groups.
In France, research centers are still few and often mono-disciplinary; the development of multidisciplinary (ergonomics and software engineering) research in HCI is quite recent.
Approach
Human-Computer Interaction, a particular type of Human-Machine Interaction, can be viewed along three complementary aspects: the human point of view, the computer point of view, and the interaction point of view. The MErLIn project considers all three: the human and interaction aspects belong to ergonomics; the computer and interaction aspects belong to the HCI part of computer science. Our research views computer systems (software, interfaces, environments) as a set of tools provided for human use.
The MErLIn project uses methods from Ergonomics and from Computer Science, with a strong background and orientation in experimental approaches and methods (in the sense of the experimental sciences, with hypothesis testing, as in medicine, biology, physics, etc.).
The project contributes to the rationalization of ergonomic methods through experimental testing in the laboratory or field simulations, using performance data (e.g., learning time, task duration, usage frequency, error frequency, navigation types, level of recall), analysis of verbal protocols, and analysis of preferences. The modeling activities are also centered on the production of computer models. The appropriateness and accuracy of such models with respect to reality are always assessed through ergonomic evaluations.
Research work starts usually from the observation of real tasks, on selected fields of activity, often in parallel with particular practical problems to be solved. Data gathering is based on activity and interaction analyses, case studies, critical incidents, automatic logs, and records.
Focus: research work at MErLIn has three additional characteristics.
The focus is on methods dedicated to designers that are not necessarily skilled in ergonomics, even though such methods can also improve the activity of the ergonomists themselves. More specifically, the project deals with the integration of ergonomics approaches within the computer systems life cycle through sets of recommendations, methods, software support tools, and involvement in standardization, teaching, and consulting.
The focus is on users who are not computer specialists. That user population is the major target of current software developments, whether the general public (e.g., interactive booths, electronic commerce, mobile systems) or professional experts in various domains (e.g., nuclear power plants, railway systems, textile design). A particular focus is on accessibility, which promotes increased effectiveness, efficiency, and satisfaction for people with a wide variety of capabilities and preferences.
The focus is not only on ``classical'' work situations, but also on new computer uses, not yet all well defined, such as consumer products (e.g., electronic commerce), information retrieval (e.g., tourism), mobility, etc.
This year, the main application domains have been: design activities (3D design, initially textile industry), regulation activities (railway and subway systems). See section for the specific results and the industrial partners involved.
The research work conducted this year is presented along two main topics:
Ergonomics methods for the evaluation and design of software interactions.
Ergonomics of multimedia and multimodal interactions.
This research follows two initial requirements: an applied requirement of helping designers evaluate and re-design 3D applications for the textile industry (EUREKA ``COMEDIA'' project), and a more generic requirement of defining an ergonomic inspection method for the design and evaluation of 3D applications and virtual reality. The initial step was to gather ergonomic results in the form of a compilation of ergonomic recommendations for virtual environments, and to define a set of 20 Ergonomic Criteria (E.C.) derived from that compilation. These criteria are presented in a document containing their definitions, their justifications, some examples of recommendations, and examples and counter-examples of applications. We then conducted two experiments to validate the criteria. The aim of the first experiment was to measure the intrinsic validity of the criteria with an assignment task; the aim of the second was to measure the contribution of the Ergonomic Criteria during an inspection, by comparing three evaluation situations (a user test, a free inspection, and an inspection with the criteria). The first experiment was based on an assignment task consisting in matching concepts (the criteria) with their potential instances (usability problems). Ten experts in software usability (mean experience = 5.65 years; SD = 5.4), but not experts in Human Virtual Environment Interaction (HVEI), had to assign the criteria to 40 Virtual Environment (VE) usability problems. The results show that the subjects produced 68% of correct assignments to main criteria (corresponding to the theoretical assignment) and 59.5% to elementary criteria. Further analysis revealed the confusions made between the different criteria. Such results should improve as the definitions and justifications are revised and more illustrative examples and counter-examples are added.
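The scoring of such an assignment task can be sketched as a comparison between each expert's answers and the theoretical assignment, together with a tally of the confusions between criteria. The criterion names and answer data below are purely illustrative, not taken from the actual experiment.

```python
# Illustrative scoring of a criteria-assignment task (hypothetical data).
from collections import Counter

def score_assignments(theoretical, answers):
    """theoretical: problem -> correct criterion;
    answers: one dict per expert, mapping problem -> assigned criterion.
    Returns the correct-assignment rate and a Counter of (expected, assigned)
    confusion pairs."""
    correct = total = 0
    confusions = Counter()
    for expert in answers:
        for problem, assigned in expert.items():
            total += 1
            if assigned == theoretical[problem]:
                correct += 1
            else:
                confusions[(theoretical[problem], assigned)] += 1
    return correct / total, confusions

# Hypothetical example: 3 problems, 2 experts.
theoretical = {"p1": "Guidance", "p2": "Workload", "p3": "Consistency"}
answers = [{"p1": "Guidance", "p2": "Guidance", "p3": "Consistency"},
           {"p1": "Guidance", "p2": "Workload", "p3": "Workload"}]
rate, confusions = score_assignments(theoretical, answers)
print(rate)        # 4 correct assignments out of 6
print(confusions)  # which criteria were mistaken for which
```

The confusion tally is what supports the analysis mentioned above: frequent (expected, assigned) pairs point at criteria whose definitions need clearer examples and counter-examples.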
The second experiment was based on the comparison of three ergonomic evaluation methods used to evaluate two VEs. Both VEs ran on a classical computer, in order to limit familiarization bias. One was a tourism application (an interactive visit of the Chamonix valley), the other an educational 3D application. The first method was a user test; 5 males and 5 females (mean age = 21.8; SD = 1.5) took part in it. The subjects were familiar with classical computers but not with 3D applications. They performed a set of 10 tasks in the tourism application and followed the instructions in the educational one, with 30 minutes per application. The usability problems were identified a posteriori through analyses of the user tests (based on video recordings of the sessions). The second method was a free inspection: 9 students in software usability (DESS degree), 6 males and 3 females (mean age = 26; SD = 7), performed an ergonomic inspection of the two VEs based solely on their own knowledge. They had to find and describe as many usability problems as possible in each application, with 30 minutes per application. The third situation was similar to the previous one, except that different subjects used the E.C.: ten students in software usability, 5 males and 5 females (mean age = 24.5; SD = 2.5), had to read the E.C. before performing the inspection, and could refer to them during the session. The three methods were compared on several performance aspects. The first was evaluation performance. This indicator showed equivalent performance between the user test and the E.C. inspection (E.C.: around 22 to 25 problems found per evaluation depending on the application; user test: 26 to 29), and revealed a sizeable difference between the E.C. inspection and the free inspection (free inspection: 8 to 11 problems found depending on the application). The second aspect is the internal consistency of the methods (number of identical problems found by two or more evaluations). On this aspect, the E.C. inspection is equivalent to the user test and more stable across the evaluated applications (E.C. internal consistency: 61% to 68% depending on the application; user test: 57% to 75%), while a sizeable difference again separates the free inspection from the E.C. inspection (free inspection internal consistency: 30% to 45%). The third aspect is the revealing power of each method: relative to all the problems found in the two VEs with the three methods, we computed the proportion of problems found by each method. The revealing power of the E.C. inspection is equivalent to that of the user test (E.C.: 60% to 61%; user test: 56% to 58% depending on the application), whereas the proportion of problems found by free inspection is lower (34% to 39%). To summarize, the results of these two experiments basically validate the potential utility and usability of the E.C. adapted to Human Virtual Environment Interaction. This work was presented in a conference paper and in a thesis.
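As an illustration, the three comparison indicators used above can be sketched as set operations over the problems reported by each evaluation. The problem identifiers and reports below are hypothetical, not the experiment's data.

```python
# Sketch of the three comparison indicators, on hypothetical data.
# Each evaluation's report is modelled as a set of usability-problem ids.
from collections import Counter

def evaluation_performance(reports):
    """Mean number of problems found per evaluation."""
    return sum(len(r) for r in reports) / len(reports)

def internal_consistency(reports):
    """Share of distinct problems reported by two or more evaluations."""
    counts = Counter(p for r in reports for p in r)
    shared = sum(1 for c in counts.values() if c >= 2)
    return shared / len(counts)

def revealing_power(reports, all_problems):
    """Proportion of all known problems that the method uncovered."""
    found = set().union(*reports)
    return len(found) / len(all_problems)

# Hypothetical example: 3 evaluations, 6 problems known in the application.
all_problems = {"p1", "p2", "p3", "p4", "p5", "p6"}
reports = [{"p1", "p2", "p3"}, {"p2", "p3"}, {"p3", "p5"}]
print(evaluation_performance(reports))         # 7/3 problems per evaluation
print(internal_consistency(reports))           # p2 and p3 shared -> 2/4 = 0.5
print(revealing_power(reports, all_problems))  # {p1,p2,p3,p5} -> 4/6
```

This makes explicit why the three indicators can diverge: a method may find many problems per evaluation yet have low consistency if each evaluator reports different ones.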
This year, the study for the PREDIT project with SNCF (French Railways) and RATP (Paris Subway System), which aims at identifying and predicting factors involved in mental workload from the identification of task characteristics, mainly focused on the analysis of 68 semi-structured interviews. These interviews were conducted in three railway regulation centers (Château-Landon, Paris Gare du Nord, and Versailles-Chantier), as follows:
Presentation of the study and of the interviewer;
Presentation of the aim of the interview;
Explanation of the concepts of ``Mental Workload'', Underload and Overload.
After a leading question (``What does mental workload mean for you in your activity?''), we used a ``Why and How'' technique to elicit precise descriptions of workload cases experienced by the subjects, as well as their evaluation of these situations. The interviews yielded a total of 107 cases of mental workload: 69 cases of overload and 38 cases of underload. Moreover, among the 68 operators interviewed, 6 said that they never had to cope with over- or underload; a common feature of these 6 operators is that they do not feel time pressure.
A first analysis showed that clear-cut cases of underload and overload are quite difficult to find, for two main reasons: workload varies a lot from one person to another (levels of experience), and there are differences across the observed sites (work organisation and role of the human operators). Hence, in order to better understand why over- or underload occurs, a qualitative analysis was conducted. It aims at showing which sorts of cases are involved and the type of activity they generate, and it shed light on differences as a function of the type of operator considered.
Regarding overload we observed:
The ``Circulation Chiefs'' (CCL) evoked three types of cases: ``abrupt stop of the traffic followed by a step-by-step restart'' (11 cases), ``reorganization of traffic'' (3 cases), and ``lack of experience'' (5 cases).
The ``Adjunct CCLs'' also evoked three types of cases: ``setting security measures'' (5 cases), ``setting work measures'' (3 cases), and ``setting both security and work measures'' (3 cases).
The ``Circulation Agents'' evoked overload cases when ``the situation is disrupted'' (19 cases), when they have to ``set security measures linked to work'' (3 cases), and when the operator lacks experience (8 cases).
The ``Pointsmen'' evoked above all troubles associated with ``technical hitches or incidents involving people'' (8 cases), and one case of trouble with the ``return of work''.
Regarding underload we observed:
The ``CCLs'': the four cases gathered mainly concern periods during which CCLs only have to supervise the traffic over a long stretch of time (which may last the whole shift). The periods concerned by underload are those outside rush hours, in August, during weekends, and at night, especially weekend nights, when there are few trains and passengers and little or no traffic. Underload is due to the operators' attention declining over time; they feel tired and sometimes bored.
The ``Adjunct CCLs'': we distinguish two periods of underload, during rush hours and between two moments of activity; in both cases the operators have to wait passively.
The ``Circulation Agents'': 5 cases concern nights, especially weekend nights; 6 cases concern Sundays, particularly during summer; 9 cases concern periods outside rush hours. All these periods have low activity in common.
The ``Pointsmen'': all 6 cases are linked to a single supervision task that lasts the whole shift.
A second analysis led us to identify tasks that revealed difficulties from the point of view of overload and of the operators' experience levels:
for ``CCLs'': 6 tasks:
one interrupted problem-solving task conducted by an experienced operator,
one problem-solving task conducted by novices,
one problem-solving task conducted for 4 hours by an experienced operator,
two problem-solving tasks conducted simultaneously by an experienced operator,
two problem-solving tasks conducted simultaneously, including one attributed to the CCL, carried out by an experienced operator,
one supervision task over novices, conducted by novices.
for ``Adjunct CCLs'': 5 tasks:
10 simultaneous cognitive tasks such as ``acknowledge works'',
2 simultaneous cognitive tasks such as ``acknowledge work instructions'',
one problem-solving task,
two simultaneous problem-solving tasks.
for ``Circulation Agents'': 2 tasks:
executing several cognitive tasks simultaneously during rush hours,
executing three cognitive tasks simultaneously.
for ``Pointsmen'': 3 tasks:
reprogramming itineraries on the Eole table, which leads to two simultaneous cognitive tasks,
reprogramming itineraries during an incident, which leads to a problem-solving task lasting over an hour,
two simultaneous tasks: a situation-analysis task and an itinerary-reprogramming task.
The analysis of task difficulties involving mental underload is currently under way; in addition, selected cases will be described formally using the new version of MAD (Méthode Analytique de Description).
These analyses and results are described in a technical report.
Part of the PREDIT project conducted with SNCF and RATP is based on task analysis. In order to support such activities in the future, including in other application domains, a software tool to help model user tasks is needed. This work is being done in the context of a Ph.D. thesis. The manuscript, currently being finalized, covers various research issues for which state-of-the-art reviews are provided:
development cycles (Cascad, NABLA, Cycle V) and methodologies (Merise, UML);
taxonomies and evaluations of formalisms for HCI;
task models (CTT, Diane+, Euterpe, GTA, MAD), with their expressive power and generative capabilities.
The manuscript also describes the study and implementation of the model that constitutes the core of the application and allows the description of users' activities. This includes an architecture providing various services such as workload, user events, and graphical user interface generation. The core supports the description of task and user characteristics, of abstract and concrete objects, of object groupings, and of events. A grammar makes it possible to bind the objects to the tasks through preconditions and postconditions.
In addition, hierarchical rules had to be established for the core: for instance, tree leaves cannot be abstract tasks, and the identifier of an abstract object must be unique. The semantic level is managed through Petri Nets, in order to define and exploit the dynamics of the model.
Concerning the software tool itself, the main difficulty is to validate and evaluate the expressions of the conditions. A DTD contains a general grammar, and an XML file contains the recognition language (English or French); users' input is interpreted against these two files. The simulation part, which covers the dynamic aspects of tasks, users, conditions, and events, is under development.
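The hierarchical task core described above can be sketched as follows. This is a minimal illustration in the spirit of the description, not the actual tool's model: the class names, the state dictionary, and the door example are all hypothetical.

```python
# Minimal sketch of a hierarchical task model bound to objects through
# pre- and postconditions (illustrative only; not the actual tool's API).

class Task:
    """A node of the task tree. Preconditions gate execution; leaf tasks
    apply their postcondition to a shared state dictionary."""
    def __init__(self, name, precondition=None, postcondition=None):
        self.name = name
        self.subtasks = []
        self.precondition = precondition or (lambda state: True)
        self.postcondition = postcondition or (lambda state: state)

    def add(self, subtask):
        self.subtasks.append(subtask)
        return self

    def is_leaf(self):
        # Hierarchical rule from the text: leaves are the concrete tasks.
        return not self.subtasks

    def run(self, state):
        """Execute the subtree if the precondition holds on the state."""
        if not self.precondition(state):
            return state
        if self.is_leaf():
            return self.postcondition(state)
        for sub in self.subtasks:
            state = sub.run(state)
        return state

# Hypothetical example: a door must be open before one can walk through it.
open_door = Task("open door",
                 precondition=lambda s: not s.get("door_open", False),
                 postcondition=lambda s: {**s, "door_open": True})
walk = Task("walk through",
            precondition=lambda s: s.get("door_open", False),
            postcondition=lambda s: {**s, "inside": True})
root = Task("enter room").add(open_door).add(walk)
print(root.run({}))  # {'door_open': True, 'inside': True}
```

In the actual tool the conditions are parsed from a grammar (DTD plus language file) and the dynamics are handled by Petri Nets rather than by this simple sequential traversal.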
This study, which began in 1999, addresses the issue of how to design online help that will really prove effective and, most of all, that will actually be used. Our approach is based on the assumption that online help systems implementing human experts' strategies will prove most effective.
We first elicited the contextual strategies used by human experts for helping novices in the general public master standard application software, from the analysis of a corpus of expert-novice help dialogues. To validate the conclusions of this analysis, we performed an experimental ergonomic study using an advanced implementation of the Wizard of Oz technique (i.e., one that provided software assistance to the wizard) in order to simulate two help systems, a contextual one and a non-contextual one. Results indicate that, for complex interactive tasks, novices' performances were better with contextual online help, while for easier tasks performances varied according to the subjects' cognitive profiles.
This year, a comprehensive description of our research on contextual online help has been accepted as a chapter in a collective scientific book soon to be published by Octarès.
Besides, we are currently analysing data collected in the course of a supplementary experiment performed in 2003. This study aims at eliciting the possible influence of the modalities used for presenting help information on the effective use of help messages, by comparing the respective efficiency of speech+graphics versus standard text+graphics messages. Recordings of interactions include screen copies and subjects' gaze fixations (head-mounted eyetracker ASL-501). Software tools (under Windows) have been developed by Jérôme Simonin for recording and ``replaying'' interactions, as well as for analysing and annotating them semi-automatically. We are now interpreting the results of both quantitative and qualitative analyses.
At the same time, we are revisiting the basic design issues addressed in our work on contextual online help within the broader framework of adaptive user interfaces, as Jérôme Simonin's PhD research is focused on the following utility and usability issues:
To what extent should users be made aware, or notified explicitly, of the evolution of the system behaviour?
Should notification strategies (i.e., content and form of system messages) vary according to the user characteristics (e.g., knowledge, preferences and interests, etc.) that determine the system evolution?
What amount of control over the interface evolution should users be given?
A comprehensive survey of published research on adaptive user interfaces has been presented at a national scientific workshop. Jérôme Simonin is currently developing specific software for implementing an experimental research program that includes utility and usability studies of adaptive online help. Software developments comprise prototypes that will enable us to collect usage data over several weeks, and assistance tools for human simulation of advanced adaptive help strategies using the Wizard of Oz technique. They also include recording of interactions with any Windows application, their ``replay'', and their semi-automatic annotation and analysis. Experiments will start in the first semester of 2005.
Considering the recent developments of new 3D graphical techniques, Virtual Environments, etc., and the hopes and efforts that those developments stimulate, many questions arise, in particular from the ergonomic point of view, especially concerning the HCI techniques that would permit interaction with such environments. Our rationale is that classical interaction techniques and devices (mouse and keyboard) are not necessarily the most suitable for manipulating 3D objects within such environments. Our research deals with user interaction with 3D Virtual Environments (VEs) for design activities. In this context, one must consider the type of activity to be supported, the range of available devices, techniques, and metaphors, and the problems and constraints still inherent to these technologies, from both the human factors and the application architecture points of view. Following an initial survey of design activities in the field of cloth design, and a literature survey published last year, the research conducted mainly concerned the identification of user tasks within an immersive environment, the definition of a table associating user tasks with input modalities, and the design and evaluation of a 3D environment supporting multimodal interactions (vocal commands synergistically combined with gaze or head direction and with deictic or mimetic hand gestures), using a communication metaphor. The environment is reactive at the lowest interaction levels (preselection feedback of directions and 3D object indications). A reduced set of typical geometric tasks to perform on 3D objects was selected in order to conduct a user performance test. In order to study which speech terms users employ to express a command for a given geometric operation, the interaction uses a generic command-language-like syntax so as to avoid influencing the users' speech productions. The main objectives of the experiment consist in:
evaluating the consistency of such a multimodal interaction style,
testing the validity of the low-level task-modality assignments,
gathering data about upper interaction level design, and exploring the variability in use of speech.
Since we did not know the users' expressive corpus for the selected tasks, the test was designed as a ``Wizard of Oz'' experiment. Before the experiment, the use of the low-level task-modality assignments was explained to each user through a roleplay, with no initial training. Subjects were then installed facing a projected 3D scene, with a second display successively showing the geometric tasks to perform on a particular 3D object of the scene. The users' speech and gesture productions were recorded in both digital and analog form, so that they could be replayed afterwards for closer examination. The results indicate that this new interaction style and the low-level task-modality assignments are efficient. They also indicate that the verbal group used in the command (in French) does not, by itself, determine the operation to be performed from a system point of view; such a determination can be obtained through a combination of several terms of the produced sentences. This work is described in a computer science thesis defended in November 2004.
Spoken natural language may appeal to users in the general public, since it is the main modality used, together with pointing gestures or gaze, in face-to-face human communication. Our work on multimodal human-computer interaction is based on the two following observations. On the one hand, speech+gesture input multimodality has been extensively studied, both from a software and an ergonomic point of view, whereas speech+graphics as an output form of multimodality has given rise to fewer research studies, especially as regards the utility and usability of speech as a supplementary modality to graphics. On the other hand, pointing hand gestures have the same expressive power as gaze for selecting objects in very large displays (e.g., electronic blackboards, reality centres or caves) or in 3D environments: if used spontaneously, as in real life, both modalities can only specify directions in this context. Our current work on multimodality addresses the three following issues:
How to design oral messages that help visual search in cluttered displays?
How to design multimodal command languages that use information on gaze movements to disambiguate oral commands, especially those including deictic phrases?
Are voice+graphics help messages more effective than standard text+graphics ones? Does this form of output multimodality actually improve the effectiveness and efficiency of online help?
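The second issue above, disambiguating deictic phrases through gaze, can be illustrated by a simple temporal-alignment heuristic: resolve ``that one'' to the object fixated closest in time to the deictic word. This is a hypothetical sketch, not the project's actual algorithm; the fixation data and tolerance window are illustrative.

```python
# Hypothetical resolution of a deictic phrase using gaze fixations.
# fixations: (start_time, end_time, object_name) tuples, times in seconds.

def resolve_deictic(utterance_time, fixations, window=0.5):
    """Return the object whose fixation is closest in time to the deictic
    word, within a tolerance window; None if no fixation is close enough."""
    def distance(fix):
        start, end, _ = fix
        if start <= utterance_time <= end:
            return 0.0  # the deictic word falls inside this fixation
        return min(abs(utterance_time - start), abs(utterance_time - end))
    candidates = [f for f in fixations if distance(f) <= window]
    if not candidates:
        return None
    return min(candidates, key=distance)[2]

fixations = [(0.0, 1.2, "red cube"), (1.5, 2.4, "blue sphere")]
print(resolve_deictic(2.0, fixations))  # "blue sphere" (word during fixation)
print(resolve_deictic(1.3, fixations))  # "red cube" (nearest fixation)
print(resolve_deictic(5.0, fixations))  # None (no fixation within 0.5 s)
```

Real systems must also handle the timing offset between a deictic word and the corresponding fixation, which is precisely the kind of gaze-strategy data the recordings described below are meant to provide.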
Concerning the effectiveness of oral support to visual search, the detailed presentation of our first study has been accepted as a chapter in a collective scientific book published by Kluwer. This study focused on determining whether oral information on the location of a visual target in a complex, cluttered display could improve the efficiency of its identification (accuracy and selection times). Targets were either familiar (visual presentation of the isolated target prior to scene display) or unfamiliar (oral characterisation of the target only, prior to scene display), and either monomodal (visual or oral) or multimodal (visual and oral).
This initial study was followed up this year with two more ambitious experimental studies. The first experiment focused on the influence of scene spatial layout on the effectiveness of oral messages for visual target detection. It involved 24 participants. 3600 photographs of real landscapes, people, and objects were selected from a database of over 6000 items, then formatted and divided into 120 thematically homogeneous collections (30 photographs per collection). These collections were displayed using four spatial layouts (40 collections/scenes per layout): elliptical, radial, matrix-like, and random.
To refine the results on participant performance (especially target detection accuracy and selection time), we performed a complementary experiment in which subjects' eye movements were captured, recorded, and analysed using the same device and software tools as for the multimodal online help study. The results of both studies are detailed in Suzanne Kieffer's PhD thesis, to be completed by the end of 2004. These studies represent our first contribution to the Micromegas Project.
Concerning speech+gaze multimodal interaction, we collected realistic data on spontaneous and controlled eye movements. 10 participants each interacted for half an hour with ad hoc 3D applications, using first speech-only, then multimodal (speech+gaze) commands. Applications were created using the ORIS virtual reality development software, and the user interface was simulated using an advanced implementation of the Wizard of Oz technique (i.e., the human wizard benefitted from appropriate software assistance). We are currently analysing the recorded multimodal interactions with a view to gaining insight into users' gaze strategies during oral interaction with graphical applications, using a specific software tool that we developed in 2003 (under Linux). This software is meant to facilitate such analyses by partly automating them. It records and ``replays'' interactions with any ORIS application in two separate windows. One window displays the user's points of gaze superimposed on the successive displays from the application. The other window displays graphical representations of the temporal evolution of both pupil diameters and the speech signal; it also displays the names of the graphical application objects looked at by the user, together with speech recognition results in both orthographic and phonetic forms.
Entertainment and commercial Websites, information kiosks, and public terminals tend to display an increasing number of pictures simultaneously: video and movie snapshots, CD sleeves, book covers, etc. Personal electronic archives and file directories are increasingly cluttered with unordered collections of photographs, scanned drawings, and videos. The only options offered to users by current software for searching large sets of picture files, such as ACDSee, PhotoSuite or ThumbsPlus, are scrollable 2D arrays of icons (or thumbnails) and ordered lists of directory and file names.
We have designed and implemented two 3D metaphors for visualizing and searching through large collections of photographs (landscapes, people and objects). One metaphor is based on 3D object manipulation, the other on user immersion in the 3D representation. These metaphors have been compared with regard to their respective efficiency (i.e., task execution times, success and failure rates, spatial orientation effectiveness) and usability (especially user subjective satisfaction). Eight participants carried out two types of realistic search tasks: looking for a visually familiar picture, and browsing through a collection in search of a picture matching a predefined list of criteria. Each collection included about 1,000 photographs.
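The efficiency measures used in such comparisons reduce to simple aggregates over per-trial logs. As a hypothetical sketch (field names are assumed for illustration, not taken from our actual experimental software):

```python
def efficiency(trials: list[dict]) -> dict:
    """Aggregate per-trial logs -- each a dict with a completion time in
    seconds ('time_s') and a boolean 'success' -- into the two measures
    compared across metaphors: success rate and mean time on success."""
    successes = [t for t in trials if t["success"]]
    rate = len(successes) / len(trials)
    mean_time = (sum(t["time_s"] for t in successes) / len(successes)
                 if successes else float("nan"))
    return {"success_rate": rate, "mean_time_s": mean_time}
```

Computing mean time over successful trials only avoids conflating slow completions with outright failures, which are reported separately through the success rate.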
This study on the design, implementation and ergonomic assessment of novel 3D visualization and interaction metaphors for facilitating search activities in large unstructured sets of visual information is further detailed in Olivier Christmann's DEA Report. It represents another contribution to the Micromégas Project. We are currently considering other visualization metaphors for browsing through large sets of familiar structured multimedia information.
Participation in the PREDIT program (Ministry of Transportation) together with SNCF (French Railways) and RATP (Paris Subway System): study of mental workload based on task characteristics (N. Grondin, V. Lucquiaud, D. L. Scapin).
Participation in the ``Pôle Intelligence logicielle'' of the ``Contrat de plan Etat-Région Lorraine'': projects ``Assistance à l'apprentissage des langues'' and ``Interactions multimodales'' under the theme ``Téléopérations et assistants intelligents'' (N. Carbonell).
Participation in the RTP-CNRS 16 ``Méthodes et Outils pour l'Interaction Homme-Machine''; AS ``Méthodes et outils pour les systèmes mixtes'' (C. Bach, D. L. Scapin).
Participation in the RTP-CNRS 32 ``Acceptabilité, ergonomie et usage des TIC'' (N. Carbonell, member of the Steering Committee).
Participation in the Micromégas project, ACI ``Masses de Données'', since July 2003 (N. Carbonell, D. L. Scapin).
ERCIM Working Group `UI4ALL' (N. Carbonell member of the Steering Committee).
Member of the WWCS (Work With Computer Systems conference) Group (D. L. Scapin).
AFNOR X3SE (Ergonomie des Logiciels Interactifs) (Chair: D. L. Scapin).
ISO/TC 159/SC4/WG5 (Software ergonomics and human-computer dialogues) (D. L. Scapin expert).
ISO/TC 159/SC4/WG6 (Human-centred design processes for interactive systems) (D. L. Scapin expert).
CEN/TC 122/WG 5 (Software ergonomics and human-computer dialogues) (D. L. Scapin expert).
AS-RTP16-2003/2004 ``Méthodes et outils pour les systèmes mixtes'' (co-organizer: D. L. Scapin).
Workshop ``Mixed Systems'', IHM 2004, Namur (co-organizer: D. L. Scapin).
Working group ``CESAME'' (GT 4.6 - GDR I3 - CNRS) ``Conception et Evaluation de Systèmes interactifs Adaptables, Mixtes, en Evolution'' (co-organizer: D. L. Scapin).
Behaviour and Information Technology (Editorial Board member: D. L. Scapin; reviews: N. Carbonell).
Interacting with Computers (Editorial Board member: D. L. Scapin).
International Journal of HCI (Editorial Board member: D. L. Scapin).
International Journal of Human-Computer Studies (Editorial Board member: D. L. Scapin).
International Journal of Universal Access in the Information Society (Editorial Board members: N. Carbonell, D. L. Scapin).
Le Travail Humain (Consultants Committee members: N. Carbonell, D. L. Scapin).
Revue d'Interaction Homme-Machine (Editorial Board member: D. L. Scapin; guest co-editor, special issue ``IHM'03 extended best papers'', N. Carbonell).
Revue Information, interaction, intelligence (Editorial Board member: N. Carbonell).
Human Computer Interaction. (Reviews: N. Carbonell).
Artificial Intelligence. (Reviews: N. Carbonell).
Journal of Universal Computer Science. (Reviews: N. Carbonell).
Sixth Annual International Workshop on Internationalisation of Products and Systems (IWIPS 2004), Vancouver, Canada, 8-10 July, 2004 (Programme Committee member: D. L. Scapin).
7th International Conference on Work With Computing Systems (WWCS 2004), Kuala Lumpur, Malaysia, 29 June - 2 July, 2004 (Programme Committee member: D. L. Scapin).
SELF Congress, Geneva, Switzerland, 15-17 September, 2004 (Programme Committee member: D. L. Scapin).
3rd International Workshop on TAsk MOdels and DIAgrams for user interface design (TAMODIA 2004), Prague, Czech Republic, November 15-16, 2004 (Programme Committee member: D. L. Scapin).
16ème Conférence Francophone sur l'Interaction Homme-Machine (IHM'04), Namur, Belgium, 30/08-03/09/2004: Programme Committee member and meta-reviewer (D. L. Scapin); co-organizer of the Doctoral Consortium and of the ``Mixed Systems'' Workshop (D. L. Scapin).
OZCHI 2004, annual conference of the Australasian Computer-Human Interaction Special Interest Group, University of Wollongong, Wollongong, Australia, 22-24 November, 2004 (Programme Committee member: D. L. Scapin).
International Workshop ``Exploring the design and engineering of Mixed Reality Systems'', joint with ACM IUI 2004 & CADUI 2004, Funchal, January 13-16, 2004 (Programme Committee member: D. L. Scapin).
Sixth International ACM Conference on Assistive Technologies (ASSETS 2004), Atlanta, Georgia, 18-20 October, 2004. (Programme Committee member, N. Carbonell)
Sixth International Conference on Multimodal Interfaces (ICMI'04), State College, PA, 13-15 October, 2004. (Programme Committee member, N. Carbonell)
International ACM Conference on Human Factors in Computing Systems (CHI'04), Vienna, Austria, 24-29 April, 2004. (Reviewers: N. Carbonell, D. L. Scapin)
42nd Annual Meeting of the Association for Computational Linguistics (ACL'04), Barcelona, Spain, 21-26 July, 2004, ``Multimodal/Multimedia Processing'' area. (Reviews, N. Carbonell)
10ème Colloque francophone ``Ergonomie et Informatique Avancée'' (ERGO-IA'04), Biarritz, France, 17-19 November, 2004 (Programme Committee members: N. Carbonell, D. L. Scapin).
2nd Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT'04), Cambridge, UK, 22-24 March, 2004. (Programme Committee member, N. Carbonell)
Workshop on Modern Technologies for Web-based Adaptive Systems (MTWAS 2004), International Conference on Computational Science (ICCS'04), Krakow, Poland, 7-9 June, 2004. (Programme Committee member, N. Carbonell)
8th ERCIM Workshop ``User Interfaces for All'' (UI4ALL'04), Vienna, Austria, 28-29 June, 2004. (Programme Committee members: N. Carbonell, D. L. Scapin)
International Workshop on Web3D Technologies in Learning, Education and Training (LET-WEB3D), Udine (Italy), 27-28 September, 2004. (Programme Committee member, N. Carbonell)
ACM (Association for Computing Machinery), Special Interest Group on Computer-Human Interaction (SIGCHI). Members: N. Carbonell, D. L. Scapin.
AFIA (Association Française d'Intelligence Artificielle). Member: N. Carbonell.
AFIHM (Association Francophone d'Interaction Homme-Machine). Members: D. L. Scapin (member of the Scientific Events Steering Commission; member of the Board of Directors), C. Bach, N. Carbonell, V. Lucquiaud.
ARCo (Association pour la Recherche Cognitive). Member: N. Carbonell.
HFES (Human Factors and Ergonomics Society). Member: D. L. Scapin; HFES-CSTG (Computer Systems Technical Group). Member: D. L. Scapin.
IEEE (Institute of Electrical and Electronics Engineers). Member: N. Carbonell.
ISCA (International Speech Communication Association). Member: N. Carbonell.
SELF (Société d'Ergonomie de Langue Française). Member: D. L. Scapin.
Marco Winckler: ``StateWebCharts une notation formelle pour la modélisation de la navigation sur les applications Web'', doctorate in Computer Science, 02/04/04, Université de Toulouse I; D. L. Scapin, external reviewer (rapporteur).
Abdul Razak: ``Interaction Homme Machine dans le cas d'un handicap moteur'', doctorate in Computer Science, 18/05/04, Institut National des Télécommunications; D. L. Scapin, external reviewer (rapporteur).
Géry Casiez: ``Contribution à l'étude des interfaces haptiques. Le DigiHaptic: un périphérique haptique de bureau à degrés de liberté séparés'', doctorate in Instrumentation et Analyses Avancées, 10/10/04, Université des Sciences et Technologies de Lille I; D. L. Scapin, external reviewer (rapporteur).
René Chalon: ``Réalité Mixte et Travail Collaboratif: IRVO, un modèle de l'Interaction Homme-Machine'', doctorate in Computer Science, 15/12/04, Ecole Centrale de Lyon; D. L. Scapin, examiner.
Salma Jamoussi: ``Méthodes statistiques pour la compréhension automatique de la parole'', doctorate in Computer Science, 6/12/2004, Université Henri Poincaré Nancy 1; N. Carbonell, examiner.
Université Henri Poincaré, IUP GMI 3: C. Bach (10h)
Université Henri Poincaré, IUP GMI 3: V. Lucquiaud (10h)
Institut Supérieur de Technologie et de Management (ISTM), Module IHM: C. Bach (15h).
Institut Supérieur de Technologie et de Management (ISTM), Module IHM: V. Lucquiaud (15h).
DEA d'Informatique, Ecole Doctorale IAEM-Lorraine: N. Carbonell, head of the 'Perception, raisonnement, traitement automatique des langues' track, permanent member of the internship defense jury (and HCI course, 10h).
Maîtrise d'Informatique Fondamentale, Université Henri Poincaré: N. Carbonell (Artificial Intelligence, 12h).
Participation in the design of the curricula for the Licence Mathématiques-Informatique and the Master Informatique (LMD offering of the Nancy universities): for the Licence, responsibility for the 'Arbres et graphes - Algorithmique et programmation' course unit (S6); for the Master, responsibility for the 'Perception, raisonnement, interactions multimodales' research speciality and for 4 course units (N. Carbonell).
International Workshop ``Exploring the design and engineering of Mixed Reality Systems'', joint with ACM IUI 2004 & CADUI 2004, Funchal, January 13-16, 2004 (participation: D. L. Scapin, C. Bach).
16ème Conférence Francophone sur l'Interaction Homme-Machine (IHM'04), Namur, Belgium 30/08-03/09/2004 (participation D. L. Scapin, C. Bach)
ErgoIA (Ergonomie et Informatique Avancée), Biarritz, France, 17-19 November, 2004 (participation: D. L. Scapin).
International ACM Conference on Human Factors in Computing Systems (CHI'04), Vienna, Austria, 24-29 April, 2004. (N. Carbonell, S. Kieffer, J. Simonin, D. L. Scapin)
Doctoriales de l'Ecole Polytechnique, de l'Université Pierre et Marie Curie, et de la Délégation Générale pour l'Armement, Fréjus, 23-29 May 2004 (J. Simonin).
Study day ``Interfaces adaptatives'', Laboratoire Paragraphe, Paris VIII, 17 June 2004; invited talk ``Interfaces adaptatives: modèles de l'utilisateur'', N. Carbonell & J. Simonin.
8th ERCIM Workshop ``User Interfaces for All'' (UI4ALL'04), Vienna, Austria, 28-29 June, 2004. (N. Carbonell)
Sixth International Conference on Multimodal Interfaces (ICMI'04), State College, PA, 13-15 October, 2004. (N. Carbonell)
Sixth International ACM Conference on Assistive Technologies (ASSETS 2004), Atlanta, Georgia, 18-20 October, 2004. (N. Carbonell)