The Web is no longer perceived as a documentary system. Among its many evolutions, it has become a virtual place where people and software interact in mixed communities. These large-scale interactions create many problems, in particular that of reconciling the formal semantics of computer science (e.g. logics, ontologies, typing systems) on which the Web architecture is built with the soft semantics of people (e.g. posts, tags, statuses) on which the Web content is built.
Let us take a concrete and very common example of such semantic friction on the Web. Many Web sites include forums, blogs, status feeds, wikis, etc.; in other words, they embed content management systems and rapidly build huge collections of information resources. As these collections grow, several tasks become harder to automate: search, notification, restructuring, navigation assistance, recommendation, trend analysis, etc. One of the main problems is the gap between the fairly informal way content is generated (e.g. plain text, short messages, free keywords) and the need for structured data and formal semantics to automate these functionalities (e.g. efficient indexes, domain thesauri). Mixed structures are starting to appear (e.g. structured folksonomies, hash tags, machine tags) but automating support in such collaboration spaces requires efficient and complete methods to fully bridge that gap.
As the Web becomes a ubiquitous infrastructure pervading all the objects of our world, this is just one example of the many frictions it will create between formal semantics and social semantics. This is why the Wimmics team proposes to study models and methods to bridge formal semantics and social semantics on the Web.
Our main challenge is to bridge formal semantics and social semantics.
From a formal modeling point of view, one consequence of the Web's evolution is that the initial graph of linked pages has been joined by a growing number of other graphs. This initial graph is now mixed with sociograms capturing the social network structure, workflows specifying the decision paths to be followed, browsing logs capturing the trails of our navigation, service compositions specifying distributed processing, open data linking distant datasets, etc.
Moreover, these graphs are not available in a single central repository but are distributed over many different sources; some sub-graphs are public (e.g. DBpedia) while others are private (e.g. enterprise data).
Each type of network on the Web is not an isolated island. Networks interact with each other: the networks of communities influence the message flows, their subjects and types; the semantic links between terms interact with the links between sites, and vice versa; etc.
Not only do we need means to represent and analyze each kind of graph, we also need the means to combine them and to perform multi-criteria analyses on their combinations. Wimmics proposes to address this problem by focusing on the characterization of (a) typed-graph formalisms to model and capture these different pieces of knowledge and (b) hybrid operators to process them jointly. We will especially consider the problems that occur in such structures when we blend formal, stable semantic models with socially emergent and evolving semantics. We believe Wimmics can contribute to this understanding by combining two research domains:
by proposing a multidisciplinary approach to analyze and model the many aspects of these intertwined information systems, their communities of users and their interactions;
by formalizing and reasoning on these models to propose new analysis tools and indicators, and support new functionalities and better management.
We rely on cognitive studies to build models of the system, the user and the interactions between users through the system, in order to support and improve these interactions.
In the short term, following the user modeling technique known as Personas, we are interested in user models that are represented as specific, individual humans. Personas are derived from significant behavior patterns (i.e., sets of behavioral variables) elicited through interviews with and observations of users (and sometimes customers) of the future product. Our user models will specialize Personas approaches to include aspects appropriate to Web applications. The formalization of these models will rely on ontology-based modeling of users and communities, starting with generalist schemas (e.g. FOAF: Friend of a Friend). In the longer term we will consider additional extensions of these schemas to capture additional aspects (e.g. emotional states), and we will extend current descriptions of relational and emotional aspects in existing variants of the Personas technique.
Beyond individual user models, we propose to rely on social studies to build models of the communities, their vocabularies, activities and protocols, in order to identify where and when formal semantics is useful. In the short term we will further develop our method for elaborating collective personas and compare it to the related collaboration-personas method and to group modeling methods, which extend to groups the classical user modeling techniques dedicated to individuals. We also propose to rely on and adapt participatory sketching and prototyping to support the design of interfaces for visualizing and manipulating representations of collectives. In the longer term we want to focus on studying and modeling mixed representations containing social semantic representations (e.g. folksonomies) and formal semantic representations (e.g. ontologies), and to propose operations that allow us to couple them and exchange knowledge between them.
Since we have a background in requirement models, we also want to consider their formalization in the short term, in order to support mutual understanding and interoperability between requirements expressed with these heterogeneous models. In the longer term, we believe that argumentation theory can be combined with requirements engineering to improve participant awareness and support decision-making. On the methodological side, we propose to adapt to the design of such systems the incremental formalization approach originally introduced in the CSCW (Computer-Supported Cooperative Work) and HCI (Human-Computer Interaction) communities.
Finally, in the short term, for all the models identified here we will rely on and evaluate knowledge representation methodologies and theories, in particular ontology-based modeling. In the longer term, additional models of contexts, devices, processes and mediums will also be formalized and used to support adaptation, proof and explanation, and to foster acceptance and trust from the users. We specifically target a unified formalization of these contextual aspects so that they can be integrated at any stage of the processing.
Our second line of work is to formalize as typed graphs the models identified in the previous section in order to exploit them, e.g. in software. The challenge is then twofold:
To propose models and formalisms to capture and merge representations of both kinds of semantics (e.g. formal ontologies and social folksonomies). The important point is to capture these structures precisely and flexibly while creating as many links as possible between the different objects.
To propose algorithms (in particular graph-based reasoning) and approaches (e.g. human-computing methods) to process these mixed representations. In particular we are interested in allowing cross-enrichment between them and in exploiting the life cycle and specificities of each one to foster the life cycles of the others.
While some of these problems are known, for instance in the field of knowledge representation and acquisition (e.g. disambiguation, fuzzy representations, argumentation theory), the Web reopens them with exacerbated difficulties of scale, speed, heterogeneity, and an open-world assumption.
Many approaches emphasize the logical aspect of the problem, especially because logics are close to computer languages. We argue that the graph nature of Linked Data on the Web and the large variety of types of links that compose it call for typed graph models. We believe the relational dimension is of paramount importance in these representations, and we propose to consider all of them as fragments of a typed graph formalism built directly above the Semantic Web formalisms. Our choice of a graph-based programming approach for the semantic and social Web, and of a focus on one graph-based formalism, is also an efficient way to support interoperability, genericity, uniformity and reuse.
A number of evolutions have changed the face of information systems in the past decade, but the advent of the Web is unquestionably a major one and it is here to stay. From an initial widespread perception as a public documentary system, the Web as an object turned into a social virtual space and, as a technology, grew into an application design paradigm (services, data formats, query languages, scripting, interfaces, reasoning, etc.). The universal deployment and support of its standards led the Web to take over nearly all of our information systems. As the Web continues to evolve, our information systems are evolving with it.
Today in organizations, not only is almost every internal information system a Web application, but these applications also interact more and more often with external Web applications. The complexity and coupling of these Web-based information systems call for specification methods and engineering tools. From capturing the needs of users to deploying a usable solution, there are many steps involving computer science specialists and non-specialists.
We defend the idea of relying on Semantic Web formalisms to capture and reason on the models of these information systems supporting the design, evolution, interoperability and reuse of the models and their data as well as the workflows and the processing.
With billions of triples online (see the Linked Open Data initiative), the Semantic Web is providing and linking open data at a growing pace, and publishing and interlinking the semantics of their schemas. Information systems can now tap into and contribute to this Web of data, pulling and integrating data on demand. Many organizations have also started to use this approach on their intranets, leading to what is called linked enterprise data.
A first application domain for us is the publication and linking of data and their schemas through Web architectures. Our results provide software platforms to publish and query data and their schemas, to enrich these data in particular by reasoning on their schemas, to control their access and licenses, to assist the workflows that exploit them, to support the use of distributed datasets, to assist the browsing and visualization of data, etc.
Examples of collaboration and applied projects include: Viseo Joint Laboratory, Corese/KGRAM, Datalift, DBpedia, ALU/BLF Convention, ADT SeGViz.
In parallel to linked open data on the Web, social Web applications have also spread virally (e.g. Facebook growing toward 800 million users), first giving the Web back its status of a social read-write medium and then leading it to its full potential of a virtual place in which to act, react and interact. In addition, many organizations are now considering deploying social Web applications internally to foster community building, expert cartography, business intelligence, technological watch and knowledge sharing in general.
Reasoning on the Linked Data and the semantics of the schemas used to represent social structures and Web resources, we intend to provide applications supporting communities of practice and interest and fostering their interactions.
We use typed graphs to capture and mix: social networks with the kinds of relationships and the descriptions of the persons; compositions of Web services with types of inputs and outputs; links between documents with their genre and topics; hierarchies of classes, thesauri, ontologies and folksonomies; recorded traces and suggested navigation courses; submitted queries and detected frequent patterns; timelines and workflows; etc.
Our results assist epistemic communities in their daily activities: biologists exchanging results, business intelligence and technological watch networks informing companies, engineers interacting on a project, conference attendees, students following the same course, tourists visiting a region, mobile experts in the field, etc. Examples of collaboration and applied projects include: Kolflow, OCKTOPUS, ISICIL, SAP Convention.
Corese (COnceptual REsource Search Engine) is a Semantic Web Factory. It enables users to load and process RDFS schemas and RDF data, and to query and update the graph base thus created using the SPARQL 1.1 Query and Update languages (figure ).
Furthermore, Corese's query language integrates original features such as approximate search, extended property paths, and SQL or XPath calls. It provides a SPARQL Template Transformation Language for RDF graphs and a SPARQL-based Inference Rule Language for RDF. Corese also provides distributed federated query processing, thanks to a collaboration with Alban Gaignard and Johan Montagnat from I3S.
Corese is a Semantic Web Factory that enables us to design and develop Semantic Web applications; it is available for download. In the past, Corese received two software development grants (ADT) from Inria, and in 2014 we obtained a new grant for two more years. Corese is registered at the APP, and in 2007 we decided to distribute it as open source software under the CeCILL-C license.
Corese is or has been used in more than 60 applications and 24 PhD theses, and is used for education by several institutions. It has been used in European projects such as Ontorule, Palette, SevenPro and SeaLife, and in ANR projects such as Kolflow, Ginseng, Neurolog, VIP, ISICIL and e-WOK Hub. Corese is the Semantic Web engine of Discovery Hub
The work on Corese was published in , , , .
Web page: http://
The QAKiS system (figure ) implements question answering over DBpedia. QAKiS allows end users to submit a query to an RDF triple store in English and obtain the answer in the same language, hiding the complexity of the non-intuitive formal query languages involved in the resolution process. At the same time, the expressiveness of these standards is exploited to scale to the huge amounts of available semantic data. Its major novelty is to implement a relation-based match for question interpretation, converting the user question into a query language (e.g. SPARQL). The English, French and German DBpedia chapters are the RDF data sets queried through this natural language interface.
Web page: http://
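The idea of a relation-based match can be sketched in a few lines of Python: a relational phrase pattern selects a DBpedia property and the recognized entity is slotted into a SPARQL template. The patterns and property choices below are illustrative assumptions, not QAKiS's actual resources:

```python
import re

# Toy pattern table: a relational phrase and the DBpedia property it
# expresses. Invented for illustration.
PATTERNS = [
    (re.compile(r"who is the mayor of (.+)\?", re.I), "dbo:mayor"),
    (re.compile(r"where was (.+) born\?", re.I), "dbo:birthPlace"),
]

def question_to_sparql(question):
    """Relation-based match: find the matching pattern, then slot the
    recognized entity into a SPARQL template."""
    for pattern, prop in PATTERNS:
        m = pattern.match(question)
        if m:
            entity = m.group(1).strip().replace(" ", "_")
            return f"SELECT ?answer WHERE {{ dbr:{entity} {prop} ?answer . }}"
    return None  # no relational pattern matched

query = question_to_sparql("Where was Albert Einstein born?")
print(query)
```

The real system matches against patterns harvested from Wikipedia rather than a hand-written table, but the control flow is the same: interpret the relation, then generate the formal query.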
DBpedia is an international crowd-sourced community effort to extract structured information from Wikipedia and make it available on the Semantic Web as linked open data. The DBpedia triple stores then allow anyone to run sophisticated queries against the data extracted from Wikipedia, and to link other data sets to these data. The French chapter of DBpedia was created and deployed by Wimmics and is now an online running platform providing data to several projects such as QAKiS, Izipedia, zone47, Sépage, HdA Lab. and JocondeLab.
The platform can be found at: http://
It is part of the Semanticpedia convention: http://
We have proposed a methodology to identify and classify the semantic relations holding among the different possible answers obtained for a given query over language-specific DBpedia chapters. The goal is to reconcile the information provided by these chapters to obtain a consistent result set. The results of this research have been published at the LREC conference .
This classification has then been exploited in another work, together with Elena Cabrio and Alessio Palmero Aprosio (FBK Trento, Italy), in which Serena Villata worked on an extension of QAKiS, the system for open-domain question answering over linked data, that makes it possible to query multilingual DBpedia chapters. Such chapters can contain different information with respect to the English version, e.g. they provide more specificity on certain topics or fill information gaps. In particular, she extended the results presented last year by embedding the newly identified relations among the different answers, using argumentation theory to reconcile the information and further improving the system's performance. A demo of the new argumentation module is available online
Moreover, together with Alessio Palmero Aprosio, we have proposed a system called NLL2RDF to automatically translate licenses expressed in natural language, such as the GPL, into a machine-readable version using RDF. The system is available online
Finally, we have published NoDE, a benchmark of natural language arguments. The benchmark is available online
Together with Leendert van der Torre (University of Luxembourg), we proposed a framework for reasoning about norms using argumentation theory. Norms regulate our everyday life and are used to assess the conformance of our behavior with respect to the regulations holding in specific contexts. Given the profound importance of norms in our lives, it is fundamental to understand which norms are valid in certain environments, how to interpret them, which legal conclusions they entail, which norms can be derived from existing ones, etc. In order to understand norms, people discuss them to assess the validity or applicability of a certain norm under particular conditions, to derive the obligations and permissions to be enforced, or to claim that a certain normative conclusion cannot be derived from the existing regulations. Several frameworks have been proposed for legal argumentation, but no comprehensive formal model of legal reasoning from arguments has been proposed yet. The goal of this work is to enrich legal argumentation with a formal account of deontic modalities. These results have been published at the 5th Conference on Computational Argumentation (COMMA 2014).
Moreover, together with Guido Boella (University of Torino, Italy), Pietro Baroni and Massimiliano Giacomin (University of Brescia, Italy), Federico Cerutti (University of Aberdeen, UK) and Leendert van der Torre (University of Luxembourg), we have also studied the dynamics of argumentation frameworks, and this research has led to a publication in the Artificial Intelligence journal .
In the domain of Linked Open Data, a need is emerging for automated frameworks able to generate the licensing terms associated with data coming from heterogeneous distributed sources. Together with Guido Governatori (NICTA, Australia) and Antonino Rotolo (University of Bologna, Italy), Serena Villata proposed and evaluated a deontic logic semantics which makes it possible to define the deontic components of licenses, i.e., permissions, obligations and prohibitions, and to generate a composite license compliant with the licensing items of the different licenses being composed. The approach is evaluated using the SPINdle defeasible reasoner, in which the proposed heuristics have been hard-coded. This research line has continued with the analysis of the compatibility of a set of licensing terms (again using SPINdle), the analysis of the role of licenses associated with vocabularies, and the development of the Licentia suite of services to reason over licenses and help users deal with this kind of information. The results of this research line have been published at the International Semantic Web Conference demo session .
Together with Célia da Costa Pereira of I3S, we have investigated syntactic belief revision operators and goal-generation mechanisms to enable a practical implementation of a general BDI (Belief-Desire-Intention) model of agency based on possibility theory. Furthermore, we took part in a joint investigation, with a research team of the CNR-ISTC in Rome led by Cristiano Castelfranchi, of the issue of trust in multi-agent systems . We also employed agent-based simulation to test a theory of human stupidity proposed by the late Italian economist Carlo Cipolla ; our paper won the Best Paper Award at IAT 2014.
We carried on our investigation of an approach to RDF mining based on grammatical evolution and possibility theory, whose aim is to mine large RDF graphs by automatically generating and testing OWL 2 axioms based on the known facts. In particular, we addressed the problem of testing candidate OWL 2 axioms against the facts contained in an RDF base and proposed a novel scoring heuristic based on falsification and possibility theory .
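The intuition of falsification-driven scoring can be illustrated with a deliberately simplified Python sketch. The formulas below are toy assumptions, not the published heuristic: an axiom never contradicted by the facts remains fully possible, counterexamples lower its possibility, and a single counterexample destroys its necessity.

```python
def possibility(confirmations, counterexamples):
    """Degree to which the candidate axiom is still possible given the
    evidence; 1.0 when nothing contradicts it. Toy linear formula."""
    total = confirmations + counterexamples
    if total == 0:
        return 1.0  # no evidence at all: the axiom remains fully possible
    return 1.0 - counterexamples / total

def necessity(confirmations, counterexamples):
    """Degree to which the axiom is entailed by the evidence; any
    counterexample (a falsification) drops it to 0."""
    if counterexamples > 0 or confirmations == 0:
        return 0.0
    return confirmations / (confirmations + counterexamples)

# A candidate axiom tested against facts in an RDF base (toy counts).
print(possibility(95, 5), necessity(95, 5))
print(possibility(100, 0), necessity(100, 0))
```

In the actual approach the counts come from SPARQL tests of the candidate axiom's logical consequences against the RDF base, and the scoring function is more refined than this linear stand-in.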
Together with Célia da Costa Pereira of I3S and Mauro Dragoni of FBK, Trento, who visited our team for three months from April to June 2014, we have proposed a novel approach to concept-level sentiment analysis based on fuzzy logic. Our system participated in the Semantic Web Evaluation Challenge (SemWebEval) at ESWC 2014 and won Task 1 as well as the Most Innovative Approach award.
Together with Somsack Inthasone, Nicolas Pasquier and Célia da Costa Pereira of I3S, we developed a data warehouse collecting data for research on biodiversity .
A work on electrocardiographic signal classification using evolutionary algorithms and neural networks, carried out while still at the University of Milan, was published as a book chapter .
Unlike search engines, the goal of Question Answering (QA) is to return precise answers to users' natural language questions, extracting information from both documentary text and advanced media content. Up to now, QA research has largely focused on text, mainly targeting factual and list questions. The goal of our work was instead to exploit the structured data and metadata describing multimedia content in Linked Open Data to provide a richer and more complete answer to the user, combining textual information with other media content.
We implemented an extension of our QAKiS system
Automated Natural Language Processing (NLP), Web open data (Linked Open Data) and social networks are the three topics of the SMILK ANR LabCom, together with their coupling, studied in three ways: texts and linked data, linked data and social resources, texts and social resources. The purpose of this LabCom is to develop research and technologies in order to, on the one hand, retrieve, analyze and reason about data linked from textual Web resources and, on the other hand, use open Web data, taking into account the social structures and interactions, to improve the analysis and understanding of textual resources.
As a first step in this direction, during the internship of Fabrice Jauvat we developed a prototype of a system that, given free text (in particular in the cosmetics domain, extracted from a forum, a magazine or a Web site), can first recognize the named entities by launching in parallel the RENCO system (developed by our partner in the LabCom) and NERD
This work is done within a CIFRE PhD thesis co-located in the Wimmics team and at the SynchroNext company in Nice. The work consists in modeling and implementing an ontology-based natural language chatbot for the commercial domain, which involves:
The design of a commercial knowledge base using Web sites' APIs and Web services (e.g. Amazon, eBay, BestBuy),
Interpreting and handling the links between users' natural language questions by constructing a relational graph,
Generation and visualization of textual and media answers.
A. Hallili attended the ESSLLI summer school, where a poster was accepted .
Last year, a prototype GUI for an editor of formal dictionary definitions aimed at lexicographers was developed, based on the Units Graphs formalism and on Meaning-Text Theory. This year, the prototype was demonstrated during the IC 2014 conference . The prototype was also described in a paper reporting the knowledge engineering methodology it supports for representing lexicographic definitions .
Today's Web has given rise to several platforms serving the purpose of collaborative software development. Thanks to these environments, it is possible, among other things, for anyone to suggest new requirements for a software product under development. Many requirements are thus proposed by users, and it becomes difficult, after a while, for the persons in charge of a software product whose development is hosted by the platform to understand this large set of new requirements in its entirety. We therefore proposed a tool to make large sets of requirements posted on collaborative software development platforms more workable despite the poor content of requirement bodies. Our aim was to automatically group similar requirements together in order to propose a limited number of requirement categories, thus improving the review process. As requirements expressed on collaborative software development platforms are usually very short and their content not very structured, we proposed to exploit the relationships between stakeholders and already processed requirements to break the whole set of new requirements into meaningful categories. Our tool relies on Semantic Web languages and Formal Concept Analysis to provide a three-step data analysis process: the data is first extracted from the platform and translated into RDF, then the stakeholders' past activities are analyzed, and finally stakeholder categories are derived in order to improve the review of newly posted requirements.
According to the experiments that we conducted, our approach has some limitations: it suffers when the contributing stakeholders are newcomers with no previous participation in any blueprint or bug, and when there is not a sufficient number of evaluated blueprints or bugs. To cope with these limitations, we plan to evaluate stakeholders' reputation by looking at their activities on the whole collaborative software development platform (and not only the project under consideration). The results of this research have been published in .
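The grouping step can be illustrated with a naive Formal Concept Analysis sketch in Python: requirements are the objects, the stakeholders who touched them are the attributes, and the formal concepts are candidate requirement categories. The context below is invented, and real FCA tools use far better algorithms than this exponential enumeration:

```python
from itertools import combinations

# Toy formal context: requirement -> stakeholders involved with it.
context = {
    "req1": {"alice", "bob"},
    "req2": {"alice", "bob", "carol"},
    "req3": {"carol"},
    "req4": {"alice", "bob"},
}

def extent(attrs):
    """All requirements involving every stakeholder in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """All stakeholders shared by every requirement in objs."""
    if not objs:
        return set().union(*context.values())  # convention: all attributes
    return set.intersection(*(context[o] for o in objs))

# Naive enumeration: close every subset of requirements into a concept.
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        b = intent(set(combo))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), "<->", sorted(b))
```

Here the concept ({req1, req2, req4}, {alice, bob}) is one category: the requirements that exactly the alice/bob pair worked on, which a reviewer can then examine as a group.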
In today's software development methodologies, User Stories (US) are mostly used as primary requirements artifacts. They are used to express requirements from a final user's point of view, at a low level of abstraction, using natural language. Over the years, several informal templates have been proposed by agile methods practitioners or academics to guide requirements gathering. Consequently, these templates are used in an ad hoc manner, each modeler having idiosyncratic preferences. In this context, we performed a study of the templates found in the literature in order to propose a unified model. We also proposed an RDFS translation of this model to allow the annotation of user stories, thus providing search and reasoning capabilities to agile methods practitioners. The results of this research have been published in .
The aim of this PhD work is to improve Coloured Petri Nets (CPNs) and ontology engineering to support the development of business process and business workflow definitions in various fields. To reach this objective, we first propose an ontological approach for representing business models in a meta-knowledge base. We introduce four basic types of manipulation operations on process models, used to develop and modify business workflow patterns. Second, we propose a formal definition of semantic constraints and an O(n^3)-time algorithm for detecting redundant and conflicting constraints. By relying on the CPN ontology and sets of semantic constraints, workflow processes are semantically created. Finally, we show how to check the semantic correctness of workflow processes with the SPARQL query language , , .
This PhD work focuses on text processing in social networks; its main objectives concern the analysis of the spatial aspect, context enrichment, and the spatio-temporal analysis of short text messages.
During the first half of the year, we mainly worked on positioning the research subject with respect to the state of the art and on determining the relevant domains. After analyzing several works on short text analysis in domains such as the Semantic Web, data mining and natural language processing, we identified a gap in the representation of the spatial aspect. Indeed, the spatial properties of items shared among online communities can be seen from three different angles: i) the location of the resource, which can be identified by its URI/URL; ii) the producer's location; iii) the location related to the content of the messages.
Most existing works have treated the producer's location and the location of the event described by the content of the message as identical, which can lead to wrong results in many cases. For example, a user can be in the United States while describing an event in Africa. The SIOC ontology is one of the best known for representing items shared among online communities; we have proposed an extension of this ontology in which the spatial aspects are clearly represented. However, finding the relevant location to associate with the content of a message remains a big challenge. We are currently working on an approach that combines NLP techniques and GIS to identify the spatial location of an item by analyzing its content.
The main objective of this PhD work is to develop a Shared Workflow Management System (SWMS) using ontology engineering. Anybody can share a semi-complete workflow, called a workflow template, and other people can modify and complete it for use in their own system; this customized workflow is called a personalized workflow. The challenge for a SWMS is to be simple, easy to use, user-friendly and not too heavyweight, while still offering all the functions of a WMS. The major questions in this work are: how to allow users to customize a workflow template to fit their requirements while keeping their changes compliant with the rules predefined in the template, and how to build an execution model to evaluate a personalized workflow step by step?
The research aims of this work are to: i) model spatio-temporal, dedicated social networks using Semantic Web models (ontologies), taking into account the spatial, temporal, social and dedicated dimensions; ii) overcome the limitations of traditional recommender systems and improve the quality of recommendation by exploiting context (time, location, goal, etc.) and social ties.
The following tasks, proposed in the first year planning, are completed or almost finished, and are highly relevant to the current work, despite the different initial overall aim:
Literature Review Report in Social networks, Spatio-temporal networks, Dedicated networks, Activities modeling techniques, Recommender systems.
Elaboration of the requirements of an "ideal system" and presentation of an initial approach. The approach, named the 5Ws approach, tries to answer five questions: who must do what, when, where and why?
Implementation of the approach with Protégé.
Extension of the semantic sensor network ontology to meet our requirements. We use this ontology to enrich data from sensor networks, which are used to measure different metrics describing physical activities (speed, heart rate, distance, etc.).
The following developments are ongoing: adaptation of recommender systems to activity recommendation, and reuse of a multi-dimensional recommendation model.
This work is done in collaboration with Pierre Robillard (Polytechnique Montréal).
Last year we worked on an assessment method for the quality of team dynamics based on a taxonomy of episodes of interaction encountered in software development teams: the CoDyMA (Collaborative Dynamics Measurement and Analysis) method. More precisely, we proposed an analysis procedure for episodes based on the Formal Concept Analysis (FCA) approach. This year, we enriched the CoDyMA method with a procedure for assessing the quality of coordination interactions and the quality of coordination artifacts within a development team. The procedure is based on the "Coordinative Artifacts" framework , .
This work is done with Aurore Defays (Université de Liège).
Grounding is the process used by participants in a collective activity to coordinate both the content and process of their communication in order to be successful . Grounding is also defined as the process of elaborating and maintaining the common ground (i.e., the mutual knowledge, mutual beliefs, and mutual assumptions) necessary for the participants' mutual understanding . Multimodal grounding is grounding that uses several perceptual modalities. Last year we improved the methodology for the analysis of multimodal grounding proposed in .
The objective of the OCKTOPUS ANR project is to increase the potential social and economic benefit of the large and quickly growing amounts of user-generated content by transforming it into useful knowledge. Since user communities are the basis of user-generated content sites, we started with the community detection problem, a fundamental issue in social network analysis. Building on the preliminary experience of the previous year, we made several advances this year and published the results in international conferences, specifically:
Topic-based interest group detection:
By analyzing a dataset extracted from the popular question answering site "StackOverflow", we proposed a heuristic method to enrich questions' tags. We also introduced a tag-tree-based model to extract topics from questions' tags, then used the detected topics to label users in order to detect interest groups. We conducted experiments on the dataset and compared with a related method. Results show that the proposed method is simpler and faster. This work has been published in .
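To illustrate the idea, the sketch below uses a made-up two-level tag-to-topic mapping (a stand-in for the tag tree, not the published model): each user is labeled by the topic that dominates the tags of the questions they engaged with, and users sharing a label form an interest group.

```python
from collections import Counter, defaultdict

# Hypothetical flattened tag tree: each tag maps to its parent topic.
TAG_TOPIC = {"python": "languages", "java": "languages",
             "mysql": "databases", "postgresql": "databases"}

def interest_groups(user_tags):
    """Group users by the dominant topic of their tags."""
    groups = defaultdict(set)
    for user, tags in user_tags.items():
        topics = Counter(TAG_TOPIC.get(t, "other") for t in tags)
        dominant = topics.most_common(1)[0][0]
        groups[dominant].add(user)
    return dict(groups)

# "ann" is grouped under "languages", "bob" under "databases".
print(interest_groups({"ann": ["python", "java", "mysql"],
                       "bob": ["mysql", "postgresql", "java"]}))
```

The simplicity of this counting step is what makes a tag-based approach fast compared to methods that infer topics from full question texts.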
Question answer social media management:
We proposed QASM, a question answer social media management system based on social network analysis and social media mining, to manage the two main resources of question answering sites: users and contents. We also presented a vocabulary used to formalize both the level of interest and the expertise of users on topics. We tested QASM on a dataset extracted from the popular "StackOverflow" site, and showed how the formalized knowledge is used to find relevant experts for a question. This work has been published in .
We are planning to introduce temporal analysis into our research problem. Building on the previous work, potential directions are topic evolution and user interest evolution. We believe this work could benefit community management in question answering sites, for example for topic trend detection or user interest management.
The Heave-Ho project won the Inria 2014 Boost Your Code contest.
The goal of the project is to design an overlay network for P2P media streaming based on new HTML5 technologies such as WebRTC.
While conventional Internet applications encounter problems scaling up as the number of visitors grows, the Heave-Ho project aims to enable a website's users to share resources directly among themselves. The proposed solution is a good fit for real-time video broadcasting. In a traditional client/server architecture the server can only handle a limited number of requests; if there are too many clients, some of them will not have access to the video. Using a P2P system, the video can be broadcast to more clients. The use of sharing techniques based on user location can also cut data transfer costs directly at the ISP level, thereby reducing the risk of problems such as data rate limits.
In the context of the Discovery Hub project:
We also performed a user-centered evaluation of the quality of the results retrieved by the four algorithms of Discovery Hub. We decided to focus on the quality of the results and not on the UI. Thus, specific criteria for the quality of the results were defined in this evaluation: the surprisingness and the interestingness of the results. A result is considered as:
- surprising if the user discovers an unknown resource or relation between the topic searched and the selected result, or if she discovers something unexpected;
- interesting if the user thinks it is similar to the topic explored or if she thinks she will remember or reuse it later on.
We are currently developing an ergonomic method for evaluating exploratory search systems (ESSs) in general. We are performing a first test of the method on Discovery Hub .
We finalized the design and implementation of the SPARQL Template Transformation Language (STTL).
We designed a new service in Corese server that returns HTML.
Using STTL transformations that generate HTML, we are able to set up lightweight Semantic Web servers on top of local RDF datasets or remote datasets such as DBpedia.
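The principle of an HTML-producing transformation can be approximated outside STTL as follows. The sketch below (in Python, with made-up sample bindings, not STTL's actual template syntax) turns SPARQL JSON results, as returned by an endpoint such as DBpedia's, into an HTML table:

```python
import json
from html import escape

def results_to_html(sparql_json):
    """Render SPARQL 1.1 JSON query results as a simple HTML table."""
    data = json.loads(sparql_json)
    vars_ = data["head"]["vars"]
    bindings = data["results"]["bindings"]
    head = "".join(f"<th>{escape(v)}</th>" for v in vars_)
    body = "".join(
        "<tr>"
        + "".join(f"<td>{escape(b.get(v, {}).get('value', ''))}</td>" for v in vars_)
        + "</tr>"
        for b in bindings)
    return f"<table><tr>{head}</tr>{body}</table>"

# Sample result set, invented for illustration.
sample = json.dumps({
    "head": {"vars": ["person"]},
    "results": {"bindings": [
        {"person": {"type": "uri",
                    "value": "http://dbpedia.org/resource/Ada_Lovelace"}}]}})
print(results_to_html(sample))
```

In STTL the same mapping is expressed declaratively as templates matched against the RDF graph, which is what allows the server to publish any dataset as browsable HTML.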
We started work on RDF serialization of (Java) objects for Semantic Web system introspection. In conjunction with the overloading of the SPARQL named graph pattern, we are able to query the system on several aspects of its internal state such as the graph index, triple provenance, property path triples, etc.
We dramatically optimized the Corese Inference Rule Engine and were able to run the OWL 2 RL rule set.
This year we focused on the validation and update of rule bases and on the optimization of reasoning. The goal of this work is to detect inconsistencies in selected rule bases with respect to an ontology and to offer users the possibility to correct them. We built a set of SPARQL queries enabling (1) the construction of specific rule bases for a given context or application, (2) the optimization of inference engines based on rule selection with respect to target RDF data sources, and (3) the validation and update of rule bases. We propose another optimization of inference engines based on the graph of rule dependencies and on rule application ordering. This work is published in .
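The rule-dependency optimization can be illustrated with a small sketch (hypothetical rule names and predicates, not the engine's actual code): rule B depends on rule A when A's conclusion can feed B's premises, and a topological order of the dependency graph yields an application order that avoids re-running rules needlessly.

```python
def rule_dependencies(rules):
    """rules: {name: (premise_predicates, conclusion_predicates)}.
    B depends on A if A's conclusions intersect B's premises."""
    deps = {r: set() for r in rules}
    for a, (_, concl_a) in rules.items():
        for b, (prem_b, _) in rules.items():
            if a != b and concl_a & prem_b:
                deps[b].add(a)
    return deps

def application_order(rules):
    """Layered topological order; cyclic remainders are applied together."""
    deps, order, done = rule_dependencies(rules), [], set()
    while len(done) < len(rules):
        ready = [r for r in rules if r not in done and deps[r] <= done]
        if not ready:  # cycle: the remaining rules must iterate to a fixpoint
            order.append(sorted(set(rules) - done))
            break
        order.append(sorted(ready))
        done |= set(ready)
    return order

rules = {"r1": ({"subClassOf"}, {"type"}),
         "r2": ({"type"}, {"sameAs"}),
         "r3": ({"sameAs"}, {"sameAs"})}
print(application_order(rules))  # [['r1'], ['r2'], ['r3']]
```

Rules whose premises depend on no other rule's conclusions run once up front; only mutually dependent rules need fixpoint iteration.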
We received a two-year grant from Inria to support the development of the Corese platform. This action aims at enhancing the Corese software to conform to the latest W3C standards and to facilitate its usage in distributed environments. We integrated several open source parsers into Corese, including JSON-LD, RDFa, TriG and N-Quads; Corese is now able to process RDF datasets in these formats. A Firefox extension called RDF Triple Collector (RTC) was developed: it extracts triples from web pages (annotated using RDFa), uploads them to the Corese server and queries the data through a SPARQL endpoint. A prototype of LDP 1.1 (Linked Data Platform) is implemented using RTC as data collector.
Besides, with the purpose of improving Corese query performance and carrying out research work on distributed environments, we proposed and developed a heuristic-based query planning method within Corese. The approach includes 3 main steps: 1) generate an extended SPARQL query triple pattern graph (ESG), 2) estimate the cost of the ESG using pre-defined heuristics and cost models, and 3) search the ESG to find a good query plan and rewrite the SPARQL query. The approach was evaluated using the BSBM benchmark; the results suggest that the developed method reduced query execution time by 60% on average .
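The heuristic intuition behind such planning can be illustrated by a simplified sketch (not the actual ESG algorithm or its cost model): evaluate the most selective triple patterns first, here approximated as those with the fewest unbound variables, preferring patterns that join with variables already bound by earlier patterns.

```python
def is_var(term):
    return term.startswith("?")

def plan(patterns):
    """Greedy ordering of triple patterns by a (unbound, -joins) cost."""
    remaining, ordered, bound = list(patterns), [], set()

    def cost(p):
        unbound = sum(1 for t in p if is_var(t) and t not in bound)
        joins = sum(1 for t in p if is_var(t) and t in bound)
        return (unbound, -joins)  # fewer unbound vars first, then more joins

    while remaining:
        best = min(remaining, key=cost)
        remaining.remove(best)
        ordered.append(best)
        bound |= {t for t in best if is_var(t)}
    return ordered

bgp = [("?x", "knows", "?y"), ("?x", "name", "Ada"), ("?y", "name", "?n")]
# The constant-bearing pattern is evaluated first, then the join on ?x.
print(plan(bgp))
```

Real cost models refine this with statistics on predicate frequencies; the greedy shape of the search stays the same.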
We propose a process of sociocultural ontology development in order to promote and preserve the culture of a country through the sharing of the customs and history of different localities. This can be compared with the construction of a platform straddling a "corporate memory" and a "social network", but applied in the context of a country. This process is based on the Vygotskian framework, a theory of the Russian psychologist Lev Vygotsky. We worked on an upper-level ontology and mapped it onto the Linked Open Data (LOD) cloud. We designed a sociocultural domain ontology for the Senegalese context and the platform design on top of Semantic MediaWiki (SMW). This allows Senegalese communities to share and co-construct their sociocultural knowledge. This work is published in .
In the second step of the PhD thesis of P. F. Diallo, we focus on taking time into account in the modeled knowledge. The main objective of this work is to provide a vocabulary (ontology) to handle temporal information on semantic data. Thus, the first step was to create a meta-language handling temporal knowledge representation in the sociocultural field that can also be used in wider areas. This meta-language allows one to model 1) cyclic knowledge (non-convex intervals), 2) calendar knowledge, 3) convex intervals, 4) absolute and relative time, 5) relations between intervals, 6) the distinction between open and closed intervals, 7) concepts such as time stamps, and 8) different time granularities. The second step was to propose an RDFS representation of this meta-language. This representation, the Human Time Ontology (HuTO), allows us to model complex temporal statements such as dates, intervals (convex and non-convex), and relative and absolute times. HuTO also supports temporal data annotation, i.e., the representation of temporal notions on knowledge (expressed as RDF triples), and allows reasoning over them.
HuTO allows us to use a resource as a temporal marker for dating another resource. Our ontology supports relative dating, i.e., determining the relative order of resources without necessarily determining their absolute time. A major contribution of HuTO is the modeling of non-convex intervals, along with queries that can handle all types of intervals. For temporal data annotation, HuTO provides an approach that links two models: one for the temporal information and another for the knowledge of the modeled area. This approach facilitates information retrieval whether it concerns temporal or non-temporal data. Thus HuTO can annotate resources, triples or named graphs.
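As an illustration of the annotation principle (with hypothetical URIs, not HuTO's actual vocabulary), a statement can be placed in a named graph whose temporal extent is then described separately, keeping the domain model and the temporal model linked but distinct. The sketch emits TriG syntax:

```python
def annotate_interval(graph, subj, pred, obj, begin, end):
    """Place one triple in a named graph and describe that graph's
    temporal extent with (hypothetical) begin/end properties."""
    xsd_date = "http://www.w3.org/2001/XMLSchema#date"
    t = "http://example.org/time#"  # hypothetical temporal namespace
    return "\n".join([
        f"<{graph}> {{ <{subj}> <{pred}> <{obj}> . }}",
        f"<{graph}> <{t}hasBeginning> \"{begin}\"^^<{xsd_date}> ;",
        f"          <{t}hasEnd> \"{end}\"^^<{xsd_date}> .",
    ])

# Example: annotating a statement with the interval during which it held.
print(annotate_interval("http://example.org/g1",
                        "http://example.org/LeopoldSedarSenghor",
                        "http://example.org/presidentOf",
                        "http://example.org/Senegal",
                        "1960-09-06", "1980-12-31"))
```

Because the temporal description attaches to the graph name rather than to the triple itself, the same mechanism scales from a single triple to whole named graphs, as in HuTO.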
We have a PhD Thesis grant with Alcatel Lucent Bell Labs on Linked Data Based Exploratory Search.
We also have a PhD Thesis grant with Synchronext on Assistant Conversational Agents with Natural Language and Intuition.
We participate in the CNRS PEPS GéoIncertitude, with researchers of the UMR 7300 ESPACE de Nice and of the IRIT of Toulouse on the modeling of uncertainty in Geography using fuzzy logic and possibility theory.
This work is done in collaboration with Philippe Renevier-Gonin, Christian Brel, Anne-Marie Déry (I3S Rainbow team).
The HCI Group brings together researchers from GLC teams conducting or wishing to conduct research related to HCI. The group specifically addresses the issues of how to conduct user experiments to evaluate the UIs of the software developed in GLC. The group establishes collaborations between researchers in the design and implementation of experiments. Last year a collaboration was initiated between the teams Rainbow and Wimmics on the assessment of (1) an application composition process driven by the composition of UIs, and (2) the prototype OntoCompo supporting this process.
This year a further collaboration started, to design visualization services assisting caregivers in their night watch tasks.
This work is done in collaboration with Karima Boudaoud and Marc Arnaert (I3S Rainbow team).
The goal of PadDOC is to contribute to accelerating the digital transition of citizens, local and regional authorities, administrations and enterprises by: (1) developing an open standard and innovative software and hardware resources to facilitate nearby or distant administrative formalities and procedures; (2) improving the security of the holder's personal data by putting these data under the exclusive control of the holder; (3) exploiting off-the-shelf communicating devices (such as smartphones or tablets) for all actors of the chain. The PadDOC partners are: Docapost BPO, Anyces, ABC SmartCard and the Rainbow, Media-Coding and Wimmics teams. Wimmics will contribute to: (1) the analysis, design and evaluation of the PadDOC security-oriented user interfaces; (2) the impact assessment of the chain of actors participating in the experiment to validate the viability of the PadDOC social system. The PadDOC project officially began in November 2014.
This work is done in collaboration with Bernard Senach (Hephaistos, Inria), Brigitte Trousse (Focus Lab, Inria), with Agorantic partners.
Started last year, the collaboration continued this year with ITCS and HSS teams from the Agorantic Federative Structure for Research of the Université d'Avignon et des Pays du Vaucluse. Distant and face-to-face meetings were organized to refine the so-called SyReMuse project, the goal of which is to analyze, design, and evaluate a system recommending visit tours to museum visitors (individuals and groups).
This work is done in collaboration with Lise Arena and Bernard Conein (Gredeg).
Axis 2 of the "Maison des Sciences Humaines et Sociales (MSHS) du Sud-Est (Nice)" is interested in the relationships between ICT, practices and communities. Its objective is to make explicit two aspects of the relationship between digital technology and community building: (1) networks and (2) artifacts. Two Axis-2 project groups address these aspects: "Social networks and digital networks" and "Artifacts and coordination".
The first project group examines how the Internet allows reconstructing the dynamics of interaction networks by making explicit interaction phenomena that could not be observed and treated before the advent of Big Data. The second project group studies the impact of cognitive technologies on the social and cognitive coordination between individuals in organizational and community contexts. Wimmics was mainly involved in the second project group. In this context, we co-organized the COOP 2014 conference and the COOP 2014 workshop on "The role of artefacts in social coordination".
AZKAR is a two-year French project funded by BPI (Banque Publique d'Investissement), focused on the fast control of mobile robots over the Internet using Web technologies such as WebRTC and Semantic Web technologies. The project started September 15th, 2014. The first step of the project is the evaluation/benchmarking of video and data solutions over the Internet, based on the WebRTC technology. The second step will consist in helping the robotics partner of the project (the Robosoft company) to implement these solutions on a real mobile robot that will be deployed in museums or in homes to help seniors in their daily tasks. Semantic Web technologies will be used in the project for describing the services, the context of the application domain, the content transmitted, etc.
SMILK (Social Media Intelligence and Linked Knowledge) is a joint laboratory (LabCom, 2013-2016) between the Wimmics team and the Research and Innovation unit of VISEO (Grenoble). Natural Language Processing, Linked Open Data and Social Networks as well as the links between them are at the core of this LabCom. The purpose of SMILK is both to develop research and technologies in order to retrieve, analyze, and reason on textual data coming from Web sources, and to make use of LOD, social networks structures and interaction in order to improve the analysis and understanding of textual resources. Topics covered by SMILK include: use of data and vocabularies published on the Web in order to search, analyze, disambiguate and structure textual knowledge in a smart way, but also to feed internal information sources; reasoning on the combination of internal and public data and schemes, query and presentation of data and inferences in natural formats.
This project, named "DBpedia.fr", proposes the creation of a French chapter of the DBpedia base, which is used in many applications, in particular for the publication of cultural collections. Because DBpedia is focused on the English version of Wikipedia, it ignores some of the French topics and their data. This project aims at extracting a maximum of RDF data from the French version and providing a stable and scalable endpoint for them. We now consider means to improve both the quantity and the quality of the data. The DBpedia.fr project was the first project of the Semanticpedia convention signed by the Ministry of Culture, the Wikimedia Foundation and Inria.
Web site: http://
We organized a joint project between Inria and the Ministry of Culture from September 2013 to November 2014. The goal of this project was to discuss the Semantic Web with a special emphasis on cultural projects. We organized three conferences. The first gathered feedback from the main projects launched in the previous years (DBpedia, HDA-Lab and Joconde-Lab, Data.bnf.fr, Centre Pompidou Virtuel, MIMO, Hadoc, etc.), together with feedback from a major player in the field, the BBC. The second conference took place inside the Ministry of Culture; it raised the question of trust on the Web following Snowden's revelations and Tim Berners-Lee's campaign to re-decentralize the Web. Finally, the last session of the cycle, at Inria Sophia Antipolis, discussed the future of the Web and presented the Semantic Web/Linked Data as providing some of the solutions currently needed to keep the Web open, decentralized, trustful and safe.
In order to develop a Transition-to-Web-3.0 cultural policy, the French Ministry of Culture and Communication defined 9 operational actions allowing cultural sector to take into account opportunities and challenges offered by Web 3.0 (also called ”Semantic Web”, or ”Web of Data”), and set up 9 working groups for these actions. Wimmics contributed to the Working Group 5 ”Cultural metadata and Transition to Web 3.0: Exploring the interaction modes with audiences using Web 3.0 potentialities”.
Kolflow is an ANR project (2011-2014) that proposes to extend collective intelligence with smart agents relying on automated reasoning. Smart agents can significantly reduce the overhead of communities in the process of continuously building knowledge, making continuous knowledge building much more efficient. Kolflow aims at building a social semantic space where humans collaborate with smart agents in order to produce knowledge understandable by both humans and machines.
Partners: Inria Orpailleur & Wimmics, Silex U. Claude Bernard Lyon, GDD U. of Nantes
Web site: http://
OCKTOPUS is an ANR project (2012-2015). The objective of OCKTOPUS is to increase the potential social and economic benefit of the large and quickly growing amounts of user-generated content, by transforming it into useful knowledge. We believe that it is possible to considerably improve upon existing generic Information Retrieval techniques by exploiting the specific structure of this content and of the online communities which produce it. Specifically, we will focus on a multi-disciplinary approach in order to address the problem of finding relevant answers to questions within forums and question-answer sites. To create metrics and predictors of content quality and use them to improve the search experience of a user, we will take advantage of:
the experience of the CRG (the management research institute of Ecole Polytechnique and CNRS) to understand better the incentives of, and interactions between individuals who produce online content within large communities;
the experience of the Wimmics research team to analyze the structural and temporal aspects of the complex typed social graphs found within these communities;
the ability of Alcméon (a start-up developing a search application dedicated to user-generated content) to integrate and test the results of OCKTOPUS within a common demonstration framework, in order to assess their practical usefulness when applied to concrete large-scale datasets.
Partners: Alcméon, CRG, Inria Wimmics.
Web site: http://
We participate in the CrEDIBLE research project funded by the MASTODONS program of the interdisciplinary mission of CNRS, whose objective is to bring together scientists from all disciplines involved in the implementation of systems for sharing distributed and heterogeneous medical imaging data, to provide an overview of this area, and to evaluate state-of-the-art methods and technologies relevant to it. In this framework, we participated in the organization of a 3-day workshop and we worked with members of the I3S Modalis team (Johan Montagnat) on the distribution of algorithms in the Corese/KGRAM engine.
Catherine Faron Zucker chaired one of its sessions and worked with members of the I3S Modalis team on a survey of existing approaches for the translation of relational data to RDF data.
Web site: https://
Wimmics is a partner of the International Research Group (GDRI) Zoomathia funded by two CNRS institutes: INEE and INSHS. It aims at studying the transmission of zoological knowledge from Antiquity to the Middle Ages through material resources (bio-residues, artefacts), iconography and texts.
One of the goals of the project is to design a thesaurus and to semantically annotate resources, capturing different types of knowledge: zoonyms, historical periods, zoological specialities (ethology, anatomy, physiology, psychology, zootechnics, etc.), literary genres or iconography.
We started to work on 1) the translation of manual annotations of structured middle-age texts from XML to RDF, 2) the automatic extraction of RDF annotations from text using NLP techniques, and 3) the exploitation of these semantic metadata to help historians in their studies of knowledge transmission through these texts.
This work is done in collaboration with David Daney and Jean-Pierre Merlet (Coprin/Hephaistos), Patrick Rives (Lagadic).
Last year, Wimmics was involved in a socio-ergonomic field study to inform the design of a device (such as a robotic shopping trolley) assisting elderly and frail persons to do their shopping autonomously. This year this work was synthesized and published in .
Web site: http://
This project was just accepted this year on the topic of Natural Language Argumentation on Twitter: Retrieval of Argumentative Structures and Reasoning.
Partner : Vigiglobe.
Program: CHIST-ERA
Project acronym: ALOOF
Project title: Autonomous Learning of the Meaning of Objects
Duration: October 2014 - October 2017
Coordinator: University of Rome La Sapienza Italy
Other partners: University of Birmingham United Kingdom, Technische Universität Wien Austria.
Abstract: The goal of ALOOF is to enable robots to tap into the ever-growing amount of knowledge available on the Web, by learning from there about the meaning of previously unseen objects, expressed in a form that makes them applicable when acting in situated environments. By searching the Web, robots will be able to learn about new objects, their specific properties, where they might be stored and so forth. To achieve this, robots need a mechanism for translating between the representations used in their real-world experience and those on the Web. We propose a meta-modal representation, composed of meta-modal entities and relations between them. A single entity is composed of modal features extracted from sensors or the Web. A modal completion supports perception in the absence of a complete set of features. The combined features link to the semantic properties associated to each entity. All entities are organized into a structured ontology, supporting formal reasoning. This is complemented with methods for detecting gaps in the knowledge of the robot, for planning where to effectively obtain the knowledge, and for extracting relevant knowledge from Web resources. By situating meta-modal representations into the perception and action capabilities of robots, we will achieve a powerful mix of Web-supported and physical-interaction-based open-ended learning. Our scenario consists of a home setting where robots have to find/retrieve objects while understanding their meaning and relevance in the assigned task. Our measure of progress will be how many gaps, i.e. incomplete information about objects, can be resolved autonomously given specific prior knowledge. We will integrate results on different mobile robot platforms, ranging from smaller mobile platforms, through Metralabs Scitos, to the HOBBIT home service robot.
Program: International Initiatives
SEEMPAD
Social Exchanges and Emotions in Mediated Polemics - Analysis and Data
International Partner (Institution - Laboratory - Researcher):
University of Montréal, Heron Lab (Canada)
Duration: 2014 - 2017
See also: https://
Generating, annotating and analyzing a dataset that documents a debate. We aim at synchronizing several dimensions: social links (intensity, alliances, etc.); interactions happening (who talks to whom); textual content of the exchanged messages; social-based semantic relations among the arguments; emotions, polarity, opinions detected from the text; emotions, physical state detected from sensors.
During the first year, we have defined the protocol for the first experimental setting, which will represent the first stage of the proof-of-concept. The goal of the first experiment is to address a feasibility study of the annotation of a corpus of natural language arguments with emotions. The experiment involved a group of 20 participants, recruited by the Heron Lab. In particular, the first experiment has considered the following steps:
Starting from an issue to be discussed, provided by the animators, the experiment aims to collect the arguments proposed by the participants.
These arguments are then associated with the emotional component detected through dedicated devices of the Heron Lab. More precisely, the workload/engagement emotional states and the facial emotions of the participants are extracted during the debate, using an EEG headset and a face emotion recognition tool respectively.
In a post-processing phase on the collected data, we have synchronized the arguments put forward at instant t with the emotional indexes we retrieved.
The output of this post-processing phase (ongoing) will result in an argumentation graph representing each discussion addressed by the discussion groups. These argumentation graphs connect the arguments to each other by a support or an attack relation, and they will be labeled with the source that has proposed the argument, and the emotional state of the source itself and of the other participants at the time when the argument has been put on the table.
Fabien Gandon acts as Inria representative at W3C.
We participate in the W3C Data Shapes Working Group, the Linked Data Platform Working Group and the Semantic Web Interfaces Community Group.
Software Engineering Laboratory (Head: Pierre Robillard), Polytechnique Montréal, Canada.
Topic of the collaboration: Modeling of software development processes and teams for quality assessment purposes.
We participate in the LIRIMA, where we have a long-term collaboration with Gaston Berger University (UGB) at Saint-Louis, Senegal, with Moussa Lo. We host two PhD students in collaboration with UGB: Papa Fary Diallo and Oumy Seye.
Faten Ayachi is a professor and director of the Computer Science department at the SUPCOM Engineering School in Tunis, Tunisia. Visit: August 25-31. Topic: new security policies in RDBMS; reverse-engineering algorithms.
Pr Liam J. Bannon (University of Limerick, Ireland) gave a talk entitled "Towards a More Human-centred Informatics? Examining the Role of HCI and CSCW in Computing". It was an invited talk co-organized with the MSHS project "Artefacts, coordination et communautés numériques", October 16th.
Cristian Adrián Cardellino
June – 2014
Universidad Nacional de Córdoba (Argentina)
Design and development of a data licensing framework for Linked Data
Elena Cabrio was scientific chair for the seminar Frontiers ARG-NLP - Frontiers and Connections between Argumentation Theory and Natural Language Processing (Bertinoro (Forli-Cesena), Italy, July 2014).
Catherine Faron-Zucker was scientific chair for the conference Journées Francophones d'Ingénierie des Connaissances (IC).
Serena Villata was co-chair of the 2014 International Workshop on Semantic Web for the Law (SW4Law 2014), co-located with the 27th International Conference on Legal Knowledge and Information Systems JURIX-2014. She was co-chair of the 15th International Workshop on Computational Logic in Multi-Agent Systems CLIMA 2014, co-located with the European Conference on Artificial Intelligence ECAI 2014. She was co-chair of the Bertinoro Seminar on Frontiers and Connections between Argumentation Theory and Natural Language Processing.
Nhan Le Thanh was chair of the 2nd workshop e-PSP 2014, November 27, Sophia Antipolis
Alain Giboin was co-organizer of the International Conference on the Design of Cooperative Systems (COOP 2014) and of the COOP workshop "Artefacts and Coordination Activities", and co-organizer of JMIC 2015, Journée commune AFIA-ARCo pour la Modélisation de l'Irrationnel dans la Cognition : Modéliser l'Affectif (joint AFIA-ARCo scientific day on modeling the irrational aspects of cognition: modeling the emotions).
Elena Cabrio was program chair of CLEF 2014 lab: Multilingual Question Answering over Linked Data (QALD-4), Sheffield, UK, September.
Fabien Gandon was program Chair for ESWC 2014, European Semantic Web Conference, and for TICE 2014, Technologies de l'Information et de la Communication pour l'Enseignement.
Alain Giboin was program committee associate chair of Computer-Supported Cooperative Work and Social Computing (CSCW) 2015.
Andrea Tettamanzi was program co-chair for the fuzzy systems area of IBERAMIA, the Ibero-American Conference on Artificial Intelligence.
Fabien Gandon was program chair of SemWeb.Pro
Michel Buffa: European Semantic Web Conference (ESWC), WWW Demo program committee, WWW Workshop on Semantic Web Collaborative Spaces (SWCS), ESWC 2014 developers workshop (SEMDEV).
Elena Cabrio: Knowledge Engineering and Knowledge Management conference (EKAW PhD Symposium), the International Conference on Computational Linguistics (COLING 2014), the European Conference in Artificial Intelligence (ECAI), the Association for Computational Linguistics conference (ACL), and the Extended Semantic Web Conference (ESWC).
Olivier Corby: 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW), Third International Workshop on Querying Graph Structured Data (GraphQ), Journées Francophones d'Ingénierie des Connaissances (IC), Des Sources Ouvertes au Web de Données - SoWeDo IC Workshop, International Conference on Conceptual Structures (ICCS), ISWC Developers Workshop, Journées Francophones sur les Ontologies, Reconnaissance de Formes et Intelligence Artificielle (RFIA).
Catherine Faron-Zucker: International Conference on Conceptual Structures (ICCS), International Conference on Knowledge Engineering and Ontology Development (KEOD), Symposium on Educational Advances in Artificial Intelligence (EAAI), International Conference on Computing and Communication Technologies (RIVF 2015), Journées Francophones d'Ingénierie des Connaissances (IC), Technologies de l'Information et de la Communication pour l'Enseignement (TICE), Knowledge and Systems Engineering (KSE), Encuentro Nacional de Ciencias de la Computación (ENC).
Fabien Gandon: Extraction et Gestion des Connaissances (EGC), Hypertext, Journées francophones d'Ingénierie des Connaissances (IC), International Semantic Web Conference (ISWC), Reconnaissance de Formes et Intelligence Artificielle (RFIA), International Conference on Web Intelligence (WI), Making Sense of Microposts (#MSM2014), 3rd Workshop on Semantic Web Collaborative Spaces (SWCS).
Alain Giboin: member of the steering committee of the COOP conference series (International Conferences on the Design of Cooperative Systems). He was PC member of: Symposium on ”Communication multimodale et collaboration instrumentée. Regards croisés sur Énonciations, Représentations, Modalité” (COMMON), International Conferences on the Design of Cooperative Systems (COOP), European Semantic Web Conference (ESWC), Journées francophones d'Ingénierie des Connaissances (IC), International Conference on Semantic Systems (SEMANTiCS), Technologies de l'Information et de la Communication pour l'Enseignement (TICE), 10e Journées francophones Mobilité et Ubiquité (UbiMob).
Isabelle Mirbel: 25th International Conference on Advanced Information Systems Engineering (CAISE), IEEE Eighth International Conference on Research Challenges in Information Science (RCIS), 20th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ), 32e congrès INFORSID.
Alexandre Monnin: European Semantic Web Conference (ESWC), World Wide Web Conference (WWW) - WebScience Track, Journées francophones d'Ingénierie des Connaissances (IC), Technologies de l'Information et de la Communication pour l'Enseignement (TICE).
Nhan Le Thanh: 4th International Conference on Model & Data Engineering (MEDI).
Andrea Tettamanzi: 15th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA), Florida Artificial Intelligence Research Society Conference (FLAIRS 2015), Genetic and Evolutionary Computational Conference (GECCO), International Conference on Intelligent Agent Technology (WI IAT), Journées Francophones d'Ingénierie des Connaissances (IC), International Joint Conference on Artificial Intelligence (IJCAI), Parallel Problem Solving from Nature (PPSN).
Serena Villata: European Conference on Artificial Intelligence (ECAI), International Conference on Legal Knowledge and Information Systems (JURIX), European Semantic Web Conference (ESWC).
Elena Cabrio: member of the evaluation committee assigning the Best Dissertation Prize of ATALA (the French Natural Language Processing association).
Olivier Corby: European Semantic Web Conference (ESWC).
Catherine Faron-Zucker: European Semantic Web Conference (ESWC). She also reviewed a follow-up volume of eBISS 2014 published in the LNCS series.
Alain Giboin: 26e Conférence francophone sur l'Interaction Homme-Machine (IHM), Computer Human Interaction (CHI 2015).
Fabien Gandon: European Conference on Social Intelligence (ECSI), International Conference on Semantic Systems (SEMANTiCS), World Wide Web Conference (WWW).
Isabelle Mirbel: Ingénierie des Systèmes d'Information (Hermès).
Elena Cabrio: Journal of Web Semantics.
Catherine Faron-Zucker: International Journal of Human-Computer Studies, Revue d'Intelligence Artificielle.
Fabien Gandon: Intellectica, dossier Philosophie du Web et Ingénierie des Connaissances.
Alain Giboin: Revue d'Intelligence Artificielle. He was reviewer for the book Communication and multi-activity at work.
Isabelle Mirbel: Information and Software Technology Journal.
Alexandre Monnin was reviewer for the journal Literary and Linguistic Computing. He co-edited a special issue of the CNRS journal Intellectica entitled Philosophy of the Web and Knowledge Engineering (with Gunnar Declerck). He wrote the introduction to this special issue (under the same title) and a paper with Pierre Livet (Distinguishing/Making explicit. The ontology of the Web as an ontology of operations).
Andrea Tettamanzi: IEEE Transactions on Systems, Man, and Cybernetics: Systems; Information Sciences.
Serena Villata: Journal of Logic and Computation, Argument & Computation, ACM Transactions on Intelligent Systems and Technology.
Fabien Gandon was an invited speaker at ACM MEDES.
Alexandre Monnin:
presented a paper co-authored with Harry Halpin, The history of decentralized knowledge from cybernetics to the Semantic Web and Google, at the international final workshop of the ADAM ANR project at Mines ParisTech. He was invited to give a talk on a new history of the Web in Lyon on October 20th, during a conference on media archeology organized at ENS LSHS by three labs (Datadata from ENSBA Lyon, NHumérisme from ENS Lyon, and PAMAL from ESA Avignon). He took part in a two-day conference on the renewal of social science and Science and Technology Studies organized at Mines ParisTech (funded by DIM). He took part in a day-long training session, as part of the MESR national priority, in Tours, where he presented An introduction to the Semantic Web. He was invited to the Social Week in Lille (November 23rd) alongside other researchers. He was invited to give a talk in Beirut (by videoconference) for a special session on The philosophy of the Web on December 4th (other invitees included Peter Greenaway and Harry Halpin).
Catherine Faron-Zucker is vice-secretary of AFIA, the Association Française d'Intelligence Artificielle.
Alain Giboin serves as scientific correspondent for Inria Sophia of COERLE (Inria Comité Opérationnel d’Evaluation des Risques Légaux et Ethiques), in tandem with the legal correspondent Sabine Landivier.
Catherine Faron-Zucker is responsible for the Web Science track of the Master's program at Polytech'Nice, UNS.
Nhan Le Thanh is coordinator of the bilateral scientific cooperation program (NiceCampus) between the University of Nice and the University of Da Nang, Vietnam. He is director of the Computer Science department of the IUT, University of Nice-Sophia Antipolis.
Isabelle Mirbel is Vice-Dean of the Science Department at the University of Nice-Sophia Antipolis.
Andrea Tettamanzi coordinated the 3rd year of the Licence in Business Informatics (MIAGE) at the UFR Science of the Université Nice Sophia Antipolis (UNS). He also coordinated an Erasmus Mundus student and staff mobility project with East Asia (EMMA East 2011, EACEA Project no. 2010-2362), which closed this year and for which he wrote the final report for the European Commission.
Master : Michel Buffa, Web 2.0, Web Services, HTML5, 40h, M2 MIAGE, UNS.
Master : Michel Buffa, Distributed Web development, 40h, M2, UNS.
Master : Michel Buffa, Java certification, 25h, M2, UNS.
Master : Michel Buffa, Plasticity of User Interfaces, HTML5, 8h, M2, UNS.
Master : Michel Buffa, New interaction means, HTML5, 8h, M2, UNS.
Master : Michel Buffa, Programmable Web, 40h, M2, UNS.
Master : Michel Buffa, Web technologies, 40h, M1, UNS.
Master : Elena Cabrio, Knowledge Engineering, 3h, M2 KIS, UNS.
Master : Elena Cabrio, Web Science, 3h, M2 IFI, UNS.
Master : Olivier Corby, Catherine Faron Zucker, Fabien Gandon, Semantic Web, 45h, M2, UNS.
Master : Amosse Edouard, Introduction to Android programming, 16h, M2 MBDS, UNS.
Master : Amosse Edouard, Distributed Architecture and Web Services, 20h, M2 MIAGE, UNS.
Master : Fabien Gandon, Knowledge Engineering, 4h, M2, UNS.
Master : Alain Giboin, Human-Computer-Interaction Design and Evaluation, 35h, M2, UNS.
Master : Alain Giboin, Task and Activity Analysis for HCI design and evaluation, 6h, M2 Sociology and Ergonomics of Digital Technologies, UNS.
Master : Alain Giboin, Economics and ICT: Ergonomics, 15h, M2 Economics and ICT, ISEM, UNS.
Master : Isabelle Mirbel, Project Management, 20h, M2 MIAGE, UNS.
Master : Alexandre Monnin, Knowledge Engineering, 3h, M2, UNS.
Master : Alexandre Monnin, REST, 6h, M2, UTT.
Master : Serena Villata, Knowledge Engineering, 3h, M2 KIS, UNS.
Master : Serena Villata, Web Science, 5h, M2 MIAGE, UNS.
Master : Isabelle Mirbel, Advanced databases, 78h, M1 MIAGE, UNS.
Master : Isabelle Mirbel, Requirement Engineering, 42h, M1 MIAGE, UNS.
Master : Oumy Seye, XML Technologies, 16h, M1, UNS.
Master : Oumy Seye, Distributed Systems, 16h, M1, UNS.
Master : Andrea Tettamanzi, Distributed Systems, 18h, M1 MIAGE, UNS.
Master : Andrea Tettamanzi, Concurrency and Parallelism, 18h, M1 International, UNS.
Master : Andrea Tettamanzi, Fuzzy Description Logics and Ontology Learning, in Ingénierie des connaissances, 10h, M2 Web, Polytech'Nice, UNS.
Licence : Amel Ben Othmane, Databases, 64h, L1 IUT, UNS.
Licence : Isabelle Mirbel, Databases, 63h, L3 MIAGE, UNS.
Licence : Nhan Le Thanh, Logical Data Models and languages, 24h, L3 Pro, UNS.
Licence : Nhan Le Thanh, Design and Development of DBMS services, 24h, L3 Pro, UNS.
Licence : Nhan Le Thanh, Advanced Databases, 105h, L3, UNS.
Licence : Nhan Le Thanh, Databases, 120h, L2, UNS.
Licence : Oumy Seye, Human-Computer Interaction, 50h, L1, IUT UNS.
Licence : Oumy Seye, Advanced Object-Oriented Design and Programming, 30h, L2, IUT UNS.
Licence : Andrea Tettamanzi, Algorithmics – Object-Oriented Programming – Python, 50h, L2, UNS.
Licence : Andrea Tettamanzi, Advanced Web Programming (client side), 39h, L2, UNS.
Licence : Andrea Tettamanzi, Web, 18h, L3 MIAGE, UNS.
E-learning
Michel Buffa
HTML5, 6 weeks, http://
Fabien Gandon, Catherine Faron Zucker, Olivier Corby
Semantic Web, 7 weeks, http://
Fabien Gandon gave a course on the Semantic Web at URFIST, Rennes, 6h.
Fabien Gandon gave a course on Semantic Web and Linked Data Graphs at the Winter School on Complex Networks.
PhD Rakebul Hasan, Explanations for the Social Semantic Web, UNS, November 4th, Fabien Gandon;
PhD Maxime Lefrançois, Meaning-Text Theory Lexical Semantic Knowledge Representation: Conceptualization, Representation and Operationalization of Lexicographic Definitions, Inria, UNS, June 24th, Fabien Gandon, Christian Boitet;
PhD Nicolas Marie, Linked Data Based Exploratory Search, Alcatel Lucent Bell Labs, December 12th, Fabien Gandon, Myriam Ribière;
PhD Oumy Seye, Sharing and Reusing Rules for the Web of Data, University Gaston Berger, Saint-Louis, Sénégal, December 15th, Olivier Corby, Catherine Faron Zucker, Fabien Gandon, Moussa Lo.
PhD in progress : Amel Ben Othmane, Temporal and Semantic Analysis of Information Retrieved from Short and Spatio-Temporal Messages in Social Networks, UNS, Nhan Le Thanh.
PhD in progress : Papa Fary Diallo, Co-Construction of Community Ontologies and Corpus in a Limited Technological Environment, Inria, UNS, UGB, Isabelle Mirbel, Olivier Corby, Moussa Lo.
PhD in progress : Amosse Edouard, Studies of Spatial Semantic Aspect, Real Time Filtering Mechanisms and Semantic Enrichment of Short Messages on Dynamic Spatio-Temporal Social Networks, UNS, Nhan Le Thanh.
PhD in progress : Amine Hallili, Assistant Conversational Agents with Natural Language and Intuition, UNS, Catherine Faron, Fabien Gandon.
PhD in progress : Zide Meng, Temporal and Semantic Analysis of Richly Typed Social Networks from User-Generated-Content Sites on the Web, UNS, Fabien Gandon, Catherine Faron Zucker.
PhD in progress : Thi Hoa Hue Nguyen, Semantic Mappings with a Dataflow-based Scientific Workflow: an approach to develop dataflow applications using knowledge-based systems, Vietnam, Nhan Le Thanh.
PhD in progress : Tuan Anh Pham, Study and integration of the mechanism of workflow control in the MVC (Model View Controller) architecture: design and implementation of an APM (Activity Process Management) platform for dynamic information systems on networks, UNS, Nhan Le Thanh.
PhD in progress : Abdoul Macina, Distributed Query Processing over a Wide-Area Network, UNS Labex UCN@Sophia, Johan Montagnat (Modalis, I3S), Olivier Corby.
PhD in progress : Chaka Kone, Software/Hardware Infrastructure for Emotional and Behavioral Monitoring in Alzheimer's Disease and Related Pathologies, Cécile Belleudy from LEAT, Nhan Le Thanh.
Catherine Faron-Zucker was reviewer for PhD of Houda Khrouf: Structuring and Mining Event-based Data in The Social Web, June 30th, Télécom ParisTech.
Fabien Gandon was jury member for the PhD of Bruno Paiva Lima da Silva: Data Access over Large Semi-Structured Databases - A Generic Approach Towards Rule-Based Systems, January 13th, Univ. Montpellier 2, LIRMM.
He was reviewer for the PhD of Fabrizio Orlandi: Profiling User Interests on the Social Semantic Web, March 31st, National University of Ireland, Galway.
He was reviewer for the HdR of Fabian Suchanek: Contributions à l'avancement des grandes bases de connaissances, Université Pierre et Marie Curie, Paris, October 10th.
He was jury member for HdR of Sébastien Ferré: Reconciling Expressivity and Usability in Information Access From File Systems to the Semantic Web, IRISA, University Rennes 1, November 6th.
Alain Giboin was jury member for the PhD thesis of François Palaci, Contribution ergonomique à l'analyse prospective d'innovations technico-organisationnelles dans les systèmes complexes, Université de Technologie de Troyes.
Isabelle Mirbel was reviewer for the PhD of Salma Najar, Adaptation dynamique des services sensibles au contexte selon une approche intentionnelle, Univ. Paris I - Panthéon-Sorbonne, April 2014.
Andrea Tettamanzi was reviewer of the PhD thesis of Mustafa Al Bakri, Uncertainty-Sensitive Reasoning over the Web of Data, Université de Grenoble, December 15th.
He was president of the jury of Maxime Lefrançois, Représentation des connaissances sémantiques lexicales de la Théorie Sens-Texte : conceptualisation, représentation, et opérationnalisation des définitions lexicographiques, Université Nice Sophia Antipolis, June 24th.
Raphael Boyer (September 2014 - September 2015): Maintenance of DBpedia.fr; addressing boolean and n-relation questions with QAKiS. Master student of MIAGE, University of Nice-Sophia Antipolis.
Fabrice Jauvat: Identifying and extracting company names from Web content and linking them to DBpedia, L3 UNS, supervisor: Elena Cabrio, May-August.
Ahmed Missaoui: Transformation of raw data into semantic data enriched with further information extracted from external sources, Master KIS UNS, supervisors: Serena Villata, Elena Cabrio and Catherine Faron.
Yoann Moise: Boosting the visualization of the answers of an automatic question-answering system using multimedia content, L3 UNS, supervisor: Elena Cabrio, May-August.
Emilie Palagi: Evaluation and redesign of the explanatory user interfaces of the Discovery Hub exploratory search engine, Master 2 Sociology and Ergonomics of Digital Technologies, UNS, supervisors: Alain Giboin and Nicolas Marie.
Vivek Sachidananda: QAKiS multimedia answers visualization (October 2013 - June 2014), Master student in Multimedia at Telecom ParisTech. Supervisor: Elena Cabrio, in co-supervision with Raphael Troncy (Assistant Professor at EURECOM).
Molka Tounsi: Publishing Antique Zoology Data on the Web of Data, Master KIS UNS, supervisors: Serena Villata, Elena Cabrio and Catherine Faron.
Fabien Gandon
Meeting Inria-Industry, Technologies du Web : ce réseau de ressources numériques mondial (Web technologies: a worldwide network of digital resources).
Amadeus QLM Staff Briefing keynote, Semantic Web, February 14th.
Invited speaker on Quel avenir pour le Web ? (What future for the Web?) in the conference cycle Les enjeux du Web 3.0 dans le secteur culturel (The stakes of Web 3.0 in the cultural sector), organized by the Ministry of Culture and Inria.
Scientific days at Inria: Relations between Inria and W3C and presentation of the LabCom SMILK.