2021
Activity report
Project-Team
MOEX
RNSR: 201722226P
Research center
Inria Grenoble Rhône-Alpes
In partnership with:
Université de Grenoble Alpes
Team name:
Evolving Knowledge
In collaboration with:
Laboratoire d'Informatique de Grenoble (LIG)
Domain
Perception, Cognition and Interaction
Theme
Data and Knowledge Representation and Processing
Creation of the Project-Team: 2017 November 01

Keywords

Computer Science and Digital Science

  • A3.2. Knowledge
  • A3.2.1. Knowledge bases
  • A3.2.2. Knowledge extraction, cleaning
  • A3.2.4. Semantic Web
  • A3.2.5. Ontologies
  • A3.2.6. Linked data
  • A6.1.3. Discrete Modeling (multi-agent, people centered)
  • A7.2. Logic in Computer Science
  • A9. Artificial intelligence
  • A9.1. Knowledge
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B8.5. Smart society
  • B9. Society and Knowledge
  • B9.5.1. Computer science
  • B9.7.2. Open data
  • B9.8. Reproducibility

1 Team members, visitors, external collaborators

Research Scientist

  • Jérôme Euzenat [Team leader, Inria, Senior Researcher, HDR]

Faculty Members

  • Manuel Atencia Arcas [Univ Grenoble Alpes, Associate Professor, on secondment at the University of Málaga since September 2021]
  • Jérôme David [Univ Grenoble Alpes, Associate Professor]

PhD Students

  • Yasser Bourahla [Inria]
  • Alban Flandin [Univ Grenoble Alpes, until September 2021]
  • Khadija Jradeh [Univ Grenoble Alpes]
  • Andreas Kalaitzakis [Univ Grenoble Alpes]
  • Line van den Berg [Univ Grenoble Alpes, until October 2021]

Interns and Apprentices

  • Marouan Boulli [Univ Grenoble Alpes, March–May 2021, L3 MIASHS]
  • Clarisse Deschamps [Univ Grenoble Alpes, October–December 2021, L3 MIASHS]
  • Yidong Huang [Inria, February–June 2021, M2 MOSIG]
  • Hugo Marcaurelio [Univ Grenoble Alpes, March–May 2021, L3 MIASHS]

Administrative Assistant

  • Julia Di Toro [Inria]

2 Overall objectives

Human beings are apparently able to communicate knowledge. However, it is impossible for us to know if we share the same representation of knowledge.

mOeX addresses the evolution of knowledge representations in individuals and populations. We deal with software agents and formal knowledge representation. The ambition of the mOeX project is to answer, in particular, the following questions:

  • How do agent populations adapt their knowledge representation to their environment and to other populations?
  • How must this knowledge evolve when the environment changes and new populations are encountered?
  • How can agents preserve knowledge diversity and is this diversity beneficial?

We study them chiefly in a well-controlled computer science context.

For that purpose, we combine knowledge representation and cultural evolution methods. The former provides formal models of knowledge; the latter provides a well-defined framework for studying situated evolution.

We consider knowledge as a culture and study the global properties of local adaptation operators applied by populations of agents by jointly:

  • experimentally testing the properties of adaptation operators in various situations using experimental cultural evolution, and
  • theoretically determining such properties by modelling how operators shape knowledge representation.

We aim at acquiring a precise understanding of knowledge evolution through the consideration of a wide range of situations, representations and adaptation operators.

In addition, we continue to investigate rdf data interlinking with link keys, a way to identify the same entities across different data sets.

3 Research program

3.1 Knowledge representation semantics

We work with semantically defined knowledge representation languages (like description logics, conceptual graphs and object-based languages). Their semantics is usually defined within model theory initially developed for logics.

We consider a language L as a set of syntactically defined expressions (often inductively defined by applying constructors over other expressions). A representation (o ⊆ L) is a set of such expressions; it may also be called an ontology. An interpretation function (I) is inductively defined over the structure of the language into a structure called the domain of interpretation (D). It expresses the construction of the “meaning” of an expression as a function of its components. A formula is satisfied by an interpretation if it fulfils a condition (in general, being interpreted over a particular subset of the domain). A model of a set of expressions is an interpretation satisfying all the expressions. A set of expressions is said to be consistent if it has at least one model, and inconsistent otherwise. An expression (δ) is then a consequence of a set of expressions (o) if it is satisfied by all of their models (noted o ⊨ δ).
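For illustration, here is a toy representation together with one of its consequences (a minimal example of ours, written in a description-logic style; it is not taken from a particular ontology):

```latex
% A toy representation o and one of its consequences delta:
% o states that every Book is a Document and that b is a Book.
\[
o = \{\, \mathsf{Book} \sqsubseteq \mathsf{Document},\; \mathsf{Book}(b) \,\}
\qquad
\delta = \mathsf{Document}(b)
\]
% Every interpretation satisfying both expressions of o interprets b within
% Document, hence delta is satisfied by all models of o:
\[
o \models \delta
\]
```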

The languages dedicated to the semantic web (rdf and owl) follow that approach. rdf is a knowledge representation language dedicated to the description of resources; owl is designed for expressing ontologies: it describes concepts and relations that can be used within rdf.

A computer must determine if a particular expression (taken as a query, for instance) is a consequence of a set of axioms (a knowledge base). For that purpose, it uses programs, called provers, that can be based on the processing of a set of inference rules, on the construction of models or on procedural programming. These programs are able to deduce theorems (noted o ⊢ δ). They are said to be sound if they only find theorems which are indeed consequences, and complete if they find all the consequences as theorems.
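These two properties can be restated compactly (standard definitions, recalled here for convenience):

```latex
% Soundness and completeness of a prover (\vdash) with respect to the
% consequence relation (\models):
\[
\text{soundness:}\quad o \vdash \delta \;\Rightarrow\; o \models \delta
\qquad\qquad
\text{completeness:}\quad o \models \delta \;\Rightarrow\; o \vdash \delta
\]
```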

3.2 Data interlinking with link keys

Vast amounts of rdf data are made available on the web by various institutions providing overlapping information. To be fully exploited, different representations of the same object across various data sets, often using different ontologies, have to be identified. When different vocabularies are used for describing data, it is necessary to identify the concepts they define. This task is called ontology matching and its result is an alignment A, i.e. a set of correspondences ⟨e, r, e′⟩ relating entities e and e′ of two different ontologies by a particular relation r (which may be equivalence, subsumption, disjointness, etc.) 3.

At the data level, data interlinking is the process of generating links identifying the same resource described in two data sets. In parallel to ontology matching, from two data sets (d and d′) it generates a link set L made of pairs of resource identifiers.

We have introduced link keys 3, 4 which extend database keys in a way which is more adapted to rdf and deals with two data sets instead of a single relation. An example of a link key expression is:

{⟨auteur, creator⟩} {⟨titre, title⟩} linkkey ⟨Livre, Book⟩

stating that whenever an instance of the class Livre has the same values for the property auteur as an instance of the class Book has for the property creator, and they share at least one value for their properties titre and title, then they denote the same entity. More precisely, a link key is a structure ⟨Keq, Kin, C⟩ such that:

  • Keq and Kin are sets of pairs of property expressions;
  • C is a pair of class expressions (or a correspondence).

Such a link key holds if and only if, for any pair of resources belonging to the classes in correspondence, whenever the values of their properties in Keq are pairwise equal and the values of those in Kin pairwise intersect, then the two resources are the same. Link keys can then be used for finding equal individuals across two data sets and generating the corresponding owl:sameAs links. Link keys take into account the non-functionality of rdf data and have to deal with non-literal values. In particular, they may use arbitrary property and class expressions, which renders their discovery and use difficult.
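As a minimal sketch of this semantics, the following toy program checks the example link key above over two small hard-coded data sets and generates the corresponding links (illustrative Java written for this report; it is neither LinkEx nor the Alignment API):

```java
import java.util.*;

// Illustrative check of the link key {<auteur,creator>} {<titre,title>} linkkey <Livre,Book>
// over two toy data sets; each resource is a map from property to set of values.
public class LinkKeyToy {

    // Two resources are linked when their Keq values are pairwise equal
    // and their Kin values pairwise intersect.
    static boolean linked(Map<String, Set<String>> livre, Map<String, Set<String>> book) {
        boolean eq = livre.getOrDefault("auteur", Set.of())
                          .equals(book.getOrDefault("creator", Set.of()));
        boolean in = !Collections.disjoint(livre.getOrDefault("titre", Set.of()),
                                           book.getOrDefault("title", Set.of()));
        return eq && in;
    }

    public static void main(String[] args) {
        // Instances of Livre in data set d, and of Book in data set d'.
        Map<String, Map<String, Set<String>>> livres = Map.of(
            "d:l1", Map.of("auteur", Set.of("Hugo"), "titre", Set.of("Les Misérables")),
            "d:l2", Map.of("auteur", Set.of("Verne"), "titre", Set.of("L'île mystérieuse")));
        Map<String, Map<String, Set<String>>> books = Map.of(
            "e:b1", Map.of("creator", Set.of("Hugo"), "title", Set.of("Les Misérables", "The Wretched")),
            "e:b2", Map.of("creator", Set.of("Stevenson"), "title", Set.of("Treasure Island")));

        // The generated link set: pairs of identifiers satisfying the link key.
        for (var l : livres.entrySet())
            for (var b : books.entrySet())
                if (linked(l.getValue(), b.getValue()))
                    System.out.println(l.getKey() + " owl:sameAs " + b.getKey());
        // Prints: d:l1 owl:sameAs e:b1
    }
}
```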

3.3 Experimental cultural knowledge evolution

Cultural evolution considers how culture spreads and evolves with human societies 20. It applies a generalised version of the theory of evolution to culture. In computer science, cultural evolution experiments are performed through multi-agent simulation: a society of agents adapts its culture through a precisely defined protocol 17, in which agents repeatedly and randomly perform a specific task, called a game, and their evolution is monitored. This aims at discovering experimentally the states that agents reach and the properties of these states.

Experimental cultural evolution has been successfully and convincingly applied to the evolution of natural languages 22, 21. Agents play language games and adjust their vocabulary and grammar as soon as they are not able to communicate properly, i.e. when they misuse a term or do not behave in the expected way. This approach has shown its capacity to model various such games in a systematic framework and to provide convincing explanations of linguistic phenomena. Such experiments have shown how agents can agree on a colour coding system or a grammatical case system.

Work has recently been developed for evolving alignments between ontologies. It can be used to repair alignments better than blind logical repair 2, to create alignments based on entity descriptions 15, to learn alignments from dialogues framed in interaction protocols 16, 19, to correct alignments until no error remains 18, 2, or even to start with no alignment at all 1. Each study provides new insights and opens perspectives.

We adapt this experimental strategy to knowledge representation 2. Agents use their knowledge, shared or private, to play games and, in case of failure, they use adaptation operators to modify it. We monitor the evolution of agent knowledge with respect to their ability to perform the game (success rate) and with respect to the properties satisfied by the resulting knowledge itself. Such properties may, for instance, be:

  • Agents converge to a common knowledge representation (a convergence property).
  • Agents converge towards different but compatible (logically consistent) knowledge (a logical epistemic property), or towards closer knowledge (a metric epistemic property).
  • Under a changing environment, agents whose operators preserve diverse knowledge recover faster from the changes than those whose operators converge towards a single representation (a differential property under environment change).

Our goal is to determine which operators are suitable for achieving desired properties in the context of a particular game.
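A very schematic sketch of such an experimental loop is given below (agents, games and operators are abstract placeholders; this is not the code of our simulator Lazy Lavender):

```java
import java.util.*;

// Schematic experimental cultural evolution loop: agents repeatedly and randomly
// play a game; on failure, the agent applies an adaptation operator to its knowledge.
public class CulturalEvolutionSketch {

    interface Agent {
        boolean play(Agent partner, int task);  // one game instance: success or failure
        void adapt(Agent partner, int task);    // adaptation operator applied on failure
    }

    // Runs the protocol and returns the monitored success rate; other properties
    // of the resulting knowledge (convergence, consistency, distance) would be
    // measured separately on the agents themselves.
    static double run(List<Agent> population, int tasks, int iterations, Random rnd) {
        int successes = 0;
        for (int i = 0; i < iterations; i++) {
            Agent a = population.get(rnd.nextInt(population.size()));
            Agent b;
            do {                                // pick a distinct partner
                b = population.get(rnd.nextInt(population.size()));
            } while (b == a);
            int task = rnd.nextInt(tasks);      // pick a random game
            if (a.play(b, task)) {
                successes++;
            } else {
                a.adapt(b, task);               // local adaptation only, no global coordination
            }
        }
        return successes / (double) iterations;
    }
}
```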

4 Application domains

Our work on data interlinking targets linked data offered in RDF on the web. It has found applications in thesaurus and bibliographical data interlinking (see previous years' reports).

mOeX's work on cultural knowledge evolution is not directly applied and rather aims at extracting general principles of knowledge evolution. However, we foresee its potential impact in the long term in fields such as smart cities, the internet of things or social robotics in which the knowledge acquired by autonomous agents will have to be adapted to changing situations.

5 Highlights of the year

Line van den Berg defended her PhD thesis “Cultural knowledge evolution in dynamic epistemic logic”. Her work begins to bridge cultural knowledge evolution and dynamic epistemic logic. On the one hand, it shows that there is actually a gap between logics modelling agent knowledge and actual implemented agent systems. On the other hand, it shows that there is a path for considering cultural evolution more generally from an epistemic logic standpoint. In addition, this work opened interesting perspectives in the study of signature awareness, which applies to epistemic logics independently of cultural evolution.

6 New software and platforms


6.1 New software

6.1.1 Lazylav

  • Name:
    Lazy lavender
  • Keywords:
    Reproducibility, Multi-agent, Simulation
  • Scientific Description:
    Lazy lavender aims at supporting mOeX's research on simulating knowledge evolution. It is not a general-purpose simulator. However, it features some methodological innovations in terms of facilitating the publication, recording, and replaying of experiments.
  • Functional Description:
    Lazy Lavender is a simulation environment for cultural knowledge evolution, i.e. for running randomised experiments with agents adjusting their knowledge while attempting to communicate. It can generate detailed reports and data from the experiments, as well as directions to repeat them.
  • Release Contributions:

    Lazy Lavender is continuously evolving and does not feature stable releases.

    Instead, use git hashes to determine which version is used in a simulation.

  • News of the Year:
    In 2021, we developed multi-task agents as well as agent capability to reproduce and transmit knowledge to their offspring.
  • URL:
  • Publications:
  • Contact:
    Jérôme Euzenat
  • Participants:
    Jérôme Euzenat, Yasser Bourahla, Iris Lohja, Fatme Danash, Irina Dragoste, Andreas Kalaitzakis

6.1.2 Alignment API

  • Keywords:
    Ontologies, Alignment, Ontology engineering, Knowledge representation
  • Scientific Description:

    The API itself is a Java description of tools for accessing the common format. It defines five main interfaces (OntologyNetwork, Alignment, Cell, Relation and Evaluator).

    We provide an implementation for this API which can be used for producing transformations, rules or bridge axioms independently from the algorithm that produced the alignment. The proposed implementation features:

      • a base implementation of the interfaces with all useful facilities,
      • a library of sample matchers,
      • a library of renderers (XSLT, RDF, SKOS, SWRL, OWL, C-OWL, SPARQL),
      • a library of evaluators (various generalisations of precision/recall, precision/recall graphs),
      • a flexible test generation framework that allows for generating evaluation data sets,
      • a library of wrappers for several ontology APIs,
      • a parser for the format.

    The API implementation provides an extended language for expressive alignments (EDOAL). EDOAL supports many types of restrictions inspired by description logics, as well as link keys. It is fully supported for parsing and serialising in XML. The implementation also provides other serialisers, to OWL and to SPARQL queries in particular.

    To instantiate the API, it is sufficient to refine the base implementation by implementing the align() method. By doing so, the new implementation benefits from all the services already provided by the base implementation (a schematic illustration of this pattern is given at the end of this software description).

  • Functional Description:
    Using ontologies is the privileged way to achieve interoperability among heterogeneous systems within the Semantic web. However, as the ontologies underlying two systems are not necessarily compatible, they may in turn need to be reconciled. Ontology reconciliation most of the time requires finding the correspondences between entities (e.g. classes, objects, properties) occurring in the ontologies. We call a set of such correspondences an alignment.
  • Release Contributions:

    See release notes.

    This is the last release made from the gforge svn repository. Since then, the Alignment API has been hosted on gitlab and versioned with git. It may well be the last formal release; clone from the repository instead.

    The Alignment API compiles in Java 11 (jars are still compiled in Java 8).

  • News of the Year:
    In 2021, the painful effort of transferring the Alignment API and its documentation from gforge to gitlab was completed. As a consequence, it is no longer planned to issue formal releases (one should instead check out from git and refer to the git hash).
  • URL:
  • Publications:
  • Contact:
    Jérôme Euzenat
  • Participants:
    Armen Inants, Chan Le Duc, Jérôme David, Jérôme Euzenat, Jérôme Pierson, Luz Maria Priego-Roche, Nicolas Guillouet
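Referring to the Scientific Description above, the following is a purely schematic illustration of the align() extension pattern; the classes below are simplified stand-ins written for this report, not the actual Alignment API types.

```java
import java.util.*;

// Simplified stand-ins for the pattern of refining a base implementation by
// implementing align(); none of these classes belongs to the Alignment API.
public class AlignSketch {

    record Entity(String uri, String label) {}
    record Correspondence(Entity e1, Entity e2, String relation, double confidence) {}

    // Stand-in for the base implementation providing the shared services.
    static abstract class BaseAlignmentProcess {
        List<Entity> onto1 = new ArrayList<>(), onto2 = new ArrayList<>();
        List<Correspondence> alignment = new ArrayList<>();

        void addCorrespondence(Entity e1, Entity e2, String rel, double conf) {
            alignment.add(new Correspondence(e1, e2, rel, conf));
        }

        abstract void align(Properties parameters);  // the only method to refine
    }

    // A naive matcher: equivalence between entities bearing identical labels.
    static class TrivialLabelMatcher extends BaseAlignmentProcess {
        @Override
        void align(Properties parameters) {
            for (Entity e1 : onto1)
                for (Entity e2 : onto2)
                    if (e1.label().equalsIgnoreCase(e2.label()))
                        addCorrespondence(e1, e2, "=", 1.0);
        }
    }
}
```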

6.1.3 LinkEx

  • Keywords:
    LOD - Linked open data, Data interlinking, Formal concept analysis
  • Functional Description:
    LinkEx implements link key candidate extraction with our initial algorithms, formal concept analysis or pattern structures. It can extract link key expressions with inverse and composed properties and generate compound link keys. Extracted link key expressions may be evaluated using various measures, including our discriminability and coverage. It can also evaluate them according to an input link sample. The set of candidates can be rendered within the Alignment API's EDOAL language or in dot.
  • URL:
  • Publications:
  • Author:
    Jérôme David
  • Contact:
    Jérôme David

7 New results

7.1 Cultural knowledge evolution

In 2021, we built on previously developed work to investigate knowledge transmission in agent societies and to experiment with agents able to perform several tasks.

We also published and extended our theoretical study of cultural alignment repair through dynamic epistemic logics.

7.1.1 Knowledge transmission across agent generations

Participants: Manuel Atencia [Correspondent], Yasser Bourahla [Correspondent], Jérôme Euzenat.

Last year, we designed a two-stage experiment in which (1) agents learn ontologies based on examples of decision making, and (2) they then interactively compare their decisions on different objects and adapt their ontologies when they disagree. We showed that agents indeed reduce interaction failure, that most of the time they improve the accuracy of their knowledge about the environment, and that they do not necessarily opt for the same ontology 8. In cultural evolution, this is considered as horizontal knowledge transmission between agents. Other work has shown that variation generated through vertical, or inter-generation, transmission allows agents to exceed that level 14. Such results have been obtained under drastic selection of the agents allowed to transmit their knowledge, or by introducing artificial noise during transmission. In order to study the impact of such measures on the quality of transmitted knowledge, we combined the settings of these two previous works and relaxed their assumptions (no strong selection of teachers, no fully correct seed, no introduction of artificial noise). Under this setting, we confirmed that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also showed that horizontal transmission is able to compensate for the lack of parent selection if it is maintained for long enough.

This work is part of the PhD thesis of Yasser Bourahla.

7.1.2 Pluripotent agents

Participants: Jérôme Euzenat [Correspondent], Andreas Kalaitzakis [Correspondent].

So far, work on cultural knowledge evolution has concentrated on agents performing a single task 8. This is not a natural condition; thus, we are developing agents able to carry out several tasks and to adapt their knowledge with the same protocol as before (§7.1.1). We studied the impact of having different tasks on the knowledge elaborated by agents performing such tasks. We showed that agents tackling several tasks are more successful than their single-task counterparts: the higher the number of tasks, the higher the average accuracy demonstrated by a population of agents.

This work is part of the PhD thesis of Andreas Kalaitzakis.

7.1.3 Modelling cultural evolution in dynamic epistemic logic

Participants: Manuel Atencia [Correspondent], Jérôme Euzenat [Correspondent], Line van den Berg.

Ontology alignments enable agents to communicate while preserving heterogeneity in their knowledge. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. The Alignment Repair Game (ARG) has been proposed for agents to simultaneously communicate and repair their alignments through adaptation operators when communication failures occur 2. ARG has been evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant could not be established by experiments. We introduced Dynamic Epistemic Ontology Logic (DEOL) to answer these questions. It allows us (1) to express the ontologies and alignments used via a faithful translation from ARG to DEOL, (2) to model the ARG adaptation operators as dynamic modalities and (3) to formally define and establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG 5.

This work is part of the PhD thesis of Line van den Berg defended in October 2021 12.

7.1.4 Populations

Participants: Manuel Atencia [Correspondent], Yidong Huang [Correspondent], Jérôme Euzenat [Correspondent].

The population version of the alignment repair game involves several populations of agents using the same ontologies. At given intervals, agents of the same population compare their alignments and build a consensus that they have to integrate with their own alignments. Initially, it was hypothesised that such knowledge transmission could help agents achieve faster convergence, but the results suggest otherwise. Agents regularly replace their (repaired) local alignment by the (conservative) population consensus, which slows down the convergence significantly. We have proposed three different ways in which agents can be critical towards the population consensus: they adopt the consensus either based on a probability law, based on the distance between the consensus and their local alignment, or based on the memory that the agent has of discarded correspondences. We tested all three approaches and found that agents can achieve faster convergence and produce alignments of comparable quality.
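Schematically, the three adoption strategies can be rendered as follows (the data structures, distance and threshold are illustrative choices made for this sketch, not the actual experiment code):

```java
import java.util.*;

// Schematic rendering of three ways an agent may be critical towards the
// population consensus (illustrative placeholders, not the actual experiment code).
public class ConsensusAdoptionSketch {

    enum Strategy { PROBABILITY, DISTANCE, MEMORY }

    static boolean adoptConsensus(Strategy strategy,
                                  Set<String> local,      // the agent's repaired local alignment
                                  Set<String> consensus,  // the population consensus
                                  Set<String> discarded,  // correspondences the agent has discarded
                                  double threshold, Random rnd) {
        switch (strategy) {
            case PROBABILITY:
                // Adopt the consensus only with a given probability.
                return rnd.nextDouble() < threshold;
            case DISTANCE: {
                // Adopt it only if it is close enough to the local alignment
                // (here, Jaccard distance between the two sets of correspondences).
                Set<String> inter = new HashSet<>(local);
                inter.retainAll(consensus);
                Set<String> union = new HashSet<>(local);
                union.addAll(consensus);
                double distance = union.isEmpty() ? 0.0 : 1.0 - inter.size() / (double) union.size();
                return distance <= threshold;
            }
            case MEMORY:
                // Refuse a consensus reintroducing correspondences the agent
                // remembers having discarded.
                return Collections.disjoint(consensus, discarded);
            default:
                return false;
        }
    }
}
```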

7.1.5 Experiment reproducibility

Participants: Yasser Bourahla [Correspondent], Clarisse Deschamps [Correspondent], Jérôme Euzenat [Correspondent].

We have maintained a registry of experiment descriptions since the beginning of the experimental cultural knowledge evolution activity. This registry was very dependent on gforge.inria.fr in order to store software (svn/git), data, experiment descriptions and analysis (wiki). The shutdown of gforge forced us to reconsider this approach. We built a new independent experimental workflow relying on git (for versioning and accountability), jupyter (for result analysis and experiment display), docker (for portable processing) and zenodo (for long-term storage of large experiment results). All previous experiments have been transferred and are currently being repurposed as jupyter notebooks. The process and results are visible on the https://sake.re web site.

7.2 Link keys

Link key exploration continued along two directions (§3.2):

  • Extracting link keys;
  • Reasoning with link keys.

As shown in §7.2.3, it is now possible to use both directions jointly.

7.2.1 Strategies for identifying high-quality link keys

Participants: Jérôme David [Correspondent].

Link keys correspond to closed sets of a specific Galois connection and can be discovered thanks to a concept analysis algorithm (FCA, RCA or pattern structures). Given a pattern concept lattice whose intents are link key candidates, we aim at identifying the most relevant candidates with respect to adapted quality measures. To achieve this task, we introduced the Sandwich algorithm, which is based on a combination of dual bottom-up and top-down strategies for traversing the pattern concept lattice 7. The output of the Sandwich algorithm is a partially ordered set of the most relevant link key candidates.

We also introduced the notion of non-redundant link key candidate 6. Some link keys are redundant from an extensional standpoint when the closure of the sameAs relation on the links they induce is the same as that of another link key.

This work is part of the PhD thesis of Nacira Abbas, co-supervised by Amedeo Napoli (LORIA).

7.2.2 Fixed-point semantics for a reduced version of relational concept analysis

Participants: Jérôme Euzenat [Correspondent].

We have used relational concept analysis (RCA) to extract link keys. This led us to notice that when there exist circular dependencies between objects, it extracts a unique stable concept lattice family grounded on the initial formal contexts. However, other stable families may exist whose structure depends on the same relational context. These may be useful in applications that need to extract a richer structure than the minimal grounded one. This year, we considered this issue in a reduced version of RCA, which only retains the relational structure. We redefined the semantics of RCA in terms of concept lattice families closed by a fixed-point operation induced by this relational structure. We showed that these families admit a least and greatest fixed point and that the well-grounded RCA semantics is characterised by the least fixed point 9. We then characterised the interesting lattices as the self-supported fixed points. We provided an algorithm to compute the greatest fixed point (dual to the RCA algorithm) and discussed strategies to extract all self-supported fixed points 13. This approach also applies to RCA in general, though we still have to provide the proofs.
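The existence of such extreme fixed points is an instance of the standard Knaster–Tarski result for monotone operators on complete lattices, recalled below; the definition of the specific operator and the corresponding proofs for RCA are given in 9, 13.

```latex
% Knaster-Tarski: a monotone operator F on a complete lattice (L, \leq)
% admits a least and a greatest fixed point:
\[
\mathrm{lfp}(F) = \bigwedge \{\, x \in L \mid F(x) \leq x \,\}
\qquad
\mathrm{gfp}(F) = \bigvee \{\, x \in L \mid x \leq F(x) \,\}
\]
```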

7.2.3 An effective algorithm for reasoning with link keys

Participants: Manuel Atencia [Correspondent], Khadija Jradeh [Correspondent].

Link keys can be thought of as axioms in a description logic. It is thus possible to reason with link keys in combination with ontologies and alignments. In previous years, we designed a worst-case optimal tableau algorithm, based on compressed tableaux and directed by rule application, for reasoning in the description logic 𝒜ℒ𝒞 with individuals and link keys. This algorithm is sound and complete, and has exponential complexity. A proof of concept for this algorithm has been implemented and evaluations were carried out. These evaluations show the importance of reasoning for the task of data interlinking. In particular, it is possible to use the reasoner, during link key extraction, to discard link keys introducing inconsistencies.

This work is part of the PhD thesis of Khadija Jradeh, co-supervised by Chan Le Duc (LIMICS).

8 Partnerships and cooperations

8.1 European initiatives

8.1.1 Horizon Europe

Tailor

mOeX participates in the Tailor network, especially in work package 6 “Social AI: learning and reasoning in social contexts”:

Participants: Jérôme Euzenat [Correspondent], Yasser Bourahla, Andreas Kalaitzakis.

  • Program:
    H2020 ICT-48
  • Title:
    Foundations of Trustworthy AI integrating Learning, Optimisation and Reasoning
  • Partner Institution(s):
    • Linköping University, Sweden (coordinator)
    • Université Grenoble Alpes, France
    • INRIA, France
    • many others
  • Duration:
    2020–2024
  • Web site:
  • Abstract:
    The purpose of TAILOR is to build the capacity of providing the scientific foundations for Trustworthy AI in Europe by developing a network of research excellence centres leveraging and combining learning, optimization and reasoning.

8.2 National initiatives

8.2.1 Elker

Participants: Manuel Atencia [Correspondent], Jérôme David, Jérôme Euzenat, Khadija Jradeh.

mOeX coordinates the Elker project:

  • Program:
    ANR-PRC
  • Title:
    Extending link keys: extraction and reasoning
  • Partner Institution(s):
    • Université Grenoble Alpes (coordinator)
    • INRIA Nancy Lorraine (Orpailleur)
    • Université Paris 13
  • Duration:
    October 2017–September 2022
  • Web site:
  • Abstract:
    The goal of Elker is to extend the foundations and algorithms of link keys (see §3.2) in two complementary ways: extracting link keys automatically from datasets and reasoning with link keys.

8.2.2 Miai

Participants: Manuel Atencia [Correspondent], Jérôme David [Correspondent], Jérôme Euzenat [Correspondent], Yasser Bourahla, Andreas Kalaitzakis, Line van den Berg.

mOeX holds the MIAI Knowledge communication and evolution chair

  • Program:
    ANR-3IA
  • Title:
    Multidisciplinary institute in artificial intelligence
  • Partner Institution(s):
    • Université Grenoble Alpes (coordinator)
  • Duration:
    July 2019–December 2023
  • Web site:
  • Abstract:
    The MIAI Knowledge communication and evolution chair aims at understanding and developing mechanisms for seamlessly improving knowledge (see §3.3). It studies the evolution of knowledge in a society of people and AI systems by applying evolution theory to knowledge representation.

9 Dissemination

Participants: Manuel Atencia, Jérôme David, Jérôme Euzenat, Line van den Berg.

9.1 Promoting scientific activities

Member of organizing committees
  • Jérôme Euzenat was an organiser of the 16th Ontology Matching workshop of the 20th ISWC, held online, 2021 (with Pavel Shvaiko, Ernesto Jiménez-Ruiz, Cássia Trojahn dos Santos and Oktie Hassanzadeh) 11
  • Manuel Atencia was a member of the organizing committee of “Journée Thématique EGC & IA: Évolution et dynamique des connaissances formelles”.

9.1.1 Scientific events: selection

Member of conference program committees
  • Jérôme Euzenat has been programme committee member of the “International Joint Conference on Artificial Intelligence (ijcai)”.
  • Jérôme Euzenat has been programme committee member of the “Web Conference (www)”.
  • Jérôme David has been programme committee member of the “International semantic web conference (iswc)”.
  • Jérôme Euzenat has been programme committee member of the “International Conference on Autonomous Agents and Multi-Agent Systems (aamas)”.
  • Manuel Atencia and Jérôme David have been programme committee members of the “European semantic web conference (eswc)”.
  • Jérôme David has been programme committee member of the “Extraction et Gestion des connaissances (egc)”.
  • Jérôme Euzenat has been programme committee member of the “Journées Françaises d'intelligence artificielle fondamentale (jiaf)”.

9.1.2 Journal

Member of editorial boards
  • Jérôme Euzenat is member of the editorial board of Journal of web semantics (area editor), Journal on data semantics (associate editor) and the Semantic web journal.
Reviewer - reviewing activities
  • Jérôme Euzenat was a reviewer for Autonomous agents and multi-agent systems.
  • Jérôme David was a reviewer for Applied ontology.

9.1.3 Invited talks

  • Jérôme Euzenat participated in the “Autonomous agents on the web” Dagstuhl seminar (Online, 2021-02-14-19) 10
  • Line van den Berg gave a Logic and Interactive Rationality Seminar (LIRa, Amsterdam (NL), 2021-03-25) on “Multi-Agent Knowledge Evolution in Dynamic Epistemic Logic”.
  • Jérôme Euzenat gave an invited talk to the 5ème Journée AFIA-EGC Extraction et gestion des connaissances et intelligence artificielle (Grenoble (FR), 2021-05-18) on “Knowledge evolution: a computational cultural knowledge evolution perspective”.
  • Line van den Berg gave a Bern Logic Seminar (Bern (CH), 2021-10-14) on “Cultural Knowledge Evolution in Dynamic Epistemic Logic”.
  • Jérôme Euzenat gave an invited talk to the Ontocommons online workshop: “Ontology commons addressing challenges of the industry 5.0 transition” (Online, 2021-11-02) on “Introduction to ontology matching and alignments”.

9.1.4 Leadership within the scientific community

9.1.5 Research administration

  • Jérôme Euzenat is member of the COS (Scientific Orientation Committee) of INRIA Grenoble Rhône-Alpes
  • Jérôme David is member of the "Commission du développement technologique" of INRIA Grenoble Rhône-Alpes
  • Jérôme David is member of the LIG laboratory council

9.2 Teaching - Supervision - Juries

9.2.1 Teaching

Responsibilities
  • Jérôme David is coordinator of the Master “Mathematiques et informatiques appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
  • Manuel Atencia was co-responsible for the 2nd year of the Master “Mathematiques et informatiques appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
  • Manuel Atencia was part of the committee granting IDEX scholarships for the Master “Mathematiques et informatiques appliquées aux sciences humaines et sociales”;
  • Manuel Atencia was coordinator of the “Web, Informatique et Connaissance” option of the M2 Master “Mathematiques et informatiques appliquées aux sciences humaines et sociales” (Univ. Grenoble Alpes);
Lectures
  • Licence: Jérôme David, Programmation Fonctionnelle, L1 MIASHS, 19.5h/y, UGA, France
  • Licence: Yasser Bourahla, Algorithmique et programmation Python, 55h/y, L1 Physique, Chimie, Mécanique, Mathématiques, UGA, France
  • Licence: Jérôme David, Algorithmique et programmation par objets, 70h/y, L2 MIASHS, UGA, France
  • Licence: Jérôme David, Système, L3 MIASHS, 18h/y, UGA, France
  • Licence: Manuel Atencia, Introduction aux technologies du Web, 84h/y, L3 MIASHS, UGA, France
  • Master: Jérôme David, Programmation Java 2, 30h/y, M1 MIASHS, UGA, France
  • Master: Jérôme David, JavaEE, 30h/y, M2 MIASHS, UGA, France
  • Master: Jérôme David, Web sémantique, 3h/y, M2 MIASHS, UGA, France
  • Master: Manuel Atencia, Formats de données du web, 42h/y, M1 MIASHS, UGA, France
  • Master: Manuel Atencia, Introduction à la programmation web, 42h/y, M1 MIASHS, UGA, France
  • Master: Manuel Atencia, Web sémantique, 27h/y, M2 MIASHS, UGA, France
  • Master: Jérôme David, Stage de programmation, 10h/y, M2 MIASHS, UGA, France
  • Master: Jérôme Euzenat, Semantics of distributed knowledge, 22.5h/y, M2R MoSIG, UGA, France

9.2.2 Supervision

  • Nacira Abbas, “Link key extraction and relational concept analysis”, in progress since 2018-10-01 (Jérôme David and Amedeo Napoli)
  • Khadija Jradeh, “Reasoning with link keys”, in progress since 2018-10-01 (Manuel Atencia and Chan Le Duc)
  • Line van den Berg, “Knowledge Evolution in Agent Populations”, defended 2021-10-29 (Manuel Atencia and Jérôme Euzenat) 12
  • Yasser Bourahla, “Evolving ontologies through communication”, in progress since 2019-10-01 (Manuel Atencia and Jérôme Euzenat)
  • Andreas Kalaitzakis, “Effects of collaboration and specialisation on agent knowledge evolution”, in progress since 2020-10-01 (Jérôme Euzenat)
  • Alban Flandin, “The benefits of forgetting knowledge”, stopped 2021-09-30 (Jérôme Euzenat and Yves Demazeau)

9.2.3 Juries

  • Jérôme Euzenat was a panel member of the habilitation defense of Antoine Zimmermann (Université Jean Monnet – Saint-Étienne).
  • Jérôme Euzenat was panel chair of the PhD defense of Muideen Lawal (Université Grenoble Alpes).
  • Jérôme Euzenat was panel chair of the PhD defense of Mathieu Viry (Université Grenoble Alpes).

9.3 Popularization

9.3.1 Interventions

  • All members of mOeX introduced the Class? game to high-school students (tenth and eleventh graders) during the Fête de la science (Science fair), Montbonnot (FR), 2021-10-05–07.

10 Scientific production

10.1 Major publications

  • 1 Jérôme Euzenat. Crafting ontology alignments from scratch through agent communication. In: PRIMA 2017: Principles and Practice of Multi-Agent Systems, Nice, France, Springer Verlag, 2017, 245-262.
  • 2 Jérôme Euzenat. Interaction-based ontology alignment repair with expansion and relaxation. In: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), 2017, 185-191.
  • 3 Jérôme Euzenat and Pavel Shvaiko. Ontology matching. Springer-Verlag, Heidelberg (DE), 2013. URL: http://book.ontologymatching.org

10.2 Publications of the year

International journals

  • 4 Manuel Atencia, Jérôme David and Jérôme Euzenat. On the relation between keys and link keys for data interlinking. Semantic Web – Interoperability, Usability, Applicability 12(4), 2021, 547-567.
  • 5 Line van den Berg, Manuel Atencia and Jérôme Euzenat. A logical model for the ontology alignment repair game. Autonomous Agents and Multi-Agent Systems 35(2), 2021, 1-32.

International peer-reviewed conferences

  • 6 Nacira Abbas, Alexandre Bazin, Jérôme David and Amedeo Napoli. Non-Redundant Link Keys in RDF Data: Preliminary Steps. In: FCA4AI 2021 - 9th Workshop on What can FCA do for Artificial Intelligence?, vol. 2972, Montréal/Online, Canada, August 2021, 1-6.
  • 7 Nacira Abbas, Alexandre Bazin, Jérôme David and Amedeo Napoli. Sandwich: An Algorithm for Discovering Relevant Link Keys in an LKPS Concept Lattice. In: ICFCA 2021 - 16th international conference on formal concept analysis, LNCS 12733, Strasbourg/Virtual, France, Springer Verlag, 2021, 243-251.
  • 8 Yasser Bourahla, Manuel Atencia and Jérôme Euzenat. Knowledge improvement and diversity under interaction-driven adaptation of learned ontologies. In: AAMAS 2021 - 20th ACM international conference on Autonomous Agents and Multi-Agent Systems, London, United Kingdom, 2021, 242-250.
  • 9 Jérôme Euzenat. Fixed-point semantics for barebone relational concept analysis. In: Proc. 16th international conference on formal concept analysis (ICFCA), Strasbourg, France, Springer Verlag, 2021, 20-37.
  • 10 Jérôme Euzenat. The web as a culture broth for agents and people to grow knowledge. In: Proc. Dagstuhl seminar on Autonomous agents on the web, Wadern, Germany, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2021, 40-41.

Edition (books, proceedings, special issue of a journal)

  • 11 Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh and Cassia Trojahn (eds). Ontology Matching 2021: Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC 2021). CEUR Workshop Proceedings 3063, online, CEUR-WS.org, 2021, 1-218.

Doctoral dissertations and habilitation theses

  • 12 Line van den Berg. Cultural knowledge evolution in dynamic epistemic logic. PhD thesis, Université Grenoble Alpes, October 2021.

Reports & preprints

  • 13 Jérôme Euzenat, Manuel Atencia, Jérôme David, Amedeo Napoli and Jérémy Vizzini. Relational concept analysis for circular link key extraction. Report, ELKER, December 2021, 1-57.

10.3 Cited publications

  • 14 Alberto Acerbi and Domenico Parisi. Cultural Transmission Between and Within Generations. Journal of Artificial Societies and Social Simulation 9(1), 2006, 1-16.
  • 15 Michael Anslow and Michael Rovatsos. Aligning experientially grounded ontologies using language games. In: Proc. 4th international workshop on graph structure for knowledge representation, Buenos Aires (AR), 2015, 15-31.
  • 16 Manuel Atencia and Marco Schorlemmer. An interaction-based approach to semantic alignment. Journal of web semantics 13(1), 2012, 131-147.
  • 17 Robert Axelrod. The dissemination of culture: a model with local convergence and global polarization. Journal of conflict resolution 41(2), 1997, 203-226.
  • 18 Paula Chocron and Marco Schorlemmer. Attuning ontology alignments to semantically heterogeneous multi-agent interactions. In: Proc. 22nd European conference on artificial intelligence (ECAI), The Hague (NL), 2016, 871-879.
  • 19 Paula Chocron and Marco Schorlemmer. Vocabulary alignment in openly specified interactions. Journal of artificial intelligence research 68, 2020, 69-107.
  • 20 Alex Mesoudi, Andrew Whiten and Kevin Laland. Towards a unified science of cultural evolution. Behavioral and brain sciences 29(4), 2006, 329-383.
  • 21 Michael Spranger. The evolution of grounded spatial language. Language Science Press, Berlin (DE), 2016.
  • 22 Luc Steels. Experiments in cultural language evolution. John Benjamins, Amsterdam (NL), 2012.