Section: New Results

Cultural knowledge evolution

Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassify objects from one ontology to the other. Such alignments may be provided by dedicated algorithms [9], but their accuracy is far from satisfactory. Yet agents have to proceed. They can take advantage of their experience to evolve alignments: upon communication failure, they adapt the alignments so as not to reproduce the same mistake.

Such repair experiments have been performed [3] and revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies.
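The repair games described above can be illustrated with a minimal sketch. The two toy "ontologies", the feature-based classifiers and the initial correspondences below are all illustrative assumptions, not the actual games of [3]; the sketch only shows the delete-on-failure repair loop: when the hearer's own classification contradicts the correspondence used, that correspondence is discarded.

```python
# Minimal sketch (hypothetical ontologies and objects) of an alignment
# repair interaction game: agent A names its class for an object, the
# alignment translates it, and agent B checks it against its own class.
import random

# Toy "ontologies": each classifies an object by one feature.
def classify_a(obj):
    return "A:black" if obj["colour"] == "black" else "A:white"

def classify_b(obj):
    return "B:square" if obj["shape"] == "square" else "B:circle"

# Shared alignment: a set of (class_a, class_b) correspondences,
# deliberately seeded with incorrect ones.
alignment = {("A:black", "B:square"), ("A:white", "B:square")}

def play_game(alignment, obj):
    """One interaction; on failure, delete the culprit correspondence."""
    ca, cb = classify_a(obj), classify_b(obj)
    for (a, b) in list(alignment):
        if a == ca:                        # correspondence applies
            if b != cb:                    # communication failure
                alignment.discard((a, b))  # repair: delete it
                return False
            return True
    return True                            # nothing applicable, no failure

random.seed(0)
objs = [{"colour": random.choice(["black", "white"]),
         "shape": random.choice(["square", "circle"])} for _ in range(50)]
for obj in objs:
    play_game(alignment, obj)
# After the games, every correspondence contradicted by an observed
# object has been removed: replaying the same objects raises no failure.
```

Since the sketch only ever deletes correspondences, it converges to a failure-free state on the observed objects, which is the weakest form of the convergence property reported in [3].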

Expansion and relaxation modalities for cultural alignment repair

Participant: Jérôme Euzenat [correspondent].

We repeated these experiments and, using new measures, showed that the quality of previous results had been underestimated. We introduced new adaptation operators that improve on those previously considered. We also allowed agents to go beyond the initial operators in two ways [8]: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) agents still converge to a state in which no mistake occurs; (2) they achieve results far closer to the correct alignments than previously found; (3) they again reach 100% precision and coherent alignments.
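The two modalities can be sketched as operators on a failing correspondence ⟨a, b⟩. The class hierarchy and the operator bodies below are illustrative assumptions, not the operators of [8]: "relax" replaces the correspondence by a less precise one using a superclass of b, while "expand" discards it and generates a new correspondence towards the class the hearer actually observed.

```python
# Hypothetical sketch of relaxation and expansion on a failing
# correspondence (a, b); the hierarchy on B's side is a toy example.
parent_b = {"B:square": "B:polygon", "B:circle": "B:shape",
            "B:polygon": "B:shape"}

def relax(alignment, a, b):
    """Replace a failing correspondence by a less precise one,
    climbing one step up B's class hierarchy."""
    alignment.discard((a, b))
    sup = parent_b.get(b)
    if sup is not None:
        alignment.add((a, sup))

def expand(alignment, a, b, observed_b):
    """Discard the failing correspondence and generate a new one
    towards the class the hearer actually observed."""
    alignment.discard((a, b))
    alignment.add((a, observed_b))

alignment = {("A:black", "B:square")}
relax(alignment, "A:black", "B:square")
# alignment is now {("A:black", "B:polygon")}: weaker, hence less
# likely to cause the same failure again.
```

Relaxation trades precision for coherence; expansion, by contrast, keeps the alignment informative by replacing the discarded correspondence with one backed by the current observation.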

Starting with empty alignments in cultural alignment repair

Participant: Jérôme Euzenat [correspondent].

The results of §4.1.1 suggest that, with the expansion modality, agents could develop alignments from scratch. We explored the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them, as they have nothing to repair. Hence, we introduced the capability for agents to risk adding new correspondences when no existing one is useful [7]. We compared and discussed the results provided by this modality and showed that, thanks to this generative capability, agents reach more accurate alignments than without it. When starting with empty alignments, the alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication. Past an initial phase in which figures reflect the initial conditions, the evolution curves of both approaches (random and empty alignments) superimpose almost exactly. This supports, a posteriori, the experiments with random initialisation.
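The generative capability can be sketched as one extra branch in the interaction game: when no correspondence applies to the speaker's class, the agent risks creating one from the single observation at hand. The function and class names below are illustrative assumptions, not the protocol of [7].

```python
# Hypothetical sketch: starting from an empty alignment, agents
# generate correspondences when none applies, and repair (here, by
# deletion) those that later cause failures.
def play_generative(alignment, ca, cb):
    """ca: class asserted by the speaker; cb: class observed by the
    hearer. Use an applicable correspondence if any, else risk one."""
    for (a, b) in list(alignment):
        if a == ca:
            if b != cb:
                alignment.discard((a, b))  # usual repair on failure
            return
    alignment.add((ca, cb))                # generation: risk a new one

alignment = set()
play_generative(alignment, "A:black", "B:square")
# a first correspondence has been created from scratch
```

A correspondence risked on a misleading observation is not fatal: a later contradicting interaction triggers the ordinary repair and removes it, which is why the curves for empty and random initialisation can converge to the same quality level.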