Section: New Results
Data interlinking
The web of data uses semantic web technologies to publish data on the web so that they can be interpreted and connected together. It is thus important to be able to establish links between these data, both for the web of data and for the semantic web that it helps feed. We consider this problem from several perspectives.
Interlinking cross-lingual RDF data sets
Participants : Tatiana Lesnikova [Correspondent] , Jérôme David, Jérôme Euzenat.
RDF data sets are being published with labels that may be expressed in different languages. Even systems based on graph structure ultimately rely on anchors expressed in natural language. In this context, data interlinking requires specific approaches to tackle cross-lingualism. We proposed a general framework for interlinking RDF data in different languages and implemented two approaches: one based on machine translation, the other taking advantage of multilingual resources such as BabelNet. We evaluated variations of these two settings on English (DBpedia) and Chinese (XLore) data sets. Both approaches demonstrated promising results [20]. We will conduct more experiments covering other language pairs and larger corpora.
This work is part of the PhD thesis of Tatiana Lesnikova, developed in the Lindicle project (§ 8.1.2).
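To make the translation-based setting concrete, the following sketch links resources by translating their labels into a common language and comparing them with a string similarity. It is a minimal illustration: the toy dictionary stands in for a real machine-translation service, and all names and thresholds are assumptions rather than the implementation evaluated in [20].

    from difflib import SequenceMatcher

    # Stand-in for a machine-translation service; a real system would call
    # an MT API here (assumption, not the service used in the experiments).
    TOY_ZH_EN = {"巴黎": "Paris", "伦敦": "London"}

    def translate_zh_en(label):
        return TOY_ZH_EN.get(label, label)

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def interlink(en_resources, zh_resources, threshold=0.9):
        """Propose owl:sameAs links between English and Chinese resources."""
        links = []
        for en_uri, en_label in en_resources.items():
            for zh_uri, zh_label in zh_resources.items():
                if similarity(en_label, translate_zh_en(zh_label)) >= threshold:
                    links.append((en_uri, "owl:sameAs", zh_uri))
        return links

    print(interlink({"dbpedia:Paris": "Paris"}, {"xlore:巴黎": "巴黎"}))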
Interactive learning of interlinking patterns
Participants : Zhengjie Fan [Correspondent] , Jérôme Euzenat.
We proposed an interlinking method which, from class correspondences between data source ontologies, uses k-means or k-medoids clustering to produce property correspondences. It then generates a first interlinking pattern, which combines a link key with similarity measures. Such patterns can be transformed into a Silk script for generating an initial link set. A sample of these links is assessed by users as either correct or incorrect, and these assessments are taken as positive and negative examples by an extension of the disjunctive version space method, which finds an interlinking pattern that accounts for both the correct and the incorrect links. Experiments show that, with only 1% of links sampled, this method reaches an F-measure over 96%. The F-measure converges quickly and improves by nearly 10% over comparable approaches [19].
This work is part of the PhD thesis of Zhengjie Fan [4], co-supervised with François Scharffe (LIRMM), and developed in the Datalift project (§ 8.1.1).
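As a rough illustration of such patterns, the sketch below encodes an interlinking pattern combining a link key (equality on one property) with a similarity condition, and naively refines the similarity threshold from a user-labelled sample of links. The data model and helper names are assumptions made for exposition; the actual method of [19] learns richer disjunctive patterns.

    from difflib import SequenceMatcher

    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def matches(e1, e2, key, prop, threshold):
        """Pattern: equal values on the key property AND similar values on prop."""
        return e1[key] == e2[key] and sim(e1[prop], e2[prop]) >= threshold

    def refine_threshold(labelled, key, prop):
        """Pick the threshold maximising F-measure on a user-labelled sample
        of (entity1, entity2, correct?) triples."""
        best_t, best_f = 0.0, -1.0
        for t in (i / 20 for i in range(21)):
            tp = sum(1 for e1, e2, ok in labelled if matches(e1, e2, key, prop, t) and ok)
            fp = sum(1 for e1, e2, ok in labelled if matches(e1, e2, key, prop, t) and not ok)
            fn = sum(1 for e1, e2, ok in labelled if not matches(e1, e2, key, prop, t) and ok)
            f = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
            if f > best_f:
                best_t, best_f = t, f
        return best_t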
An iterative import-by-query approach to data interlinking
Participants : Manuel Atencia Arcas [Correspondent] , Mustafa Al-Bakri, Steffen Lalande, Marie-Christine Rousset.
We modelled the problem of data interlinking as a reasoning problem over possibly decentralised data and described an import-by-query algorithm that alternates steps of sub-query rewriting and of tailored querying of data sources. It imports only data that are as specific as possible for inferring or contradicting target sameAs assertions. Experiments conducted on a real-world data set demonstrated the practical feasibility and usefulness of this approach for data interlinking and disambiguation purposes.
This work is part of the PhD thesis of Mustafa Al-Bakri, co-supervised by Manuel Atencia and Marie-Christine Rousset, developed in the Qualinca project.
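A minimal, runnable rendering of the import-by-query idea, under the simplifying assumption that sameAs can be decided by a single key rule: the target assertion is rewritten into fact queries over the key properties, only those facts are fetched from each source, and the assertion is then checked locally. The sources, the key rule, and the data are toy assumptions, not the Qualinca algorithm itself.

    def import_by_query(a, b, key_props, sources):
        local = set()                      # facts imported so far
        for prop in key_props:             # rewriting: sameAs(a, b) reduces to
            for subject in (a, b):         # fact queries (subject, prop, ?)
                for source in sources:     # tailored querying of each source
                    local |= {t for t in source
                              if t[0] == subject and t[1] == prop}
        def values(s, p):
            return {o for (s2, p2, o) in local if s2 == s and p2 == p}
        # sameAs is inferred when a and b agree on every key property
        return all(values(a, p) and values(a, p) == values(b, p)
                   for p in key_props)

    # Toy decentralised sources as sets of (subject, property, value) triples.
    src1 = {("ex:a", "name", "Ada Lovelace"), ("ex:a", "birth", "1815")}
    src2 = {("ex:b", "name", "Ada Lovelace"), ("ex:b", "birth", "1815")}
    print(import_by_query("ex:a", "ex:b", ["name", "birth"], [src1, src2]))  # True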
Link key extraction
Participants : Jérôme David [Correspondent] , Manuel Atencia Arcas, Jérôme Euzenat.
Ontologies do not necessarily come with key descriptions, and never with link key assertions. Keys can be extracted from data by assuming that keys holding for specific data sets may hold universally. We have extended such a classical key extraction technique to extract weak link keys. We designed an algorithm that first generates a small set of candidate link keys, and we described this approach in the framework of formal concept analysis [13]. Depending on whether some valid or invalid links are known, we defined supervised and unsupervised measures for selecting the appropriate link keys. The supervised measures approximate precision and recall on a sample, while the unsupervised measures are the ratio of pairs of entities a link key covers (coverage) and the ratio of entities from the same data set it identifies (discrimination). We have experimented with these techniques, showing the accuracy and robustness of both approaches [12].
This work has been developed partly in the Lindicle project (§ 8.1.2).
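The two unsupervised measures can be rendered concretely for a candidate link key given as a set of property pairs, over two data sets represented as dictionaries. These definitions are a simplified reading of the measures of [12], not their exact formulation.

    def links(link_key, d1, d2):
        """Pairs of entities agreeing on every property pair of the link key."""
        return {(e1, e2) for e1, r1 in d1.items() for e2, r2 in d2.items()
                if all(r1.get(p1) is not None and r1.get(p1) == r2.get(p2)
                       for p1, p2 in link_key)}

    def coverage(link_key, d1, d2):
        """Ratio of entities involved in at least one generated link."""
        ls = links(link_key, d1, d2)
        covered = {e1 for e1, _ in ls} | {e2 for _, e2 in ls}
        return len(covered) / (len(d1) + len(d2))

    def discrimination(link_key, d1, d2):
        """Close to 1 when each linked entity appears in a single link
        (simplified reading; the exact measure in [12] differs)."""
        ls = links(link_key, d1, d2)
        if not ls:
            return 0.0
        return min(len({e1 for e1, _ in ls}), len({e2 for _, e2 in ls})) / len(ls)

    d1 = {"a1": {"title": "Hamlet"}, "a2": {"title": "Macbeth"}}
    d2 = {"b1": {"titre": "Hamlet"}, "b2": {"titre": "Othello"}}
    print(coverage({("title", "titre")}, d1, d2))        # 0.5
    print(discrimination({("title", "titre")}, d1, d2))  # 1.0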