

Section: Overall Objectives

Highlights of the Year

As a first highlight, we are happy to report that our paper “Fiedler Random Fields: A Large-Scale Spectral Approach to Statistical Network Modeling” has been accepted for publication in the Journal of Machine Learning Research, the top journal in the field of machine learning. The paper's contributions are twofold. First, we introduce the Fiedler delta statistic, based on the Laplacian spectrum of graphs, which dispenses with any parametric assumption about the modeled network properties. Second, we use this statistic to develop the Fiedler random field model, which allows for efficient estimation of edge distributions over large-scale random networks. After analyzing the dependence structure involved in Fiedler random fields, we estimate them on several real-world networks, showing that they achieve much higher modeling accuracy than other well-known statistical approaches.
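
To give a flavour of the spectral construction, the sketch below computes a Fiedler-delta-style statistic for a candidate edge, under the assumption that the statistic compares the Fiedler value (the second-smallest eigenvalue of the graph Laplacian) of the graph with the edge switched on and with it switched off. The graph, node indices, and the use of networkx/numpy are illustrative choices, not the paper's implementation.

```python
import numpy as np
import networkx as nx

def fiedler_value(graph):
    """Second-smallest eigenvalue of the graph Laplacian (algebraic connectivity)."""
    laplacian = nx.laplacian_matrix(graph).toarray().astype(float)
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]

def fiedler_delta(graph, u, v):
    """Assumed form of the delta statistic: change in the Fiedler value
    between the graph with edge (u, v) present and with it absent."""
    with_edge = graph.copy()
    with_edge.add_edge(u, v)
    without_edge = graph.copy()
    if without_edge.has_edge(u, v):
        without_edge.remove_edge(u, v)
    return fiedler_value(with_edge) - fiedler_value(without_edge)

# Toy usage on a small random graph (illustrative only)
g = nx.erdos_renyi_graph(20, 0.2, seed=0)
print(fiedler_delta(g, 0, 1))
```

Because the statistic is purely spectral, no parametric form for degree or clustering distributions needs to be assumed; the eigenvalue computation is the only modeling ingredient in this toy version.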

The second highlight of the year is the publication of our paper “Improving pairwise coreference models through feature space hierarchy learning” at the Annual Meeting of the Association for Computational Linguistics (ACL 2013), the premier conference in the field of Natural Language Processing. This paper proposes a new method for significantly improving the performance of pairwise coreference models. Given a set of indicators, our method learns how best to separate types of mention pairs into equivalence classes, for each of which we construct a distinct classification model. In effect, our approach finds an optimal feature space (derived from a base feature set and the indicator set) for discriminating coreferential mention pairs. Although it explores a very large space of possible feature spaces, the approach remains tractable by exploiting the structure of the hierarchies built from the indicators. Our experiments on the CoNLL-2012 Shared Task English datasets (gold mentions) indicate that our method is robust across different clustering strategies and evaluation metrics, showing large and consistent improvements over a single pairwise model using the same base features.
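
As an illustration of the underlying idea, the sketch below trains one pairwise classifier per equivalence class induced by a single indicator function, and routes each new mention pair to the model of its class. The indicator, the feature extractor, and the logistic-regression learner are hypothetical stand-ins; the actual method learns a hierarchy over many indicators rather than a single split.

```python
from sklearn.linear_model import LogisticRegression

def train_per_class(pairs, labels, features, indicator):
    """Fit one pairwise coreference classifier per equivalence class of mention pairs.

    `indicator(pair)` maps a mention pair to a class label (e.g. pronoun-pronoun
    vs. other); `features(pair)` returns its base feature vector. Both are
    placeholders for whatever indicator and feature set one chooses.
    """
    models = {}
    for cls in {indicator(p) for p in pairs}:
        idx = [i for i, p in enumerate(pairs) if indicator(p) == cls]
        X = [features(pairs[i]) for i in idx]
        y = [labels[i] for i in idx]
        models[cls] = LogisticRegression(max_iter=1000).fit(X, y)
    return models

def predict_coreferent(models, pair, features, indicator):
    """Route a mention pair to the classifier of its equivalence class."""
    return models[indicator(pair)].predict([features(pair)])[0]
```

Learning which indicators to split on, and in what order, is what the hierarchy-learning procedure adds on top of this one-level partitioning.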