

Section: Partnerships and Cooperations

International Research Visitors

Visits of International Scientists

Peter Kling

The objective of Peter Kling's visit centered on learning in distributed environments. This visit contributes to Magnet's recent effort towards decentralized learning, also supported, for instance, by the Pamela project (Personalized and decentrAlized MachinE Learning under constrAints). Peter Kling's background in distributed computing, combinatorial optimization, online algorithms, and stochastic processes offers a good opportunity to investigate new machine learning approaches in this area. During this first one-month visit, we started to study population and spreading processes. Two other topics, distributed load balancing and energy-aware algorithms, will be investigated during a second visit in 2018.

Valentina Zantedeschi

During her one-month stay, Valentina Zantedeschi collaborated with Aurélien Bellet and Marc Tommasi on decentralized learning. A paper on collaborative and decentralized boosting will be submitted in 2018.

Isabel Valera

visited Magnet for 3 days to collaborate with Aurélien Bellet on fairness in machine learning.

Clement Weisbecker

visited Magnet for 1 week to collaborate with Aurélien Bellet on large-scale kernel methods using block low-rank approximations.

Wilhelmiina Hamalainen

visited Magnet for 2 weeks to collaborate with Jan Ramon. In particular, they worked on multiple hypothesis tests for regression and discretization problems.

Bert Cappelle

visited Magnet for a semester, as part of his "délégation" (a teaching-release research leave), to collaborate with Pascal Denis and Mikaela Keller on compositional distributional semantics, and more specifically on the distributional analysis of so-called privative adjectives. A collaborative paper on this work will be submitted in 2018.

Several international researchers were also invited to give a talk at the Magnet seminar:

  • R. Babbar (Max Planck Institute): Algorithms for Extreme Multi-Class and Multi-Label Classification

  • M. Chehreghani (Xerox Research): Unsupervised Learning over Graphs: Distances, Algorithms, and an Information-Theoretic Model Validation Principle

  • G. Boleda (Universitat Pompeu Fabra): Instances and Concepts in Distributional Space

  • M. Blondel (NTT): A Regularized Framework for Sparse and Structured Neural Attention

  • L. Wehenkel (University of Liège): Probabilistic Reliability Management of the European Electric Power System

  • A. Herbelot (Universitat Pompeu Fabra): A Formal Distributional Semantics for Cognitively-Plausible Reference Acts

  • H. Ivey-Law (Data61/CSIRO): Private Federated Learning on Vertically Partitioned Data via Entity Resolution and Additively Homomorphic Encryption

Internships
Juhi Tandon

worked on developing re-ranking parsing models that exploit and compare various tree kernels in the context of semi-supervised graph-based multilingual dependency parsing.

Quentin Tremouille

worked on applications of the Hypernode graphs model [39] in the context of (movie) recommendation based on reviews in natural language.

Hippolyte Bourel

worked on applying the decentralized learning algorithms of [15] to mobility data.

Rumei Li

worked on a Yannakakis-style algorithm for computing the effective sample size of a set of dependent training examples.

Visits to International Teams

Research Stays Abroad
Mathieu Dehouck

visited USC for one month. He worked with pairs of main and auxiliary tasks drawn from 8 NLP tasks. More specifically, he looked at transfer learning from low-level tasks (such as part-of-speech tagging, named entity recognition, chunking, and word polarity classification) to high-level tasks (e.g., semantic relatedness, textual entailment, sentiment analysis). In contrast to a common belief in the NLP community that transfer learning between these tasks should be possible, we found that the widely used technique in which word representations act as a medium of transfer only leads to limited improvements. These results were presented by Fei Sha at the Inria Silicon Valley workshop (BIS'2017), and a paper is in preparation for 2018.

Aurélien Bellet

visited École Polytechnique Fédérale de Lausanne (EPFL) for one week. He worked with the distributed computing group of Rachid Guerraoui on decentralized and privacy-preserving machine learning, leading to joint papers [16], [18].

Aurélien Bellet and Pascal Denis

visited USC for two weeks in December 2017. In collaboration with Melissa Ailem, recently recruited as a post-doc on the LEGO project, they worked on developing a new algorithm for the joint learning of word and image embeddings, inspired by the SkipGram word2vec model. In addition, they furthered the work initiated with Mathieu Dehouck and USC colleagues on multi-task learning by proposing a new encoder-decoder model that integrates task and domain embeddings.