Section: Partnerships and Cooperations
International Research Visitors
Visits of International Scientists
- Peter Kling
Peter Kling's visit focused on learning in distributed environments. This initiative contributes to Magnet's recent effort towards decentralized learning, also supported for instance by the Pamela project (Personalized and decentrAlized MachinE Learning under constrAints). Peter Kling's background in distributed computing, combinatorial optimization, online algorithms, and stochastic processes offers a good opportunity to investigate new machine learning approaches in this area. During this first one-month visit, we started to study population and spreading processes. Two other topics, distributed load balancing and energy-aware algorithms, will be investigated during a second visit in 2018.
- Valentina Zantedeschi
During her one-month stay, Valentina Zantedeschi collaborated with Aurélien Bellet and Marc Tommasi on decentralized learning. A paper on collaborative and decentralized boosting will be submitted in 2018.
- Isabel Valera
- Clement Weisbecker
- Wilhelmiina Hamalainen
- Bert Cappelle
visited Magnet for a semester, as part of his "delegation", to collaborate with Pascal Denis and Mikaela Keller on compositional distributional semantics, and more specifically on the distributional analysis of so-called privative adjectives. A collaborative paper on this work will be submitted in 2018.
Several international researchers were also invited to give a talk at the MAGNET seminar:
- Juhi Tandon
- Quentin Tremouille
worked on applications of the hypernode graphs model to (movie) recommendation based on reviews in natural language.
- Hippolyte Bourel
worked on applying decentralized learning algorithms to mobility data.
- Rumei Li
Visits to International Teams
Research Stays Abroad
- Mathieu Dehouck
visited USC for one month. He worked on pairs formed from 8 main and auxiliary NLP tasks. More specifically, he looked at transfer learning from low-level tasks (such as part-of-speech tagging, named entity recognition, chunking, and word polarity classification) to high-level tasks (e.g., semantic relatedness, textual entailment, sentiment analysis). In contrast to a common belief in the NLP community that transfer learning between these tasks should be possible, we found that the widely used technique in which word representations act as the medium of transfer leads only to limited improvements. These results were presented by Fei Sha at the Inria Silicon Valley workshop (BIS'2017), and a paper is in preparation for 2018.
- Aurélien Bellet
visited École Polytechnique Fédérale de Lausanne (EPFL) for one week. He worked with Rachid Guerraoui's distributed computing group on decentralized and privacy-preserving machine learning, leading to joint papers.
- Aurélien Bellet and Pascal Denis
visited USC for two weeks in December 2017. In collaboration with Melissa Ailem, recently recruited as a post-doc on the LEGO project, they worked on developing a new algorithm for the joint learning of word and image embeddings, inspired by the SkipGram word2vec model. In addition, they furthered the work on multi-task learning initiated with Mathieu Dehouck and USC colleagues by proposing a new encoder-decoder model that integrates task and domain embeddings.