Section: Partnerships and Cooperations
International Research Visitors
Visits of International Scientists
-
Tejas Kulkarni (University of Warwick) visited the team from May to August 2018 to work with Aurélien Bellet, Marc Tommasi and Jan Ramon on the privacy-preserving computation of U-statistics (a reference definition is recalled after this list).
-
Larisa Soldatova (Brunel University) visited the team in June 2018 to work with Jan Ramon on probabilistic reasoning for biomedical applications.
-
Raouf Kerkouche (Inria Privatics) visited the team for 2 weeks in July 2018 to work with Aurélien Bellet and Marc Tommasi on federated and decentralized learning from medical data.
-
Guillaume Rabusseau (Université de Montréal) visited the team for 1 week in July 2018 to work with Aurélien Bellet and Marc Tommasi on multi-task distributed spectral learning.
-
Daphne Ezer, Adrià Gascón, Matt Kusner, Brooks Paige (all from the Alan Turing Institute) and Hamed Haddadi (Imperial College London) visited the team for 2 days in October 2018 for the kick-off of the PAD-ML associate team.
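For context on the first visit above, a degree-two U-statistic over a sample $x_1, \dots, x_n$ with symmetric kernel $h$ is the pairwise average $U_n = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(x_i, x_j)$; classical instances include the sample variance, Kendall's $\tau$ and the AUC, and it is this pairwise structure that makes privacy-preserving computation challenging.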
Several international researchers have also been invited to give a talk at the MAGNET seminar:
-
D. Hovy (Bocconi Univ.): Retrofit Everything: Injecting External Knowledge into Neural Networks to Gain Insights from Big Data.
-
A. Trask (OpenMined): OpenMined - Building Tools for Safe AI.
-
C. Biemann (Univ. Hamburg): Adaptive Interpretable Language Technology.
-
W. Daelemans (Univ. Antwerp): Profiling authors from social media texts.
Internships
-
Igor Axinti explored several ways to compare word embeddings and studied the minimal corpus size for such comparisons to be meaningful (one common alignment-based approach is sketched after this list). He applied some of his findings to the comparison of two Middle French corpora from the 15th century, one originating from London and the other from Flanders. He produced a querying interface allowing Christopher Fletcher (IRHiS), who provided the data, to explore and compare the embedding spaces.
-
Nicolas Crosetti (joint internship with Joachim Niehren and Florent Capelli, Links) worked on dependency-weighted aggregation, i.e., aggregation where the elements to aggregate are weighted according to the extent to which they correspond to independent observations.
-
Arthur d'Azemar worked on decentralized recommender systems in collaboration with the WIDE team at Inria Rennes (François Taïani). He applied metric learning techniques to learn a k-NN graph for personalized and adaptive user-based recommendations (a minimal illustration is given after this list).
-
Antoine Capriski worked on the analysis of semantic change of words in political texts, in collaboration with Caroline Le Pennec (UC Berkeley). He used word embedding techniques to analyze a corpus of political manifestos from the French general elections over the period 1958-1993 (see the sketch after this list).
-
Most work on machine learning and privacy assumes that learners are honest-but-curious. Alexandre Huat worked on making protocols for private machine learning more robust against malicious attacks.
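As an illustration of the embedding-comparison work in the internships of Igor Axinti and Antoine Capriski, the following minimal sketch shows one standard way to compare two word-embedding spaces trained on different corpora or time periods: align them with an orthogonal Procrustes rotation and rank the shared vocabulary by how far each word has moved. The data, vocabulary and function names are illustrative, and this is not necessarily the exact method used during the internships.

    # Minimal sketch (illustrative, not the interns' actual code): align two
    # embedding spaces with orthogonal Procrustes and rank words by their shift.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    def semantic_shift(emb_a, emb_b, vocab):
        # emb_a, emb_b: (V, d) matrices whose rows follow the shared vocabulary `vocab`.
        R, _ = orthogonal_procrustes(emb_a, emb_b)  # rotation minimising ||emb_a @ R - emb_b||_F
        aligned = emb_a @ R
        num = np.sum(aligned * emb_b, axis=1)
        den = np.linalg.norm(aligned, axis=1) * np.linalg.norm(emb_b, axis=1)
        shift = 1.0 - num / den                     # cosine distance after alignment
        return sorted(zip(vocab, shift), key=lambda t: -t[1])

    # Toy usage: random vectors standing in for embeddings of a small shared vocabulary.
    rng = np.random.default_rng(0)
    vocab = ["roi", "ville", "guerre", "foi"]
    A = rng.normal(size=(4, 50))
    Q = np.linalg.qr(rng.normal(size=(50, 50)))[0]
    B = A @ Q + 0.01 * rng.normal(size=(4, 50))
    print(semantic_shift(A, B, vocab)[:2])          # words most affected by the (simulated) shift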
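Similarly, a minimal sketch of the kind of pipeline behind Arthur d'Azemar's internship (learn a metric from weak supervision, then build a k-NN graph over users) is given below; scikit-learn's Neighborhood Components Analysis is used only as a stand-in for the metric learning technique, and the synthetic data are an assumption made for illustration.

    # Minimal sketch (assumed pipeline, synthetic data): learn a metric, then
    # build a k-NN user graph to support user-based recommendation.
    import numpy as np
    from sklearn.neighbors import NeighborhoodComponentsAnalysis, kneighbors_graph

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))        # synthetic user profiles
    y = rng.integers(0, 4, size=200)      # weak supervision, e.g. coarse taste groups

    nca = NeighborhoodComponentsAnalysis(n_components=10, random_state=0)
    X_metric = nca.fit_transform(X, y)    # project users into the learned metric space

    # Sparse k-NN graph over users in the learned space; each user's neighbours
    # are the peers whose ratings drive personalized recommendations.
    knn_graph = kneighbors_graph(X_metric, n_neighbors=10, mode="distance")
    print(knn_graph.shape)                # (200, 200)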
Visits to International Teams
Research Stays Abroad
-
Fabio Vitale is on leave at the Department of Computer Science of Sapienza University (Rome, Italy), in the Algorithms Randomization Computation group, with Prof. Alessandro Panconesi and Prof. Flavio Chierichetti. His current work on machine learning on graphs follows three directions:
-
designing new online reciprocal recommenders and analyzing their performance both in theory and in practice,
-
clustering a finite set of items from pairwise similarity information in different learning settings,
-
introducing a new online learning framework encompassing several problems where the environment changes over time, together with an efficient and highly scalable unifying approach to solve the related general learning problem.
Ongoing research also includes the following topics: low-stretch spanning trees, active learning in correlation clustering problems, and hierarchical clustering.
-
Aurélien Bellet visited the Alan Turing Institute (London) and Amazon Research Cambridge for 1 week in February 2018. He worked with Adrià Gascón and Borja Balle on privacy-preserving machine learning.