Section: New Results

Privacy, Fairness, and Transparency in Online Social Media

Bringing transparency to algorithmic decision-making systems and guaranteeing that they satisfy fairness and privacy properties are crucial in today's world. To start tackling this broad challenge, we focused on the case of online advertising and made the following contributions.

  • Transparency properties for social media advertising and audit of Facebook's explanations. In [15], we took a first step towards exploring the transparency mechanisms provided by social media sites, focusing on the two processes for which Facebook provides such mechanisms: how Facebook infers data about users, and how advertisers use this data to target users. We call explanations about these two processes data explanations and ad explanations, respectively.

    We identify a number of properties that are key for different types of explanations aimed at bringing transparency to social media advertising, and we evaluate empirically how well Facebook's explanations satisfy them, discussing the implications of our findings in view of the possible purposes of explanations. For ad explanations, we define five key properties: personalization, completeness, correctness (and the companion property of misleadingness), consistency, and determinism; and we show that Facebook's ad explanations are often incomplete and sometimes misleading. In particular, we observe that Facebook reveals only the most prevalent attribute used by the advertiser, which may allow malicious advertisers to easily obfuscate the explanations of ad campaigns that are discriminatory or that target privacy-sensitive attributes. For data explanations, we define four key properties: specificity, snapshot completeness, temporal completeness, and correctness; and we show that Facebook's explanations are incomplete and often vague, potentially limiting user control. A toy illustration of how such properties can be checked is sketched below.
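
    The sketch below is a toy illustration of how two of these properties, determinism and completeness, could be checked operationally on collected explanations; the data model and attribute names are invented for illustration and are not AdAnalyst's actual implementation.

    ```python
    # Toy check of two explanation properties on collected data. The data
    # model and attribute names are hypothetical, not AdAnalyst's internals.

    # Model an ad explanation as the set of targeting attributes it reveals;
    # each ad may have been retrieved several times.
    collected = {
        "ad_1": [{"Interest: Cooking"}, {"Interest: Cooking"}],
        "ad_2": [{"Interest: Yoga"}, {"Interest: Yoga", "Age 25-34"}],
    }

    # Ground truth for a controlled experiment: the attributes that the
    # (test) advertiser actually used for each ad.
    used = {
        "ad_1": {"Interest: Cooking", "Behavior: Expats"},
        "ad_2": {"Interest: Yoga", "Age 25-34"},
    }

    for ad, explanations in collected.items():
        # Determinism: repeated retrievals yield the same explanation.
        deterministic = all(e == explanations[0] for e in explanations)
        # Completeness: every explanation reveals all attributes actually used.
        complete = all(used[ad] <= e for e in explanations)
        print(ad, "deterministic:", deterministic, "complete:", complete)
    # ad_1 deterministic: True  complete: False  (one attribute is hidden)
    # ad_2 deterministic: False complete: False
    ```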

    Overall, our study provides a first step towards better understanding and improving transparency in social media advertising. During this work, we developed the tool AdAnalyst (https://adanalyst.mpi-sws.org/), which was instrumental for the study, also serves as a standalone transparency tool for the general public, and is anticipated to form the basis of further research on transparency.

  • Potential for discrimination in social media advertising. Recently, online targeted advertising platforms like Facebook have been criticized for allowing advertisers to discriminate against users belonging to sensitive groups, e.g., by excluding users of a certain race or gender from receiving their ads. Such criticisms have led, for instance, Facebook to disallow attributes such as ethnic affinity from being used when targeting ads related to housing, employment, or financial services. In our paper [30], we systematically investigated the different targeting methods offered by Facebook (traditional attribute- or interest-based targeting, custom audiences, and lookalike audiences) for their ability to enable discriminatory advertising, and we showed that a malicious advertiser can create highly discriminatory ads without using sensitive attributes; hence, banning those attributes is insufficient to solve the problem. We argue that discrimination measures should be based on the targeted population rather than on the attributes used for targeting, and we propose a discrimination metric in this direction; a simplified sketch of such a measure follows.
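
    The sketch below gives one plausible, simplified instantiation of such an outcome-based measure: it compares the share of a sensitive group among the targeted users with its share in the relevant population, independently of the targeting attributes used. The function name and toy data are illustrative; the precise metric in [30] may differ.

    ```python
    # One plausible, simplified outcome-based discrimination measure (the
    # precise metric in [30] may differ): compare the share of a sensitive
    # group among the targeted users with its share in the relevant
    # population, regardless of which targeting attributes were used.

    def representation_ratio(targeted, population, is_sensitive):
        """Returns < 1 when the sensitive group is under-represented in the
        targeted audience (exclusion), > 1 when it is over-represented."""
        share_targeted = sum(map(is_sensitive, targeted)) / len(targeted)
        share_population = sum(map(is_sensitive, population)) / len(population)
        return share_targeted / share_population

    # Toy data: the population is 50% group "b", but a campaign that never
    # mentions group membership still reaches almost none of its members.
    population = [{"group": g} for g in ["a"] * 50 + ["b"] * 50]
    targeted = [{"group": g} for g in ["a"] * 90 + ["b"] * 10]

    ratio = representation_ratio(targeted, population, lambda u: u["group"] == "b")
    print(ratio)  # 0.2 -> group "b" is strongly under-represented
    ```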

  • Identification and resolution of privacy leakages in Facebook's advertising platform. In [31], we discovered that the information provided to advertisers through the custom audience feature (where an advertiser can upload PII (personally identifiable information) of its customers, which Facebook then matches against its user base) severely leaked personal information. Specifically, it made it possible for a malicious advertiser who knows the email address of a user to discover that user's phone number. Perhaps even worse, it allowed a malicious advertiser to de-anonymize the visitors of a website under their control. We discovered that the problem stemmed from the way Facebook computes estimates of the number of users matching a list of PII, and we proposed a solution based on not de-duplicating records with different PII belonging to the same user; we proved the robustness of our solution theoretically. Our work led to Facebook implementing a solution inspired by the one we proposed. A toy model of the side channel and of the fix's main idea is sketched below.
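
    The sketch below is a minimal toy model, under a hypothetical size estimator, of the side channel and of the spirit of the proposed fix; all data structures, names, and numbers are illustrative and do not reflect Facebook's actual systems.

    ```python
    # Toy model of the audience-size side channel and of the fix's main idea
    # (names and numbers are illustrative, not Facebook's API).

    USERS = [  # each platform user, with the PII on file
        {"email": "alice@example.com", "phone": "+33600000001"},
        {"email": "bob@example.com",   "phone": "+33600000002"},
        {"email": "carol@example.com", "phone": "+33600000003"},
    ]

    def estimate_dedup(pii_list):
        """Vulnerable estimator: counts distinct matched *users*, so two
        PII of the same user contribute only once."""
        matched = {uid for uid, u in enumerate(USERS)
                   for pii in pii_list if pii in u.values()}
        return len(matched)

    def estimate_no_dedup(pii_list):
        """Fixed estimator (spirit of the fix in [31]): counts matched
        uploaded *records* without de-duplicating across PII types."""
        return sum(any(pii in u.values() for u in USERS) for pii in pii_list)

    known_emails = ["alice@example.com", "bob@example.com"]
    guess = "+33600000001"  # is this the phone of someone in known_emails?

    # Vulnerable: +0 if the guessed phone belongs to Alice or Bob, +1 if it
    # belongs to any other user -- the difference links the phone to an email.
    print(estimate_dedup(known_emails + [guess]) - estimate_dedup(known_emails))             # 0
    print(estimate_dedup(known_emails + ["+33600000003"]) - estimate_dedup(known_emails))    # 1

    # Fixed: +1 in both cases -- the estimate no longer reveals whether two
    # PII belong to the same user.
    print(estimate_no_dedup(known_emails + [guess]) - estimate_no_dedup(known_emails))           # 1
    print(estimate_no_dedup(known_emails + ["+33600000003"]) - estimate_no_dedup(known_emails))  # 1
    ```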