## Section: New Results

### Foundations of information hiding

Information hiding refers to the problem of protecting private information while performing certain tasks or interactions, so as to prevent an adversary from inferring such information.

This is one of the main areas of research in Comète, and two PhD theses on this topic have been defended in Comète this year [12], [11]. We are exploring several topics, described below. An overview of our results is contained in [24].

#### The problem of information hiding in presence of concurrency

The analysis of probabilistic concurrent systems usually relies on the notion of scheduler to resolve the nondeterminism. Unfortunately the classical notion of scheduler, a mathematical function that chooses the next step depending on the history of the computation, can leak any secret information contained in the history. This creates false positives, and it is known as the problem of the *almighty scheduler*.
One way to solve this problem, already explored in the literature, is to fix the strategy of the scheduler beforehand [31]. However, this solution is considered too rigid and unrealistic.
In [14] we have proposed a milder restriction on the schedulers, and we have defined the notion of strong (probabilistic) information hiding under various notions of observables. Furthermore, we have proposed a method, based on the notion of automorphism, to verify that a system satisfies the property of strong information hiding, namely strong anonymity or non-interference, depending on the context.
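To illustrate the underlying problem, here is a minimal sketch (in Python; the process, the history representation, and the scheduler interfaces are hypothetical, not taken from [14]) of how a scheduler with access to the full history can leak a secret bit through its choice of the next observable action, while a scheduler restricted to the observable part of the history cannot:

```python
import random

# Minimal sketch (hypothetical interfaces): a process performs one of two
# observable actions, "a" or "b", chosen by the scheduler.  An "almighty"
# scheduler sees the full history, including the secret, and can encode
# the secret into its choice.

def almighty_scheduler(history):
    # Reads the secret from the history and leaks it by always picking
    # the action that matches the secret bit.
    return "a" if history["secret"] == 0 else "b"

def restricted_scheduler(history):
    # A scheduler restricted to the observable part of the history cannot
    # depend on the secret; here it simply chooses uniformly at random.
    return random.choice(["a", "b"])

def run(secret, scheduler):
    history = {"secret": secret, "observables": []}
    history["observables"].append(scheduler(history))
    return history["observables"]

# Under the almighty scheduler the observable trace reveals the secret:
# run(0, almighty_scheduler) yields ["a"], run(1, ...) yields ["b"].
```

An observer who sees only the trace recovers the secret exactly under the almighty scheduler; restricting schedulers to the observable history is precisely what rules out such artificial leaks.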

#### Modeling the knowledge of the adversary

In [15] we have developed a game semantics for process algebra with two interacting agents. The purpose of our semantics is to make manifest the role of knowledge and information flow in the interactions between agents and to control the information available to interacting agents. We have defined games and strategies on process algebras, so that two agents interacting according to their strategies determine the execution of the process, replacing the traditional scheduler. We have shown that different restrictions on strategies represent different amounts of information being available to a scheduler. We have also shown that a certain class of strategies corresponds to the syntactic schedulers of Chatzikokolakis and Palamidessi [32] , which were developed to overcome problems with traditional schedulers modeling interaction. The restrictions on these strategies have an explicit epistemic flavor.

#### Opacity

Opacity is a security property formalizing the absence of secret information leakage, and in [30] we have addressed the problem of synthesizing opaque systems. A secret predicate $S$ over the runs of a system $G$ is opaque to an external user having partial observability over $G$ if s/he can never infer from the observation of a run of $G$ that the run belongs to $S$. We have chosen to control the observability of events by adding a device, called a mask, between the system $G$ and the users. We have first investigated the case of static partial observability, where the set of events the user can observe is fixed a priori by a static mask. In this context, we have shown that checking whether a system is opaque is PSPACE-complete, which implies that computing an optimal static mask ensuring opacity is also a PSPACE-complete problem. Then, we have introduced *dynamic* partial observability, where the set of events the user can observe changes over time and is chosen by a dynamic mask. We have shown how to check that a system is opaque with respect to a dynamic mask, and we have also addressed the corresponding synthesis problem: given a system $G$ and secret states $S$, compute the set of dynamic masks under which $S$ is opaque. Our main result is that the set of such masks can be finitely represented and can be computed in EXPTIME, and that EXPTIME is also a lower bound for this problem. Finally, we have also addressed the problem of computing an optimal mask.
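On a finite set of runs, the definition of opacity under a static mask can be checked directly: $S$ is opaque iff every secret run shares its observation with at least one non-secret run. A minimal sketch (the runs, event names, and mask representation are hypothetical, and this brute-force enumeration is only an illustration of the definition, not the PSPACE algorithm of [30]):

```python
# Hypothetical sketch: each run is a tuple of events; a static mask is
# modelled as the set of events the user can observe, and observation is
# the projection of a run onto that set.

def observe(run, observable_events):
    # Static mask: keep only the events the user can observe.
    return tuple(e for e in run if e in observable_events)

def is_opaque(runs, secret, observable_events):
    # S is opaque iff every secret run shares its observation with at
    # least one non-secret run, so the user can never be sure that the
    # observed run belongs to S.
    non_secret_obs = {observe(r, observable_events)
                      for r in runs if r not in secret}
    return all(observe(r, observable_events) in non_secret_obs
               for r in runs if r in secret)

runs = {("login", "read_secret", "logout"),
        ("login", "read_public", "logout")}
secret = {("login", "read_secret", "logout")}

# If the mask hides the read events, the two runs look identical, so the
# secret is opaque; if the read events are observable, it is not:
# is_opaque(runs, secret, {"login", "logout"})             -> True
# is_opaque(runs, secret, {"login", "read_secret",
#                          "read_public", "logout"})       -> False
```

The synthesis question of [30] is then which masks (sets `observable_events`, or their dynamic counterparts) make `is_opaque` return true, and which of those reveal the most.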

#### Interactive systems

In [13] we have considered systems where secrets and observables can alternate during the computation. We have shown that the information-theoretic approach which interprets such systems as (simple) noisy channels is no longer valid. However, the principle can be recovered if we consider more complicated types of channels, known in Information Theory as channels with memory and feedback. We have shown that there is a complete correspondence between interactive systems and such channels. Furthermore, we have shown that the capacity of the channels associated with such systems is a continuous function with respect to the Kantorovich metric.

#### Differential privacy

Differential privacy is a notion that has emerged in the community of statistical databases, as a response to the problem of protecting the privacy of the database's participants when performing statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer for a database $x$ is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e., databases which differ from $x$ in only one individual.
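The definition can be stated operationally: a mechanism, given as a matrix of conditional probabilities, satisfies $\epsilon$-differential privacy if the probabilities it assigns to each answer on adjacent databases differ by at most a factor $e^{\epsilon}$. A minimal sketch (the matrix representation is hypothetical; randomized response on a single bit serves as the example mechanism):

```python
import math

# Minimal sketch (hypothetical representation): a randomized mechanism K
# is given as a matrix of probabilities K[x][z] = P(answer z | database x).
# K satisfies eps-differential privacy if, for every pair of adjacent
# databases (x, x') and every answer z, K[x][z] <= e^eps * K[x'][z].

def satisfies_dp(K, adjacent, eps):
    bound = math.exp(eps)
    return all(K[x][z] <= bound * K[x2][z]
               for (x, x2) in adjacent
               for z in K[x])

# Example mechanism: randomized response on a single bit, reporting the
# true value with probability p and flipping it with probability 1 - p.
p = 0.75
K = {0: {0: p, 1: 1 - p}, 1: {0: 1 - p, 1: p}}
adjacent = [(0, 1), (1, 0)]  # the two databases differ in one individual

# p / (1 - p) = 3, so the mechanism is (ln 3)-differentially private
# (a tiny slack is added to the bound to absorb floating-point rounding):
# satisfies_dp(K, adjacent, math.log(3) + 1e-9)  -> True
# satisfies_dp(K, adjacent, math.log(2))         -> False
```

Smaller $\epsilon$ forces the two rows of the matrix closer together, i.e. more privacy at the cost of a noisier answer, which is exactly the privacy/utility trade-off studied below.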

In [17], [16], we have critically analyzed the notion of differential privacy in light of the conceptual framework provided by the Rényi min information theory. We have shown that there is a close relation between differential privacy and leakage, due to the graph symmetries induced by the adjacency relation. Furthermore, we have considered the utility of the randomized answer, which measures its expected degree of accuracy. We have focused on certain kinds of utility functions called “binary”, which have a close correspondence with the Rényi min mutual information. Again, it turns out that there can be a tight correspondence between differential privacy and utility, depending on the symmetries induced by the adjacency relation and by the query. Depending on these symmetries we can also build an optimal-utility randomization mechanism while preserving the required level of differential privacy. Our main contribution is a study of the kind of structures that can be induced by the adjacency relation and the query, and how to use them to derive bounds on the leakage and achieve the optimal utility.
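As a concrete instance of an optimal-utility mechanism, consider a counting query. A well-known construction for this case is the truncated geometric mechanism, sketched below under the assumption that answers range over $0..n$ with $n \geq 1$ (this is an illustration of the idea, not the general graph-theoretic construction of [17], [16]):

```python
import math

# Sketch (assuming answers range over 0..n, n >= 1): the truncated
# geometric mechanism for a counting query.  With alpha = e^-eps, it
# reports answer z on true count x with probability proportional to
# alpha^|z - x|, folding the mass that falls outside [0, n] onto the
# endpoints so that each row is a probability distribution.

def truncated_geometric(n, eps):
    alpha = math.exp(-eps)
    K = []
    for x in range(n + 1):
        row = [(1 - alpha) / (1 + alpha) * alpha ** abs(z - x)
               for z in range(n + 1)]
        row[0] = alpha ** x / (1 + alpha)        # folded left tail
        row[n] = alpha ** (n - x) / (1 + alpha)  # folded right tail
        K.append(row)
    return K

# For eps = ln 2 the probabilities assigned to any answer on adjacent
# counts differ by a factor of at most 2, i.e. the mechanism is
# (ln 2)-differentially private, while keeping the answer as
# concentrated around the true count as the privacy level allows.
```

The symmetry visible here, each row being a shifted copy of the same geometric shape, is a simple instance of the structures induced by the adjacency relation and the query that the papers exploit to obtain optimality.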