Section: New Results

Specific studies: security and privacy

Participants: Guillaume Aucher, Blaise Genest.

We have worked on three parallel lines of research related to security and privacy. The first line deals with problems of delegation and revocation in distributed systems. The second line deals with the compliance of a system with respect to a privacy regulation expressed in a language combining epistemic, deontic and dynamic modalities. The third line tackles the minimal information needed at runtime to, e.g., break into a (stochastic) system.

Delegation and revocation in distributed systems

Together with Steve Barker from King's College London, Guido Boella from the University of Torino, and Valerio Genovese and Leon van der Torre from the University of Luxembourg, we defined a (sound and complete) propositional dynamic logic to specify and reason about delegation and revocation schemes in distributed systems. This logic formally describes a family of delegation and revocation models based on the work of [65]. We extended our logic to accommodate an epistemic interpretation of trust. What emerges from this work is a rich framework of formally well-defined delegation and revocation schemes that accommodates an important trust component. In particular, we showed how to automatically reason about whether an agent is authorized to perform an operation on an object, and about the authorization policy resulting from the execution of a sequence of actions. We used our logical framework to give a formal account of eight different types of revocation schemes informally introduced in previous literature. This work is published in [18].
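To convey the flavour of such a specification (the notation below is a simplified illustration, not the exact syntax of the logic in [18]), a dynamic modality can relate an authorization action to the resulting policy:

```latex
% Illustrative notation only, not the exact syntax of the published logic.
% After agent a grants agent b access to object o, b is authorized on o:
[\mathit{grant}(a,b,o)]\,\mathit{auth}(b,o)
% and a subsequent revocation removes that authorization:
[\mathit{grant}(a,b,o)][\mathit{revoke}(a,b,o)]\,\neg\mathit{auth}(b,o)
```

Reasoning about a sequence of actions then amounts to evaluating such formulas under nested dynamic modalities.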

Privacy policy with modal logic: the dynamic turn

As explained in Section 6.4, we want to define a logical language for specifying privacy policies that is close to natural language. In general, privacy policies can be defined either in terms of permitted and forbidden knowledge, or in terms of permitted and forbidden actions. For example, it may be forbidden to know the medical data of a person, or it may be forbidden to disclose these data. Implementing a privacy policy based on permitted and forbidden actions is relatively easy, since we can add a filter to the system that checks outgoing messages. Such a filter is an example of a security monitor: if the system attempts to send a forbidden message, the security monitor blocks the sending of that message. However, the price to pay for this relatively straightforward implementation is that it is difficult to decide which actions should be permitted or forbidden so that a given piece of information is not disclosed. We are therefore interested in privacy policies expressed in terms of permitted and forbidden knowledge. Expressing a privacy policy in terms of permitted and forbidden knowledge is relatively easy, since it lists the situations in which it is not permitted to know some sensitive information. Implementing such a policy is, however, quite difficult, since the system has to reason about the relation between permitted knowledge and actions. The challenge is that the exchange of messages changes the knowledge, and the security monitor therefore needs to reason about these changes. This inference problem is already nontrivial with a static privacy policy, and becomes challenging when privacy policies can change over time.
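The two styles of policy can be contrasted in a hypothetical deontic-epistemic notation (illustrative only, not the exact language of our framework; $K_a$ reads "agent $a$ knows" and $F$ reads "it is forbidden that"):

```latex
% Illustrative notation only, not the exact language of the framework.
% Knowledge-based prohibition: agent a may not KNOW p's medical data:
F\,K_a\,\mathit{medical}(p)
% Action-based prohibition: agent a may not DISCLOSE p's medical data:
F\,\mathit{disclose}_a(\mathit{medical}(p))
```

The inference problem mentioned above is precisely that of deriving which actions of the second kind must be blocked so that no prohibition of the first kind can ever be violated.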
Together with Guido Boella and Leon van der Torre, we therefore introduced a dynamic modal logic that makes it possible not only to reason about permitted and forbidden knowledge in order to derive the permitted actions, but also to represent explicitly the declarative privacy policies together with their dynamics. The logic can be used to check both regulatory and behavioral compliance: the former by checking that the permissions and obligations set up by the security monitor of an organization do not conflict with the privacy policies, and the latter by checking that these obligations are indeed enforced. We also showed that the complexity of the model-checking problem is quadratic in the size of the model and the formula, and we provided the corresponding model-checking algorithms. This work is published in [11].
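As a rough illustration of why model checking modal formulas stays tractable, the standard labelling approach computes, for each subformula, the set of worlds where it holds; each operator costs time linear in the model, giving a bound in the product of the sizes of the model and the formula. The sketch below is a generic checker for a basic multi-agent epistemic logic, not our published algorithm, and all names in it are illustrative:

```python
# Labelling-style model checker for a basic multi-agent epistemic logic.
# Illustrative sketch only (not the algorithm published in [11]).
# Formulas: an atom is a string; compounds are tuples
#   ('not', f), ('and', f, g), ('K', agent, f).

def sat(model, f):
    """Return the set of worlds of `model` satisfying formula `f`.

    model = {'worlds': set of worlds,
             'val':    {atom: set of worlds where it is true},
             'rel':    {agent: set of (w, w') accessibility pairs}}
    """
    W = model['worlds']
    if isinstance(f, str):                      # atomic proposition
        return model['val'].get(f, set())
    op = f[0]
    if op == 'not':
        return W - sat(model, f[1])
    if op == 'and':
        return sat(model, f[1]) & sat(model, f[2])
    if op == 'K':                               # K_a phi: phi holds in every
        agent, phi = f[1], f[2]                 # world a considers possible
        good = sat(model, phi)
        rel = model['rel'][agent]
        return {w for w in W
                if all(v in good for (u, v) in rel if u == w)}
    raise ValueError(f"unknown operator: {op}")


# Toy model: three worlds; 'med' (some medical fact) holds in worlds 1 and 2;
# at world 1 agent a cannot distinguish worlds 1 and 2.
M = {'worlds': {1, 2, 3},
     'val': {'med': {1, 2}},
     'rel': {'a': {(1, 1), (1, 2), (2, 2), (3, 3)}}}

print(sat(M, ('K', 'a', 'med')))   # a knows 'med' exactly at worlds 1 and 2
```

Each call labels one subformula over all worlds, so the total work is bounded by the number of subformulas times the size of the model.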

Minimal information needed

Together with Nathalie Bertrand from Vertecs, we tackled the problem of the minimal information a user needs at runtime to achieve a simple goal, modeled as reaching an objective with probability one [25]. The natural question is then to minimize the additional information the user needs to fulfill her objective. This optimization question gives rise to two different problems, depending on whether we minimize the worst-case cost or the average cost. On the one hand, concerning the worst-case cost, we showed that efficient techniques from the model-checking community can be adapted to compute the optimal worst-case cost and to give optimal strategies for the user. On the other hand, we showed that the optimal average cost (a question typically considered in the AI community) cannot be computed in general, nor can it be approximated in polynomial time, even up to a large approximation factor. Following these negative results, we investigated, together with P.S. Thiagarajan's group at NUS, Singapore, basic algorithms of the AI community for inferring the exact probability in (compact) stochastic systems. In [45], we proposed a simple parametrized extension of the usual Factored Frontier algorithm that lets one choose the desired accuracy of the algorithm, at the cost of additional but manageable computations. We showed its benefit when dealing with biological pathways.
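For the qualitative part of the objective (reaching a target with probability one), only the supports of the transition distributions matter, which is why graph-based model-checking techniques apply. The sketch below shows the classical two-level fixpoint for almost-sure reachability in a Markov decision process; it is a textbook precomputation, not the construction of [25], and all names in it are illustrative:

```python
# Almost-sure reachability in an MDP (classical two-level fixpoint).
# Textbook sketch, not the construction of [25]; names are illustrative.
# Only the SUPPORT of each transition distribution is needed, not the
# exact probabilities: the qualitative question is purely graph-based.

def almost_sure_reach(states, actions, post, target):
    """States from which some strategy reaches `target` with probability 1.

    actions(s)  -> iterable of actions available in state s
    post(s, a)  -> set of states reachable in one step (support of the
                   transition distribution of action a in state s)
    """
    U = set(states)
    while True:
        # Keep only actions that cannot leave the current candidate set U.
        allowed = {s: [a for a in actions(s) if post(s, a) <= U] for s in U}
        # Backward reachability of the target inside U via allowed actions.
        R = set(target) & U
        changed = True
        while changed:
            changed = False
            for s in U - R:
                if any(post(s, a) & R for a in allowed[s]):
                    R.add(s)
                    changed = True
        if R == U:          # outer fixpoint reached
            return U
        U = R


# Toy MDP: from s0, 'try' may fall into the sink 'bad', while the
# detour via s1 reaches 'goal' surely.
S = {'s0', 's1', 'bad', 'goal'}
A = {'s0': ['try', 'safe'], 's1': ['go'], 'bad': ['loop'], 'goal': ['loop']}
P = {('s0', 'try'): {'goal', 'bad'},
     ('s0', 'safe'): {'s1'},
     ('s1', 'go'): {'goal'},
     ('bad', 'loop'): {'bad'},
     ('goal', 'loop'): {'goal'}}

win = almost_sure_reach(S, lambda s: A[s], lambda s, a: P[(s, a)], {'goal'})
print(win)   # 'bad' is excluded; 's0' wins via the safe route through 's1'
```

The outer loop discards states (and actions) that risk leaving the winning region; the inner loop is a plain backward reachability. Both are linear-time graph computations, which is the sense in which model-checking techniques yield the worst-case analysis efficiently.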