Licit is an Exploratory Action created by Inria in 2008 to undertake new research activities on the interactions between ICT and law. The motivations for this new initiative are manifold. First and foremost, the very fast evolution of the technological landscape and the impact of ICT on the everyday life of a majority of citizens (including their private life) raise new challenges which cannot be tackled by a purely technological approach. For example, the protection of privacy rights in “ambient intelligence” environments is by essence multidimensional and requires expertise from disciplines such as social sciences, economics, ethics, law and, of course, ICT. Other examples of the ever growing intermingling of ICT and law include e-government, e-justice, electronic commerce, digital rights management (DRM), Radio Frequency Identification (RFID tags), forensics, cybercrime, Web services, virtual worlds, ... As far as research is concerned, however, there are still very few links or interconnections between the ICT and law communities. This situation is unfortunate considering the importance of the interests (both societal and economic) at stake. In addition, at a time of growing mistrust of citizens towards technology, more attention should be paid to the implications of research results on society.
Starting from this observation, the objective of Licit is to contribute, in partnership with research groups in law, to the development of new approaches and methods for a better integration of technical and legal instruments.
In practice, the interactions between ICT and law take various forms and go in both directions:
The ICT “objects” are, like any other objects, “objects of law”: on one hand, there is no reason why new technologies and services should escape the realm of law; on the other hand, it may be the case that existing regulations are too specific and need to be adapted to take into account the advent of new, unforeseen technological developments (e.g. part of the privacy regulations become inapplicable in a pervasive computing context, intellectual property laws are challenged by the new distribution modes of electronic contents, ...); understanding precisely when this is the case and how regulation should evolve to cope with the new reality may be a tricky techno-legal issue with potential impacts on both sides.
ICT can also provide new enforcement mechanisms and tools to the justice system. For example, DRM technologies are supposed to “implement” legal provisions and contractual commitments, Privacy Enhancing Technologies (PET) help reduce privacy threats, certified tools can be provided to support “legal” electronic signatures, computer logs (when they meet certain requirements) can be used in courts, ... At a different level, data mining or knowledge management systems can be applied to the extraction of relevant legal cases, to the analysis of computer logs or to the formalization of legal reasoning.
Generally speaking, legal and technical means should complement each other to reduce risks and to increase citizens' and consumers' trust in ICT: on one side, laws (or contracts) can provide assurances which are out of reach of any technical means (or cope with situations where technical means would be defeated); on the other side, technology can help enforce legal and contractual commitments. This synergy should not be taken for granted, however, and, if legal issues (and more generally, the social consequences of the technologies) are not considered from the outset, technological decisions made during the design phase may very well hamper or make impossible the enforcement of legal rights.
In the longer term, further thought needs to be devoted to the crucial problem of managing the conflicting requirements raised by, on one hand, rapidly evolving technologies and, on the other hand, bodies of regulations which, by essence and for the sake of “legal security”, require a form of stability. This complex issue is related to the problem of finding the right level of abstraction in regulations - or striking the right balance between very general principles (which remain stable but offer little indication as far as practical application is concerned, and can thus lead to another form of legal insecurity) and precise provisions (such as e.g. regulations about cookies in German law) whose application may be less prone to interpretation but which are bound to become quickly outdated.
The means used by Licit to reach its objectives are twofold:
Research actions: to investigate specific research topics following a pluridisciplinary approach in order to better integrate legal and technical instruments. This research work will emphasize the use of formal methods as a link between the ICT and legal dimensions.
Networking actions: to favour the emergence of an “ICT and law” research community and to enhance the interest of ICT researchers in this emerging field.
The expected outputs of the first line of action are research results, whereas the outputs of the networking actions will take the form of joint projects (coordination actions, networks, ...), joint events (seminars, conferences) and position papers.
The collaborative project
Priam (Privacy Issues in Ambient Intelligence)
A formal framework for privacy management based on “Privacy Agents” in charge of managing personal data on behalf of their owners. This framework has been devised consistently with the requirements and recommendations resulting from the legal study conducted in the project.
The organization of the
Priam Conference “ICT and law: opportunities, challenges and limitations” (20-21 November, Grenoble)
As set forth in Section , Licit is by nature not only pluridisciplinary but also transversal, in the sense that a wide variety of computer science areas are potentially relevant to its activities (security, software engineering, program specification, validation, knowledge management, automated reasoning, natural language engineering, ...). Encompassing this variety of competences within the action itself is obviously out of reach: the objective of Licit is rather to establish partnerships with research groups (in ICT and law) providing complementary backgrounds, in order to ensure that the highest level of expertise is available to reach the objectives of the action. As far as the legal background is concerned, the relevant domains include intellectual property, individual rights (privacy rights, personal data protection, free speech, ...), contract law, legal proofs, legistics, ...
In this section, we focus on the techniques playing a central role in Licit, namely formal methods, which serve as a link between ICT and law. We motivate their significance in the context of Licit in a first subsection before outlining the relevant techniques in a second subsection.
Beyond their many differences, ICT and law share a strong emphasis on formalism. This commonality is not without reason: in both cases formalism is a way to avoid ambiguity and to provide the required level of rigour, transparency and security. It is interesting to note, for example, that L. Fuller, in his book “The Morality of Law”, puts forward the following distinctive features of a legal system: (1) set of rules (2) without contradiction (3) understandable (4) applicable (5) predictable (6) publicized and (7) legitimate. Among these features, the first five are also often used to characterize a good specification.
As far as software is concerned, the fact that both disciplines refer to the word “code” is not insignificant, and the exploration of their commonalities can be very fruitful (and not only from a theoretical perspective). Indeed, there are many situations where the frontier between the two notions seems to be blurring. Just to take a few examples:
Software contracts typically incorporate references to technical requirements or specifications which can be used, for example, to decide upon acceptance of the software by the customer or validity of an error correction request. In case of litigation, such specifications can also be exploited by judges since they form part of the contract executed by the parties. The contract can thus be seen in such cases as an extension of the technical specification including further requirements such as use rights, delivery schedule, warranty, liability, ...
Several languages have been proposed to express enterprise privacy policies (e.g. P3P by the W3C Consortium and EPAL by IBM); they are used by some commercial sites and can be handled by popular browsers such as Mozilla Firefox or Internet Explorer. The policies published by these sites can thus be used both by software code - checked by browsers or enforced by Privacy Enhancing Technologies (PET) - and by judges, possibly interpreting them as commitments on the privacy policy of the company.
The DRM technologies are supposed to implement legal provisions and contractual commitments about the use of digital contents such as music or video.
More and more transactions are performed on the basis of electronic contracts (SLA: Service Level Agreements for Web and grid services, electronic software licenses, e-commerce contracts, ...).
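To make the privacy-policy example above concrete, the sketch below shows how a browser-side check of a site's declared statements against a user's preferences could look. It is a minimal illustration under purely hypothetical names and a deliberately simplified policy format - nothing like full P3P or EPAL.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyStatement:
    """One declared practice: which data is collected, why, how long it is kept."""
    data: str           # e.g. "email"
    purpose: str        # e.g. "order-processing"
    retention_days: int

def acceptable(statement: PolicyStatement, prefs: dict) -> bool:
    """A statement is acceptable if the user allows this (data, purpose) pair
    and the declared retention does not exceed the user's limit."""
    limit = prefs.get(statement.data, {}).get(statement.purpose)
    return limit is not None and statement.retention_days <= limit

def check_policy(policy: list, prefs: dict) -> list:
    """Return the statements the user's preferences reject (empty list = match)."""
    return [s for s in policy if not acceptable(s, prefs)]

site_policy = [
    PolicyStatement("email", "order-processing", 30),
    PolicyStatement("email", "marketing", 365),
]
user_prefs = {"email": {"order-processing": 90}}  # marketing not allowed at all

rejected = check_policy(site_policy, user_prefs)
# The marketing statement is rejected; a browser could then warn the user.
```

The point of the sketch is the dual reading mentioned above: the same declared statements are both machine-checkable data and, potentially, commitments a judge could interpret.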
In fact, the convergence has developed so much that legal experts have expressed worries that “machine code” might more and more frequently replace “legal code”, with detrimental effects on consumers. This topic has stirred up a series of discussions and publications in the legal community and is bound to remain active for quite a long time. Indeed, the implementation of contractual commitments by computer code raises a number of issues, such as the lack of flexibility of automated tools, the potential inconsistency between computer code and legal code, the potential errors or flaws in computer code itself, or the respective roles of human beings and computers in the process.
The position taken in Licit is that the first step for a fruitful and useful exploration of the relationships between legal and software code is the definition of a formal framework for expressing the notions at hand, understanding them without ambiguity, and eventually relating or combining them.
The formal methods relevant to Licit include (1) modelling methods and (2) validation methods.
Modelling consists in designing models of IT systems to provide support for various kinds of analyses and tools such as consistency analysis, validation, evaluation, certification, animation, ... Modelling can take place at different phases of the life cycle of a system: before, during or after its design and development. Different frameworks have been proposed for system modelling, which can be roughly classified into semi-formal methods and formal methods. Semi-formal methods provide a well-defined syntax for the models (or “views” of the models) while the underlying semantics itself remains informal; in contrast, formal methods rely on a mathematical framework which is used to define the semantics of the models. The benefit of semi-formal methods is the definition of a shared body of notions, presentation rules and graphical tools which improve the communication and mutual understanding between the actors involved in the life cycle of a system (designer, architect, development teams, evaluators, etc.). However, because of their lack of mathematical semantics, they do not necessarily guarantee the absence of ambiguity and they do not support formal verification tools. A standard example of a semi-formal framework is UML. In contrast, formal methods such as Coq or B incorporate interactive theorem provers which help users verify critical properties of their models. In addition, they provide ways to establish a formal link between a model and its implementation (through program extraction in Coq and refinement in B). Both formal and semi-formal methods are relevant to Licit, especially specification techniques based on “execution traces” where the expected behaviour of a system is defined in terms of properties of its sequences of operations.
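As a minimal illustration of the trace-based specification style just mentioned, the following sketch (with invented operation names) expresses an expected behaviour - personal data may be disclosed only under a currently valid consent - as a property of sequences of operations.

```python
# A trace is a sequence of (operation, data_item) pairs, e.g.
# [("consent", "email"), ("disclose", "email")].

def compliant(trace):
    """Trace property: a data item may be disclosed only after consent has been
    given for it, and a revocation cancels any earlier consent."""
    consented = set()
    for op, item in trace:
        if op == "consent":
            consented.add(item)
        elif op == "revoke":
            consented.discard(item)
        elif op == "disclose" and item not in consented:
            return False
    return True

assert compliant([("consent", "email"), ("disclose", "email")])
assert not compliant([("disclose", "email")])
assert not compliant([("consent", "email"), ("revoke", "email"), ("disclose", "email")])
```

A specification of this kind defines the set of compliant traces without saying anything about how the system is implemented, which is precisely what makes it usable as a contract between the technical and legal readings of a requirement.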
Validation consists in checking a system to ensure that it behaves as expected. The expected behaviour of the system, as well as the checking process, can be expressed in various ways. The most ambitious validation methods involve a formal specification of the system (using one of the formalisms set forth in (1) above) and a proof (usually interactive) that the actual implementation is consistent with the specification. An alternative is to use the formal specification to derive test suites in a systematic way, based on well-defined coverage criteria. The validation can also consist in checking simpler properties (typically well-foundedness properties such as type correctness, absence of buffer overflows or implementation of specific security properties) using automatic tools: these tools are called “type checkers” when the properties to be checked can be expressed as types and “program analysers” when they are defined in terms of abstract domains. The main benefit of this category of tools is their automation; their limitation is the restricted expressive power of the language of properties. Licit will use and extend existing validation techniques to perform “a posteriori” as well as “a priori” verifications. A posteriori checks are necessary when a priori verifications are either not practically feasible or insufficient to establish the effective behaviour of a system.
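The idea of program analysers “defined in terms of abstract domains” can be illustrated by a toy sign analysis over a tiny expression language; the encoding below is invented for the example and bears no relation to any particular tool.

```python
# Abstract domain of signs: an abstract value is a subset of {"-", "0", "+"}.
NEG, ZERO, POS = frozenset("-"), frozenset("0"), frozenset("+")
TOP = NEG | ZERO | POS  # "could be anything"

def abs_const(n):
    return NEG if n < 0 else ZERO if n == 0 else POS

def abs_add(a, b):
    """Abstract addition: the possible signs of x + y given the signs of x and y."""
    out = set()
    for sa in a:
        for sb in b:
            if sa == "0": out.add(sb)
            elif sb == "0": out.add(sa)
            elif sa == sb: out.add(sa)   # (+)+(+) = +, (-)+(-) = -
            else: out |= TOP             # (+)+(-) can be anything
    return frozenset(out)

def analyse(expr, env):
    """Evaluate an expression tree ("const", n) | ("var", x) | ("add", e1, e2)
    in the sign domain; env maps variables to abstract values."""
    tag = expr[0]
    if tag == "const": return abs_const(expr[1])
    if tag == "var":   return env[expr[1]]
    if tag == "add":   return abs_add(analyse(expr[1], env), analyse(expr[2], env))
    raise ValueError(tag)

# x is known positive, y is unknown: x + 1 is certainly positive, x + y is not.
env = {"x": POS, "y": TOP}
assert analyse(("add", ("var", "x"), ("const", 1)), env) == POS
assert analyse(("add", ("var", "x"), ("var", "y")), env) == TOP
```

The automation and the limitation mentioned above are both visible here: the analysis runs without user interaction, but it can only answer questions expressible in the chosen abstract domain.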
To conclude this subsection, we stress the fact that the separations into categories (semi-formal versus formal, type inference versus program analysis, testing versus verification) have been used for the sake of the presentation (and because they correspond to different research trends), but the frontiers between them are far from absolute: for example, certain frameworks include both semi-formal and formal techniques, graphical representations such as state diagrams can be endowed with formal semantics, types can be defined in terms of abstract domains, ...
The application areas which are directly concerned by Licit are varied, including:
Ambient intelligence, RFID, video-surveillance, profiling, geographic information systems, electronic passports, ... (especially w.r.t. protection of privacy and individual rights)
Software licensing, IT contracts and services (especially w.r.t. liability, compatibility, intellectual property rights).
Telecom services (especially w.r.t. liability)
Banking services (especially w.r.t. liability)
Digital content (audio, video, information, ...) distribution and protection, Digital Rights Management (especially w.r.t. liability and intellectual property right protection).
Digital libraries (especially w.r.t. intellectual property right)
E-commerce (especially w.r.t. liability and validity of electronic contracts)
E-services, Service Level Agreements, grids, cloud computing (especially w.r.t. liability and validity of electronic contracts).
Forensics and cybercrime (especially w.r.t. liability and digital proofs)
Internet and Web tools (browsers, search engines, ...) and services (Web publishing, Web 2.0, ...), virtual worlds, “Internet of things” (especially w.r.t. protection of privacy and individual rights, liability, intellectual property)
Security, dependability, quality of service (especially w.r.t. liability).
Technical assistance to legal activities: contract management, law making process, impact analysis, on-line dispute resolution, legal reasoning, legal knowledge management, complexity management, e-government, e-administration, ...
The work on risk and liability analysis described in Section is the result of an industrial collaboration in the framework of a “Research Valorisation Agreement” between Inria and the Trusted Logic Group.
Privacy is a complex and multi-faceted notion, both from the social and from the legal point of view, and it has been interpreted in various ways depending on times, cultures and individual perceptions. Notwithstanding such differences, it is widely agreed that the values underlying privacy pertain to fundamental human rights, and many regulations, instruments and recommendations have been introduced to protect them. However, despite apparently strong legal protections, many citizens feel that technologies - especially information technologies - have invaded so much of their lives that they no longer have suitable guarantees about their privacy. As a matter of fact, many aspects of new information technologies render privacy protection difficult to put into practice. Many data communications already take place nowadays on the Internet without the users' knowledge, and the situation is going to get worse with the advent of “ambient intelligence” or “pervasive computing”. One of the most challenging privacy issues in this context is compliance with the “informed consent” principle, which is a cornerstone of most data protection regulations. For example, Article 7 of the EU Directive 95/46/EC states that “personal data may be processed only if the data subject has unambiguously given his consent” (unless waiver conditions are satisfied, such as the protection of the vital interests of the subject). In addition, this consent must be informed, in the sense that the controller must provide sufficient information to the data subject, including “the purposes of the processing for which the data are intended”. Requiring the user of ambient intelligence environments to deliver his consent before each communication of personal data would largely defeat the purpose of providing these systems in the first place. This would lead to a situation where individuals would just have the choice between refusing the new services or renouncing their privacy rights.
One of the results of the Priam project is a proposal for a technical and legal infrastructure to solve this apparent discrepancy between ambient intelligence technologies and informed consent. The solution put forward in the project is based on the notion of “Privacy Agent”, a dedicated piece of software acting as a “surrogate” of the subject and managing his personal data on his behalf. The subject can define his privacy requirements once and for all, with all the information and assistance required, and then rely on his Privacy Agent to implement these requirements faithfully. But this possibility also triggers a number of new questions on the legal side: for example, to what extent should a consent delivered via a software agent be considered legally valid? Are the current regulations flexible enough to accept this kind of delegation to an automated system? Can the Privacy Agent be “intelligent” enough to deal with all possible situations? Should subjects really rely on their Privacy Agent, and what would be the consequences of any error (bug, misunderstanding, ...) in the process? In order to shed some light on these legal issues, three main aspects of consent have been studied in Priam: (1) the legal nature of consent (unilateral versus contractual act), (2) its essential features (qualities and defects) and (3) its formal requirements. In a second stage, drawing the lessons learned from this legal analysis, a privacy architecture has been proposed to use Privacy Agents as valid means for the consent of the data subject. Several kinds of Privacy Agents have been proposed in Priam, including:
Subject Agents, which are installed on a device attached to the subjects (for example their mobile phones) and control all disclosures of their personal data (whether stored on the same device or delivered through other means such as RFID tags or sensors).
Controller Agents, which are installed on the sites of the controllers and manage the access to and use of the personal data collected by the controllers. Controller Agents implement the commitments of the controllers and ensure that all requirements set by the subjects are met (retention delay, access rights, modification rights, ...).
Auditor Agents, which are launched by certified authorities and interact with Controller Agents to check their execution traces.
As far as the legal framework is concerned, the roles of the different actors involved in the process have been defined precisely (including the roles of the subjects, of the controllers, of the Privacy Agent providers and of the personal data authority), and contract models have been proposed to formalize the commitments of the Privacy Agent provider with respect to the subjects and to the controllers. In order to minimize the risks of misunderstanding, a simple privacy language has been devised. This language is a restricted (pattern-based) natural language dedicated to the expression of privacy policies (the requirements of the subject on one side and the commitments of the controller on the other side). Subjects (respectively controllers) can interact with their agents through a user-friendly interface and double-check a natural-language description of their privacy requirements (respectively privacy commitments) before accepting them. In order to avoid ambiguities in the expression of privacy policies, a mathematical semantics of the privacy language has been defined. This semantics characterizes precisely the expected behaviour of the Privacy Agents (based on the privacy policies defined by their users) in terms of compliant execution traces. In addition, all privacy-related actions are recorded into log files which can be audited automatically by Auditor Agents (to check that they are consistent with the authorized execution traces) and can also be used as evidence in case of legal dispute.
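As an illustration of what an a posteriori audit of such log files could look like, here is a sketch of one check an Auditor Agent might perform; the log format, field names and retention rule below are hypothetical, not Priam's actual formats.

```python
from datetime import date, timedelta

# Hypothetical log format: each entry records when a data item was collected
# and, if applicable, when it was deleted by the Controller Agent.
log = [
    {"item": "email",   "collected": date(2008, 1, 10), "deleted": date(2008, 2, 1)},
    {"item": "address", "collected": date(2008, 1, 10), "deleted": None},
]

def audit_retention(log, max_retention_days, today):
    """Return log entries kept longer than the retention period the subject
    agreed to -- candidates for a compliance report by the Auditor Agent."""
    limit = timedelta(days=max_retention_days)
    violations = []
    for entry in log:
        end = entry["deleted"] or today
        if end - entry["collected"] > limit:
            violations.append(entry)
    return violations

violations = audit_retention(log, max_retention_days=30, today=date(2008, 6, 1))
# "email" was deleted after 22 days (fine); "address" is still held months later.
```

Because the check runs over recorded traces rather than over the implementation, its outcome can double as evidence in a legal dispute, which is the dual use the paragraph above describes.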
A broad variety of methods and techniques have been proposed for IT security analysis, both by the academic world and by industry, with a number of differences in terms of scope, objectives and approaches. From our experience, however, one of the main challenges for the security analyst remains to get a representation of the security of the system which is both sufficiently complete and sufficiently rigorous. Rigour is especially necessary in order to establish the precise responsibilities of all actors and stakeholders. Responsibility can be understood here both in the technical sense and in the legal sense (liability). Indeed, a large number of actors are usually involved in the design and operation of modern IT systems, and security issues may increasingly become a matter of liability, especially when substantial value is at stake. Evaluating existing security analysis methods by the above yardsticks led us to classify them into two main categories:
In the first category, which includes most industrial methods and standards, some level of systematization is attained through the use of catalogues or checklists, which does not provide a sufficient level of rigour. In addition, these methods are appropriate only for the analysis of established (and relatively stable) categories of products, such as operating systems or firewalls: they cannot be applied to the analysis of new products in emerging markets for which, typically, no database of vulnerabilities is yet available.
Methods in the second category provide a systematic approach based on semi-formal or formal models of the system under study. Different levels of rigour can be attained depending on the formalism used to represent the models and the tools available to analyse them. However, these methods, which originate mostly from the academic world, usually focus on technical issues and leave organizational aspects out of their scope.
The ASTRA (Asset Tracking) method has been devised precisely to fill this gap and provide a framework for the systematic security analysis of innovative products, addressing both organizational and technical aspects in an incremental and uniform way. The method is iterative and relies on the systematic collection and analysis of all security-relevant information to detect inconsistencies and assess residual risks. The core of the ASTRA method is the construction and analysis of functions representing different views of the system. These views include traditional notions such as locations, subjects, access rights, contexts, trust levels and sensitivity levels, but also responsibility functions. For example, each constraint on the access to a location or an asset by a subject is associated with an actor in charge of ensuring this constraint.
The three main phases of the method are (1) the collection of information, (2) the detection of inconsistencies and (3) the risk assessment. The goal of the first two phases is to build a consistent and comprehensive view of the security of the system. The third phase is repeated, possibly with intermediate decision-making steps (e.g. deciding to implement additional countermeasures), until a stable state is reached.
A significant advantage of the approach is that it separates the issue of defining the set of responsible subjects from that of evaluating the risk level. Whereas the risk level depends on the initial assumptions about the trust and sensitivity of subjects and assets, the definition of responsible subjects does not rely on such assessments. This property is illustrated by the confinement theorem shown in . Another important benefit of ASTRA, from the practical point of view, is that organizational rules can be handled in exactly the same way as technical rules: individual actors such as security officers or night watchmen can be represented as subjects, physical goods or authorization documents can be represented as assets, rooms or premises can be represented as locations, ...
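A rough sketch, with invented subjects, locations and actors (not taken from ASTRA itself), of how such views and responsibility functions might be encoded and checked for one simple kind of inconsistency - a denied access with nobody in charge of enforcing the denial:

```python
# Hypothetical encoding of two ASTRA-style "views": an access-right function
# (which subjects may enter which locations) and a responsibility function
# (which actor is in charge of enforcing each constraint).
access_rights = {
    ("engineer", "server-room"): True,
    ("visitor",  "server-room"): False,
    ("visitor",  "lobby"):       True,
}

# Organizational and technical rules are handled uniformly: a night watchman
# is a subject responsible for a physical constraint, a badge reader for a
# technical one.
responsible = {
    ("visitor",  "server-room"): "night-watchman",
    ("engineer", "server-room"): "badge-reader",
}

def inconsistencies(access_rights, responsible):
    """Roughly phase (2) of the method: flag constraints with no actor in
    charge of ensuring them (here, only denied accesses need enforcement)."""
    return [pair for pair, allowed in access_rights.items()
            if not allowed and pair not in responsible]

# Every denied access has a responsible actor, so this view is consistent.
assert inconsistencies(access_rights, responsible) == []
```

The uniform treatment is the point: adding an organizational rule (a guard, a paper authorization) changes the data, not the analysis.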
The Cible programme of Région Rhône-Alpes funds a collaborative project involving Licit, the Valorisation Service of Inria Grenoble Rhône-Alpes and the research group GRDS (“Research Group in Law and Science”) of the Law Faculty of Grenoble (University Pierre Mendès-France). The main objective of this project is to study, from a dual - academic and industrial - perspective, the legal issues arising from software license agreements, especially liability issues. This project funds a doctoral position (Sophie Guicherd).
Priam
One of the results of Priam is a proposal for a technical and legal infrastructure to solve the apparent discrepancy between ambient intelligence technologies and the informed consent of the data subject, which is the cornerstone of European regulations in terms of personal data protection. The solution put forward in Priam is based on the notion of “Privacy Agent”, a dedicated piece of software acting as a “surrogate” of the subject and managing his personal data on his behalf. A formal framework has been proposed for Privacy Agents, and the legal issues raised have been analyzed and integrated in the solution. Further details on the results of Priam are presented in Section .
Lise
One of the motivations of the Lise project is the fact that, as observed by several authors, software quality and patterns of security frauds are directly related to legal liability patterns. But the precise definition of the expected functionalities of software systems is quite a challenge, not to mention the use of such a definition as a basis for a liability agreement. Taking up this challenge is precisely the objective of Lise. To achieve this goal, the project will study liability issues from both the legal and the technical points of view, with the aim of putting forward methods (1) to define liability in a precise and unambiguous way and (2) to establish liability in case of disagreement.
Fluor
The Fluor project aims at protecting corporate documents circulating within companies. More precisely, the objective of the project is to unify information flow models and usage control models and to analyze the legal issues raised by the use of these documents. Licit will put emphasis on privacy issues and on the design of a technical framework making it easier for organizations to handle privacy requirements and comply with privacy regulations.
Persopolis (2008-2010) is a project funded by the Systematic and TES competitiveness clusters. The coordinator is OCS (Oberthur Card Systems) and the other partners of the project are CEV, ENSICaen, IAE Caen, the Law Faculty of Caen, Inria (Licit), NBSTech and Trusted Logic.
The smart card life cycle includes, before delivery to the end user, a personalization phase which consists in loading into the card memory data specific to the user (typically name, credentials, certificates, ...). This personalization phase, which is highly critical, is generally conducted in the secured premises of the card manufacturer or subcontracted to a third party (a “personalizer”) offering high security guarantees. In order to favour the deployment of service cards managed by local authorities (e.g. city councils, social services, employment agencies, ...), it is necessary to reconsider this centralized personalization process while maintaining the required security guarantees. The objective of the Persopolis project is to define the technical and legal requirements for the personalization of smart cards in such “open” contexts. Emphasis will be put on the management of personal data and the associated liability issues.
Licit collaborates with the Aces, Amazones and Pop Art project-teams in the context of Priam and Lise.
Licit collaborates with the following research groups:
GRDS (“Research Group in Law and Science”) - Law Faculty of Grenoble, University Pierre Mendès-France (Cible project).
CERCRID (“Research Group in Law”) - Law Faculty of Saint-Etienne, University Jean Monnet (Lise project).
DANTE (“Business and New Technologies Law”) - Law Faculty of Versailles Saint-Quentin (Lise project).
PrINT (“Intellectual Property”) - Law Faculty of Caen (Lise and Persopolis projects).
SSIR (“Security of Information Systems and Networks”) - Supelec (Lise project).
Verimag - INPG Grenoble (Lise project).
SISTEM - ENSICaen (Persopolis project).
CIME - IAE Caen (Persopolis project).
IODE (“European Regulation and Human Rights”) - CNRS (Fluor project).
LIUPPA - University of Pau (Fluor project).
SERES, PRATIC, LUSSI - ENSTB (Fluor project).
Terre-Océan - University of Polynésie Française (Fluor project).
Licit takes part in the activities of the NESSI TSD WG.
As part of the networking activities put forward in Section , Licit has organized the following events:
Priam Conference “ICT and law: opportunities, challenges and limitations” (20-21 November, Grenoble)
First edition of the seminar “DIAGONALES: Information Technologies and Society”
Daniel Le Métayer has also been a member of the scientific committees of:
The Annual Conference on Privacy Protection CPDP (to be held in Brussels, 16-17 January 2009)
The first International Workshop on Advances in Policy Enforcement (APE'08)
Daniel Le Métayer and Shara Monteleone have given a course on privacy at INSA Lyon.
Eduardo Mazza, co-advised by Daniel Le Métayer (with Marie-Laure Potet, Verimag), since November 2008. PhD in computer science, INPG.
Sophie Guicherd, co-advised by Daniel Le Métayer (with Etienne Vergès, GRDS, Law Faculty of Grenoble), since October 2008. PhD in law, Pierre Mendès-France University.