Section: New Results
Despite apparently strong legal protections, many citizens feel that information technologies have invaded so much of their lives that they no longer have suitable guarantees about their privacy. Indeed, many aspects of new information technologies make privacy protection difficult to put into practice. Many data communications already take place on the Internet without users' knowledge, and the situation will worsen with the advent of "ambient intelligence" or "pervasive computing". One of the most challenging privacy issues in this context is to reconcile this continuous flow of data with privacy protection. One possible option to improve the situation when data has to be disclosed (or when it is practically impossible to object to its disclosure) is to strengthen the obligations of data controllers and enforce more stringent rules on the use of personal data. We have followed this approach, considering both
technical means to define and enforce obligations and
possible evolutions of data protection regulations to avoid discriminations based on the use of personal data.
Technical means: specification and a posteriori verification of obligations
Legal means: privacy and non-discrimination
In order to address the new threats to individual rights made possible by the progress of information technologies, we have proposed to distinguish two very different types of data collection:
The collection of data as part of formal procedures with clearly identified parties or in the course of clearly identified events, recognized as such by the subjects (e.g. submitting a file, filling in a questionnaire, using a smart card or providing one's fingerprint to enter a building).
The apparently insignificant and almost continuous collection of data that will become more and more common in the digital society (digital audit trails, audio and video recordings, etc.). This collection may be more or less perceived or suspected by the subject, or remain completely invisible and unsuspected. Another worrying phenomenon is the automatic generation of new knowledge by data mining and knowledge inference techniques. In such situations, the subject may be unaware not only of the process but also of the generated knowledge itself, even though this knowledge is about him and could be used to take actions affecting him (e.g. denying him a job or an insurance contract, or raising the price of a service to the level he would be prepared to pay).
Regulations on personal data protection were originally designed to address the first type of situation. Efforts are being made to adapt them to the complex issues raised by the second type of data collection, but they tend to be increasingly ineffective in these situations. The main cause of this ineffectiveness is their underlying philosophy of a priori and procedural controls. Starting from this observation, we have argued that a possible option is to strengthen a posteriori controls on the use of personal data and to ensure that the victims of data misuse can obtain compensation significant enough to act as a deterrent for data controllers. We have also argued that the consequences of such misuses of personal data often take the form of unfair discrimination, and that this trend is likely to increase with the generalization of profiling. For this reason, we advocate the establishment of stronger connections between anti-discrimination and data protection laws, in particular to ensure that any data processing resulting in unfair differences of treatment between individuals is prohibited and subject to effective compensation and sanctions.
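As an illustration only (the obligation model, names and fields below are ours, not taken from the cited work), a posteriori verification of usage obligations can be sketched as an audit-log check: each data item carries declared obligations (allowed purposes, deletion deadline), and compliance is verified after the fact by replaying the access log against them.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified obligation model: a data item may only be
# used for certain purposes, and must be deleted by a given date.
@dataclass(frozen=True)
class Obligation:
    data_item: str
    allowed_purposes: frozenset
    delete_by: date

# One entry of the controller's audit trail.
@dataclass(frozen=True)
class LogEntry:
    data_item: str
    purpose: str
    timestamp: date

def violations(log, obligations):
    """A posteriori check: return the log entries that breach an obligation."""
    rules = {o.data_item: o for o in obligations}
    breaches = []
    for entry in log:
        rule = rules.get(entry.data_item)
        if rule is None:
            continue  # no declared obligation covers this item
        # A use is a breach if its purpose is not allowed, or if it
        # occurs after the deletion deadline (the data should be gone).
        if entry.purpose not in rule.allowed_purposes or entry.timestamp > rule.delete_by:
            breaches.append(entry)
    return breaches
```

For instance, with an obligation restricting an e-mail address to billing until 2010-01-01, a marketing use or any use after the deadline would be flagged; significant compensation for such detected breaches is what would make the a posteriori control a deterrent.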