Privacy Framework
The privacy framework provides a procedural process and technical toolboxes to help organizations identify and manage privacy risks, so that they can build innovative products and services while protecting individuals’ privacy within the applicable regulatory and legal context.
The privacy framework highlights the importance of evaluating privacy risks from the design phase of the product, through a Privacy Impact Assessment (PIA), also called a “Data Protection Impact Assessment” in accordance with Art. 35 GDPR. A PIA consists of analysing how personal information is collected, used, shared, and maintained, in order to ensure the main privacy properties.
This page introduces:
the privacy properties that must be fulfilled by the products and services concerned,
the PIA methodology as detailed by the European Union Agency for Cybersecurity (ENISA),
use-case studies as introduced by the French data protection agency (CNIL).
1- Privacy properties and general principles
The European Data Protection Board established guidelines giving general guidance on the obligation of Data Protection by Design and by Default (DPbDD), set forth in Article 25 of the GDPR. DPbDD is an obligation for all controllers, irrespective of their size and of the complexity of the processing. To implement the requirements of DPbDD, it is crucial that the controller understands the data protection principles and the data subject’s rights and freedoms.
In order to bridge the gap between the legal framework and the available technological implementation measures, the European Union Agency for Cybersecurity (ENISA) provides an inventory of existing approaches, privacy design strategies, and technical building blocks of various degrees of maturity from research and development. Starting from the privacy principles of the legislation, important elements are presented as a first step towards a design process for privacy-friendly systems and services.
The main privacy properties that must be enforced by products and services dealing with personal and sensitive data in the context of a smart city are summarized as follows:
Anonymity: the ability of a user to access a resource or service without disclosing their identity to third parties. That is, a user is anonymous when they are not identifiable within a set of subjects, known as the anonymity set. Several levels of anonymity have been defined in the literature, ranging from complete anonymity (i.e., no one can reveal the identity of the user) to pseudo-anonymity (i.e., the identity is generally not known but can be disclosed if necessary) to pseudonymity (i.e., multiple virtual identities can be created and used in different settings).
Data minimization: a fundamental feature of privacy preservation. It requires that service providers collect and process the minimum amount of information needed for the appropriate execution of a service or a particular transaction. The goal is to minimize the amount of personal information collected by service providers, for instance to reduce the risk of profiling and tracking users.
Unlinkability: this property is essential for user privacy and is closely related to the anonymity property. Unlinkability of two or more Items of Interest (IoIs, e.g., users, messages, actions, etc.) from an attacker’s perspective means that within the system (comprising these and possibly other items), the attacker cannot sufficiently distinguish whether these IoIs are related or not.
Unobservability: this property means the undetectability of an IoI by all users not involved in it, and the anonymity of the user even against the other user(s) involved in that IoI. That is, a user can use a resource or a service without being noticed by others. Unobservability also requires that third parties cannot determine whether an operation is running.
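Two of these properties, data minimization and pseudonymity, lend themselves to a short illustration. The sketch below is a hypothetical example (the field names, the required-field set, and the smart-city parking scenario are assumptions, not part of any cited guideline): it drops every field the service does not need and replaces the direct identifier with a keyed hash, so that only the key holder could link records back to a user.

```python
import hashlib
import hmac

# Hypothetical scenario: a smart-city parking service only needs the
# zone and timestamp of an event, not the user's full profile.
REQUIRED_FIELDS = {"zone", "timestamp"}  # assumed minimum for this service

def minimize(record: dict) -> dict:
    """Data minimization: keep only the fields the service needs."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Pseudonymity: derive a stable pseudonym with a keyed hash (HMAC).
    Without the key, the pseudonym cannot be linked back to the user."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

raw = {"user_id": "alice", "zone": "A3",
       "timestamp": "2024-05-01T10:00", "home_address": "1 Main St"}
key = b"server-side secret"  # would be kept by the controller only

event = minimize(raw)
event["pseudonym"] = pseudonymize(raw["user_id"], key)
print(event)  # no user_id or home_address left in the stored event
```

Because the HMAC is deterministic per key, the same user yields the same pseudonym across events, which preserves utility (e.g., counting distinct users) while keeping the identity undisclosed to third parties.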
2- Privacy Impact Assessment (PIA)
The European Union Agency for Cybersecurity (ENISA) provides a data risk assessment methodology. It presents a simplified approach that can guide Small and Medium Enterprises (SMEs) through their specific data processing operations and help them evaluate the relevant security risks. This methodology can also serve as a first step for other kinds of organizations, which can then add further evaluations as needed. ENISA’s guidelines for SMEs propose a risk evaluation approach based on four steps, as follows:
the definition of the processing operation and its context.
the understanding and evaluation of impact.
the definition of possible threats and evaluation of their likelihood (threat occurrence probability).
the evaluation of risk (combining threat occurrence probability and impact).
Following the evaluation of risk, the SMEs can adopt technical and organizational security measures (from a proposed list) that are appropriate to the level of risk, as reported in the ENISA’s guidelines.
3- Examples of Privacy Impact Assessments
The ENISA initiative relies mainly on a detailed process provided by the CNIL, the French data protection agency. It discusses a set of good practices for addressing the privacy risks and the impact on the protection of personal data that a processing operation can entail, and provides comprehensive guidance, with an illustrated use case, for conducting this process efficiently, as follows: