Relevant EU Regulations
This page introduces EU legal and policy requirements in the context of a smart city, namely for the use of collected data by governmental entities and the sharing of these data between different actors.
To illustrate how the different texts should be taken into consideration, we point out their relevance and present general recommendations for organizations implementing, or aiming to implement, IMPETUS and similar technologies in a smart city context.
Four main legal and policy texts are identified and briefly introduced hereafter:
1. General Data Protection Regulation (GDPR)
Title: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
Summary: The GDPR is a globally known and respected piece of legislation offering a high level of protection for the collection and processing of personal data. The Regulation is directly applicable in all Member States and offers a unified approach to protecting personal data in the European Union. Any data controller or processor who does not abide by the GDPR rules faces severe penalties: fines of up to €20 million or 4% of global annual turnover (whichever is higher); in addition, data subjects have the right to seek compensation for damages.
The GDPR builds on the 1950 European Convention on Human Rights, which states that everyone has the right to respect for his private and family life, his home and his correspondence. As technology progressed (the spread of online banking from the 2000s, the emergence of Facebook in 2006, the growth of Google, etc.), the EU recognized the need for modern protections, and the GDPR entered into application on May 25, 2018.
The GDPR defines an array of legal terms such as:
Personal data is any information that relates to an individual who can be directly or indirectly identified. Names and email addresses are obviously personal data. Location information, ethnicity, gender, biometric data, religious beliefs, web cookies, and political opinions can also be personal data. Pseudonymous data can also fall under the definition if it is relatively easy to identify someone from it.
Data processing is any action performed on data, whether automated or manual. The examples cited in the text include collecting, recording, organizing, structuring, storing, using, and erasing; in short, almost any operation.
Data subject is the person whose data is processed.
Data controller is the person who decides why and how personal data will be processed.
Data processor is a third party that processes personal data on behalf of a data controller. The GDPR has special rules for these individuals and organizations. They could include cloud servers or email service providers.
The GDPR also enumerates a comprehensive list of data subjects' rights, e.g., the right to be informed, the right of access, the right to erasure, the right to restrict processing, etc.
If data are processed, this must be done in accordance with these protection and accountability principles, and the data gathered must be handled by implementing appropriate technical and organizational measures.
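One technical measure explicitly mentioned in the GDPR is pseudonymisation. As a purely illustrative sketch (the function name, key handling, and record layout below are hypothetical, not part of any IMPETUS specification), a direct identifier can be replaced with a keyed hash so that re-identification requires access to a separately stored secret:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The secret key must be stored separately from the pseudonymized
    data; without it, reversing the mapping is computationally hard.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Hypothetical example record; in practice the key would come from a key vault.
key = b"example-secret-key"
record = {"email": "alice@example.com", "visits": 12}
record["email"] = pseudonymize(record["email"], key)
```

Note that pseudonymized data generally remains personal data under the GDPR, since the controller (holding the key) can still re-identify the individual; pseudonymisation reduces risk but does not remove the data from the Regulation's scope.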
Relevance to IMPETUS: The GDPR is relevant for the IMPETUS tools and the platform when they collect and analyze personal data for purposes other than those described under the LED Directive section.
When we use capitalized terms, we refer to the definitions given by the GDPR and summarized above.
Data Protection Officers (DPOs) are figures at the heart of the legal framework established by the GDPR, helping many organisations comply with its provisions. Under the GDPR, certain data controllers and processors are required to designate a DPO. Nevertheless, even when the GDPR does not specifically require the appointment of a DPO, organisations may find it useful to designate one on a voluntary basis. A Data Protection Officer must be appointed when:
the processing is carried out by a public authority, other than a court acting in a judicial capacity,
the core activities require the entity to monitor people systematically and regularly on a large scale,
the core activities consist of large-scale processing of special categories of data or of data relating to criminal convictions and offences.
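The three appointment triggers above can be expressed as a simple screening check. The sketch below is an illustrative simplification (the function and parameter names are hypothetical, and a real assessment of "core activities" or "large scale" requires legal judgment, not booleans):

```python
def dpo_required(is_public_authority: bool,
                 acting_in_judicial_capacity: bool,
                 large_scale_monitoring: bool,
                 large_scale_special_categories: bool) -> bool:
    """Screening check for the mandatory-DPO triggers summarized above.

    A DPO is mandatory if any single trigger applies; courts acting in
    a judicial capacity are carved out of the public-authority trigger.
    """
    public_authority_trigger = is_public_authority and not acting_in_judicial_capacity
    return (public_authority_trigger
            or large_scale_monitoring
            or large_scale_special_categories)
```

For example, a municipal security hub (a public authority, not a court) would already satisfy the first trigger regardless of the scale of its monitoring activities.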
All users are nevertheless reminded to ensure that they have all relevant GDPR compliance protocols in place (for more information, please refer to this link).
2. Law Enforcement Directive (LED)
Title: Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA
Summary: The so-called Police Directive or LED was adopted in the same package as the GDPR and has been effective since May 2018. Because the GDPR excludes from its scope law enforcement bodies' criminal investigations and operations safeguarding against and preventing threats to public security, it became necessary to regulate separately the protection of personal data processed in such circumstances. The LED adopts principles for protecting personal data similar to those of the GDPR, with some restrictive elements reflecting the particularities of criminal investigations and investigations of public security threats. The Directive needs to be implemented in each Member State's domestic law by a specialized act, which can differ to a certain extent from one Member State to another. It is therefore necessary to consult local legal experts when assessing the impact of the relevant national acts.
The LED is not limited to processing by bodies that might typically be considered 'law enforcement authorities'; it applies to any processing for law enforcement purposes carried out by a public or private body that fits the definition of 'competent authority' (such as a local authority prosecuting litter fines, or a local transport company pursuing ticket offences). This means that a potentially very large number and variety of bodies might fall under its scope, and the applicability of this regime needs to be assessed on a case-by-case basis.
Relevance to IMPETUS: The LED is relevant for all uses of IMPETUS tools and the platform that can be utilized in police (and related) investigations to tackle crime or threats to public security. Having in mind that, in principle, the law enforcement bodies are responsible for public security, each operation of personal data collection and subsequent analysis, sharing, storage, deletion, and others that are done to prevent public security threats or aid in criminal investigation, should be conducted under the auspices and control of a relevant law enforcement body (and, preferably, their personnel), with the security hub established by the partner city acting as a partner.
In order to evaluate if a technology or service is within the scope of the LED, the following key questions should be considered:
Is the body/entity in question a public authority, competent for law enforcement purposes?
Is the body/entity in question any other body or entity authorized by law to exercise public authority and public powers for law enforcement purposes?
Is the processing in question being carried out for the purposes of (a) the prevention of criminal offences, (b) the investigation of criminal offences, (c) the detection of criminal offences, (d) the prosecution of criminal offences and/or (e) the execution of criminal penalties?
An affirmative answer to these questions indicates that the LED potentially applies. For example, given that IMPETUS was designed to enhance the resilience of cities in the face of security threats in public spaces, it is recommended that all city users collaborate with the relevant local law enforcement bodies. This should ensure that the police department assumes the principal role in all personal data manipulation activities conducted under the LED and the relevant domestic law.
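The key questions above amount to a two-part test: the body must be a competent authority (a public authority or another body authorized by law to exercise public powers for law enforcement purposes), and the processing must pursue one of the five listed purposes. A hypothetical screening sketch (names and structure are illustrative only; real scoping needs case-by-case legal analysis):

```python
# The five law-enforcement purposes listed in the LED scope question above.
LED_PURPOSES = {
    "prevention",
    "investigation",
    "detection",
    "prosecution",
    "execution_of_penalties",
}

def led_may_apply(is_competent_authority: bool, purpose: str) -> bool:
    """Rough screening: both the 'competent authority' test and the
    law-enforcement-purpose test must be met for the LED to apply."""
    return is_competent_authority and purpose in LED_PURPOSES
```

If either test fails, the processing would typically fall under the GDPR regime instead of the LED; the sketch deliberately omits the many edge cases that require expert assessment.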
3. Ethics Guidelines for Trustworthy AI
Title: Ethics Guidelines for Trustworthy AI, Independent High-Level Expert Group on Artificial Intelligence
Summary: The EU Ethics Guidelines for Trustworthy AI (EGTAI) is a set of recommendations that all deployers of artificial intelligence algorithms should adhere to in order to ensure so-called "trustworthy artificial intelligence." According to the Guidelines, trustworthy AI should be:
(1) lawful - respecting all applicable laws and regulations,
(2) ethical - respecting ethical principles and values,
(3) robust - both from a technical perspective and taking into account its social environment.
The Guidelines assess seven basic principles and develop a set of issues and checklists to consider when preparing to deploy such algorithms:
Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.
Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured.
Transparency: the data, system and AI business models should be transparent. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination.
Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly.
Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
Relevance to IMPETUS: As the IMPETUS tools and the platform are largely based on AI algorithms, the Guidelines have been designated as the core consideration for the ethical recommendations relevant to all IMPETUS partners and deployers.
4. Proposal for the Artificial Intelligence Act
Title: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
Summary: The draft Artificial Intelligence (AI) Act aims to become one of the first legislative instruments on a global scale to regulate the use of different types of computer algorithms across a variety of uses. The proposed Act builds on a risk-based approach, applying different levels of requirements to different AI systems based on the potential risk they pose to the fundamental rights of individuals, such as risks to health and safety, respect for private life and protection of personal data, non-discrimination, and equality between women and men. This results in three categories of AI systems:
Prohibited AI practices: These are AI systems that impose a substantial threat to the EU's values and fundamental rights. This includes practices that have a significant potential to manipulate a person through subliminal techniques beyond their consciousness or exploit specific vulnerable groups such as children or persons with disabilities in order to materially distort their behavior in a manner that is likely to cause them, or another person, psychological or physical harm.
AI systems which involve a high risk: AI systems which constitute a high risk are allowed under the proposed Act but are strictly regulated. The classification as a high-risk AI system is based on the intended purpose of the system itself. The proposed Act also sets a list of legal requirements and obligations to be met by the actors in the AI system's sphere, e.g., importers, distributors and authorized representatives of the high-risk AI system.
AI systems which involve limited or minimal risks: These systems must be transparent to the human interacting with or being analyzed by the AI system. However, AI systems used for biometric categorization and deepfakes can be used without informing the recipient if the use is authorized by law to detect, prevent, investigate and prosecute criminal offences.
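The risk-based logic of the three categories above can be sketched as a simple triage, checked in order of severity. This is an illustrative simplification only (the names and the two boolean inputs are hypothetical; the actual classification depends on the Act's annexes and detailed legal criteria):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED_OR_MINIMAL = "limited_or_minimal"

def classify_system(uses_prohibited_practice: bool,
                    intended_purpose_is_high_risk: bool) -> RiskTier:
    """Triage an AI system into one of the three tiers described above.

    Checks are ordered from most to least severe: a prohibited practice
    rules the system out entirely, high-risk intended purposes trigger
    the strict regime, and everything else falls into the residual tier.
    """
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if intended_purpose_is_high_risk:
        return RiskTier.HIGH
    return RiskTier.LIMITED_OR_MINIMAL
```

The ordering matters: a system with a high-risk intended purpose that also relies on a prohibited practice (e.g., subliminal manipulation) is banned outright, not merely regulated.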
Relevance to IMPETUS Platform: As it currently stands, the AI Act will impact the IMPETUS tools and the platform, especially when considering their application in police matters and public security at large. Detailed protocols and supervisory mechanisms must precisely regulate such tools. Advanced technologies, such as face recognition, will be permitted only in exceptional cases and with prior (or, in urgent cases, subsequent) approval by a court or other competent authority.