Ethical and Legal Issues
This page provides a summary of ethical and legal issues that need to be considered by both IMPETUS and general users. These issues are mainly related to the use (collection, analysis, storage, access, and similar) of “Big Data” in security operations.
It is important to emphasize that the following analysis is tightly bound to the overall ecosystem, i.e., national and international regulations and guidelines, technical environments, and the actors involved, in the context of security and intelligence data-gathering operations.
1- General context
Smart systems aim at empowering human beings, allowing them to make informed decisions and fostering their fundamental rights, while at the same time ensuring proper oversight mechanisms through a human-in-the-loop approach. These technologies have to support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. They should act as enablers of a democratic, flourishing, and equitable society by supporting the user’s agency, fostering fundamental rights, and allowing for human oversight. A crucial component is thus to achieve Trustworthy AI, which is tightly related to technical robustness and closely linked to the principle of prevention of harm. This should also apply to potential changes in the system's operating environment or the presence of other agents (human and artificial) that may interact with the system in an adversarial manner.
From this perspective, and considering the relevant EU recommendations, it is of utmost importance to carefully identify the issues raised, in association with the context of usage.
2- Socio-technical environment
Whereas the public, in general, supports the primary efforts aimed at enhancing public security, the data-collection technology used in security operations may raise concerns about privacy preservation. Timely and efficient law enforcement operations may require intrusions into privacy, thus requiring precise and detailed analysis of the available ethical and legal guidelines and rules on how the engaged actors must handle such data processing. The ethical considerations revolve around the principal conflict of interests between collecting and analysing big data on one side, and the need to protect personal data on the other.
In the context of a smart city, this conflict is intensified by the ever-increasing smart city capabilities for gathering big data, challenged by the civilizational striving to promote data rights as human rights (human-centric approach). As noted by the European Commission (EC) in 2015, it is challenging to balance citizens' personal data confidentiality with the transgression into personal data required by law enforcement and judicial proceedings. The purpose of an ethical analysis is to reconcile these two, at first glance, opposing forces, and to explore avenues of co-existence (socio-technical environment), as represented in the table below.
| Security interests | Privacy interests |
|---|---|
| Necessity of identification | Right to anonymity |
| Access to data | Control of data and information symmetry |
| Arbitrary post-hoc information on data access | Informed access with consent |
| Storing and use of data for other purposes | Right to be forgotten |
| National and public security interests | Limitation of governmental surveillance |
| Right of secrecy | Right to redress |
| System of immunity and impunity | Responsibility and liability |
Table 1: Conflicting interests
3- Involuntary data collection and manipulation
One of the most critical junctions in evaluating the ethical permissibility of data collection and manipulation concerns the instances of (semi-)automated involuntary data collection and manipulation activities. These activities include involuntary data collection (e.g., sensors, CCTV cameras, smart devices), involuntary identification (e.g., face recognition, biometric data, automatic voice detection), involuntary tracing and tracking, and the consequent analysis of such data. These activities are relevant both to the person whose device is being accessed and to all individuals whose data is captured by such a device.
3.1- Mass surveillance
Concerns about the all-out collection and manipulation of data are particularly pronounced in the distinction between targeted surveillance and mass surveillance. Indeed, some IMPETUS tools may enable mass surveillance, given that they implement AI algorithms.
In the context of modern security challenges, it would be irresponsible to neglect and abandon the means and tools for data mining and analysis offered by AI. At the same time, it is necessary to understand what makes mass and targeted surveillance justifiable, and what the concrete benefits of employing AI algorithms in such operations are. This evaluation’s outcome must have a fair and beneficial effect on the security and intelligence sector and on society. A positive impact on society (common good principle) must outweigh all negative aspects. Potential threats and risks (internal/external, intentional/accidental) must be acknowledged and mitigated to the best extent possible (maximization of opportunities and minimization of risks principle).
For concrete analysis of IMPETUS Tools, please refer to the ethical analysis chapter.
3.2- Common goods
The common good principle should point to a particular value of general appreciation that justifies limiting other values, principles, and rights. Hereinafter, we will mainly focus on what the European Data Protection Supervisor (EDPS) referred to as the big data protection ecosystem, and therefore on the notions of privacy and personal data.
3.2.1- Privacy and personal data
The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (APPD), with its latest Protocol from 2018, recognizes the need to assess the protection of personal data, while considering the “… diversification, intensification and globalization of data processing and personal data flows …”. The APPD Protocol recognizes the need to reconcile the right to personal data protection with other fundamental human rights, thus elevating data rights to the core echelon of fundamental rights.
Personal data means any information pertaining to an identified or identifiable individual (Art. 1 APPD; co-opted by the General Data Protection Regulation), irrespective of its nature. The term identifiable may also cover anonymized data that can be re-personalized.
The right to privacy is recognized as a universal human right by the Universal Declaration of Human Rights. The Convention (as interpreted by the European Court of Human Rights) places an obligation on the State to protect its citizens against unjustified intrusions into their private affairs. Such intrusions are only permitted if prescribed by law and required by exceptional circumstances (e.g., national security, prevention of crime, citizens’ protection, and similar). Such exceptions are scrutinized by the Court of Justice of the European Union, particularly regarding the data protection defined by the Charter of Fundamental Rights of the EU (CRFREU).
3.2.2- Limitation and exclusions
The CRFREU requires (Art. 8) consent for data collection, or some other legitimate basis, and stipulates the subject's right to access collected data and the right to rectification. In cases where such rights are to be limited or excluded, the limitations or exclusions must be prescribed by law, must be necessary (sufficient grounds) and proportionate (choice and severity of measures), must be foreseeable (to a certain degree), and must fulfil broader goals of general interest (in the public interest). The APPD Protocol reinforces the aforementioned criteria by stipulating that any data processing activities justified by national security or defence purposes must be subjected to a regulated independent review and supervision.
The GDPR also stipulates several legal grounds, including the public interest and the exercise of official authority. The GDPR details several reasons for a valid exclusion or limitation of a subject’s rights, including national security, defence, public security, criminal proceedings, critical public interests, and others. Based on the GDPR (and other similar rules and legal documents), national legislation usually adopts separate acts and statutes concerning security and intelligence operations, criminal proceedings, public security issues, national security issues, and so on. Therefore, the regulated aspects of personal data and the right to privacy may or may not be relevant for each jurisdiction, as most relevant rights are either limited, restricted, or altogether excluded. Indeed, most EU Member States and other states have enacted specialized laws and statutes concerning the operation of their security apparatus, police force, data secrecy, and similar. In principle, matters of national security remain under the purview of states. On the European level, such matters are regulated by Directive (EU) 2016/680.
3.2.3- Traceable vs. untraceable data
The GDPR introduces the term pseudonymization, referring to a data-management technique whereby a set of data relatable to a particular individual (the data subject) can only be re-linked to that individual by using a separately kept key. This method creates two different sets of data: one traceable and consequently re-identifiable, and the other untraceable, not identifiable, and, ultimately, possibly not subject to personal data regulation.
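The split described above can be sketched in a few lines of code. This is a minimal illustration, not a compliant implementation: the record layout, field names, and use of random tokens are assumptions for the example, and a real deployment would need secure storage and access control for the key table.

```python
import secrets

def pseudonymize(records, key_field):
    """Replace the identifying field in each record with a random token.

    Returns the pseudonymized records plus a separate lookup table (the
    "key") mapping tokens back to the original identifiers. Without the
    lookup table, the pseudonymized set is not directly identifiable;
    with it, each record can be re-linked to its subject.
    """
    lookup = {}          # token -> original identifier; store separately!
    pseudonymized = []
    for record in records:
        token = secrets.token_hex(8)
        lookup[token] = record[key_field]
        pseudonymized.append({**record, key_field: token})
    return pseudonymized, lookup

# Hypothetical sensor records keyed by subject name
records = [{"name": "Alice", "zone": "A3"}, {"name": "Bob", "zone": "B1"}]
pseudo, key_table = pseudonymize(records, "name")
```

Here `pseudo` can be shared for analysis (zones remain usable), while `key_table` is the separately kept key that alone permits re-identification.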
4- Moral machines
One of the central ethical questions in the digital sphere (digital ethics, big data ethics) arising from the use of AI systems in data collection and manipulation (AI ethics) revolves around moral machines and machine ethics concepts. These concepts evaluate the AI systems' ability to recognize morally significant situations, recognize relevant values, and factor those values into decisions they make.
Indeed, core components of political and social life are slowly being transferred to the digital arena, and societal solidarity and empathy are being taken over by algorithmic evaluation and scoring mechanisms. The Ethics Advisory Group concludes that moral values, human dignity, and personhood must remain an integral part of any decision-making relevant for data collection, data manipulation, and digital decision-making.
This shift of responsibility is likely to be enhanced by the red-flag system, whereby the AI system alerts the human operator when a critical junction has been reached (such as is the case with the FD tool). The coding behind this system will allow designers (those who are designing the AI system) and deployers (those who are using the AI system and/or offering services through that AI system, also commonly referred to as controllers and processors) to delegate specific decision-making points to the AI system.
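The red-flag pattern can be illustrated with a simple routing rule: below a risk threshold the system acts autonomously, above it the decision is deferred to a human operator. The threshold value, event names, and return structure below are hypothetical; this sketch only shows the delegation boundary, not how any actual IMPETUS tool (including the FD tool) is implemented.

```python
RISK_THRESHOLD = 0.8  # hypothetical escalation threshold chosen by the deployer

def assess_event(event, risk_score):
    """Route an event: below the threshold the system merely logs it;
    at or above the threshold it raises a red flag for a human operator,
    keeping the critical decision in human hands."""
    if risk_score >= RISK_THRESHOLD:
        return {"event": event, "action": "escalate_to_operator",
                "reason": f"risk {risk_score:.2f} >= {RISK_THRESHOLD}"}
    return {"event": event, "action": "log_only",
            "reason": f"risk {risk_score:.2f} below threshold"}

print(assess_event("crowd density spike", 0.91)["action"])  # escalate_to_operator
print(assess_event("routine sensor ping", 0.12)["action"])  # log_only
```

The design point is that the threshold itself encodes where designers and deployers delegate decision-making to the machine, which is why its choice is an ethically significant act rather than a purely technical one.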
5- Stakeholders
It is important to analyse how the various actors can be involved in security and intelligence data collection and manipulation. In the modern-day environment, such actors include a plethora of active and passive participants.
The primary layer consists of public security agencies and institutions (e.g., police departments, the security apparatus, fire departments, supranational bodies, etc.).
The second layer consists of relevant public administration bodies (e.g., transportation authorities, various city offices, and similar).
A new, third layer has emerged in recent times, consisting of private entities contractually or non-contractually engaged directly or indirectly in security operations.
The fourth layer represents ordinary citizens and legal persons involved either in autonomous or automatic data sharing/gathering.
Whereas the security services primarily use anonymous data to identify surveillance targets, private companies use anonymous data for commercial purposes. Individuals whose data are collected and analysed become objects rather than users. Such data may or may not be re-identified. Still, questions arise as to whether private entities engaged in security and intelligence data collection and manipulation operations can avail of such data for other purposes.
| Stakeholder | Role | Examples |
|---|---|---|
| | Regulate the implementation of security technology in public places | |
| | Decide on investment in security technology solutions | |
| | Use the technologies in security operations | |
| | Interact with primary users in managing security events | |
| | Residents of smart cities, impacted by the use of the technology | |
Table 2: Stakeholders of Public Safety Solutions