Recommendations on Ethical Issues

In line with the Assessment List prepared under the Ethics Guidelines for Trustworthy AI, and bearing in mind the particularities of IMPETUS, this section provides preliminary checklists that serve as a starting point for legal and non-legal compliance with the EU Guidelines for Trustworthy AI.

All users are advised to work through the checklist prepared under IMPETUS and reproduced below. To ensure that all actions undertaken by the involved end-users are ethically and legally acceptable, each entity adopting the IMPETUS tools or similar technologies must have the capacity to undergo proper due diligence and must see to it that all the below-listed factors are considered, understood, answered, and resolved.

This procedure aims not only to foster the developer's and deployer's dedication to legal and ethical standards concerning personal data manipulation, but also to strengthen the public perception and acceptance of such activities. Given the nature of personal data manipulation and the moral and legal hazards connected to it, developers and deployers must take extra steps to demonstrate their dedication to the relevant legal and non-legal standards and recommendations concerning adequate measures and procedures for the manipulation of data and the use of advanced algorithms.

 

List of stakeholders  

In order to ensure that relevant laws and regulations are followed, it is recommended to define the stakeholders affected by the use of smart city technologies in the public security sector.

It is useful if they can be categorized into one of the following Ethics Guidelines for Trustworthy AI (EGTAI) categories:

  • developers (research, design, development of AI system), 

  • deployers (public or private organizations that utilize AI systems for themselves or as a service to third persons), 

  • end-users (engaged with the AI system), 

  • society at large (all third parties affected by the AI system).

For developers, it is recommended to explore whether there are relevant stakeholders among them and, if so, whether they are public or private entities. In addition, it is important to know what kind of AI systems they are developing and whether they already have a legal and ethical framework in place concerning the collection and manipulation of data.

For deployers, it is recommended to explore whether there are relevant stakeholders among them and, if so, whether they are public or private entities. Furthermore, it is important to know the purpose and type of AI system they are utilizing, whether they already have a legal and ethical framework in place concerning the collection and manipulation of data, and whether they rely solely on the AI system (full automation) or insist on human operators, oversight, and control.

It is recommended to explore whether there are relevant stakeholders among end-users and, if so, whether they are public or private entities. Moreover, it is important to be aware of the purpose and type of the AI system they are utilizing.

Lastly, it is recommended to explore whether there are relevant stakeholders among the third parties and, if so, whether they are public or private entities. It is also essential to know how they are affected, whether they are aware that they are affected, and to what extent.

 

Fundamental rights impact assessment  

In order to ensure that relevant laws and regulations are followed, it is essential to know what sorts of data can be collected by the tools used (visual, auditory, biometric, genetic, documentary, ethnic, racial, social, religious, health, private, and other data) and which of them are relevant in the given context.

It is recommended to examine what the possible risks to fundamental data rights are when collecting this data, whether such risks are acceptable, and whether a limitation or exclusion of fundamental data rights can be justified by higher goals or principles.

Concerning data anonymization, it is relevant to know what the implications are of anonymized (de-identified) personal data being re-personalized (re-identified), and whether the technology in question allows for pseudonymization techniques leading to untraceable data sets.

It is crucial to be aware that certain fundamental data rights may preclude the collection and manipulation of personal data, and to be familiar with the boundaries of the extent to which private data can be collected and manipulated.

There is a difference between collecting and manipulating data for commercial purposes and doing so for security and intelligence purposes, and it is important to understand the extent of that difference.

It is relevant to understand whether human operators control private data collection and manipulation or whether specific activities can be fully automated; whether the moment when anonymized data need to be re-personalized constitutes a valid junction for a human operator to take over the analysis from the AI system; whether the AI system decides which anonymized data need to be re-personalized or merely makes recommendations to that effect; and whether the AI system should continue with independent analysis and decision-making after the data have been re-personalized.

When informing third-party stakeholders (citizens and private entities) that their data are being collected and manipulated, it is important to define at what time and to what extent this information is to be released; whether there are any plausible exceptions to such a rule; whether a human operator should be in charge of informing the third-party stakeholders or this procedure can be delegated to the AI system; and whether it makes a difference if the data collected on a particular individual were relevant to the conducted investigation or surveillance.

Where the collection and manipulation of data have led to a particular decision legally affecting individuals, it is valuable to investigate whether, in the country of the event, such decisions may be based on automated processing or must be preceded by a human operator’s consideration.

Another aspect to explore in national legislation is to what extent private stakeholders should be involved in data collection and manipulation in security and intelligence operations, and whether private entities should be allowed to act as processors on behalf of security and intelligence agencies acting as controllers.

Lastly, it is important to check whether any audit or external feedback mechanism is in place, and whether an oversight system exists.

 

Deployer’s obligations  

It is recommended to establish to what extent the AI system’s operations should be managed, controlled, or supervised by human operators, and what category of AI system governance should be employed.

It is important to decide on an established audit, supervision, and/or oversight mechanism, and whether it is to be handled internally or externally. If the deployer’s operation is not regulated accordingly, further regulation must be ensured.

If not implemented, it is essential for the deployer to develop a data storage protocol, data access protocol, AI system’s algorithm integrity and reliability protocol, AI system’s algorithm decision-making protocol, human operator’s decision-making protocol, data quality and integrity protocol, data processing protocol, data sets and processes traceability protocol, and other similar protocols and standard (code) of conduct specifications.

In line with the previous points, the AI system’s algorithm should have clear rules on separating surveillance-relevant from irrelevant data. It is to be decided whether the two sets are stored jointly or separately; whether a human operator can access both sets or only the relevant data set; and whether the irrelevant data set should be stored for a specified time or deleted immediately after classification.

The AI algorithm should have clear rules on pseudonymization and the creation of traceable and untraceable data sets. It is to be decided whether the two sets are stored jointly or separately; whether a human operator can access both sets or only the traceable data set; and whether the untraceable data set should be stored for a specified time or deleted immediately after classification.
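One common way to implement such a split is keyed-hash pseudonymization: while the deployer holds the key and a separate lookup table, pseudonyms in the traceable set can be re-linked to the original identifiers; destroying the key and table renders the remaining pseudonyms untraceable. The following Python sketch is purely illustrative (the identifiers and structure are assumptions, not part of the IMPETUS specification):

```python
import hashlib
import hmac
import secrets

# Pseudonymization key held by the deployer; destroying it (and the lookup
# table) makes already-issued pseudonyms untraceable.
key = secrets.token_bytes(32)
lookup = {}  # pseudonym -> original identifier (traceable set only)

def pseudonymize(identifier: str, traceable: bool) -> str:
    """Return a keyed-hash pseudonym; keep a reverse mapping only if traceable."""
    token = hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()
    if traceable:
        lookup[token] = identifier  # stored separately, access-controlled
    return token

t = pseudonymize("subject-001", traceable=True)
u = pseudonymize("subject-002", traceable=False)

assert lookup[t] == "subject-001"  # traceable: can be re-personalized
assert u not in lookup             # untraceable: no stored mapping
```

Whether the lookup table may be consulted at all, and by whom, is exactly the human-operator access question raised above.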

It is important to identify which data are considered traceable and which untraceable. Under this categorization, it is relevant to decide whether all data not falling under the red-flag alert system should automatically be pseudonymized and directed to the untraceable data set (green-flagged, cleared data), and whether a human operator should have the option to reverse the course and move data freely from one category to another.
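The red-flag/green-flag routing described above, with an auditable human override, can be sketched as follows. The rules, field names, and identifiers are hypothetical placeholders:

```python
# Hypothetical red-flag rules: any match keeps the record in the
# traceable, surveillance-relevant set.
RED_FLAG_RULES = [
    lambda rec: rec.get("watchlist_match", False),
    lambda rec: rec.get("incident_linked", False),
]

def classify(record: dict) -> str:
    """Return 'red' (relevant, traceable) or 'green' (cleared, untraceable)."""
    return "red" if any(rule(record) for rule in RED_FLAG_RULES) else "green"

def operator_override(record: dict, new_flag: str, operator_id: str) -> dict:
    # A human operator may move data freely between categories; the
    # override is recorded so that an audit trail exists.
    record["flag"] = new_flag
    record["override_by"] = operator_id
    return record

rec = {"id": "evt-42", "watchlist_match": False}
rec["flag"] = classify(rec)                  # no rule fires: green-flagged
rec = operator_override(rec, "red", "op-7")  # operator reverses the course
```

Whether the override should be unrestricted or itself subject to review is one of the governance choices this checklist asks the deployer to make.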

It is crucial for the deployer to develop an AI system’s negative impacts management system (identification, assessment, documentation, and minimization).

It is also recommended to consider to what extent the AI system’s algorithm influences the human operator’s decision-making.

 

General ethical and legal considerations

When deciding on the use of big data in security and intelligence operations, it is important to realize to what extent such use widens the asymmetry of information and power between public security departments and agencies and the general public, and whether the concept of data rights as fundamental human rights is thereby diminished.

Furthermore, it is wise to consider to what extent security exigencies justify private data infringement, access to personal information, and the use of private collection mechanisms (e.g., mobile phones, private CCTVs, and similar), and to what extent security clearance negates third-party stakeholders’ right to be informed of legal or illegal transgressions into their private domain.

It is important to define, in consultation with national legislation, to what extent security and intelligence operations concerning data collection and manipulation should be regulated, supervised, and overseen; whether the oversight and supervision bodies should be internal, external, or both; and what the composition of such bodies should be.

Moreover, it is recommended to consult relevant legislation on the extent to which the redress right (the right to claim compensation for damage) is warranted where harm has resulted from unlawful, unfair, or unethical data collection and manipulation during a security and intelligence operation; whether the redress right should be warranted where harm has resulted from the AI system’s malicious use; and to what extent it should be warranted where harm has resulted from the AI system’s malfunction.

Concerning the previous point, it is recommended to establish who the responsible party should be (e.g., the agency conducting data collection and manipulation, the agency providing the hardware/software, the agency in charge of the overall investigation, the ministry of the interior, or the state) and whether the existence of such a right requires a mandatory liability insurance policy.

It is recommended to decide whether the security or intelligence agency collecting and manipulating data should have experts in ethical issues, or whether each operator should be trained in ethics.

It is important to explore and define how long the anonymized and re-personalized data should be kept, whether they can be shared with other agencies, and whether they can be used for purposes other than the initial investigation.
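Once retention periods and permitted purposes have been defined, they can be enforced mechanically. The sketch below illustrates such a retention and purpose-limitation check; the retention periods, categories, and field names are assumptions for illustration only, not values prescribed by any guideline:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data category; the actual values
# must come from national legislation and agency policy.
RETENTION = {
    "re_personalized": timedelta(days=90),
    "anonymized": timedelta(days=365),
}

def must_delete(record: dict, now: datetime) -> bool:
    """True once the record has outlived its category's retention period."""
    return now - record["collected_at"] > RETENTION[record["category"]]

def may_use(record: dict, purpose: str) -> bool:
    # Purpose limitation: data may serve only the initial investigation
    # unless re-use or sharing has been explicitly authorized.
    return purpose == record["purpose"] or purpose in record.get("authorized_reuse", [])

now = datetime.now(timezone.utc)
rec = {
    "category": "re_personalized",
    "collected_at": now - timedelta(days=120),
    "purpose": "investigation-A",
}
```

A record like `rec` above would be overdue for deletion, and could not be re-used for a different investigation without explicit authorization.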

It is important to explore relevant legislation and decide whether the restrictions imposed on public bodies when collecting and manipulating data should apply equally to the private sector, and whether each public security department and agency should run a separate real-time security and intelligence data collection and manipulation center, or whether emphasis should be placed on joint center(s).