Privacy Impact Assessments and Ethical Analysis


This page presents the privacy impact assessment (PIA), following the methodology of the European Union Agency for Network and Information Security (ENISA), and an ethical analysis of the IMPETUS tools and platform that deal with sensitive and/or personal data, based on a concrete situation of use.

Carrying out a DPIA is a responsibility of the Data controller, in accordance with art. 35 GDPR. The Data controller shall seek the advice of the data protection officer, where designated, when carrying out a DPIA. Data processors assist the Data controller in ensuring compliance with the obligations pursuant to art. 35 GDPR, taking into account the nature of the processing and the information available to the Data processor. With this in mind, the security measures applied to the IMPETUS tools were checked and mapped before the IMPETUS Live Exercises (“LEx”). The security measures were identified and described in accordance with the list provided in the “Handbook on security of personal data processing” prepared by ENISA. The privacy assessment of the tools provided hereunder is based on these lists of security measures.

Indeed, for the LExs, the tools were connected to the Security Operation Centres (“SOCs”), networks, technological systems and other infrastructures of the different cities. The safety and efficiency of each tool, and of the data collected, depended largely on the infrastructures of the cities and on their settings; this should be considered for future implementations of the IMPETUS tools or similar technologies, as further explained here.

The different analysed technologies include:

  1. UAD

  2. CTDR

  3. EO

  4. SMD

  5. FD

  6. WMS

  7. CTI

  8. BD

  9. The platform

 

During the IMPETUS Live Exercises (“LEx”) in the city of Oslo, the UAD tool collected public transport data made publicly available by the city itself. The data are timestamped instances geo-tagged with latitude and longitude (i.e., the number of public means of transport and their real-time location). In Padova, on the other hand, the UAD tool received data from sensors located in strategic places in the city centre. These sensors convey to the UAD tool the number of pedestrians and vehicles entering and exiting the area, but no images.

The data and information collected could therefore change according to the specific context of use and to the instruments to which the UAD tool is connected. Indeed, the UAD tool may use different sets of data and may be connected to other tools, in order to detect anomalies related to events other than the traffic situation in a city. The UAD tool elaborates data and information through big data algorithms for two main purposes: anomaly detection and event classification. In addition to the anomaly detector, the event classifier is able to indicate the type of threat under analysis; for this task, a training phase and an event identification phase are foreseen. Combining anomaly detection and event classification, the tool may help identify specific threats.

The exact functioning of the tool and the information flow have been explained here.

Data Protection Impact Assessment (DPIA) During the IMPETUS LEx, the UAD tool did not process any personal data. For the project, the UAD tool was installed on a local server within the premises of the University of Milano. It was protected by tools available on the University’s premises (e.g., firewall) and was accessible only via VPN. Encryption of data in transit was applied. The tool collected aggregated data and numbers coming from the cameras and the SOC used by one of the cities. The images and the other data collected by the cameras were hashed within the SOC of the city. Only through the SOC could these data be deanonymized, e.g. for public security reasons. The UAD tool itself cannot deanonymize data. Moreover, the tool was not connected to the tools within the SOC which are used to deanonymize data.

The data collected by the city of Oslo refer to single buses. By collecting a sufficient amount of these data and comparing them with the databases of the municipality, it may become possible to identify specific persons. This did not happen during the LEx; therefore, a DPIA was not carried out.

Ethical issues The UAD tool integrates advanced machine learning algorithms. Human agency and oversight of the functioning of the tool and its algorithms have been considered and granted at a sufficient level.

More specifically, UAD performs the following machine learning tasks: anomaly detection (identifying data that differ from what was previously observed) and classification (labelling data according to defined categories, based on what the system learned from past data). The first is an unsupervised task, while the second is supervised. Nevertheless, an appropriate level of human control is granted: the tool provides anomaly detection and classification results as “information” to the human operator, who then interprets the data and takes action accordingly.

The human operator is also capable of understanding the feedback received from the algorithm and how the algorithm produced that feedback, because the tool integrates a feature-ranking approach to justify the alerts generated. These mechanisms also grant a sufficient level of transparency and explainability regarding the outcomes of the algorithmic system.
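To make the two-step pattern concrete, the following minimal sketch pairs an unsupervised anomaly detector with a supervised event classifier and a feature ranking, in the spirit of the UAD design. It assumes scikit-learn; the feature names, synthetic counts and thresholds are illustrations, not the actual UAD implementation.

    # Sketch: unsupervised anomaly detection followed by supervised event
    # classification with feature ranking. Illustrative data only.
    import numpy as np
    from sklearn.ensemble import IsolationForest, RandomForestClassifier

    rng = np.random.default_rng(0)
    FEATURES = ["pedestrians_in", "pedestrians_out", "vehicles_in", "vehicles_out"]

    # Synthetic "normal" counts plus a handful of labelled past events.
    normal = rng.poisson(lam=[50, 48, 30, 29], size=(500, 4)).astype(float)
    events = rng.poisson(lam=[400, 20, 5, 5], size=(20, 4)).astype(float)
    labels = ["crowding"] * 20   # event-type labels learned from past data

    # 1) Unsupervised task: flag data that differ from what was observed before.
    detector = IsolationForest(random_state=0).fit(normal)

    # 2) Supervised task: classify the kind of event, trained on past data.
    X = np.vstack([normal, events])
    y = ["normal"] * len(normal) + labels
    classifier = RandomForestClassifier(random_state=0).fit(X, y)

    new_obs = np.array([[420.0, 18.0, 4.0, 6.0]])
    if detector.predict(new_obs)[0] == -1:   # -1 means anomalous
        kind = classifier.predict(new_obs)[0]
        # Feature ranking offers a justification the operator can inspect.
        ranking = sorted(zip(FEATURES, classifier.feature_importances_),
                         key=lambda p: p[1], reverse=True)
        print(f"Anomaly detected, classified as '{kind}'; top features: {ranking[:2]}")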

General considerations and recommendations As seen with the example of the municipality of Oslo, public entities may wish to process, through the UAD tool, the location data of public transport means.

This could enable behavioural tracking of bus drivers, residents of remote areas who are usually the sole occupants of such buses in specific locations, and similar.

This is an issue that the smart city adopting the tool has to face, since security measures and protocols of use would need to be implemented to grant a secure processing of the data of all citizens and data subjects.

Moreover, if the UAD tool is connected to datasets containing personal data and sensitive information, the access control module should be adapted to perform security- and privacy-aware transformations, ranging from pruning and reshaping to encrypting/decrypting or anonymizing the full resource or part of it, before giving access to the data.
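As a hedged illustration of such a transformation layer, the sketch below prunes and pseudonymizes fields of a record before release; the field names and role policy are assumptions for the example, not part of the tool.

    # Sketch of a privacy-aware access layer: prune or pseudonymize fields of a
    # record before handing it to a requesting role. Field names are illustrative.
    import hashlib

    POLICY = {
        "analyst": {"drop": ["driver_id"], "hash": ["vehicle_id"]},
        "auditor": {"drop": [], "hash": []},   # full access
    }

    def transform(record: dict, role: str) -> dict:
        rules = POLICY.get(role, {"drop": list(record), "hash": []})  # default: deny
        out = {}
        for key, value in record.items():
            if key in rules["drop"]:
                continue                                              # pruning
            if key in rules["hash"]:                                  # pseudonymizing
                value = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            out[key] = value
        return out

    print(transform({"vehicle_id": "bus-117", "driver_id": "E-042",
                     "lat": 59.91, "lon": 10.75}, role="analyst"))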

Having regard to an ethical use of the UAD tool, users must be specifically trained so that they are able to understand the outcomes, evaluate them and give them the correct interpretation, leading to the best decisions.

During the IMPETUS Live Exercises (“LEx”), the CTDR tool was tested on the premises of a Security Operation Centre (“SOC”), following the usual two-step process and considering a small subnetwork.

First, the end user (usually IT specialists, analysts and SOC operators) launches a scan of a network or sub-network with Nessus. This scan can be launched directly from the IMPETUS platform and results in a file that is automatically downloaded to the end user’s PC. The Nessus platform creates a graphical representation (“attack graph”) of the distribution of logs and the correlation of alerts.

After that, the file is uploaded into the CTDR tool, which runs an additional analysis of network traffic data and alerts the end user about any anomalies detected in the system. For these features, the CTDR tool is based on Prelude OSS, the freeware version of Prelude SIEM, a Security Information and Event Management (SIEM) tool for the generation and reporting of cybersecurity alerts. Once a vulnerability has been detected, the end user receives the alerts and a summary of the main features of the detected anomalies via the Kafka bus on the IMPETUS platform. The end user can access the CTDR user interface to find additional details and remedial measures.
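As an illustration of the first step, the sketch below summarizes a Nessus scan export by severity before it is uploaded; it assumes the common CSV export layout with a “Risk” column, which may differ between Nessus versions.

    # Sketch: summarize a Nessus CSV export by severity before upload to CTDR.
    # Column names assume the common "Risk" export layout (an assumption).
    import csv
    from collections import Counter

    def summarize_scan(path: str) -> Counter:
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row.get("Risk", "None") or "None"] += 1
        return counts

    # Example: print how many Critical/High/... findings the scan produced.
    # summary = summarize_scan("nessus_export.csv")
    # print(dict(summary))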

The tool also allows cybersecurity analysts to conduct deeper analysis of threats and countermeasures. The exact functioning of the tool and the data flow have been explained here.

Data Protection Impact Assessment (DPIA) For the DPIA of the processing activities of the CTDR tool during the IMPETUS Live Exercises (“LEx”), the following information was considered:

  • Processed personal data: IP addresses of network scanning devices installed on city premises. Additionally, the tool collects other information that is not personal data, such as network traffic, network scans, information about the organization’s devices, and similar.

  • Storage location: IP addresses are saved on the premises of the Data controller (i.e., the public entity which adopts the tool), and can be processed (e.g., anonymized) before the tool and the IMPETUS platform get the new contents.

  • Retention period: one month after the conclusion of each “project” (for anonymized data)

  • Data processors: the use of the tool and the data processing activities do not require external data processors.

The risks related to the data processing activities, including the evaluation of the impact and the analysis of the threats, have been evaluated using the online tool provided by ENISA. The overall impact was evaluated as medium and the overall threat occurrence probability as medium; therefore, the risk is “medium”. The risk assessment will highly depend on the infrastructure and security measures adopted by the Data controller.

The technical and organisational security measures adopted for the LEx were considered adequate for the specific context of use. The Data controller shall consider that, according to the specific context and situation, and especially to the extent of the scanned network and of the collected IP addresses, further security measures may be required.

The DPIA conducted for the LEx should be supplemented with information provided by the Data controller, as specified hereafter. In particular, the Data controller shall:

  • identify a valid legal basis for processing;

  • ascertain whether the data processing activity is proportionate and necessary, or not, considering also the impact on the rights of the public entity’s employees;

  • evaluate if it is required to conduct a complete DPIA in accordance with art. 35 GDPR and to consult the Data Protection Authority as provided for by art. 36 GDPR;

  • evaluate if a consultation of data subjects has to be done, in accordance with art. 35.9 GDPR, to seek their views on the intended processing.

Ethical issues The CTDR tool does not use machine learning, deep learning or other kinds of advanced algorithms. Data analysis is done by the logical reasoner, which processes a network scan, and by algorithms developed in programming languages such as PHP, Python and JavaScript.

The CTDR tool and its algorithm grant a sufficient level of human control and oversight. The tool does not take final decisions and only helps analyse and connect alerts sent by different security tools. In any case, the outcomes require analysis and post-processing by IT and cybersecurity experts. The alerts received by human operators carry information about the vulnerability being exploited, its criticality (i.e., impact or collateral damage) and the possible mitigation actions. In this way, human operators are able to understand how the algorithm produced the alert, which grants a sufficient level of transparency.

General considerations and recommendations For the future use case scenarios, public entities adopting the CTDR tool should evaluate further aspects.

Firstly, the tool was built applying the best security practices in the relevant field, also because it is based on open-source software, which is subject to constant “peer review”.

The choice of Kafka to share data with the IMPETUS platform is also satisfactory, since Kafka allows the data to be encrypted with a public key, while the private key is held on the IMPETUS platform, so that no other system reading from the Kafka bus can access them.
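A minimal sketch of this pattern, assuming the Python “cryptography” package: a producer encrypts each payload with the platform’s public key (hybrid encryption), so that only the holder of the private key can read what transits on the Kafka bus. The topic name and payload are illustrative, and the Kafka publish step is indicated only as a comment, since it needs a running broker.

    # Sketch: encrypt a payload so only the private-key holder (the platform)
    # can read it. Uses the 'cryptography' package.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # In real use the platform generates this pair; producers get only the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    def encrypt_for_platform(payload: bytes) -> tuple[bytes, bytes]:
        """Hybrid encryption: random symmetric key, wrapped with the RSA public key."""
        sym_key = Fernet.generate_key()
        ciphertext = Fernet(sym_key).encrypt(payload)
        wrapped = public_key.encrypt(
            sym_key,
            padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        return wrapped, ciphertext

    wrapped_key, ciphertext = encrypt_for_platform(b'{"alert": "vulnerability detected"}')
    # producer.send("impetus.ctdr.alerts", wrapped_key + b"." + ciphertext)  # Kafka step

    # Only the private-key holder can unwrap the symmetric key and decrypt:
    sym_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    print(Fernet(sym_key).decrypt(ciphertext))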

On the other hand, other features of the tool can be adapted to the needs of public entities, and such choices may bring both improvements and higher risks.

First of all, the file with the results of the scanning of network’s vulnerabilities will be downloaded on the premises of the public entity. Therefore, before scanning, it is fundamental to ascertain that this file will be saved on a secure server (either local or in cloud).

Secondly, other software can be used instead of Nessus to scan the network. This would require some adaptations of the CTDR tool’s algorithm, and the public entity will be responsible for choosing software that grants at least the same level of efficiency and the same security measures as Nessus.

In the third place, it should be considered that the tool’s ability to detect vulnerabilities depends on the knowledge graph used. This graph needs to be updated from time to time. Therefore, it is foreseen to develop and employ a Natural Language Processing (“NLP”) model in order to update the knowledge graph automatically. Updating the tool itself will be possible since it will be released as open-source software. The use of natural language processing could raise further issues, especially ethical ones. Ethical issues associated with NLP do not subside with the process of data generation but recur at every stage, concerning learning bias as well as the evaluation, aggregation and deployment stages.

Lastly, anonymization of the IP addresses collected when vulnerabilities are detected (either exploited or likely to be exploited) should be considered (a sketch follows this discussion).

On the one hand, IP addresses detection may facilitate a deeper analysis of vulnerabilities and the prevention of future attacks. Indeed, the organisation adopting the CTDR tool could better evaluate specific countermeasures, including training of the employees.

On the other hand, without anonymization of IP addresses it becomes possible to detect whether the action of a specific employee contributed to the exploitation of a certain vulnerability. In this way, the tool could be regarded as an instrument to monitor workers and to apply disciplinary sanctions.

This use of the tool could lead to the breach of labour law provisions, as analysed here. In general, each public entity must adopt internal procedures defining how much data is stored, how it is accessed and shared, and when it is deleted.
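For illustration, the sketch below shows two common anonymization options for collected IP addresses, truncation and keyed hashing; the salt handling and prefix length are assumptions for the example.

    # Sketch of two IP anonymization options: truncation (zeroing the host part)
    # and keyed hashing (pseudonymization). Salt handling is illustrative only.
    import hashlib
    import ipaddress

    def truncate_ip(ip: str, prefix: int = 24) -> str:
        """Keep only the network part, e.g. 192.0.2.17 -> 192.0.2.0/24."""
        return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

    def pseudonymize_ip(ip: str, salt: bytes) -> str:
        """Stable keyed hash: same IP maps to same token; not reversible without the salt."""
        return hashlib.sha256(salt + ip.encode()).hexdigest()[:16]

    print(truncate_ip("192.0.2.17"))                          # 192.0.2.0/24
    print(pseudonymize_ip("192.0.2.17", salt=b"rotate-me"))   # stable token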

During the IMPETUS Live Exercises (“LEx”), emergency situations were simulated with volunteers who were aware of the simulated nature of the emergency, even though they were asked to act as naturally and spontaneously as possible; there was no real panic situation. Static pre-simulated scenarios were defined before the Live Exercises, because the tool could not rely on the outputs of the cities’ counting sensors.

During the LEx, the human operator receiving the data from the tool manually searched through the guidelines uploaded on the IMPETUS platform and had to choose the right one to apply. Indeed, the EO tool may be used by police and emergency forces to provide efficient egress routes for the different possible scenarios. It can be used both for the better management of organised events and in case of a critical event. The tool can be combined with a broad communication tool, and the defined guidelines may suggest using one or more other technological tools.

The exact functioning of the tool and the data flow have been explained here.

Data Protection Impact Assessment (DPIA) The EO tool does not require the processing of any personal data for its functioning. For example, in cities with person-counter sensors, the tool registers only the number of people crossing a specific gate at a given time and calculates the density in a specific public space. It can also elaborate historical data on the number of people in a public space in a given period of time.

Output data of the tool, on the other hand, consist of guidelines and numbers representing parameters for managing the crowd (time for egress, available gates, etc.), without any reference to identifiable persons.

Ethical issues The EO tool works according to the following principle: reference scenarios involving the egress of a crowd are pre-simulated in advance with dedicated software. The results are synthesized and turned into a set of written guidelines, a video of the simulated egress and a coloured risk class. The EO tool per se does not contain any algorithm. For ethical issues which may arise from its use in real-life scenarios, please refer to the general considerations and recommendations for use.

General considerations and recommendations According to the technological solutions and sensors adopted by each public entity, it could be possible to use the EO tool for evacuating public spaces during emergency situations, with the intervention of public security services.

For an efficient functioning of the tool, it is necessary that sensors can convey numbers to the tool in real-time, since any delay in the provision of information could affect the validity of the identified solution.

Moreover, the EO tool would need to be set up so that the identification of the suitable guideline is performed automatically. The IMPETUS platform would then consider the exact place and the number of people counted by the sensors located there, to show which guideline has to be followed for the evacuation process.
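A minimal sketch of such an automatic selection, mapping a sensor count to a pre-simulated guideline; the place, thresholds and guideline names are purely illustrative.

    # Sketch: pick the pre-simulated guideline for a place from the sensor count.
    GUIDELINES = {
        # place -> ordered (max people, guideline) pairs
        "main_square": [(500, "G1: normal egress, all gates"),
                        (2000, "G2: staged egress, open north gates"),
                        (float("inf"), "G3: emergency protocol, deploy officers")],
    }

    def select_guideline(place: str, people_count: int) -> str:
        for threshold, guideline in GUIDELINES[place]:
            if people_count <= threshold:
                return guideline
        raise ValueError("no guideline configured for this count")

    # Example: sensors report 1,340 people in the main square.
    print(select_guideline("main_square", 1340))   # -> "G2: staged egress, ..."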

In such circumstances, public entities adopting the tool should adopt internal procedures in order to:

  • train their employees to make them able to understand the outcomes of the EO tool;

  • define parameters to identify when the EO tool has provided the best applicable guidelines and when not;

  • define whether the identified guidelines should always be followed and/or the margins of freedom for public security forces to depart from the indications coming from the tool, according to their education and on-field training.

The SMD tool is a tool for big data visualization. Threats and insights into relevant topics may be detected by creating a “project” with specific keywords that refer to criminal acts, illegal instruments or words of hate, together with words connected with local politicians, strategic places, etc.

This tool has been designed to be used by IT analysts employed in the city administration. To insert the keywords, the end user accesses a software which is connected to the IMPETUS tool but not to the IMPETUS platform. The end user receives alerts on the IMPETUS platform and on the external software once the analysis is completed. On the external dashboard, the end user may visualize the completed analysis and the extracted insights.

The tool is able to collect data from social networks, social media and webpages in general. The SMD tool collects data using the APIs provided by the online resources or, when there is no API, web crawling and scraping. The algorithm of the tool does not have access to personal data, but only to the messages which are provided to the machine learning models to train them. The exact functioning of the tool and the information flow have been explained here.
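As a sketch of how a keyword “project” might drive collection while keeping author identifiers away from the models, consider the following; fetch_messages is a hypothetical stand-in for the platform APIs and crawlers actually used, and the per-keyword cap mirrors the LEx setting.

    # Sketch of a keyword "project": collect public messages per keyword, keep
    # only the text for analysis, and cap the volume per execution.
    # fetch_messages() is a hypothetical helper, not a real API.
    MAX_PER_KEYWORD = 150   # cap per keyword per execution, as in the LEx setup

    def fetch_messages(keyword: str, limit: int) -> list[dict]:
        """Hypothetical: query a social platform API or crawler for public posts."""
        raise NotImplementedError

    def run_project(keywords: list[str]) -> list[dict]:
        collected = []
        for kw in keywords:
            for msg in fetch_messages(kw, limit=MAX_PER_KEYWORD):
                # Drop author id, nickname, name and location; keep only the text.
                collected.append({"keyword": kw, "text": msg["text"]})
        return collected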

Data Protection Impact Assessment (DPIA) For the DPIA of the processing activities of the SMD tool during the Live Exercises (“LEx”), the following information was considered:

  • Processed personal data: data included in publicly available text messages on Twitter; the author’s id, nickname, name and location (as provided by the user). A maximum of 150 messages per keyword per execution (in the case of projects set to run periodically); a lower value may be set.

  • Storage location: servers of Amazon Web Services located in Ireland during LEx. Servers of the Data controller (i.e., the public entity which adopts the tool), for real-life use cases.

  • Retention period: one month after the conclusion of each “project” (for anonymized data).

  • Data processors: during the LEx, Amazon. Normally, the use of the tool and the data processing activities do not necessarily require external data processors.

The risks related to the data processing activities, including the evaluation of the impact and the analysis of the threats, have been evaluated using the online tool provided by ENISA. The overall impact was evaluated as medium, while the overall threat occurrence probability was low; therefore, the risk is “medium”. The risk assessment will highly depend on the infrastructure and security measures adopted by the Data controller.

The technical and organisational security measures adopted for the LEx were considered adequate for the specific context of use. The Data controller shall consider that, according to the specific context and situation, further security measures may be required. The DPIA conducted for the LEx should be supplemented with information provided by the Data controller, as specified here.

In particular, the Data controller shall:

  • identify a valid legal basis for processing;

  • ascertain whether the data processing activity is proportionate and necessary, or not;

  • consider that this tool could lead to a large-scale monitoring of personal data, keeping in mind that even “publicly available” data may still be personal data. This consideration makes the DPIA even more necessary in accordance with art. 35 GDPR and would likely lead to a consultation of the Data Protection Authority as provided for by art. 36 GDPR. Any entity adopting the tool should consider the specific settings and identify adequate security measures and a legal basis;

  • evaluate if a consultation of data subjects has to be done, in accordance with art. 35.9 GDPR, to seek their views on the intended processing.

Ethical issues In general, the tool and its algorithm have been developed in compliance with the rules of trustworthy AI, granting:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Diversity, non-discrimination and fairness

  • Accountability.

More specifically, the machine learning models only give evaluations and classifications for different features. They do not give final decisions: it is the human operator who has to review and analyse the results of the analysis and take all the relevant decisions and subsequent steps. The human operator is capable of understanding the feedback received from the advanced algorithm and how the algorithm produced that feedback, since an explainability method explains the scores provided by the models.

Data that may lead to an identification of the user can be anonymized or pseudonymized, according to the requirements imposed by the Data controller. It is also possible to establish different levels of access to personal data, by giving only certain users and roles the possibility to unscramble pseudonymized data and return them to the original format. The key for the decryption is stored in a separate section of the system, and this key is itself encrypted.
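A minimal sketch of this role-gated pseudonymization, assuming the Python “cryptography” package; the role names are illustrative, and in real use the key would be stored encrypted in a separate section of the system.

    # Sketch: tokenize author fields; only authorized roles may reverse them.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # in real use: stored encrypted, elsewhere
    cipher = Fernet(key)
    CAN_REVERSE = {"intelligence_analyst"}   # roles allowed to unscramble

    def pseudonymize(author: str) -> bytes:
        return cipher.encrypt(author.encode())

    def reveal(token: bytes, role: str) -> str:
        if role not in CAN_REVERSE:
            raise PermissionError(f"role '{role}' may not deanonymize data")
        return cipher.decrypt(token).decode()

    token = pseudonymize("@some_user")
    print(reveal(token, role="intelligence_analyst"))   # works
    # reveal(token, role="soc_operator")                # raises PermissionError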

General considerations and recommendations For future uses of the SMD tool or similar technologies which may be employed to monitor and analyse online interactions, critical ethical issues may arise referring to the criteria applied to create the datasets on which the algorithm for natural language processing (NLP) is trained. The algorithm of the SMD tool uses deep learning models to classify and evaluate the data acquired in real life.

It is really important that any public entity adopting the tool conducts an audit of the datasets used, before taking any decision based on the outcomes of the tool, to verify their quality and the potential presence of bias. Moreover, public entities should implement internal policies to define the aims for which the SMD tool can be used and the allowed criteria for setting up a new “project” with the tool. Indeed, biases may also be revealed by the choice of keywords to be looked at or of the sources to investigate.

Lastly, fairness and non-discrimination should also be applied in the interpretation of the analysis and insights extracted by the SMD tool. Therefore, it is necessary that end users are specifically trained to be able to understand the outcomes, evaluate them and give them the right interpretation. Public entities should identify (as thoroughly as possible) the functionalities and the underlying sources of the tools and take mitigating measures in this regard to prevent unlawful conduct in the future.

Public entities should also document their choice about anonymizing or pseudonymizing the data collected and the reasons for that. The volume, nature and range of analysed personal data contribute to define the level of impact on human rights, as further explained here.

If someone may have access to deanonymized data, this has to be done in compliance with all applicable laws and would probably be legitimate only if the goal is to conduct a specific investigation or to prevent serious crimes. Indeed, there is a high risk of collecting information about plenty of people of whom only a small percentage would actually be relevant.

The FD tool scans the footage provided by CCTV cameras to detect the presence of weapons in the video images. The scan is performed by an artificial intelligence algorithm which also systematically anonymizes people’s faces, blacking them out. Images are never recorded by the FD tool. When the AI detects a person brandishing or carrying a weapon, the SOC operator receives an alert through the IMPETUS platform and may see a 5-second video with the detected images (not obfuscated) and their geolocation on the IMPETUS dashboard. The SOC operator also receives a still image of the weapon and information about how many minutes and seconds have passed since the weapon was detected. If the SOC operator confirms the “emergency”, the alert is shared automatically through Telegram or other communication channels with officers in the field. If the SOC operator marks the alert as not an alarm (i.e., a “false positive”), the detected image of what was considered a weapon is sent back to the AI system to retrain it. In this case, the image of the “false weapon” is combined with synthetic data (not real ones).
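As an illustration of the obfuscation step, the sketch below blacks out detected face regions in a frame before anything leaves the device; the detection boxes are assumed as input, and the real tool’s detector is not shown.

    # Sketch: black out detected face regions in a frame. Boxes are assumed to
    # come from an upstream detector, which is not part of this example.
    import numpy as np

    def black_out(frame: np.ndarray, boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
        """boxes are (x, y, width, height) face rectangles."""
        out = frame.copy()
        for x, y, w, h in boxes:
            out[y:y + h, x:x + w] = 0   # overwrite pixels; irreversible in the copy
        return out

    frame = np.full((480, 640, 3), 127, dtype=np.uint8)     # dummy grey frame
    safe = black_out(frame, boxes=[(100, 50, 80, 80)])
    assert safe[60, 120].sum() == 0 and frame[60, 120].sum() != 0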

The software which processes and analyses images is installed on an edge device that is directly connected via LAN to the infrastructures of the public entity adopting the tool. During the IMPETUS LEx, the algorithm processed the images of some volunteers holding weapons for testing purposes. The training of the AI during the IMPETUS LEx, on the other hand, took place on the premises of the developer, but it can be done elsewhere if the Data controller has the required AI hardware. The AI needs to be trained for each specific context of use. During the training, raw videos are sent to the location where the AI is trained and are retained for 30 days. The exact functioning of the tool and the information flow have been explained here.

Data Protection Impact Assessment (DPIA) For the DPIA of the processing activities of the FD tool during the LEx, the following information was considered:

  • Processed personal data: images of volunteers;

  • Storage location: images were processed on the edge device located inside the municipalities’ premises and shared with the IMPETUS platform on a partner’s secure servers in case of detection of an “emergency”. Data are shared via the internet, but in future real-life use cases the platform itself should be installed on the premises of the Data controller;

  • Retention period: until the end of the LEx. To be decided with Data controllers for future use cases;

  • Data processors: the subject responsible for the training of the AI.

The risks related to the data processing activities, including the evaluation of the impact and the analysis of the threats, have been evaluated using the online tool provided by ENISA. The overall impact for the use of the FD tool during the LEx was evaluated as medium and the overall threat occurrence probability as medium; therefore, the risk is “medium”. The risk assessment will highly depend on the infrastructure and security measures adopted by the Data controller, according to the description of the functioning of the tool provided in the previous paragraph.

The technical and organisational security measures adopted were considered adequate for the specific context of use of the LEx. The Data controller shall consider that, according to the specific context and situation, further security measures may be required.

Therefore, the DPIA conducted for the LEx should be supplemented with information provided by the Data controller, as specified here.

In particular, the Data controller shall:

  • identify a valid legal basis for processing;

  • ascertain whether the data processing activity is proportionate and necessary, or not;

  • appropriately inform the data subjects in accordance with art. 13 GDPR;

  • evaluate whether a consultation of the data protection authority in accordance with art. 36 GDPR is necessary, or not;

  • evaluate if a consultation of data subjects has to be done, in accordance with art. 35.9 GDPR, to seek their views on the intended processing.

Ethical issues During the LEx, the potentially relevant ethical issues that the use of the FD tool could pose were not considered an obstacle, since only a small number of images were made visible to SOC operators and the tool was used for a limited period of time.

General considerations and recommendations Hereinafter we present the main issues that a public entity should consider when adopting the FD tool and connecting it to its CCTV cameras.

First of all, the modalities of the training of the AI should be considered. The correct functioning of the AI depends on the characteristics of the cameras, on their location, on light conditions, and so on. Therefore, to be able to process images in an efficient way, the AI has to be trained for some months with the required high-level hardware.

Secondly, many public authorities may think that combining the technology of the FD tool with a facial recognition system would be a desirable solution to better protect citizens. In 2021, the European Data Protection Board called for a ban on all kinds of facial recognition systems. This line would probably be followed as regards private companies, but many European states still maintain wide exemptions for law enforcement to deploy such technology in cases including searches for missing children, preventing terrorist attacks or locating armed and dangerous criminals. Each public entity should therefore carefully analyse the regulatory context.

In the third place, public entities should audit the algorithm used to obfuscate images, in order to evaluate its security level against artificial intelligence-based reconstruction attempts. Images are not recorded by the FD tool, but real-time attacks could still be possible.

Moreover, public entities and all their employees, especially SOC operators and security forces, should be specifically trained and made well aware of the functioning, but also of the limitations, of the AI system of the FD tool. This should lead to the adoption of internal procedures that define how the tool should be used and clarify that it cannot substitute human intervention or analysis, but rather represents an aid to human operators. Indeed, even with an accurate training of the AI, the capacity to detect weapons among billions of processed images could depend, for example, on the lighting, on the partial obstruction of the weapon, on the camera positioning and lens type, on the presence of rapidly moving objects, on the image resolution, and so on.

Lastly, the processing of images collected through CCTV cameras with the FD tool would represent a new processing activity, with different means and for different purposes. Therefore, citizens should be duly informed about the intended use of the collected images.

During the IMPETUS LEx, some volunteers, who were employees of the municipality and/or of the police department, wore the headband with the sensors to collect brain signals and heart rate signals. A stress situation was simulated. The sensor was connected via Bluetooth to a local device. The raw data were buffered for 5 seconds before being deleted.

Before the test, an assessment model had been created based on the normal biosignals and health-related information of each individual. The features extracted from these raw data were shared with the tool developer, via an encrypted file saved on a drive, to create the machine learning model. The machine learning models within the tool are trained on features extracted from data acquired during the calibration task, which takes place offline. The assessment model was then deployed on a secured USB drive.
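For illustration, the sketch below extracts simple per-window features from a biosignal during an offline calibration; the window length, feature set and synthetic PPG-like signal are assumptions for the example, not the WMS design.

    # Sketch: summarize windows of a biosignal into features for a workload model.
    # The signal here is synthetic; real calibration uses EEG/PPG recordings.
    import numpy as np

    def window_features(signal: np.ndarray, fs: int, win_s: int = 5) -> np.ndarray:
        """Split into win_s-second windows; return (mean, std, range) per window."""
        step = fs * win_s
        windows = [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]
        return np.array([[w.mean(), w.std(), w.max() - w.min()] for w in windows])

    fs = 64                                          # sampling rate, Hz
    t = np.arange(0, 60 * fs) / fs                   # one minute of signal
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

    X = window_features(ppg, fs=fs)                  # feature matrix for the classifier
    print(X.shape)                                   # (12, 3): 12 windows, 3 features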

The results of the analysis of the biosignals collected by the sensors of the WMS tool were shown on the dashboard of the IMPETUS platform as referring to the “SOC operator”.

For the use of the WMS tool or similar technologies in real-life situations, the context and ways of use could differ from the ones of the LEx for the following aspects:

  • identification of the person wearing the sensor (identified with a generic name, e.g. “operator 1”, or with his/her exact name). If generic names are used, the employer would still have the possibility to know which person is wearing which sensor at a given moment;

  • period of retention of biosignals (raw data) and/or alerts (aggregated data and outcomes);

  • choice of the devices (USB drive for the model and computer with the necessary software) to be used. This would influence the security of the processing activities and the power to control and use the assessment model.

Data Protection Impact Assessment (DPIA) For the DPIA of the processing activities of the WMS tool during the LEx, the following information was considered:

  • Processed personal data: biosignals (health data): brain signals through an electroencephalogram (EEG) and heart rate and flow signals through photoplethysmography (PPG). Training data: age and information about personality and health status.

  • Storage location: stored locally on a computer provided by the tool developer. Will be a device belonging to the Data controller (i.e., the public entity which adopts the tool), for real-life use cases.

  • Retention period: raw training data will be anonymized immediately after the creation of the assessment model (buffered only for 5 seconds). The workload predictions are stored for 5 minutes before being deleted (the storage period can be changed according to the needs of the Data controller).

  • Data processors: for the LEx, the tool developer. Normally, the use of the tool and the data processing activities do not require any data processor.

The risks related to the data processing activities, including the evaluation of the impact and the analysis of the threats, have been evaluated using the online tool provided by ENISA. The overall impact for the use of the WMS tool during the LEx was evaluated as high, while the overall threat occurrence probability was low; therefore, the risk is “high”. The risk assessment will highly depend on the infrastructure and security measures adopted by the Data controller.

The technical and organisational security measures adopted for the LEx were considered adequate for the specific context of use. The Data controller shall consider that, according to the specific context and situation, further security measures may be required.

The DPIA conducted for the LEx should be supplemented with information provided by the Data controller, as specified here.

In particular, the Data controller shall:

  • identify a valid legal basis for processing;

  • ascertain whether the data processing activity is proportionate and necessary, or not;

  • appropriately inform the data subjects in accordance with art. 13 GDPR;

  • evaluate whether a consultation of the data protection authority in accordance with art. 36 GDPR is necessary, or not;

  • evaluate if a consultation of data subjects has to be done, in accordance with art. 35.9 GDPR, to seek their views on the intended processing.

As for the legal basis, it should be considered that usually in the European Union consent is not recognised as a valid legal basis in the relationship between workers and employers.

Ethical issues Before the LEx, volunteers were thoroughly informed about the planned activities and the management of their personal (health) data. Moreover, information about their mental and physical health was deleted immediately after the end of the LEx.

In general, the tool and its algorithm have been developed in compliance with the rules of trustworthy AI, granting:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Accountability.

Moreover, the machine learning models cannot be extrapolated to tasks outside the task learned during training, nor can they provide accurate classifications based on data in a domain other than the training data.

Workload classifications are provided to the supervisor through the WMS dashboard. Based on the classification, the supervisor is able to assess which action is required in order to guarantee the team’s performance; there is therefore a constant involvement of human operators in the decision-making.

General considerations and recommendations For future uses of the WMS tool or similar technologies which allow workers’ monitoring through algorithms, critical issues may arise especially if such technologies imply and/or are connected to an automated decision-making instrument. But even if there is not an automated decision-making process, constant monitoring of workers could nevertheless threaten their physical safety and well-being, thus presenting ethical challenges and potential law violations. For a deeper analysis of these topics, please refer to Experiences and Lessons Learnt.

It is therefore recommended to adopt internal policies to clearly inform the workers about the functioning of the tool and the intended aims for its use. It should also be clarified which parameters are used to evaluate the “workload” of workers and which are the possible consequences of the detection of a stress situation.

Having regard to an ethical use of the WMS tool, users must be specifically trained so that they are able to understand the outcomes, evaluate them and give them the correct interpretation, leading to the best decisions.

The public entity adopting the tool should also define safeguards against the potential incompetent and/or non-authorised operation of the tool, in particular by limiting functionalities for different levels of operators.

 

The CTI tool allows end users (mainly IT specialists) to detect attacks under development, before they are deployed. Indicators of Compromise (“IoCs”), such as hashes, IPs, domains and URLs, are extracted and delivered in real time, helping end users block items that threaten their organization.

During the IMPETUS Live Exercises (“LEx”), the tool was tested on the premises of the SOC to detect whether domains related to the involved municipalities had been compromised. In future use cases, the tool would of course allow wider analyses. The CTI tool is able to extract data from a wide range of sources, including content from limited-access deep and dark web forums and markets, invite-only messaging groups, code repositories, paste sites and clear web platforms. The tool enriches these data with context to provide security teams with comprehensive insight into the nature and source of each threat.

The CTI tool lists threats, categorizes them, provides all the necessary details, assigns them to different users and tracks whether they are “untreated”, “in treatment” or “resolved”. The alerts are divided by urgency (imminent and emerging) and by type (brand protection, compromised accounts, DDoS attack, data leak…). The CTI tool puts in front of the IT specialists a list of the threats that are being discussed and can potentially expose the network and systems of the municipality. Even though end users can specify what types of alerts they want to receive, they will still receive a high number of alerts, which could be distracting. To solve this issue, the CTI tool was connected to the IMPETUS platform in a way that shows alerts only when the CTI tool detects new threats, i.e. those listed as “untreated” in the external proprietary platform.
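A minimal sketch of that filtering logic: only alerts that are new and still “untreated” are forwarded to the platform; the field names are illustrative.

    # Sketch: forward to the platform only new, still-"untreated" alerts.
    seen_ids: set[str] = set()

    def alerts_for_platform(alerts: list[dict]) -> list[dict]:
        fresh = []
        for a in alerts:
            if a["status"] == "untreated" and a["id"] not in seen_ids:
                seen_ids.add(a["id"])
                fresh.append(a)
        return fresh

    batch = [
        {"id": "t-1", "status": "untreated", "type": "data leak", "urgency": "imminent"},
        {"id": "t-2", "status": "in treatment", "type": "DDoS attack", "urgency": "emerging"},
    ]
    print(alerts_for_platform(batch))   # only t-1 is forwarded
    print(alerts_for_platform(batch))   # [] on the second pass: already seen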

The exact functioning of the tool and the data flow have been explained here.

Data Protection Impact Assessment (DPIA) The CTI tool collects the following types of information: all data that can be extracted from available sources in the clear, dark and deep web, including leaked data and threatened or breached databases, plus information and data related to the public entity which uses the tool. This refers, in particular, to domain names, IPs, aliases, BINs, CVEs (Common Vulnerabilities and Exposures) of their websites, data of the executives (like the mayor), etc.  This information is necessary to set the target of monitoring for the tool and to give the end users relevant alerts.

When a threat is detected, the alert and the context reported by the CTI tool may contain personal data which refer to more or less identifiable persons, according to their nature. In case of threatened or breached databases, the public entity which has adopted the tool will receive only the parts of them which relate to threats to its own organization.  The analysed data were stored on Amazon Web Services servers during the IMPETUS LEx and were retained only for the duration of the LEx.

For the IMPETUS LEx, the technical and organisational security measures adopted were considered adequate for the specific context of use. For future use cases, each Data controller would need to evaluate further security measures and, in particular, shall:

  • identify a valid legal basis for processing. Considering the width of data processing, a substantial public interest or a law provision would be required;

  • ascertain whether the data processing activity is proportionate and necessary, or not;

  • evaluate whether a consultation of the data protection authority in accordance with art. 36 GDPR is necessary, or not.

Indeed, in real-life use scenarios, the provider of the CTI tool applies encryption to stored data and they are decrypted only for the sake of sharing them with the public entity.

Ethical issues The CTI tool provides alerts and insights based on the collected and analysed data, but it is always possible for human operators to flag false positives, meaning alerts that are not relevant. Human control and traceability are also granted by logs which record how the algorithm works.

The explainability of the functioning of the advanced algorithm depends on the module used. Generally speaking, the algorithm applies a risk scoring calculation to different entities. In some modules, the human operator can understand how the tool calculates the risk score and which factors were taken into account. In other cases, the human operator just sees the alerts and the threats and sources from which they originated.

General considerations and recommendations Considering the present regulatory context in the European Union, the CTI tool is meant to be used only by law enforcement bodies in line with the LED Directive and other related EU and national legislation. 

In any case, the use of the CTI tool by public authorities could nevertheless require the definition of boundaries and precautionary measures. In particular, the public authority adopting the tool should:

  • define which sources should be monitored to detect threats, striking a balance between the importance of preventing cyber-attacks and the collection of large amounts of data of individuals who would mainly not be involved in such attacks;

  • define the storage location and the retention period, since the CTI tool may potentially lead to the collection of personal data. For European public authorities, personal data should be stored within the EEA;

  • adopt internal procedures to evaluate the outcomes of the tool and to establish the possible consequences for persons whose working accounts have been compromised. This topic is sometimes regulated by employment legislation in European countries, which encourages employees to warn the employer when there is a suspicion of a cyber-attack or of a credentials’ theft, reassuring them by excluding any reprisal.

The BD tool is an air analyser aimed at detecting the concentration of microorganisms in public areas. The device transmits these data to a monitoring station. During the IMPETUS Live Exercises (“LEx”), the tool could be tested without creating any dangerous situation, since it detects all kinds of bacteria, in whatever concentration; non-dangerous bacteria were spread in the air during the LEx.

The tool is made up of two devices: an air bio-collector and an ATP (Adenosine TriPhosphate) analyser. The method used is ATP-metry, which quantifies the microorganisms in the air. In case of a biological threat, the ATP concentration will be higher due to the concentration of bacteria in the air. A local interface commands the different devices, but they can also be commanded remotely through the IMPETUS platform.

When the BD tool detects an abnormal bacterial level in a space, according to the set thresholds, it sends an alert to the IMPETUS platform, and therefore to the Security Operations Centre operators. The alert reports the specific concentration value, both as an absolute number and in a chart, so that operators can monitor its evolution and how far above the threshold it has climbed. It also provides the timestamp of the latest analysed data and of the next planned analysis, along with the location coordinates of the affected area. Finally, the BD tool provides a list of immediate actions to be implemented as initial countermeasures while awaiting the intervention of the specialists. The system also notifies the SOC operators when a specific sensor is undergoing maintenance. The exact functioning of the tool is explained here.
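For illustration, the sketch below performs the threshold check and assembles an alert payload with the elements just described; the threshold value, units and field names are assumptions for the example.

    # Sketch: check an ATP measurement against a threshold and build the alert.
    from datetime import datetime, timedelta, timezone

    ATP_THRESHOLD = 500.0   # illustrative relative light units (RLU)

    def check_sample(value: float, lat: float, lon: float, interval_min: int = 10):
        if value <= ATP_THRESHOLD:
            return None
        now = datetime.now(timezone.utc)
        return {
            "alert": "abnormal bacterial level",
            "value": value,
            "threshold": ATP_THRESHOLD,
            "ratio_above": round(value / ATP_THRESHOLD, 2),
            "measured_at": now.isoformat(),
            "next_analysis": (now + timedelta(minutes=interval_min)).isoformat(),
            "location": {"lat": lat, "lon": lon},
            "immediate_actions": ["cordon off area", "notify specialists"],  # pre-agreed list
        }

    print(check_sample(780.0, lat=59.91, lon=10.75))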

Data protection and ethical issues: The BD tool collects only environmental data, in particular the concentration of bacteria in the air. It does not collect any personal data.

The BD tool is an air analyser which does not contain any advanced analysing algorithm. The lists of immediate actions to be undertaken by SOC operators are defined in advance together with the public entity adopting the tool. The aim of the BD tool in this context is simply to retrieve the most suitable list of countermeasures.

General considerations and recommendations The BD tool may present organisational and operational issues if used in real-life scenarios, considering:

  • the definition of the thresholds of danger for bacteria concentration;

  • the explainability of the alerts received by the SOC operators, who should be specifically trained;

  • the identification of the correct initial countermeasures that the tool suggests: this feature should be configured differently for each public entity, since it should be aligned with the safety protocols already in place; and

  • the problems related to dealing with false alarms, which cannot be excluded. When receiving an alert, a trained SOC operator should be able to evaluate it; but in circumstances in which the BD tool is connected to other technological solutions, it may trigger a chain of reactions (e.g., the identification and start of evacuation procedures) which could be harder to control and to stop.

Regarding cybersecurity, ports of the BD tool are closed and data are sent to the IMPETUS platform with a secure Kafka protocol. 

The main feature of the platform is a dashboard which allows the end user to interact with the tools of interest. Some tools are fully integrated with the platform, others only partially. In case of full integration, the tool can be used only through the IMPETUS platform, since it does not have any proprietary interface; if the tool collects data, these are shared with the IMPETUS platform and saved on its servers.

In case of partial integration, the IMPETUS platform dashboard shows the alerts coming from the tools. For more details on the sources and reasons for the alerts, the end user accesses the tools through the widgets provided in the dashboard. Besides the login interface, four main areas can be identified in the IMPETUS platform user interface: the side bar, the home page, the chat and the tool alerts/dashboards. The exact functioning of the platform and the data flow are described here.

Data Protection Impact Assessment (DPIA) For the DPIA of the processing activities of the platform during the IMPETUS Live Exercises (“LEx”), the following information was considered:

  • Storage location: stored locally on the servers of the platform provider. Servers of the Data controller for real-life future uses;

  • Retention period: for the duration of the LEx.

During the IMPETUS LEx, the platform could collect personal data only from the Firearm Detector and the Workload Monitoring System tool. Moreover, the platform may occasionally process personal data if they are contained in the messages sent through the chat. The chat is meant to convey technical feedback or requests for support; therefore, it should not be used to share personal data. The use of the platform and the related data processing activities performed by any public entity do not require any data processor.

Having regard to the FD tool, it shares images from CCTV cameras. The AI in the FD tool scans the footage provided by cameras to detect the presence of weapons in the video images. To protect privacy, the AI systematically anonymizes people’s faces, blacking them out. These images are never recorded. When the FD tool detects an alert, it asks the SOC operator to evaluate and confirm it. In this context, the tool and the platform process the following data: 

a) jpeg snapshots with a visual bounding box of the anomaly (i.e., gun or assault rifle);

b) video sequence of the red alert with a visual bounding box of the anomaly;

c) a raw video sequence of the alert (clean of any bounding box); here the person holding the weapon is visible;

d) GPS coordinates of the red alert (and therefore, of the person holding the weapon).

If the dispatcher of the SOC validates the alert as an “Emergency”, the alert is shared using the SOC (Security Operation Centre) room protocols.

As regards the WMS tool, the platform allows the end users (usually, SOC supervisors) to receive alerts when the workload of an employee wearing a specific sensor is considered excessive. The alert is associated with the employee. The Data controller must evaluate whether to show the exact name of the employee or rather an anonymous indication such as “sensor 01 – Excessive workload”. The latter reduces the processing of personal data through the platform.

The risks related to the data processing activities during the LEx, including the evaluation of the impact and the analysis of the threats, have been evaluated using the online tool provided by ENISA. The overall impact for the use of the platform during the LEx was evaluated as medium and the overall threat occurrence probability as medium; therefore, the risk was “medium”.

The technical and organisational security measures adopted during the LEx were considered adequate for the specific context of use. In particular, it was considered that the platform implements role-based access. The access to the different tools is possible only for specific types of end users (SOC operators, SOC supervisors, IT specialists, IT supervisors, intelligence analysts and technical administrators).
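A minimal sketch of such role-based access, with an illustrative role-to-tool mapping (the actual platform roles and permissions may differ):

    # Sketch: each role sees only the tools assigned to it. Mapping is illustrative.
    ROLE_TOOLS = {
        "soc_operator": {"FD", "WMS", "BD", "EO"},
        "it_specialist": {"CTDR", "CTI"},
        "intelligence_analyst": {"SMD"},
    }

    def authorize(role: str, tool: str) -> None:
        if tool not in ROLE_TOOLS.get(role, set()):
            raise PermissionError(f"role '{role}' has no access to tool '{tool}'")

    authorize("soc_operator", "FD")      # allowed
    # authorize("soc_operator", "SMD")   # raises PermissionError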

The DPIA conducted for the LEx should be supplemented with information provided by the Data controller, as specified here, since the risk assessment highly depends on the security of the infrastructure on which the platform is installed.

In particular, the Data controller shall:

  • identify a valid legal basis for processing;

  • ascertain whether the data processing activity is proportionate and necessary, or not;

  • appropriately inform the data subjects in accordance with art. 13 GDPR;

  • evaluate whether a consultation of the data protection authority in accordance with art. 36 GDPR is necessary, or not;

  • evaluate if a consultation of data subjects has to be done, in accordance with art. 35.9 GDPR, to seek their views on the intended processing.

Ethical issues The IMPETUS platform itself does not implement any algorithm. It only shows the results and the alerts produced by the algorithms of the various tools. 

The IMPETUS platform allows end users easier access to the tools of interest, which can be all of them or only a selection. The ethical issues to be considered are those raised by the single tools. Sometimes the concerns underlined with respect to one tool may be amplified by using that tool in connection with others.

General considerations and recommendations According to the choice of the tools that each public entity will make, it should consider the general recommendations related to those specific tools, presented in the paragraphs here above.

 
